[pve-devel] [PATCH storage/zfsonlinux v2 0/3] fix #4997: lvm: avoid autoactivating (new) LVs after boot
Friedrich Weber
f.weber at proxmox.com
Wed Mar 12 09:39:39 CET 2025
On 11/03/2025 11:40, Fabian Grünbichler wrote:
> On March 10, 2025 3:01 pm, Friedrich Weber wrote:
>> On 07/03/2025 13:14, Fabian Grünbichler wrote:
>>>> # LVM-thick/LVM-thin
>>>>
>>>> Note that this change affects all LVs on LVM-thick, not just ones on shared
>>>> storage. As a result, also on single-node hosts, local guest disk LVs on
>>>> LVM-thick will not be automatically active after boot anymore (after applying
>>>> all patches of this series). Guest disk LVs on LVM-thin will still be
>>>> auto-activated, but since LVM-thin storage is necessarily local, we don't run
>>>> into #4997 here.
>>>
>>> we could check the shared property, but I don't think having them not
>>> auto-activated hurts as long as it is documented..
>>
>> This is referring to LVs on *local* LVM-*thick* storage, right? In that
>> case, I'd agree that not having them autoactivated either is okay
>> (cleaner even).
>
> yes
>
>> The patch series currently doesn't touch the LvmThinPlugin at all, so
>> all LVM-*thin* LVs will still be auto-activated at boot. We could also
>> patch LvmThinPlugin to create new thin LVs with `--setautoactivation n`
>> -- though it wouldn't give us much, except consistency with LVM-thick.
>
> well, if you have many volumes, not activating them automatically might
> also save some time on boot ;) but yeah, shouldn't cause any issues.
Yeah, then I'll limit this change to the LVMPlugin and keep the
LvmThinPlugin unchanged.
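For context, the flag in question is just an additional lvcreate option
(available since lvm2 2.03.12); with made-up VG/LV names:

    # create an LV that is not autoactivated after boot
    lvcreate --setautoactivation n --size 4G --name vm-100-disk-0 myvg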
>>>> # Transition to LVs with `--setautoactivation n`
>>>>
>>>> Both v1 and v2 approaches only take effect for new LVs, so we should probably
>>>> also have pve8to9 check for guest disks on (shared?) LVM that have
>>>> autoactivation enabled, and suggest to the user to manually disable
>>>> autoactivation on the LVs, or even the entire VG if it holds only PVE-managed
>>>> LVs.
>>>
>>> if we want to wait for PVE 9 anyway to start enabling (disabling? ;)) it, then
>>> the upgrade script would be a nice place to tell users to fix up their volumes?
>>
>> The upgrade script being pve8to9, right? I'm just not sure yet what to
>> suggest: `lvchange --setautoactivation n` on each LV, or simply
>> `vgchange --setautoactivation n` on the whole shared VG (provided it
>> only contains PVE-managed LVs).
>
> yes.
>
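For reference, the two variants would be (assuming a shared VG named
`myvg`):

    # per LV, repeated for each PVE-managed LV:
    lvchange --setautoactivation n myvg/vm-100-disk-0
    # or once for the whole VG:
    vgchange --setautoactivation n myvg

Either way, this triggers a VG metadata update.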
>>
>>> OTOH, setting the flag automatically starting with PVE 9 also for existing
>>> volumes should have no downsides, [...]
>>
>> Hmm, but how would we do that automatically?
>
> e.g., once on upgrading in postinst, or in activate_storage if we find a
> cheap way to skip doing it over and over ;)
>
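A cheap check might be possible via the reporting field for the flag --
something like this (untested; recent lvm2 reports `autoactivation` as
`enabled` or empty):

    # list LVs of myvg that still have autoactivation enabled
    lvs --noheadings -o vg_name,lv_name,autoactivation myvg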
>>
>>> we need to document anyway that the behaviour
>>> there changed (so that people that rely on them becoming auto-activated on boot
>>> can adapt whatever is relying on that).. or we could provide a script that does
>>> it post-upgrade..
>>
>> Yes, an extra script to run after the upgrade might be an option. Though
>> we'd also need to decide whether to disable autoactivation on each
>> individual LV or on the complete VG (then we'd just assume that there
>> are no non-PVE-managed LVs in the VG that the user wants autoactivated).
>
> I think doing it on each volume managed by PVE is the safer and more
> consistent option..
Sounds good!
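For the record, the per-LV variant could be as simple as the following
(untested sketch, relying on the usual vm-*/base-* naming of PVE-managed
LVs; VG name made up):

    vg=myvg
    for lv in $(lvs --noheadings -o lv_name "$vg"); do
        case "$lv" in
            vm-*|base-*) lvchange --setautoactivation n "$vg/$lv" ;;
        esac
    done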
Discussed the next steps with Fabian off-list, the summary:
- I'll wait a few days to see if zfsonlinux patch 1 gets applied upstream, so
I can send a proper cherry-pick. We can apply that one before PVE 9, as it
shouldn't break any reasonable setup (= any setup that doesn't have ZFS on
top of an LVM LV with autoactivation disabled), and it allows the manual
`vgchange --setautoactivation n` workaround to work for FC/SAS LUNs
before PVE 9.
- we keep something like patches 2+3, which pass `--setautoactivation n` to
lvcreate in LVMPlugin's `alloc_image` (see the sketch after this list). This
can only be applied for PVE 9 (see "Mixed 7/8 cluster" in the cover letter).
It solves the auto-activation problem for new volumes.
- to solve the auto-activation problem for existing volumes, we
  - ship a script (a separate script, or even a subcommand of pve8to9?)
  that disables autoactivation for all PVE-managed LVs on shared VGs
  (roughly the per-LV loop sketched above), after asking for user
  confirmation. The user has to run this script manually. The script
  probably needs to take care to cluster-lock the storage, as disabling
  autoactivation triggers a metadata update.
  - add a check to pve8to9 for PVE-managed LVs with autoactivation
  enabled (on shared VGs) that suggests running the script from the
  previous point
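For illustration, with patches 2+3 the lvcreate invocation issued by
`alloc_image` would effectively end up like this (simplified and with
made-up names; the tag follows the existing pve-vm-<vmid> scheme):

    lvcreate -aly --addtag pve-vm-100 --setautoactivation n \
        --size 4194304k --name vm-100-disk-0 myvg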