[pve-devel] [RFC storage] rbd: unprotect all snapshots on image removal

Thomas Lamprecht t.lamprecht at proxmox.com
Mon Dec 2 08:18:13 CET 2019


On 12/2/19 7:46 AM, Fabian Grünbichler wrote:
> On November 30, 2019 6:50 pm, Thomas Lamprecht wrote:
>> On 11/29/19 12:00 PM, Fabian Grünbichler wrote:
>>> we need to unprotect more snapshots than just the base one, since we
>>> allow linked clones of regular VM snapshots. unprotection will only work
>>> if no linked clones exist anymore.
>>>
>>> Signed-off-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
>>> ---
>>> it's still rather ugly if such a linked clone exists, since unprotection
>>> and thus deletion will fail, but the VM config is gone. at least with
>>> this patch, "pvesm free storage:image" will work after
>>> removing/flattening all the linked clones ;)
>>>
>>> alternatively we could iterate over all snapshots in vm_destroy and
>>> attempt to delete them (which will unprotect them in case of RBD),
>>> leaving non-PVE-managed snapshots untouched.
>>>
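To make the quoted cover letter a bit more concrete, here is a rough sketch
of the "unprotect all snapshots, then remove" approach, shelling out to the
rbd CLI instead of using the plugin's own helpers. Names like free_rbd_image
and the pool/image arguments are made up for illustration; this is not the
code from the RFC.

#!/usr/bin/perl
# Sketch only: unprotect every snapshot of an image (ignoring errors for
# snapshots that were never protected), then purge the snapshots and remove
# the image itself.
use strict;
use warnings;
use JSON::PP qw(decode_json);

sub free_rbd_image {    # hypothetical helper, not RBDPlugin.pm
    my ($pool, $image) = @_;

    # list all snapshots of the image as JSON
    open(my $fh, '-|', 'rbd', 'snap', 'ls', '--format', 'json', "$pool/$image")
        or die "rbd snap ls failed: $!";
    my $snaps = decode_json(do { local $/; <$fh> });
    close($fh);

    for my $snap (@$snaps) {
        # unprotect; the exit code is ignored on purpose, since unprotecting
        # an already unprotected snapshot just errors out. if a linked clone
        # still references the snapshot, the purge below will fail instead.
        system('rbd', 'snap', 'unprotect', "$pool/$image\@$snap->{name}");
    }

    # remove all (now unprotected) snapshots, then the image
    system('rbd', 'snap', 'purge', "$pool/$image") == 0
        or die "rbd snap purge failed\n";
    system('rbd', 'rm', "$pool/$image") == 0
        or die "rbd rm failed\n";
}

free_rbd_image('rbd', 'vm-100-disk-0');

The alternative mentioned in the cover letter would instead be driven from
vm_destroy, deleting each PVE-managed snapshot first (which unprotects it on
RBD) and leaving snapshots PVE does not know about untouched.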
>>
>> that honestly sounds slightly nicer - but I guess it's harder to do,
>> or else you would not have chosen this way?
>>
> 
> not really harder, but I guess it depends on the storage whether it's 
> better or not ;)
> 
> for ZFS and qcow2 it's probably faster to just (recursively) delete the
> volume itself including all snapshots, instead of deleting each snapshot
> that we know about individually up-front, and then the image and any
> remaining snapshots as a second step.
> 
> for Ceph it's about the same work either way, the difference is just
> whether we want to clean up non-PVE-managed, protected snapshots, or
> leave them alone, in which case the volume itself cannot be removed..
> 

OK, thanks for the explanation. Then your RFC sounds OK to me; maybe
adding a small note about this to the RBD-related part of the docs would
be good.
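
For comparison, the ZFS path mentioned in the quote above really is a single
recursive destroy, so there is no snapshot enumeration to do up-front; for
qcow2 the snapshots are internal to the image file and disappear with it.
Again just a sketch with made-up names, not the ZFSPoolPlugin code:

# Sketch only: one recursive destroy removes the dataset together with all
# of its snapshots.
use strict;
use warnings;

sub free_zfs_volume {    # hypothetical helper
    my ($dataset) = @_;

    # -r also destroys all snapshots of the dataset
    system('zfs', 'destroy', '-r', $dataset) == 0
        or die "zfs destroy failed\n";
}

free_zfs_volume('rpool/data/vm-100-disk-0');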

> for LVM-thin I think it does not matter much?
> 

IIRC, no.




