[pve-devel] vm deletion succeeds even if storage deletion fails

Fabian Grünbichler f.gruenbichler at proxmox.com
Tue Jan 15 08:15:51 CET 2019


On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote:
> Hello,
> 
> today I noticed some leftover disk images even though the VM had been deleted.
> 
> Inspecting the task history i found this log:
> 
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> Could not remove disk 'cephstoroffice:vm-202-disk-1', check manually:
> error with cfs lock 'storage-cephstoroffice': got lock request timeout
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> trying to acquire cfs lock 'storage-cephstoroffice' ...
> error with cfs lock 'storage-cephstoroffice': got lock request timeout
> TASK OK
> 
> so the VM was deleted but the storage was left unclean. Is this a known
> bug? If not, can someone point me to the code so I can provide a patch.

this was changed intentionally because users complained that there was no
way to delete a VM config that references an undeletable disk (e.g.,
because a storage is down).

if you want to improve this further, we could discuss adding a 'force'
parameter to the 'destroy_vm' API call (in PVE/API2/Qemu.pm) and to
'destroy_vm' in PVE/QemuServer.pm, and adapting the latter to only
ignore disk removal errors when that parameter is set?
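to make the idea concrete, a rough, untested sketch of what I mean (the
helper name, the $force parameter and the $volids list are purely
illustrative and not the actual destroy_vm code; only
PVE::Storage::vdisk_free is the real storage call):

    use strict;
    use warnings;
    use PVE::Storage;

    # illustrative only -- assumes a new optional $force flag; the real
    # destroy_vm signature and cleanup loop look different
    sub destroy_vm_disks_sketch {
        my ($storecfg, $vmid, $volids, $force) = @_;

        foreach my $volid (@$volids) {
            eval { PVE::Storage::vdisk_free($storecfg, $volid); };
            if (my $err = $@) {
                # without 'force', abort the destroy so the disk is not orphaned
                die "could not remove disk '$volid': $err\n" if !$force;
                # with 'force', keep the current behaviour: warn and continue
                warn "could not remove disk '$volid', check manually: $err\n";
            }
        }
    }

the API entry point in PVE/API2/Qemu.pm could then simply pass such a
parameter through to destroy_vm.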

I am not sure how many people might rely on the current behaviour, which
has been like this since (the end of) 2016.



