[pve-devel] vm deletion succeeds even if storage deletion fails
Stefan Priebe - Profihost AG
s.priebe at profihost.ag
Wed Jan 16 11:43:04 CET 2019
Hi,
On 15.01.19 at 08:19, Stefan Priebe - Profihost AG wrote:
>
> On 15.01.19 at 08:15, Fabian Grünbichler wrote:
>> On Mon, Jan 14, 2019 at 11:04:29AM +0100, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> today I noticed some leftover disk images even though the VM had
>>> already been deleted.
>>>
>>> Inspecting the task history, I found this log:
>>>
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> Could not remove disk 'cephstoroffice:vm-202-disk-1', check manually:
>>> error with cfs lock 'storage-cephstoroffice': got lock request timeout
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> trying to acquire cfs lock 'storage-cephstoroffice' ...
>>> error with cfs lock 'storage-cephstoroffice': got lock request timeout
>>> TASK OK
>>>
>>> So the VM was deleted, but the storage was left in an unclean state.
>>> Is this a known bug? If not, can someone point me to the code so I can
>>> provide a patch.
>>
>> this was changed intentionally because users complained that there was
>> no way to delete a VM config that references an undeletable disk (e.g.,
>> because a storage is down).
>
> Huh, really? But this leaves unused disks behind, doesn't it? What
> happens if another user gets the same VM ID? This sounds a bit weird
> to me.
>
>> if you want to improve this further, we could discuss adding a 'force'
>> parameter to the 'destroy_vm' API call (in PVE/API2/Qemu.pm) and to
>> 'destroy_vm' in PVE/QemuServer.pm, adapting the latter to only ignore
>> disk removal errors if it is set?
>>
>> I am not sure how many people might rely on the current behaviour, which
>> has been like this since (the end of) 2016.
Can you point me to the commit / package where this change was
introduced? I've searched for it but couldn't find it.
Stefan
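
For reference, here is a rough sketch of what such a 'force' parameter
could look like in 'destroy_vm' in PVE/QemuServer.pm. This is only an
illustration of the idea, not the actual Proxmox code: the $force
handling is the proposed change, and the helpers foreach_drive() and
PVE::Storage::vdisk_free() are assumed from the existing code base.

sub destroy_vm {
    my ($storecfg, $vmid, $conf, $force) = @_;

    # try to free every disk referenced by the config
    foreach_drive($conf, sub {
        my ($ds, $drive) = @_;
        my $volid = $drive->{file};

        eval { PVE::Storage::vdisk_free($storecfg, $volid); };
        if (my $err = $@) {
            # without 'force', fail the whole task and keep the config,
            # so no orphaned images are silently left behind
            die "could not remove disk '$volid': $err" if !$force;
            warn "could not remove disk '$volid', check manually: $err";
        }
    });

    # the VM config would only be removed here, after all disks were
    # either freed or explicitly skipped via 'force'
}

The API entry point in PVE/API2/Qemu.pm would then just accept an
optional boolean 'force' parameter and pass it through.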