[pve-devel] partially-applied: [PATCH-SERIES v12 qemu-server/manager] API for disk import and OVF

Fabian Grünbichler f.gruenbichler at proxmox.com
Wed Mar 16 12:58:44 CET 2022


On March 16, 2022 12:25 pm, Fabian Ebner wrote:
> Am 16.03.22 um 11:29 schrieb Fabian Grünbichler:
>> On March 16, 2022 11:00 am, Fabian Ebner wrote:
>>> Am 14.03.22 um 16:57 schrieb Fabian Grünbichler:
>>>> applied qemu-server patches except 11 and 14-16, see comments on 
>>>> individual patches.
>>>>
>>>
>>> Thanks a lot for the review/feedback!
>>>
>>>> some unrelated things I noticed that could possibly be fixed as a 
>>>> follow-up:
>>>> - cloning a running VM with an EFI disk fails, the EFI disk is not 
>>>>   mirrorable (so we need another check like the one for TPM state?)
>>>
>>> Isn't that just when the target storage allocates a different-sized
>>> disk, i.e. https://bugzilla.proxmox.com/show_bug.cgi?id=3227
>> 
>> no, that was my fault (the VM in question had an EFI disk, but was 
>> not booted using UEFI). probably should add a check for that as well 
>> though, unrelated to this series (move disk is also affected, and I 
>> guess live-migration as well..)
> 
> Would it be enough to have a prominent warning when starting the VM,
> since it's already a configuration issue at that point?

yeah, a warning at startup and maybe marking the EFI disk in the GUI 
somehow - we do have both relevant settings available there?

I think warnings at startup are easily missed, but checking for such 
invalid configs in every operation that might get called also seems 
like overkill (and the error, if it happens, is fairly self-explanatory 
anyway - it says there is no drive node named 'drive-efidisk0' ;))
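
something like this (a completely untested sketch - the sub name and 
call site are made up, but 'bios' and 'efidisk0' are the actual config 
keys, and 'bios' does default to 'seabios') could be called from VM 
start:

    # rough sketch: warn at startup if an EFI disk is configured but
    # the VM is not actually set to boot via OVMF
    sub warn_efidisk_without_ovmf {
        my ($conf) = @_;

        # 'bios' defaults to 'seabios' when not set explicitly
        my $bios = $conf->{bios} // 'seabios';

        if (defined($conf->{efidisk0}) && $bios ne 'ovmf') {
            warn "VM has 'efidisk0' configured, but 'bios' is not set"
                ." to 'ovmf' - the EFI disk will not be attached, and"
                ." clone/move disk/live-migration involving it will"
                ." fail\n";
        }
    }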

>>>> - cancelling a running clone doesn't clean up properly (stops while 
>>>>   trying to acquire a lock and leaves the target VM locked & existing)
>>>>
>>>
>>> Will take a look.
>>>
>> 
>> the exact log messages are not always the same, but the target 
>> remains behind, locked, in whatever state it managed to get to (and 
>> the task stopped with 'unexpected status').
> 
> For me, this seems specific to RBD? And it only happens when stopping
> the task via the API/GUI, which kills it after 5 seconds. When
> interrupting on the CLI, it hangs for a while but eventually cleans up.

ZFS here (and yeah, clicking the stop button in the GUI), haven't tried 
other storages.. if there are some easy wins for improving this I'd go 
for them, but like I said, it's not related to this series at all, just 
something I noticed while testing.. flows where regular users can 
easily end up with config-locked guests are kinda cumbersome though.
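
one possible easy win (a very rough, untested sketch - do_clone and 
free_allocated_disks are invented placeholders, the PVE::QemuConfig 
calls only approximate the real helpers, and it assumes stopping the 
task delivers a catchable signal before the hard kill) would be to 
turn the signal into a regular die inside the worker, so an eval-based 
cleanup still gets a chance to run:

    # let the clone worker catch SIGINT/SIGTERM and turn it into a
    # normal die, so the cleanup path below still runs when the task
    # is stopped via the API/GUI
    local $SIG{INT} = local $SIG{TERM} = sub { die "clone interrupted\n" };

    eval { do_clone($source_vmid, $target_vmid) };
    if (my $err = $@) {
        # best-effort cleanup of whatever was already allocated
        eval { free_allocated_disks($target_vmid) };
        warn "cleanup after failed clone: $@" if $@;

        # drop the 'clone' lock and the partial target config
        PVE::QemuConfig->remove_lock($target_vmid, 'clone');
        PVE::QemuConfig->destroy_config($target_vmid);

        die $err;
    }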




