[pve-devel] [PATCH v2 pve-storage 1/2] add external snapshot support
DERUMIER, Alexandre
alexandre.derumier at groupe-cyllene.com
Fri Oct 25 07:52:09 CEST 2024
>
>
> But even with that, you can still have a performance impact.
> So yes, I think there are real use cases for workloads where you only
> need a snapshot from time to time (before an upgrade, for example),
> but want maximum performance when no snapshot exists.
>>my main point here is - all other storages treat snapshots as
>>"cheap". if you combine raw+qcow2 snapshot overlays, suddenly
>>performance will get worse if you keep a snapshot around for whatever
>>reason..
OK, I redid a lot of benchmarks yesterday, with a real SAN storage, and
I don't see much difference between qcow2 and raw (something like
30000 IOPS on raw and 28000~29000 IOPS on qcow2).
I tested with a 2TB qcow2 file to be sure, and with the new qcow2
subcluster feature (extended L2 entries), the overhead is not too big.
The difference is a bit bigger on a local NVMe (I think because of the
low latency), but as the use case is network storage, it's OK.
Let's go for full .qcow2, it'll be easier ;)
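For reference, the subcluster feature mentioned above is enabled at image creation time; a sketch of the qemu-img invocation (the upstream option name is extended_l2, available since QEMU 5.2; the filename and 128k cluster size are illustrative):

```shell
# Create a 2T qcow2 image with subcluster allocation (extended L2 entries).
# extended_l2 is usually combined with a larger cluster_size, since each
# cluster is then tracked in 32 subclusters.
qemu-img create -f qcow2 -o extended_l2=on,cluster_size=128k vm-100-disk-0.qcow2 2T
```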
> > > it's a bit confusing to have a volid ending with raw, with the
> > > current volume and all but the first snapshot actually being
> > > stored
> > > in qcow2 files, with the raw file being the "oldest" snapshot in
> > > the
> > > chain..
> if it's too confusing, we could use for example a .snap extension
> (as we know that it's qcow2 behind)
>>I haven't thought yet about how to encode the snapshot name into the
>>snapshot file name, but yeah, maybe something like that would be
>>good. or maybe snap-VMID-disk-DISK.qcow2 ?
OK, we can use snap-VMID-disk-DISK.qcow2, it'll be easier for the regex :p
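A rough sketch of what the matching could look like (Python used purely for illustration, the actual pve-storage code is Perl; how the snapshot name gets encoded into the file name is still open above, so the trailing -SNAPNAME part and the helper name are assumptions):

```python
import re

# Hypothetical patterns for the naming scheme discussed above:
#   current volume: vm-<VMID>-disk-<N>.qcow2
#   snapshot file:  snap-<VMID>-disk-<N>-<SNAPNAME>.qcow2 (encoding is an assumption)
CURRENT_RE = re.compile(r'^vm-(\d+)-disk-(\d+)\.qcow2$')
SNAP_RE = re.compile(r'^snap-(\d+)-disk-(\d+)-([A-Za-z0-9_]+)\.qcow2$')

def parse_volname(name):
    """Return (vmid, disk, snapname_or_None), or None if the name doesn't match."""
    m = CURRENT_RE.match(name)
    if m:
        return (int(m.group(1)), int(m.group(2)), None)
    m = SNAP_RE.match(name)
    if m:
        return (int(m.group(1)), int(m.group(2)), m.group(3))
    return None
```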
> > > storage_migrate needs to handle external snapshots, or at least
> > > error
> > > out.
> it should already work (I have tested move_disk, and live migration +
> storage migration): qemu_img_convert for offline and a QEMU block job
> for live.
>>but don't all of those lose the snapshots? did you test it with
>>snapshots and rollback afterwards?
OK, sorry, I have tested cloning a new VM from a snapshot (which uses
the same code). I don't remember how it works with move disk of a
running VM when snapshots exist.
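For context on why those paths lose snapshots: qemu-img convert reads through the whole backing chain of the source and writes out a single flattened image, so external snapshot overlays are not carried over to the target; a sketch (paths are illustrative):

```shell
# Offline move: this collapses the source's entire backing chain
# (raw base + qcow2 overlays) into one standalone destination image;
# the external snapshot files are left behind.
qemu-img convert -f qcow2 -O qcow2 vm-100-disk-0.qcow2 /mnt/target/vm-100-disk-0.qcow2
```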
>
> The main problem is that when you start a VM on a specific snapshot,
> we don't send the $snapname param.
>
> One way could be that qemu-server checks the current snapshot from
> the config when doing specific actions like start.
>>if we manage to find a way to make the volid always point at the top
>>overlay, then that wouldn't be needed..
Yes, indeed, if we are able to rename the current snapshot file to
vm-100-disk-0.qcow2, it's super easy :)
I need to do more tests, because blockdev-reopen only seems to work if
the original drive is defined with -blockdev syntax (it seems to clash
on the node-name if the drive is defined with -drive instead).
I have begun to look at implementing -blockdev; it doesn't seem too
difficult for the start command line, but I need to check the hotplug
part.
Maybe for pve9? (It could open the door to features like LUKS
encryption too.)
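For reference, with -blockdev the node-names are chosen explicitly on the command line, which is what later QMP commands like blockdev-reopen need to address a node; a minimal sketch (node-names, path, and device model are illustrative):

```shell
# -drive generates automatic node-names; -blockdev lets us pick them,
# so QMP commands (blockdev-reopen, blockdev-snapshot, ...) can refer
# to "file0" / "fmt0" explicitly.
qemu-system-x86_64 \
    -blockdev driver=file,filename=/var/lib/vz/images/100/vm-100-disk-0.qcow2,node-name=file0 \
    -blockdev driver=qcow2,file=file0,node-name=fmt0 \
    -device virtio-blk-pci,drive=fmt0
```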
I'll rework all the patches after my holiday, with both the renaming of
the current snapshot and using only the .qcow2 format; it should be a
lot cleaner and KISS.
Thanks again for the review!