[pve-devel] [PATCH v2 pve-storage 2/2] add lvmqcow2 plugin: (lvm with external qcow2 snapshot)
DERUMIER, Alexandre
alexandre.derumier at groupe-cyllene.com
Thu Oct 24 13:01:03 CEST 2024
Thanks Fabian for your time!
I have tried to respond as much as possible. (I'm going on holiday for
a week tomorrow, so sorry if I don't reply to all your comments.)
-------- Original message --------
From: Fabian Grünbichler <f.gruenbichler at proxmox.com>
To: "DERUMIER, Alexandre" <alexandre.derumier at groupe-cyllene.com>, pve-devel at lists.proxmox.com <pve-devel at lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 pve-storage 2/2] add lvmqcow2 plugin: (lvm with external qcow2 snapshot)
Date: 24/10/2024 09:42:00
> DERUMIER, Alexandre <alexandre.derumier at groupe-cyllene.com> hat am
> 23.10.2024 15:45 CEST geschrieben:
>
>
> > > I am not yet convinced this is somehow a good idea, but maybe you
> > > can
> > > convince me otherwise ;)
>>I maybe judged this too quickly - I thought this was combining LVM +
>>a dir-based storage, but this is putting the qcow2 overlays on LVs,
>>which I missed on the first pass!
Ah, OK! ^_^ Yes, this is really 100% LVM (for shared LVM).
> > > variant A: this is just useful for very short-lived snapshots
> > > variant B: these snapshots are supposed to be long-lived
>
> Can you define "short"/"long"? And the different use cases?
>
> because for me, a snapshot is a snapshot. Sometimes I take a snapshot
> before doing some critical changes, but I can't know whether I'll
> need to roll back in the next 2 hours or next month.
>>yes, this would be an example of a short-lived snapshot
ok
> I think that "long-lived" usecase is backup (but we don't need it),
> or replication (this could apply here, if we want to add replication
> for disaster recovery)
>>backup would also be short-lived usually (the snapshot is just to
>>take the backup, not to keep a backup). long-lived usually is
>>something like "take daily snapshot and keep for a few weeks for file
>>recovery", in addition to regular backups. or "snapshot because we
>>just had an incident and might need this for forensics in a few
>>months" (can also be solved with backups, of course ;)).
>>the main difference between the two is - for short-lived snapshots
>>performance implications related to snapshots existing are not that
>>important. I can live with a few hours of degraded performance, if
>>the snapshot is part of some important process/work flow. with long-
>>lived snapshots there is a requirement for them to not hurt
>>performance just by existing, because otherwise you can't use them.
>>there is a second reason why long-lived snapshots can be impossible
ok, so here, with the qcow2 format, performance shouldn't be a problem.
(That's the whole point of this patch: using the qcow2 format instead
of the basic, slow LVM snapshot.)
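To make the mechanism concrete, here is a minimal sketch of what taking such a snapshot boils down to. All device names are made up for illustration, and the commands are only assembled as strings (not executed): the overlay is a qcow2 image living on a freshly allocated LV, with the previous volume as its read-only backing file.

```shell
# Hypothetical volume group and volume names, for illustration only.
vg=vg0
base=/dev/$vg/vm-100-disk-0                    # current (raw) volume
snap=/dev/$vg/vm-100-disk-0-snap-pre-upgrade   # new LV holding the overlay

# 1) allocate a thick LV for the overlay: full virtual size (32G here)
#    plus ~100M of headroom for qcow2 metadata, as the patch does
alloc="lvcreate -n ${snap##*/} -L 32868m $vg"

# 2) format the LV as a qcow2 overlay whose backing file is the old volume
create="qemu-img create -b $base -F raw -f qcow2 $snap"

printf '%s\n%s\n' "$alloc" "$create"
```

After this, the guest writes go to the qcow2 overlay at native qcow2 speed; the old volume is never modified while the snapshot exists.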
>>if you need to decide up-front how "big" the delta of that snapshot
>>can grow at most, then in PVE context, you always need to allocate
>>the full volume size (regular thick LVM had both issues - bad
>>performance, and new writes going into a thick snapshot volume).
About thick snapshot volumes: technically, it would be possible to
create an LVM volume smaller than the qcow2 virtual size and extend it
dynamically; oVirt does it this way. (I sent a preliminary patch in
September, but for now I'd like to keep it simple with thick snapshot
volumes.)
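The oVirt-style thin allocation mentioned above can be sketched roughly as follows. All numbers and names are invented for illustration; in practice the write offset would come from qemu (e.g. a write-threshold event), and the LV would be grown with lvextend before the overlay fills up.

```shell
# Hypothetical sizes in MiB, chosen only to illustrate the logic.
virtual_size=$((32 * 1024))  # qcow2 virtual size
lv_size=1024                 # current size of the LV under the overlay
write_offset=900             # highest qcow2 write offset (e.g. from qemu)
chunk=1024                   # extension step
threshold=$((lv_size * 80 / 100))

# Grow the LV by one chunk once the overlay approaches the LV's end,
# capping at the full virtual size.
if [ "$write_offset" -ge "$threshold" ]; then
    new_size=$((lv_size + chunk))
    if [ "$new_size" -gt "$virtual_size" ]; then new_size=$virtual_size; fi
    echo "lvextend -L ${new_size}m vg0/vm-100-disk-0-snap-s1"
fi
```

The trade-off is exactly the one discussed in the thread: thin allocation saves space but adds a monitoring/extension mechanism, while thick allocation (what this patch does for now) is simple but reserves the full size up front.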
>>if you can support long-lived snapshots, then you automatically also
>>support short-lived snapshots. the other direction doesn't hold.
>>since PVE only has one kind of snapshots, they need to be useful for
>>long-lived snapshots.
ok got it.
> > > A is not something we want. we intentionally don't have non-thin
> > > LVM
> > > snapshots for example.
>
> AFAIK, we never implemented it because LVM snapshots are slow as
> hell. (LVM extents are around 4MB, so if you want to write 4k to a
> snapshot, you need to re-read and re-write the whole 4MB, so around
> 1000x write amplification and slow IOPS.)
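As a quick sanity check on the "around 1000x" figure quoted above, assuming the 4MB chunk and 4k write sizes mentioned there:

```shell
# Worst-case copy-on-write amplification for classic LVM snapshots:
# a 4 KiB guest write forces a 4 MiB chunk to be read and rewritten.
chunk=$((4 * 1024 * 1024))   # 4 MiB snapshot chunk
write=$((4 * 1024))          # 4 KiB guest write
amplification=$((chunk / write))
echo "$amplification"        # -> 1024
```

qcow2 overlays avoid this by tracking allocation at cluster granularity (64k by default, finer with extended_l2), so small writes don't trigger multi-megabyte copies.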
>>see above - there's two issues, one is performance, the other is that
>>you need to either
>>- make the snapshot smaller than the original volume (and risk
>>running out of space)
>>- make the snapshot as big as the original volume (and blow up space
>>requirements)
>>
>>(thick) LVM snapshots basically barely work for the "take a
>>consistent backup during quiet periods" use case, and not much else.
> This is really the main blocker for my customers migrating from
> VMware (and to be honest, I have some of them going to Oracle OLVM
> (with oVirt), because oVirt supports it this way).
> > > B once I create a single snapshot, the "original" storage only
> > > contains the data written up to that point, anything else is
> > > stored
> > > on the "snapshot" storage. this means my snapshot storage must be
> > > at
> > > least as fast/good/shared/.. as my original storage. in that
> > > case, I
> > > can just use the snapshot storage directly and ditch the original
> > > storage?
>
> Sorry, but I don't understand why you are talking about
> original/snapshot storage? I never thought of using another storage
> for external snapshots.
>
> The patch really adds external snapshots on the same LVM storage,
> through an additional LVM volume, but with the qcow2 format to get
> good performance (vs. the slow LVM snapshot).
see above - I misread and answered too quickly.
>>I took a closer look and replied inline below with some comments -
>>much of it mimics the comments for the dir plugin..
(I'll do a full rework with snapshot names like for the dir plugin.)
> > Signed-off-by: Alexandre Derumier <alexandre.derumier at groupe-
> > cyllene.com>
> > ---
> > src/PVE/Storage.pm | 2 +
> > src/PVE/Storage/LvmQcow2Plugin.pm | 460 ++++++++++++++++++++++++++++++
> > src/PVE/Storage/Makefile | 3 +-
> > 3 files changed, 464 insertions(+), 1 deletion(-)
> > create mode 100644 src/PVE/Storage/LvmQcow2Plugin.pm
> >
> > diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> > index 57b2038..119998f 100755
> > --- a/src/PVE/Storage.pm
> > +++ b/src/PVE/Storage.pm
> > @@ -28,6 +28,7 @@ use PVE::Storage::Plugin;
> > use PVE::Storage::DirPlugin;
> > use PVE::Storage::LVMPlugin;
> > use PVE::Storage::LvmThinPlugin;
> > +use PVE::Storage::LvmQcow2Plugin;
> > use PVE::Storage::NFSPlugin;
> > use PVE::Storage::CIFSPlugin;
> > use PVE::Storage::ISCSIPlugin;
> > @@ -54,6 +55,7 @@ our $KNOWN_EXPORT_FORMATS = ['raw+size',
> > 'tar+size', 'qcow2+size', 'vmdk+size',
> > PVE::Storage::DirPlugin->register();
> > PVE::Storage::LVMPlugin->register();
> > PVE::Storage::LvmThinPlugin->register();
> > +PVE::Storage::LvmQcow2Plugin->register();
> > PVE::Storage::NFSPlugin->register();
> > PVE::Storage::CIFSPlugin->register();
> > PVE::Storage::ISCSIPlugin->register();
> > diff --git a/src/PVE/Storage/LvmQcow2Plugin.pm b/src/PVE/Storage/LvmQcow2Plugin.pm
> > new file mode 100644
> > index 0000000..68c8686
> > --- /dev/null
> > +++ b/src/PVE/Storage/LvmQcow2Plugin.pm
> > @@ -0,0 +1,460 @@
> > +package PVE::Storage::LvmQcow2Plugin;
> > +
> > +use strict;
> > +use warnings;
> > +
> > +use IO::File;
> > +
> > +use PVE::Tools qw(run_command trim);
> > +use PVE::Storage::Plugin;
> > +use PVE::Storage::LVMPlugin;
> > +use PVE::JSONSchema qw(get_standard_option);
> > +
> > +use base qw(PVE::Storage::LVMPlugin);
>>could we integrate this into the LVM plugin if we implement it?
>>basically add the "snapext" option, which is fixed, and if it is set,
>>disallow rootdir?
Yes, sure. I wanted to keep it separate for now to avoid putting "if
snapext" everywhere (and also the lvmthin plugin inherits from the lvm
plugin), but I can merge it, no problem.
>>probably snapext should also be fixed for dir(-based) storages, since
>>toggling it when snapshots exist would break a ton of stuff?
yes, indeed.
> > +
> > +# Configuration
> > +
> > +sub type {
> > + return 'lvmqcow2';
> > +}
> > +
> > +sub plugindata {
> > + return {
> > + #container not yet implemented #need to implemented dm-qcow2
> > + content => [ {images => 1, rootdir => 1}, { images => 1 }],
>>then rootdir shouldn't be mentioned here at all? the first member
>>contains the possible content types, the second the default if no
>>explicit ones are set..
ah ok, sorry.
> > + };
> > +}
> > +
> > +sub properties {
> > + return {
> > + };
> > +}
> > +
> > +sub options {
> > + return {
> > + vgname => { fixed => 1 },
> > + nodes => { optional => 1 },
> > + shared => { optional => 1 },
> > + disable => { optional => 1 },
> > + saferemove => { optional => 1 },
> > + saferemove_throughput => { optional => 1 },
> > + content => { optional => 1 },
> > + base => { fixed => 1, optional => 1 },
> > + tagged_only => { optional => 1 },
> > + bwlimit => { optional => 1 },
> > + snapext => { fixed => 1 },
> > + };
> > +}
> > +
> > +# Storage implementation
> > +
> > +sub parse_volname {
> > + my ($class, $volname) = @_;
> > +
> > + PVE::Storage::Plugin::parse_lvm_name($volname);
> > + my $format = $volname =~ m/^(.*)-snap-/ ? 'qcow2' : 'raw';
> > +
> > + if ($volname =~ m/^((vm|base)-(\d+)-\S+)$/) {
> > + return ('images', $1, $3, undef, undef, $2 eq 'base',
> > $format);
> > + }
>>I wonder if here we also want to keep the volid/volname like it is,
>>but name the LVs for snapshots differently?
maybe like zfs/rbd/btrfs snapshots, with volume@snapX for example?
> > +
> > + die "unable to parse lvm volume name '$volname'\n";
> > +}
> > +
> > +sub filesystem_path {
> > + my ($class, $scfg, $volname, $snapname, $current_snap) = @_;
> > +
> > + my ($vtype, $name, $vmid) = $class->parse_volname($volname);
> > +
> > + my $vg = $scfg->{vgname};
> > +
> > + my $path = "/dev/$vg/$name";
> > +
> > + if($snapname) {
> > + $path = get_snap_volname($path, $snapname);
> > + } elsif ($current_snap) {
> > + $path = $current_snap->{file};
> > + }
see comment for the dir storage ;)
> > +
> > + return wantarray ? ($path, $vmid, $vtype) : $path;
> > +}
> > +
> > +sub create_base {
> > + my ($class, $storeid, $scfg, $volname) = @_;
> > +
> > + my $vg = $scfg->{vgname};
nit: this could move below, closer to where it is used..
> > +
> > + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> > + $class->parse_volname($volname);
> > +
> > + die "create_base not possible with base image\n" if $isBase;
> > +
> > + die "unable to create base volume - found snapshot" if $class-
> > > snapshot_exist($scfg, $storeid, $volname);
> > +
> > + my $newname = $name;
> > + $newname =~ s/^vm-/base-/;
> > +
> > + my $cmd = ['/sbin/lvrename', $vg, $volname, $newname];
> > + run_command($cmd, errmsg => "lvrename '$vg/$volname' =>
> > '$vg/$newname' error");
> > +
> > + # set inactive, read-only flags
> > + $cmd = ['/sbin/lvchange', '-an', '-pr', "$vg/$newname"];
> > + eval { run_command($cmd); };
> > + warn $@ if $@;
> > +
> > + my $newvolname = $newname;
> > +
> > + return $newvolname;
> > +}
> > +
> > +sub clone_image {
> > + my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
> > +
> > + die "can't clone images in lvm storage\n";
> > +}
> > +
> > +sub alloc_image {
> > + my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> > +
> > + die "unsupported format '$fmt'" if $fmt ne 'raw';
>>here I also wonder whether it wouldn't be "easier" to only allocate
>>qcow2 formatted LVs (also for the initial allocation)?
>>
>>otherwise, this is basically alloc_image from the LVMPlugin, just
>>with the extra tags added, which could also be done where the
>>snapshots are handled further below..
If possible, I would like to keep raw support too (to be able to
easily enable snapshots on an existing storage without needing to
convert TBs of data), and for performance too.
I'll test with and without it to see if the code is really more complex.
> > +
> > + die "illegal name '$name' - should be 'vm-$vmid-*'\n"
> > + if $name && $name !~ m/^vm-$vmid-/;
> > +
> > + my $vgs = PVE::Storage::LVMPlugin::lvm_vgs();
> > +
> > + my $vg = $scfg->{vgname};
> > +
> > + die "no such volume group '$vg'\n" if !defined ($vgs->{$vg});
> > +
> > + my $free = int($vgs->{$vg}->{free});
> > +
> > + die "not enough free space ($free < $size)\n" if $free <
> > $size;
> > +
> > + $name = $class->find_free_diskname($storeid, $scfg, $vmid)
> > + if !$name;
> > +
> > + my $tags = ["pve-vm-$vmid"];
> > + if ($name =~ m/^(((vm|base)-(\d+)-disk-(\d+)))(-snap-(.*))?/) {
> > + push @$tags, "\@pve-$1";
> > + }
>>I don't like this part for two reasons:
>>
>>1. without this, alloc_image is identical to LVMPlugin's (see
>>above/below)
>>2. the way the snapshot is encoded in the volname means I can "pvesm
>>alloc" something with a snapshot volname and break things..
>>
>>I think this should be refactored:
>>- alloc_image should allow (only?) qcow2
>>- it should use a custom helper for the actual lvcreate, but be
>>restricted to "proper" volume names
>>- it should use another helper for the qcow2 formatting
>>- volume_snapshot should use the same helpers, but call them with a
>>different LV name
>>
>>the same also applies to free_image, for similar reasons (don't allow
>>to call free_image with a snapshot directly, but use a common helper
>>for free_image and volume_snapshot_delete)
ok !
> > +
> > + PVE::Storage::LVMPlugin::lvcreate($vg, $name, $size, $tags);
> > +
> > + return $name;
> > +}
> > +
> > +sub volume_snapshot_info {
> > + my ($class, $scfg, $storeid, $volname) = @_;
> > +
> > + return $class->list_snapshots($scfg, $storeid, $volname);
why have two public subs for this? the $current_only would not be
needed if the volume itself would also be the current snapshot..
> > +}
> > +
> > +sub activate_volume {
> > + my ($class, $storeid, $scfg, $volname, $snapname, $cache) =
> > @_;
> > +
> > + my $lvm_activate_mode = 'ey';
> > + my $tag = undef;
$tag is undef
> > +
> > + #activate volume && all volumes snapshots by tag
> > + if($volname =~ m/^(((vm|base)-(\d+)-disk-(\d+)))(-snap-
> > (.*))?/)
> > {
> > + $tag = "\@pve-vm-$4-disk-$5";
here only the disk itself is put into the tag, the optional snap part
isn't.. and $snapname is ignored as well
> > + }
and what if the regex didn't match?
> > +
> > + my $cmd = ['/sbin/lvchange', "-a$lvm_activate_mode", $tag];
>>so this will only ever activate the "main" volume?
All snapshots have the same tag (the main volume name), so this will
activate every snapshot.
> > + run_command($cmd, errmsg => "can't activate LV '$tag'");
> > +
> > + $cmd = ['/sbin/lvchange', '--refresh', $tag];
> > + run_command($cmd, errmsg => "can't refresh LV '$tag' for
> > activation");
this should
- not use $volname to transfer $snapname, but pass in the volname
contained in the volid
- use $snapname ;)
> > +}
> > +
> > +sub deactivate_volume {
> > + my ($class, $storeid, $scfg, $volname, $snapname, $cache) =
> > @_;
> > +
> > + my $tag = undef;
> > + #deactivate volume && all volumes snasphots by tag
> > + if($volname =~ m/^(((vm|base)-(\d+)-disk-(\d+)))(-snap-(.*))?/) {
> > + $tag = "\@pve-vm-$4-disk-$5";
> > + }
same as for activate_volume applies here as well..
> > +
> > + my $cmd = ['/sbin/lvchange', '-aln', $tag];
> > + run_command($cmd, errmsg => "can't deactivate LV '$tag'");
> > +}
> > +
> > +sub volume_resize {
> > + my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
> > +
> > + #we should resize the base image and parents snapshots,
> > + #but how to manage rollback ?
> > +
> > + die "can't resize if snasphots exist" if $class-
> > > snapshot_exist($scfg, $storeid, $volname);
>>I don't think qemu requires backing file and overlay sizes to agree -
>>just that if you write back (commit) up the chain, you might need to
>>resize the backing file to accommodate the additional data/space. so
>>resizing should be fine (in theory at least)? also see the docs for
>>`qemu-img commit`.
I'll try, thanks!
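For reference, the flow Fabian describes could look like the sketch below. Paths are hypothetical and the commands are only assembled as strings: the overlay can be resized on its own, and only before committing it back into the raw backing volume does the backing volume itself need to be grown.

```shell
# Hypothetical device paths for a qcow2 overlay on an LV over a raw base.
overlay=/dev/vg0/vm-100-disk-0-snap-s1
backing=/dev/vg0/vm-100-disk-0

# Growing the overlay's virtual size does not require touching the backing file.
resize_overlay="qemu-img resize $overlay 64G"

# Only before merging the overlay down must the backing file match the new size.
resize_backing="qemu-img resize -f raw $backing 64G"
commit="qemu-img commit $overlay"

printf '%s\n' "$resize_overlay" "$resize_backing" "$commit"
```

(On LVM the `qemu-img resize` of the raw volume would of course be an `lvextend` of the underlying LV instead; the point is only the ordering relative to `commit`.)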
> > +
> > + return 1;
> > +}
> > +
> > +sub volume_snapshot {
> > + my ($class, $scfg, $storeid, $volname, $snap) = @_;
> > +
> > + $class->activate_volume($storeid, $scfg, $volname, undef, {});
> > +
> > + my $current_path = $class->path($scfg, $volname, $storeid);
> > + my $current_format =
> > (PVE::Storage::Plugin::file_size_info($current_path))[1];
> > + my $snappath = get_snap_volname($current_path, $snap);
> > +
> > + my $snapvolname = get_snap_volname($volname, $snap);
> > + #allocate lvm snapshot volume
> > + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> > + $class->parse_volname($volname);
> > + my $size = $class->volume_size_info($scfg, $storeid, $volname,
> > 5);
> > + #add 100M for qcow2 headers
> > + $size = int($size/1024) + (100*1024);
a pointer where that 100M comes from would be nice ;)
> > +
> > + $class->alloc_image($storeid, $scfg, $vmid, 'raw',
> > $snapvolname,
> > $size);
so this could instead use the regular alloc_image from stock LVM, and
then set the tags here if we want to keep alloc_image as raw-only..
which I don't think we really want ;)
> > +
> > + # create the qcow2 fs
> > + eval {
> > + my $cmd = ['/usr/bin/qemu-img', 'create', '-b',
> > $current_path,
> > + '-F', $current_format, '-f', 'qcow2',
> > $snappath];
> > + my $options = "extended_l2=on,";
> > + $options .=
> > PVE::Storage::Plugin::preallocation_cmd_option($scfg, 'qcow2');
> > + push @$cmd, '-o', $options;
> > + run_command($cmd);
> > + };
see comment for alloc_image above..
> > + if ($@) {
> > + eval { $class->free_image($storeid, $scfg, $snapvolname, 0) };
I guess this is okay, but it would read a bit cleaner if this would
call volume_snapshot_delete..
> > + warn $@ if $@;
> > + }
> > +}
> > +
> > +# Asserts that a rollback to $snap on $volname is possible.
> > +# If certain snapshots are preventing the rollback and $blockers is an array
> > +# reference, the snapshot names can be pushed onto $blockers prior to dying.
> > +sub volume_rollback_is_possible {
> > + my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
> > +
> > + my $path = $class->filesystem_path($scfg, $volname);
> > + my $snappath = get_snap_volname($path, $snap);
> > + my $currentpath = $class->path($scfg, $volname, $storeid);
> > + return 1 if $currentpath eq $snappath;
> > +
> > + die "can't rollback, '$snap' is not most recent snapshot on
> > '$volname'\n";
> > +
> > + return 1;
> > +}
same comments as for the dir-based patches apply here as well - if at
all possible, having a 1:1 mapping of snapshot name to LV name would be
great.. other than LVs not being hardlinkable, I think the same
considerations apply there as well..
> > +
> > +sub volume_snapshot_rollback {
> > + my ($class, $scfg, $storeid, $volname, $snap) = @_;
> > +
> > + $class->activate_volume($storeid, $scfg, $volname, undef, {});
> > + #simply delete the current snapshot and recreate it
> > +
> > + my $snapvolname = get_snap_volname($volname, $snap);
> > +
> > + $class->free_image($storeid, $scfg, $snapvolname, 0);
> > + $class->volume_snapshot($scfg, $storeid, $volname, $snap);
> > +}
> > +
> > +sub volume_snapshot_delete {
> > + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
> > +
> > + return 1 if $running;
> > +
> > + $class->activate_volume($storeid, $scfg, $volname, undef, {});
> > +
> > + my $snapshots = $class->volume_snapshot_info($scfg, $storeid,
> > $volname);
> > + my $snappath = $snapshots->{$snap}->{file};
> > + if(!$snappath) {
> > + warn "$snap already deleted. skip\n";
> > + return;
> > + }
how can this happen? the snapshot info is generated by querying LVM for
a list of LVs..
> > +
> > + my $snapvolname = $snapshots->{$snap}->{volname};
> > + my $parentsnap = $snapshots->{$snap}->{parent};
> > + my $childsnap = $snapshots->{$snap}->{child};
> > + die "error: can't find a parent for this snapshot" if
> > !$parentsnap;
but the first snapshot doesn't have a parent?
> > +
> > + my $parentpath = $snapshots->{$parentsnap}->{file};
> > + my $parentformat = $snapshots->{$parentsnap}->{'format'} if $parentsnap;
> > + my $childpath = $snapshots->{$childsnap}->{file} if $childsnap;
>>unless someone manually messed with the snapshot tree, in the current
>>scheme any "snapshot" has a child?
the $snapshots hash has the full chain, including the base image.
I think that's why I check it.
> > + my $childformat = $snapshots->{$childsnap}->{'format'} if $childsnap;
> > +
> > + print "merge snapshot $snap to $parentsnap\n";
> > + my $cmd = ['/usr/bin/qemu-img', 'commit', $snappath];
> > + run_command($cmd);
> > +
> > + #if we delete an intermediate snapshot, we need to link upper snapshot to base snapshot
> > + if($childpath && -e $childpath) {
> > + die "missing parentsnap snapshot to rebase child $childpath\n" if !$parentpath;
> > + print "link $childsnap to $parentsnap\n";
> > + $cmd = ['/usr/bin/qemu-img', 'rebase', '-u', '-b', $parentpath, '-F', $parentformat, '-f', $childformat, $childpath];
> > + run_command($cmd);
> > + }
>>same here, commit + rebase -u should be the same as rebase ?
AFAIK, rebase is mostly used for multi-branch setups: when you delete
a snapshot, you may need to merge its content into multiple child
snapshots. commit is for when you need to merge into the parent.
https://lists.nongnu.org/archive/html/qemu-devel/2019-08/msg04043.html
From what I have read in different places, rebase is slower:
https://lists.defectivebydesign.org/archive/html/qemu-discuss/2019-08/msg00041.html
"Generally, rebase is going to be slower because it reads some clusters
and compares the old with the new backing file to see whether they are
the same. commit will not do that. (OTOH, if there are many clusters
in the old backing chain that happen to contain the same data as the
new
one, this will save space, because it won’t copy those clusters from
the
old backing chain.)
"
> > +
> > + #delete the snapshot
> > + $class->free_image($storeid, $scfg, $snapvolname, 0);
> > +
> > + return;
> > +}
> > +
> > +sub volume_has_feature {
> > + my ($class, $scfg, $feature, $storeid, $volname, $snapname,
> > $running) = @_;
> > +
> > + my $features = {
> > + snapshot => { current => 1 },
> > +# clone => { base => 1, snap => 1}, #don't allow to clone as we
> > can't activate the base between different host ?
>>that's only true for shared LVM though, and the rest would also work
>>for local LVM?
Yes, for local LVM it'll work. I'll check the shared option.
> > + template => { current => 1},
> > + copy => { base => 1, current => 1, snap => 1},
> > + sparseinit => { base => 1, current => 1},
> > + rename => {current => 1},
> > + };
> > +
> > + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> > + $class->parse_volname($volname);
> > +
> > + my $key = undef;
> > + if($snapname){
> > + $key = 'snap';
> > + }else{
> > + $key = $isBase ? 'base' : 'current';
> > + }
> > + return 1 if $features->{$feature}->{$key};
> > +
> > + return undef;
> > +}
> > +
> > +sub get_snap_volname {
> > + my ($path, $snap) = @_;
> > +
> > + my $basepath = "";
> > + my $baseformat = "";
> > + if ($path =~ m/^((.*)((vm|base)-(\d+)-disk-(\d+)))(-snap-([a-zA-Z0-9]+))?(\.(raw|qcow2))?/) {
> > + $basepath = $1;
> > + $baseformat = $8;
> > + }
> > + my $snapvolname = $basepath."-snap-$snap.qcow2";
> > + return $snapvolname;
> > +}
> > +
> > +sub get_snapname_from_path {
> > + my ($path) = @_;
> > +
> > + if ($path =~ m/^((.*)((vm|base)-(\d+)-disk-(\d+)))(-snap-([a-zA-Z0-9]+))?(\.(raw|qcow2))?/) {
> > + my $snapname = $7;
> > + return $snapname;
> > + }
> > + die "can't parse snapname from path $path";
> > +}
> > +
> > +sub get_current_snapshot {
> > + my ($class, $scfg, $storeid, $volname) = @_;
> > +
> > + #get more recent ctime volume
> > + return $class->list_snapshots($scfg, $storeid, $volname, 1);
> > +}
> > +my $check_tags = sub {
> > + my ($tags) = @_;
> > +
> > + return defined($tags) && $tags =~ /(^|,)pve-vm-\d+(,|$)/;
> > +};
> > +
> > +sub list_images {
> > + my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
> > +
> > + my $vgname = $scfg->{vgname};
> > +
> > + $cache->{lvs} = PVE::Storage::LVMPlugin::lvm_list_volumes() if
> > !$cache->{lvs};
> > +
> > + my $res = [];
> > +
> > + if (my $dat = $cache->{lvs}->{$vgname}) {
> > +
> > + foreach my $volname (keys %$dat) {
> > +
> > + next if $volname !~ m/^(vm|base)-(\d+)-/;
> > + my $owner = $2;
> > +
> > + my $info = $dat->{$volname};
> > +
> > + next if $scfg->{tagged_only} && !&$check_tags($info->{tags});
> > +
> > + # Allow mirrored and RAID LVs
> > + next if $info->{lv_type} !~ m/^[-mMrR]$/;
> > +
> > + my $volid = "$storeid:$volname";
> > +
> > + if ($vollist) {
> > + my $found = grep { $_ eq $volid } @$vollist;
> > + next if !$found;
> > + } else {
> > + next if defined($vmid) && ($owner ne $vmid);
> > + }
> > +
> > + push @$res, {
> > + volid => $volid, format => 'raw', size => $info->{lv_size}, vmid => $owner,
> > + ctime => $info->{ctime},
> > + };
>>but doesn't this now include all snapshot LVs as well? while
>>pretending they are raw?
Yes, I was not sure here whether we want to display snapshot volumes or not?
> > + }
> > + }
> > +
> > + return $res;
> > +}
> > +
> > +sub list_snapshots {
> > + my ($class, $scfg, $storeid, $volname, $current_only) = @_;
> > +
> > + my $vgname = $scfg->{vgname};
> > +
> > + my $basevolname = $volname;
> > + my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vgname);
this
> > +
> > + my $vg = $lvs->{$vgname};
and this seem to be unused?
> > +
> > + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase,
> > $format) = $class->parse_volname($volname);
> > + my $snapshots = $class->list_images($storeid, $scfg, $vmid);
> > +
> > + my $info = {};
> > + for my $snap (@$snapshots) {
> > + my $snap_volid = $snap->{volid};
> > + next if ($snap_volid !~ m/$basevolname/);
same issue as with the dir patch - this allows partial matching if two
volumes share a name prefix
> > +
> > + my $snapname = get_snapname_from_path($snap_volid);
> > + my (undef, $snap_volname) =
> > PVE::Storage::parse_volume_id($snap_volid);
> > + my $snapfile = $class->filesystem_path($scfg, $snap_volname,
> > $snapname);
> > + $snapname = 'base' if !$snapname;
> > + $info->{$snapname}->{file} = $snapfile;
> > + $info->{$snapname}->{volname} = $snap_volname;
> > + $info->{$snapname}->{volid} = $snap_volid;
> > + $info->{$snapname}->{ctime} = $snap->{ctime};
> > +
> > + if (!$current_only) {
> > + my (undef, $format, undef, $parentfile, undef) =
> > PVE::Storage::Plugin::file_size_info($snapfile);
> > + next if !$parentfile && $snapname ne 'base'; #bad unlinked
> > snasphot
> > +
> > + my $parentname = get_snapname_from_path($parentfile) if
> > $parentfile;
> > + $parentname = 'base' if !$parentname && $parentfile;
> > +
> > + $info->{$snapname}->{'format'} = $format;
> > + $info->{$snapname}->{parent} = $parentname if $parentname;
> > + $info->{$parentname}->{child} = $snapname if $parentname;
> > + }
> > + }
> > +
> > + my @snapshots_sorted = sort { $info->{$b}{ctime} <=> $info->{$a}{ctime} } keys %$info;
> > + my $current_snapname = $snapshots_sorted[0];
> > + my $current_snapshot = $info->{$current_snapname};
> > + return $current_snapshot if $current_only;
this (returning to hashes with different structure) is easy to miss and
get wrong..
> > +
> > + $info->{current} = { %$current_snapshot };
especially if this is done anyway, so the caller can just look at that
if they only want the current snapshot..
> > + return $info;
> > +}
> > +
> > +sub snapshot_exist {
> > + my ($class, $scfg, $storeid, $volname) = @_;
> > +
> > + my $basepath = $class->filesystem_path($scfg, $volname);
> > + my $currentpath = $class->path($scfg, $volname, $storeid);
> > +
> > + die "can't resize if snasphots exist" if $currentpath ne
> > $basepath;
I think something here is wrong ;)
> > +
> > +}
> > +1;
> > diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
> > index d5cc942..1af8aab 100644
> > --- a/src/PVE/Storage/Makefile
> > +++ b/src/PVE/Storage/Makefile
> > @@ -14,7 +14,8 @@ SOURCES= \
> > PBSPlugin.pm \
> > BTRFSPlugin.pm \
> > LvmThinPlugin.pm \
> > - ESXiPlugin.pm
> > + ESXiPlugin.pm \
> > + LvmQcow2Plugin.pm
> >
> > .PHONY: install
> > install:
> > --
> > 2.39.2