[pve-devel] [PATCH guest-common 4/6] Get snapshots when no state is available.
Wolfgang Bumiller
w.bumiller at proxmox.com
Thu Dec 14 10:38:16 CET 2017
On Thu, Dec 14, 2017 at 08:24:50AM +0100, Thomas Lamprecht wrote:
> On 12/13/2017 03:46 PM, Wolfgang Link wrote:
> > ---
> > PVE/Replication.pm | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/PVE/Replication.pm b/PVE/Replication.pm
> > index dac0994..e69b0c2 100644
> > --- a/PVE/Replication.pm
> > +++ b/PVE/Replication.pm
> > @@ -54,6 +54,13 @@ sub find_common_replication_snapshot {
> > ($last_snapshots->{$volid}->{$parent_snapname} &&
> > $remote_snapshots->{$volid}->{$parent_snapname})) {
> > $base_snapshots->{$volid} = $parent_snapname;
> > + } elsif ($last_sync == 0) {
> > + foreach my $remote_snap (keys %{$remote_snapshots->{$volid}}) {
> > + # There should only be one snapshot which is equal;
>
> But in theory there can be more? At least it seems so when looking at
> prepare below.
> Patch 3/6 changes prepare() to only delete replication snapshots if
> $last_sync is set - which then contradicts the comment at the start of
> the prepare sub (please address that with 3/6), so the remote side may
> return more; which one is used then, and why?
> Note, I'm not too deep into the replica stack, as I wasn't present at
> most of the internal design talks and there seems to be no general
> design document available, so I may also just be missing something :)
So in theory, when the process starts, there could be 2 snapshots if the
node died after the send/recv but before the cleanup, but the code a
little further up (the prepare calls) should take care of removing the
old snapshots.
Still, a check here that dies with an internal error shouldn't be
harmful, as it might catch improperly handled error cases in the
prepare calls...
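
For illustration, a minimal, untested sketch of such a guard (variable
names taken from the hunk below; the error message is made up):

    } elsif ($last_sync == 0) {
        # After the prepare() calls above, at most one common replication
        # snapshot should remain; more than one means an error case was
        # not cleaned up properly.
        my @common = grep { defined($last_snapshots->{$volid}->{$_}) }
            keys %{$remote_snapshots->{$volid}};
        die "internal error: multiple common base snapshots for $volid\n"
            if scalar(@common) > 1;
        $base_snapshots->{$volid} = $common[0] if @common;
    }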
>
> > + if (defined($last_snapshots->{$volid}->{$remote_snap})) {
> > + $base_snapshots->{$volid} = $remote_snap;
> > + }
> > + }
> > }
> > }
> > }
> >