[pve-devel] [PATCH v3 guest-common 1/1] guest helpers: add run_with_replication_guard

Fabian Grünbichler f.gruenbichler at proxmox.com
Tue Feb 22 11:27:42 CET 2022


On February 22, 2022 10:41 am, Fabian Ebner wrote:
> Am 21.02.22 um 12:58 schrieb Fabian Ebner:
>> Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
>> ---
>> 
>> New in v3.
>> 
>>  src/PVE/GuestHelpers.pm | 15 ++++++++++++++-
>>  1 file changed, 14 insertions(+), 1 deletion(-)
>> 
>> diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
>> index 970c460..1183819 100644
>> --- a/src/PVE/GuestHelpers.pm
>> +++ b/src/PVE/GuestHelpers.pm
>> @@ -3,8 +3,9 @@ package PVE::GuestHelpers;
>>  use strict;
>>  use warnings;
>>  
>> -use PVE::Tools;
>> +use PVE::ReplicationConfig;
>>  use PVE::Storage;
>> +use PVE::Tools;
>>  
>>  use POSIX qw(strftime);
>>  use Scalar::Util qw(weaken);
>> @@ -82,6 +83,18 @@ sub guest_migration_lock {
>>      return $res;
>>  }
>>  
>> +sub run_with_replication_guard {
>> +    my ($vmid, $timeout, $log, $func, @param) = @_;
>> +
>> +    my $repl_conf = PVE::ReplicationConfig->new();
>> +    if ($repl_conf->check_for_existing_jobs($vmid, 1)) {
>> +	$log->("checking/waiting for active replication..") if $log;
>> +	guest_migration_lock($vmid, $timeout, $func, @param);
> 
> I wonder if we should unconditionally take the lock? If not, we can race
> with a newly created replication job:
> 1. snapshot deletion starts
> 2. replication job is created
> 3. replication job starts
> 4. snapshot deletion runs into 'dataset is busy' error, because snapshot
> is used by replication

that could also be solved by lock_config on the guest config when 
creating/modifying a replication job, but unconditionally trying to 
obtain the lock is also fine by me.
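
for illustration, a lock_config based guard could look roughly like the 
sketch below. this is only a sketch: create_replication_job and its body 
are made up here; PVE::QemuConfig->lock_config and 
PVE::ReplicationConfig->check_for_existing_jobs are the existing helpers.

```perl
use PVE::QemuConfig;
use PVE::ReplicationConfig;

# hypothetical sketch, not part of the patch: serialize replication job
# creation against other guest operations via the guest config lock, so
# a snapshot deletion (which also locks the config) cannot race with it
sub create_replication_job {
    my ($vmid, $jobcfg) = @_;

    PVE::QemuConfig->lock_config($vmid, sub {
	my $repl_conf = PVE::ReplicationConfig->new();
	die "replication job for $vmid already exists\n"
	    if $repl_conf->check_for_existing_jobs($vmid, 1);
	# ... persist $jobcfg to the replication config here ...
    });
}
```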

> IIRC Thomas didn't want the log line to be printed when there is no
> replication configured, and we can still do that if we go for the
> "unconditionally acquire lock" approach (it should get the lock quickly
> except in the above edge case), but it would mean checking the
> replication config just for that. My suggestion would be to get rid of
> this helper, not log anything in the cases with 10s timeout, and log if
> replication is configured in the backup case with 600s timeout.

for the long timeout case, the following would also work - or the 
existing helper could switch to something like this, if we want to 
keep it after all:

my $lock_obtained;
my $do_delete = sub {
    $lock_obtained = 1;
    ...; # the actual deletion
};

eval {
    # no log message for the fast path - use a really short timeout so
    # we fail quickly if an active replication holds the lock
    guest_migration_lock($vmid, 1, $do_delete);
};

if (my $err = $@) {
    # $lock_obtained tells us whether the error is failing to lock
    # or failing to delete
    die $err if $lock_obtained;

    $log->("checking/waiting for active replication..");
    guest_migration_lock($vmid, $timeout, $do_delete);
}

that way we don't have a delay warranting a log message if no 
replication is running (or even configured), but if one is running, 
we'll quickly fail to obtain the lock, and can retry with a message that 
explains the pause and a longer timeout..

> 
>> +    } else {
>> +	$func->(@param);
>> +    }
>> +}
>> +
>>  sub check_hookscript {
>>      my ($volid, $storecfg) = @_;
>>  
> 
