[pve-devel] [PATCH] Pass what storage ID is being used to vzdump hook scripts

Mark Casey markc at unifiedgroup.com
Fri Sep 13 11:55:04 CEST 2013

Wow. Yes, that is great. You should've seen what I'd done to 
exec_backup() and exec_backup_task() in the meantime, to test my script 
(it wasn't pretty)!  :)

I've attached the hook script this was needed for. The hook waits 
until the job-end phase and then rsyncs all of the new dumps to all of your 
other cluster nodes. If you run the hook on all nodes it is pretty effective. 
It uses rsync with --fuzzy, --delete-delay, and a custom --include 
list, so it has /some/ aspects of a differential backup and IMO is a fairly 
lightweight overlay, assuming you have the local disk space for it.
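
For reference, the include-list trick is just this (a minimal sketch, not the script itself; the VMIDs, path, and hostname are placeholders, and -n keeps rsync in dry-run mode):

```shell
# Build a per-VMID --include list, then the rsync command, the same way
# the hook does (placeholder VMIDs, path, and host; -n = dry run).
INCLUDES=""
for id in 100 101 113; do
    INCLUDES="$INCLUDES --include vzdump-*-$id-*"
done

# The trailing --exclude '*' drops everything not matched by an include,
# so only dumps/logs for the listed VMIDs are transferred.
CMD="rsync -nav --delete-delay --fuzzy$INCLUDES --exclude '*' /mnt/backups/dump/ other-node:/mnt/backups/dump/"
echo "$CMD"
```

Because rsync filter rules are first-match-wins, the includes must come before the catch-all exclude.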

I've been using this for a couple of years now (I have two other, less 
elegant versions [one in Perl, one in bash] that both called pvesh in a 
subshell and parsed its output to build the rsync command). I just finished 
porting the external pvesh calls to use the API libraries tonight, so there may 
still be a few spots of redundant code or things that could be cleaner, 
but I'm anxious to send it out and see if anyone else finds it useful.

I've also attached a sample run from the command line, with only the 
node hostnames sanitized. It only tries to rsync that many VMs in the 
sample because I've had a -n (dry run) in the rsync call while working on it. 
If I ran it once for real, it would handle yesterday's backups, and after 
that each run would only transfer the one new dump.

The sync will not run if the Storage ID that was used is marked Shared, or 
if it is enabled on only some nodes (it must either be enabled on all nodes 
or have no node restriction at all).

Hope this is of use somewhere, thanks,

On 9/13/2013 12:03 AM, Dietmar Maurer wrote:
>> My pleasure. Looks like I didn't quite do what I needed though (the hook script I
>> needed the new var for is a work in progress; just noticed the var still isn't
>> available in the hook stage I actually need).
> https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=7ed025e1d5ace4aede5785ad54168cf763991a95
> Does that work for you?

-------------- next part --------------
root at fake-host2:~# vzdump --node fake-host2 --mode snapshot --compress gzip --storage backups 113
INFO: starting new backup job: vzdump 113 --mode snapshot --compress gzip --storage backups --node fake-host2
INFO: HOOK: job-start 113
INFO: Starting Backup of VM 113 (openvz)
INFO: CTID 113 exist unmounted down
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: HOOK: backup-start stop 113
INFO: creating archive '/mnt/backups/dump/vzdump-openvz-113-2013_09_13-04_24_36.tar.gz'
INFO: Total bytes written: 436234240 (417MiB, 12MiB/s)
INFO: archive file size: 178MB
INFO: delete old backup '/mnt/backups/dump/vzdump-openvz-113-2013_09_13-03_50_20.tar.gz'
INFO: HOOK: backup-end stop 113
INFO: Finished Backup of VM 113 (00:00:39)
INFO: HOOK: log-end stop 113
INFO: HOOK: job-end stop 113
INFO: -------------------------------------------------------------------------------------------------
INFO: Starting vzdump sync of storage target 'backups' from fake-host2 to other Nodes...
INFO: -------------------------------------------------------------------------------------------------
INFO: Local VM and Container IDs to be synced are: 106 108 100 101 102 114 110 113.
INFO: Dumps will now be synced to node(s): fake-host1 ...
INFO: --- Syncing fake-host2 to fake-host1 ---
INFO: rsync -nav --delete-delay --fuzzy  --include vzdump-*-106-* --include vzdump-*-108-* --include vzdump-*-100-* --include vzdump-*-101-* --include vzdump-*-102-* --include vzdump-*-114-* --include vzdump-*-110-* --include vzdump-*-113-* --exclude '*' /mnt/backups/dump/ fake-host1:/mnt/backups/dump/
INFO: sending incremental file list
INFO: ./
INFO: vzdump-openvz-101-2013_09_12-19_30_02.log
INFO: vzdump-openvz-101-2013_09_12-19_30_02.tar.gz
INFO: vzdump-openvz-102-2013_09_12-19_32_26.log
INFO: vzdump-openvz-102-2013_09_12-19_32_26.tar.gz
INFO: vzdump-openvz-110-2013_09_12-19_36_46.log
INFO: vzdump-openvz-110-2013_09_12-19_36_46.tar.gz
INFO: vzdump-openvz-113-2013_09_12-17_06_55.log
INFO: vzdump-openvz-113-2013_09_12-17_08_40.log
INFO: vzdump-openvz-113-2013_09_12-17_09_32.log
INFO: vzdump-openvz-113-2013_09_13-03_03_48.log
INFO: vzdump-openvz-113-2013_09_13-03_09_11.log
INFO: vzdump-openvz-113-2013_09_13-03_52_13.log
INFO: vzdump-openvz-113-2013_09_13-03_52_13.tar.gz
INFO: vzdump-openvz-113-2013_09_13-03_53_50.log
INFO: vzdump-openvz-113-2013_09_13-03_53_50.tar.gz
INFO: vzdump-openvz-113-2013_09_13-04_24_36.log
INFO: vzdump-openvz-113-2013_09_13-04_24_36.tar.gz
INFO: vzdump-openvz-114-2013_09_12-19_38_22.log
INFO: vzdump-openvz-114-2013_09_12-19_38_22.tar.gz
INFO: vzdump-qemu-100-2013_09_12-19_47_01.log
INFO: vzdump-qemu-100-2013_09_12-19_47_01.vma.gz
INFO: vzdump-qemu-106-2013_09_12-20_46_42.log
INFO: vzdump-qemu-106-2013_09_12-20_46_42.vma.gz
INFO: vzdump-qemu-108-2013_09_12-21_28_07.log
INFO: vzdump-qemu-108-2013_09_12-21_28_07.vma.gz
INFO: deleting vzdump-qemu-108-2013_09_09-21_23_31.vma.gz
INFO: deleting vzdump-qemu-108-2013_09_09-21_23_31.log
INFO: deleting vzdump-qemu-106-2013_09_09-20_44_26.vma.gz
INFO: deleting vzdump-qemu-106-2013_09_09-20_44_26.log
INFO: deleting vzdump-qemu-100-2013_09_09-19_45_39.vma.gz
INFO: deleting vzdump-qemu-100-2013_09_09-19_45_39.log
INFO: deleting vzdump-openvz-114-2013_09_09-19_37_19.tar.gz
INFO: deleting vzdump-openvz-114-2013_09_09-19_37_19.log
INFO: deleting vzdump-openvz-113-2013_09_11-19_37_21.tar.gz
INFO: deleting vzdump-openvz-113-2013_09_11-19_37_21.log
INFO: deleting vzdump-openvz-113-2013_09_11-18_47_22.tar.gz
INFO: deleting vzdump-openvz-113-2013_09_11-18_47_22.log
INFO: deleting vzdump-openvz-113-2013_09_11-18_44_02.tar.gz
INFO: deleting vzdump-openvz-113-2013_09_11-18_44_02.log
INFO: deleting vzdump-openvz-110-2013_04_24-19_35_34.tar.gz
INFO: deleting vzdump-openvz-110-2013_04_24-19_35_34.log
INFO: deleting vzdump-openvz-102-2013_09_09-19_32_19.tar.gz
INFO: deleting vzdump-openvz-102-2013_09_09-19_32_19.log
INFO: deleting vzdump-openvz-101-2013_09_09-19_30_01.tar.gz
INFO: deleting vzdump-openvz-101-2013_09_09-19_30_01.log
INFO: sent 8390 bytes  received 1183 bytes  6382.00 bytes/sec
INFO: total size is 172629007780  speedup is 18032905.86 (DRY RUN)
INFO: --- Sync to fake-host1 complete ---
INFO: vzdump sync completed
INFO: Backup job finished successfully
root at fake-host2:~#
-------------- next part --------------
#!/usr/bin/perl -w
# vz-hook_cross-sync-all.pl
# VZDump hook script to cross-sync dumps to other nodes during job-end stage
# Copyright (C) 2013  Mark Casey
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# Portions of API access and hook code borrowed from Proxmox examples

use strict;
use PVE::API2Client;
use PVE::AccessControl;
use PVE::INotify;

use Data::Dumper;

print "HOOK: " . join (' ', @ARGV) . "\n";
my $phase = shift;

if ($phase eq 'job-start' ||
	$phase eq 'job-abort' ||
	$phase eq 'backup-start' ||
	$phase eq 'backup-end' ||
	$phase eq 'backup-abort' ||
	$phase eq 'log-end' ||
	$phase eq 'pre-stop' ||
	$phase eq 'pre-restart') {

	# Nothing to do in these phases

}
elsif ($phase eq 'job-end') {

# indentation reset

my $hostname = PVE::INotify::read_file("hostname");

# normally you would use username/password,
# but we can simply create a ticket and CSRF token if we are root
my $ticket = PVE::AccessControl::assemble_ticket('root at pam');
my $csrftoken = PVE::AccessControl::assemble_csrf_prevention_token('root at pam');

my $conn = PVE::API2Client->new (
	#username => 'root at pam',
	#password => 'yourpassword',
	ticket => $ticket,
	csrftoken => $csrftoken,
	host => $hostname );
my $storeid = $ENV{STOREID};
my $dumpdir = $ENV{DUMPDIR};
my @otherNodes = ();
my @IDsToSync = ();
my $includes = "";

my $res = $conn->get("api2/json/nodes", {});
my @nodes = @{$res->{data}};

for (my $i = $#nodes; $i > -1; $i--) {
	chomp( %{$nodes[$i]} );  # Chomp all hash values
	# Make a list of other Nodes in the cluster
	if ( $nodes[$i]->{'node'} ne $hostname ) {
		push @otherNodes, $nodes[$i]->{'node'};
	}
}

$res = $conn->get("api2/json/storage/$storeid", {});
my %storage = %{$res->{data}};

if ( defined( $storage{'nodes'} ) ) {
	my $nogo = 0;
	foreach my $oNode (@otherNodes) {
		if ( $storage{'nodes'} !~ m/$oNode/i ) {
			$nogo = 1;
		}
	}
	if ( $storage{'nodes'} !~ m/$hostname/i ) {
		$nogo = 1;
	}

	if ($nogo == 1) {
		print "ERROR: vz-hook_cross-sync-all.pl declines to cross-sync Storage IDs that are not available to all nodes.\n".
		      "       (i.e.: '$storeid' must be selected for all nodes (or for none) to get synced.)\n";
		exit 0;  # FIXME: Should we die here??
	}
}

if ( defined( $storage{'shared'} ) ) {
	print "ERROR: vz-hook_cross-sync-all.pl declines to cross-sync Storage IDs that are already shared (that would cause the directory to rsync onto itself!).\n";
	exit 0;  # FIXME: Should we die here??
}

# Get a list of the local Node's VM and Container IDs
$res = $conn->get("api2/json/nodes/$hostname/qemu", {});
# FIXME: replace with list of VMs actually just backed-up. However, --including a VM that is not on this storeid should still be fine for now.
foreach (@{$res->{data}}) {
	push @IDsToSync, $_->{'vmid'};
}

$res = $conn->get("api2/json/nodes/$hostname/openvz", {});
# FIXME: replace with list of Containers actually just backed-up. However, --including a Container that is not on this storeid should still be fine for now.
foreach (@{$res->{data}}) {
	push @IDsToSync, $_->{'vmid'};
}

# Build rsync include list
foreach (@IDsToSync) {
	$includes = "$includes"." --include vzdump-*-$_-*";
}

print "\n-------------------------------------------------------------------------------------------------\n";
print "Starting vzdump sync of storage target '$storeid' from $hostname to other Nodes...\n";
print     "-------------------------------------------------------------------------------------------------\n";
print "Local VM and Container IDs to be synced are: @IDsToSync.\n";
print "Dumps will now be synced to node(s): @otherNodes ...\n\n";

# Push local backups to other Nodes
foreach my $node (@otherNodes) {
	# Execute sync
	print "--- Syncing $hostname to $node ---\n";
	print "rsync -av --delete-delay --fuzzy $includes --exclude '*' $dumpdir/ $node:$dumpdir/\n\n";
	system("rsync -av --delete-delay --fuzzy $includes --exclude '*' $dumpdir/ $node:$dumpdir/");
	print "--- Sync to $node complete ---\n\n";
}

print "vzdump sync completed\n";


}
else {
	die "got unknown phase '$phase'";
}

exit (0);
