[pve-devel] [PATCH storage v5 09/32] plugin: introduce new_backup_provider() method

Wolfgang Bumiller w.bumiller at proxmox.com
Mon Mar 24 16:43:37 CET 2025


Just a short high-level nit today; I will have to look more closely at
this and the series over the next few days:

There's a `new()` which takes an $scfg + $storeid.

But later there are some methods taking `$self` (which usually means the
thing returned from `new()`), which also get a `$storeid` as an additional
parameter (but without any `$scfg`). IMO the `$storeid` should be
dropped there.
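
Something like this (just a sketch, the field names are up to the
implementation):

    sub new {
        my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;

        # remember everything the later callbacks need, so that they only
        # require $self plus the per-call arguments (e.g. $volname)
        my $self = {
            'storage-plugin' => $storage_plugin,
            scfg => $scfg,
            storeid => $storeid,
            'log' => $log_function,
        };

        return bless $self, $class;
    }

Then e.g. restore_get_mechanism($self, $volname) wouldn't need the extra
$storeid parameter anymore.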

On Fri, Mar 21, 2025 at 02:48:29PM +0100, Fiona Ebner wrote:
> The new_backup_provider() method can be used by storage plugins for
> external backup providers. If the method returns a provider, Proxmox
> VE will use callbacks to that provider for backups and restore instead
> of using its usual backup/restore mechanisms.
> 
> The backup provider API is split into two parts, both of which again
> need different implementations for VM and LXC guests:
> 
> 1. Backup API
> 
> There are two hook callback functions, namely:
> 1. job_hook() is called during the start/end/abort phases of the whole
>    backup job.
> 2. backup_hook() is called during the start/end/abort phases of the
>    backup of an individual guest. There also is a 'prepare' phase
>    useful for container backups, because the backup method for
>    containers itself is executed in the user namespace context
>    associated to the container.
> 
> The backup_get_mechanism() method is used to decide on the backup
> mechanism. Currently, 'file-handle' or 'nbd' for VMs, and 'directory'
> for containers are possible. The method also lets the plugin indicate
> whether to use a bitmap for incremental VM backup or not. It is enough
> to implement one mechanism for VMs and one mechanism for containers.
> 
> Next, there are methods for backing up the guest's configuration and
> data, backup_vm() for VM backup and backup_container() for container
> backup, with the latter running in the user namespace context
> associated to the container.
> 
> Finally, there are some helpers, e.g. for getting the provider name or
> the volume ID for the backup target, as well as for handling the backup
> log.
> 
> 1.1 Backup Mechanisms
> 
> VM:
> 
> Access to the data on the VM's disk from the time the backup started
> is made available via a so-called "snapshot access". This is either
> the full image, or in case a bitmap is used, the dirty parts of the
> image since the last time the bitmap was used for a successful backup.
> Reading outside of the dirty parts will result in an error. After
> backing up each part of the disk, it should be discarded in the export
> to avoid unnecessary space usage on the Proxmox VE side (there is an
> associated fleecing image).
> 
> VM mechanism 'file-handle':
> 
> The snapshot access is exposed via a file descriptor. A subroutine to
> read the dirty regions for incremental backup is provided as well.
> 
> VM mechanism 'nbd':
> 
> The snapshot access and, if used, bitmap are exported via NBD.
> 
> Container mechanism 'directory':
> 
> A copy or snapshot of the container's filesystem state is made
> available as a directory. The method is executed inside the user
> namespace associated to the container.
> 
> 2. Restore API
> 
> The restore_get_mechanism() method is used to decide on the restore
> mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for
> containers are possible. It is enough to implement one mechanism for
> VMs and one mechanism for containers.
> 
> Next, there are methods for extracting the guest and firewall
> configuration, and the restore mechanism is implemented via a pair of
> methods: an init method for making the data available to Proxmox VE,
> and a cleanup method that is called after restore.
> 
> For VMs, a restore_vm_get_device_info() helper is also required, to get
> the disks included in the backup and their sizes.
> 
> 2.1. Restore Mechanisms
> 
> VM mechanism 'qemu-img':
> 
> The backup provider gives a path to the disk image that will be
> restored. The path needs to be something 'qemu-img' can deal with,
> e.g. can also be an NBD URI or similar.
> 
> Container mechanism 'directory':
> 
> The backup provider gives the path to a directory with the full
> filesystem structure of the container.
> 
> Container mechanism 'tar':
> 
> The backup provider gives the path to a (potentially compressed) tar
> archive with the full filesystem structure of the container.
> 
> See the PVE::BackupProvider::Plugin::Base module for the full API
> documentation.
> 
> Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
> ---
> 
> Changes in v5:
> * Split API version+age bump into own commit.
> * Replace 'block-device' mechanism with 'file-handle'.
> 
>  src/PVE/BackupProvider/Makefile        |    3 +
>  src/PVE/BackupProvider/Plugin/Base.pm  | 1161 ++++++++++++++++++++++++
>  src/PVE/BackupProvider/Plugin/Makefile |    5 +
>  src/PVE/Makefile                       |    1 +
>  src/PVE/Storage.pm                     |    8 +
>  src/PVE/Storage/Plugin.pm              |   15 +
>  6 files changed, 1193 insertions(+)
>  create mode 100644 src/PVE/BackupProvider/Makefile
>  create mode 100644 src/PVE/BackupProvider/Plugin/Base.pm
>  create mode 100644 src/PVE/BackupProvider/Plugin/Makefile
> 
> diff --git a/src/PVE/BackupProvider/Makefile b/src/PVE/BackupProvider/Makefile
> new file mode 100644
> index 0000000..f018cef
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Makefile
> @@ -0,0 +1,3 @@
> +.PHONY: install
> +install:
> +	make -C Plugin install
> diff --git a/src/PVE/BackupProvider/Plugin/Base.pm b/src/PVE/BackupProvider/Plugin/Base.pm
> new file mode 100644
> index 0000000..b31a28f
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Plugin/Base.pm
> @@ -0,0 +1,1161 @@
> +package PVE::BackupProvider::Plugin::Base;
> +
> +use strict;
> +use warnings;
> +
> +=pod
> +
> +=head1 NAME
> +
> +PVE::BackupProvider::Plugin::Base - Base Plugin for Backup Provider API
> +
> +=head1 SYNOPSIS
> +
> +    use base qw(PVE::BackupProvider::Plugin::Base);
> +
> +=head1 DESCRIPTION
> +
> +This module serves as the base for any module implementing the API that Proxmox
> +VE uses to interface with external backup providers. The API is used for
> +creating and restoring backups. A backup provider also needs to provide a
> +storage plugin for integration with the front-end. The API here is used by the
> +backup stack in the backend.
> +
> +1. Backup API
> +
> +There are two hook callback functions, namely:
> +
> +=over
> +
> +=item C<job_hook()>
> +
> +Called during the start/end/abort phases of the whole backup job.
> +
> +=item C<backup_hook()>
> +
> +Called during the start/end/abort phases of the backup of an
> +individual guest.
> +
> +=back
> +
> +The backup_get_mechanism() method is used to decide on the backup mechanism.
> +Currently, 'file-handle' or 'nbd' for VMs, and 'directory' for containers are
> +possible. The method also lets the plugin indicate whether to use a bitmap for
> +incremental VM backup or not. It is enough to implement one mechanism for VMs
> +and one mechanism for containers.
> +
> +Next, there are methods for backing up the guest's configuration and data,
> +backup_vm() for VM backup and backup_container() for container backup.
> +
> +Finally, there are some helpers, e.g. for getting the provider name or the
> +volume ID for the backup target, as well as for handling the backup log.
> +
> +1.1 Backup Mechanisms
> +
> +VM:
> +
> +Access to the data on the VM's disk from the time the backup started is made
> +available via a so-called "snapshot access". This is either the full image, or
> +in case a bitmap is used, the dirty parts of the image since the last time the
> +bitmap was used for a successful backup. Reading outside of the dirty parts will
> +result in an error. After backing up each part of the disk, it should be
> +discarded in the export to avoid unnecessary space usage on the Proxmox VE side
> +(there is an associated fleecing image).
> +
> +VM mechanism 'file-handle':
> +
> +The snapshot access is exposed via a file descriptor. A subroutine to read the
> +dirty regions for incremental backup is provided as well.
> +
> +VM mechanism 'nbd':
> +
> +The snapshot access and, if used, bitmap are exported via NBD.
> +
> +Container mechanism 'directory':
> +
> +A copy or snapshot of the container's filesystem state is made available as a
> +directory.
> +
> +2. Restore API
> +
> +The restore_get_mechanism() method is used to decide on the restore mechanism.
> +Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for containers are
> +possible. It is enough to implement one mechanism for VMs and one mechanism for
> +containers.
> +
> +Next, there are methods for extracting the guest and firewall configuration,
> +and the restore mechanism is implemented via a pair of methods: an init method
> +for making the data available to Proxmox VE, and a cleanup method that is
> +called after restore.
> +
> +For VMs, a restore_vm_get_device_info() helper is also required, to get the
> +disks included in the backup and their sizes.
> +
> +2.1. Restore Mechanisms
> +
> +VM mechanism 'qemu-img':
> +
> +The backup provider gives a path to the disk image that will be restored. The
> +path needs to be something 'qemu-img' can deal with, e.g. can also be an NBD URI
> +or similar.
> +
> +Container mechanism 'directory':
> +
> +The backup provider gives the path to a directory with the full filesystem
> +structure of the container.
> +
> +Container mechanism 'tar':
> +
> +The backup provider gives the path to a (potentially compressed) tar archive
> +with the full filesystem structure of the container.
> +
> +=head1 METHODS
> +
> +=cut
> +
> +# plugin methods
> +
> +=pod
> +
> +=over
> +
> +=item C<new>
> +
> +The constructor. Returns a blessed instance of the backup provider class.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$storage_plugin>
> +
> +The associated storage plugin class.
> +
> +=item C<$scfg>
> +
> +The storage configuration of the associated storage.
> +
> +=item C<$storeid>
> +
> +The storage ID of the associated storage.
> +
> +=item C<$log_function>
> +
> +The function signature is C<$log_function($log_level, $message)>. This log
> +function can be used to write to the backup task log in Proxmox VE.
> +
> +=over
> +
> +=item C<$log_level>
> +
> +Either C<info>, C<warn> or C<err> for informational messages, warnings or error
> +messages.
> +
> +=item C<$message>
> +
> +The message to be printed.
> +
> +=back
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub new {
> +    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<provider_name>
> +
> +Returns the name of the backup provider. It will be printed in some log lines.
> +
> +=back
> +
> +=cut
> +sub provider_name {
> +    my ($self) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<job_hook>
> +
> +The job hook function. It is called during various phases of the backup job.
> +Intended for doing preparations and cleanup. In the future, additional phases
> +might get added, so it's best to ignore an unknown phase.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$phase>
> +
> +The phase during which the function is called.
> +
> +=over
> +
> +=item C<start>
> +
> +When the job starts, before the first backup is made.
> +
> +=item C<end>
> +
> +When the job ends, after all backups are finished, even if some backups
> +failed.
> +
> +=item C<abort>
> +
> +When the job is aborted (e.g. interrupted by signal, other fundamental failure).
> +
> +=back
> +
> +=item C<$info>
> +
> +A hash reference containing additional parameters depending on the C<$phase>:
> +
> +=over
> +
> +=item C<start>
> +
> +=over
> +
> +=item C<< $info->{'start-time'} >>
> +
> +Unix time-stamp of when the job started.
> +
> +=back
> +
> +=item C<end>
> +
> +No additional information.
> +
> +=item C<abort>
> +
> +=over
> +
> +=item C<< $info->{error} >>
> +
> +The error message indicating the failure.
> +
> +=back
> +
> +=back
> +
> +=back
> +
> +=back
> +
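> +A skeleton implementation might simply dispatch on the phase like this (only a
> +sketch, the actual work is provider-specific):
> +
> +    sub job_hook {
> +        my ($self, $phase, $info) = @_;
> +
> +        if ($phase eq 'start') {
> +            # e.g. check that the backup server is reachable
> +        } elsif ($phase eq 'end' || $phase eq 'abort') {
> +            # e.g. tear down state that was set up in the 'start' phase
> +        }
> +        # unknown phases are ignored for forward compatibility
> +
> +        return;
> +    }
> +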
> +=cut
> +sub job_hook {
> +    my ($self, $phase, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_hook>
> +
> +The backup hook function. It is called during various phases of the backup of
> +a given guest. Intended for doing preparations and cleanup. In the future,
> +additional phases might get added, so it's best to ignore an unknown phase.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$phase>
> +
> +The phase during which the function is called.
> +
> +=over
> +
> +=item C<start>
> +
> +Before the backup of the given guest is made.
> +
> +=item C<prepare>
> +
> +Right before C<backup_container()> is called. The method C<backup_container()>
> +is called as the ID-mapped root user of the container, so as a potentially
> +unprivileged user. The hook is still called as a privileged user to allow for
> +the necessary preparation.
> +
> +=item C<end>
> +
> +After the backup of the given guest finished successfully.
> +
> +=item C<abort>
> +
> +After the backup of the given guest encountered an error or was aborted.
> +
> +=back
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$vmtype>
> +
> +The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
> +Might be C<undef> in phase C<abort> for certain error scenarios.
> +
> +=item C<$info>
> +
> +A hash reference containing additional parameters depending on the C<$phase>:
> +
> +=over
> +
> +=item C<start>
> +
> +=over
> +
> +=item C<< $info->{'start-time'} >>
> +
> +Unix time-stamp of when the guest backup started.
> +
> +=back
> +
> +=item C<prepare>
> +
> +The same information that's passed along to C<backup_container()>, see the
> +description there.
> +
> +=item C<end>
> +
> +No additional information.
> +
> +=item C<abort>
> +
> +=over
> +
> +=item C<< $info->{error} >>
> +
> +The error message indicating the failure.
> +
> +=back
> +
> +=back
> +
> +=back
> +
> +=back
> +
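> +A sketch focusing on the C<prepare> phase; C<< $self->{'credentials-file'} >>
> +and the stored log function are hypothetical fields set up by the constructor:
> +
> +    sub backup_hook {
> +        my ($self, $phase, $vmid, $vmtype, $info) = @_;
> +
> +        if ($phase eq 'prepare' && $vmtype eq 'lxc') {
> +            # still running privileged here - make the credentials readable
> +            # for the ID-mapped root user that backup_container() runs as
> +            chown($info->{'backup-user-id'}, -1, $self->{'credentials-file'})
> +                or die "chown failed - $!\n";
> +        } elsif ($phase eq 'abort') {
> +            $self->{'log'}->('warn', "backup failed - $info->{error}");
> +        }
> +
> +        return;
> +    }
> +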
> +=cut
> +sub backup_hook {
> +    my ($self, $phase, $vmid, $vmtype, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_get_mechanism>
> +
> +Tell the caller what mechanism to use for backing up the guest. The backup
> +method for the guest, i.e. C<backup_vm> for guest type C<qemu> or
> +C<backup_container> for guest type C<lxc>, will later be called with
> +mechanism-specific information. See those methods for more information. Returns
> +C<($mechanism, $bitmap_id)>:
> +
> +=over
> +
> +=item C<$mechanism>
> +
> +Currently C<nbd> and C<file-handle> for guest type C<qemu> and C<directory> for
> +guest type C<lxc> are possible. If there is no support for one of the guest
> +types, the method should either C<die> or return C<undef>.
> +
> +=item C<$bitmap_id>
> +
> +If the backup provider supports backing up with a bitmap, the ID of the bitmap
> +to use. Return C<undef> otherwise. Re-use the same ID multiple times for
> +incremental backup.
> +
> +=back
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$vmtype>
> +
> +The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
> +
> +=back
> +
> +=back
> +
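> +For example, a provider supporting NBD exports with dirty bitmaps for VMs and
> +the directory mechanism for containers could implement it as follows (sketch,
> +the bitmap ID scheme is up to the provider):
> +
> +    sub backup_get_mechanism {
> +        my ($self, $vmid, $vmtype) = @_;
> +
> +        # use NBD exports with a per-guest bitmap for VMs ...
> +        return ('nbd', "backup-provider-$vmid") if $vmtype eq 'qemu';
> +        # ... and the directory mechanism for containers
> +        return ('directory', undef) if $vmtype eq 'lxc';
> +
> +        die "unsupported guest type '$vmtype'\n";
> +    }
> +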
> +=cut
> +sub backup_get_mechanism {
> +    my ($self, $vmid, $vmtype) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_get_archive_name>
> +
> +Returns the name of the backup archive that will be created by the current
> +backup. The returned value needs to be the volume name that the archive can
> +later be accessed by via the corresponding storage plugin, i.e. C<$archive_name>
> +in the volume ID C<"${storeid}:backup/${archive_name}">.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$vmtype>
> +
> +The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
> +
> +=item C<$backup_time>
> +
> +Unix time-stamp of when the guest backup started.
> +
> +=back
> +
> +=back
> +
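> +For instance (sketch, the exact format is up to the provider and only needs to
> +be recognized as a backup volume by the accompanying storage plugin's
> +C<parse_volname()>):
> +
> +    sub backup_get_archive_name {
> +        my ($self, $vmid, $vmtype, $backup_time) = @_;
> +
> +        # e.g. "qemu-100-1742567309.backup"
> +        return "$vmtype-$vmid-$backup_time.backup";
> +    }
> +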
> +=cut
> +sub backup_get_archive_name {
> +    my ($self, $vmid, $vmtype, $backup_time) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_get_task_size>
> +
> +Returns the size of the backup after completion.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub backup_get_task_size {
> +    my ($self, $vmid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_handle_log_file>
> +
> +Handle the backup's log file which contains the task log for the backup. For
> +example, a provider might want to upload a copy to the backup server.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$filename>
> +
> +Path to the file with the backup log.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub backup_handle_log_file {
> +    my ($self, $vmid, $filename) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_vm>
> +
> +Used when the guest type is C<qemu>. Back up the virtual machine's configuration
> +and volumes that were made available according to the mechanism returned by
> +C<backup_get_mechanism>. Returns when done backing up. Ideally, the method
> +should log the progress during backup.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$guest_config>
> +
> +The guest configuration as raw data.
> +
> +=item C<$volumes>
> +
> +Hash reference with information about the VM's volumes. Some parameters are
> +mechanism-specific.
> +
> +=over
> +
> +=item C<< $volumes->{$device_name} >>
> +
> +Hash reference with information about the VM volume associated to the device
> +C<$device_name>. The device name needs to be remembered for restoring. The
> +device name is also the name of the NBD export when the C<nbd> mechanism is
> +used.
> +
> +=item C<< $volumes->{$device_name}->{size} >>
> +
> +Size of the volume in bytes.
> +
> +=item C<< $volumes->{$device_name}->{'bitmap-mode'} >>
> +
> +How a bitmap is used for the current volume.
> +
> +=over
> +
> +=item C<none>
> +
> +No bitmap is used.
> +
> +=item C<new>
> +
> +A bitmap has been newly created on the volume.
> +
> +=item C<reuse>
> +
> +The bitmap with the same ID as requested is being re-used.
> +
> +=back
> +
> +=back
> +
> +Mechanism-specific parameters, per mechanism (see also the sketch further below):
> +
> +=over
> +
> +=item C<file-handle>
> +
> +=over
> +
> +=item C<< $volumes->{$device_name}->{'file-handle'} >>
> +
> +File handle the backup data can be read from. Discards should be issued via the
> +C<PVE::Storage::Common::deallocate()> function for ranges that have already been
> +backed up successfully, to reduce space usage on the source side.
> +
> +=item C<< $volumes->{$device_name}->{'next-dirty-region'} >>
> +
> +A function that will return the offset and length of the next dirty region as a
> +two-element list. After the last dirty region, it will return C<undef>. If no
> +bitmap is used, it will return C<(0, $size)> and then C<undef>. If a bitmap is
> +used, these are the dirty regions according to the bitmap.
> +
> +=back
> +
> +=item C<nbd>
> +
> +=over
> +
> +=item C<< $volumes->{$device_name}->{'nbd-path'} >>
> +
> +The path to the Unix socket providing the NBD export with the backup data and,
> +if a bitmap is used, bitmap data. Discards should be issued after reading the
> +data to reduce space usage on the source side.
> +
> +=item C<< $volumes->{$device_name}->{'bitmap-name'} >>
> +
> +The name of the bitmap in case a bitmap is used.
> +
> +=back
> +
> +=back
> +
> +=item C<$info>
> +
> +A hash reference containing optional parameters.
> +
> +Optional parameters:
> +
> +=over
> +
> +=item C<< $info->{'bandwidth-limit'} >>
> +
> +The requested bandwidth limit. The value is in bytes/second. The backup provider
> +is expected to honor this rate limit for IO on the backup source and network
> +traffic. A value of C<0>, C<undef>, or a missing key in the hash all mean that
> +there is no limit.
> +
> +=item C<< $info->{'firewall-config'} >>
> +
> +Present if the firewall configuration exists. The guest's firewall
> +configuration as raw data.
> +
> +=back
> +
> +=back
> +
> +=back
> +
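> +Putting the C<file-handle> parameters from above together, the core loop of a
> +provider implementation could look roughly like the following sketch. The
> +helpers C<< $self->_store_guest_config() >> and C<< $self->_store_chunk() >> are
> +hypothetical, and C<PVE::Storage::Common::deallocate()> is assumed to take the
> +file handle, offset and length:
> +
> +    # requires: use PVE::Storage::Common;
> +    sub backup_vm {
> +        my ($self, $vmid, $guest_config, $volumes, $info) = @_;
> +
> +        # persist $guest_config and $info->{'firewall-config'} (not shown)
> +        $self->_store_guest_config($vmid, $guest_config);
> +
> +        my $chunk_size = 4 * 1024 * 1024;
> +
> +        for my $device_name (sort keys $volumes->%*) {
> +            my $volume = $volumes->{$device_name};
> +            my $fh = $volume->{'file-handle'};
> +            my $next_dirty_region = $volume->{'next-dirty-region'};
> +
> +            while (1) {
> +                my ($offset, $length) = $next_dirty_region->();
> +                last if !defined($offset);
> +
> +                my ($pos, $remaining) = ($offset, $length);
> +                while ($remaining > 0) {
> +                    my $to_read = $remaining < $chunk_size ? $remaining : $chunk_size;
> +                    sysseek($fh, $pos, 0) // die "sysseek failed - $!\n";
> +                    my $read = sysread($fh, my $data, $to_read);
> +                    die "sysread failed - $!\n" if !defined($read);
> +                    die "unexpected end of file\n" if $read == 0;
> +
> +                    # ship the chunk to the backup target (provider-specific)
> +                    $self->_store_chunk($vmid, $device_name, $pos, $data);
> +
> +                    $pos += $read;
> +                    $remaining -= $read;
> +                }
> +
> +                # discard the range once it is backed up, to reduce space
> +                # usage of the fleecing image on the Proxmox VE side
> +                PVE::Storage::Common::deallocate($fh, $offset, $length);
> +            }
> +        }
> +
> +        return;
> +    }
> +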
> +=cut
> +sub backup_vm {
> +    my ($self, $vmid, $guest_config, $volumes, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<backup_container>
> +
> +Used when the guest type is C<lxc>. Back up the container filesystem structure
> +that is made available for the mechanism returned by C<backup_get_mechanism>.
> +Returns when done backing up. Ideally, the method should log the progress during
> +backup.
> +
> +Note that this function is executed as the ID-mapped root user of the container,
> +i.e. as a potentially unprivileged user. The ID is passed along as part of
> +C<$info>. Use the C<prepare> phase of the C<backup_hook> for preparation, for
> +example, to make credentials available to the potentially unprivileged user.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$vmid>
> +
> +The ID of the guest being backed up.
> +
> +=item C<$guest_config>
> +
> +Guest configuration as raw data.
> +
> +=item C<$exclude_patterns>
> +
> +A list of glob patterns of files and directories to be excluded. C<**> is used
> +to match the current directory and subdirectories. See also the following (note
> +that PBS implements more than required here, like explicit inclusion when
> +starting with a C<!>):
> +L<vzdump documentation|https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_file_exclusions>
> +and
> +L<PBS documentation|https://pbs.proxmox.com/docs/backup-client.html#excluding-files-directories-from-a-backup>
> +
> +=item C<$info>
> +
> +A hash reference containing optional and mechanism-specific parameters.
> +
> +Optional parameters:
> +
> +=over
> +
> +=item C<< $info->{'bandwidth-limit'} >>
> +
> +The requested bandwidth limit. The value is in bytes/second. The backup provider
> +is expected to honor this rate limit for IO on the backup source and network
> +traffic. A value of C<0>, C<undef>, or a missing key in the hash all mean that
> +there is no limit.
> +
> +=item C<< $info->{'firewall-config'} >>
> +
> +Present if the firewall configuration exists. The guest's firewall
> +configuration as raw data.
> +
> +=back
> +
> +Mechanism-specific parameters, per mechanism (see also the sketch further below):
> +
> +=over
> +
> +=item C<directory>
> +
> +=over
> +
> +=item C<< $info->{directory} >>
> +
> +Path to the directory with the container's file system structure.
> +
> +=item C<< $info->{sources} >>
> +
> +List of paths (for separate mount points, including "." for the root) inside the
> +directory to be backed up.
> +
> +=item C<< $info->{'backup-user-id'} >>
> +
> +The user ID of the ID-mapped root user of the container. For example, C<100000>
> +for unprivileged containers by default.
> +
> +=back
> +
> +=back
> +
> +=back
> +
> +=back
> +
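> +As a rough sketch for the C<directory> mechanism (using the parameters above),
> +a provider could stream the sources as a C<tar> archive and upload the stream.
> +The helper C<< $self->_upload_archive_chunk() >> is hypothetical, and the
> +exclude-pattern translation is omitted for brevity:
> +
> +    sub backup_container {
> +        my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
> +
> +        # persist $guest_config and $info->{'firewall-config'} (not shown)
> +
> +        my @cmd = (
> +            'tar', 'cpf', '-', '--one-file-system',
> +            '-C', $info->{directory}, $info->{sources}->@*,
> +        );
> +        open(my $tar_fh, '-|', @cmd) or die "failed to run tar - $!\n";
> +
> +        while (sysread($tar_fh, my $chunk, 4 * 1024 * 1024)) {
> +            # ship the data to the backup target (provider-specific)
> +            $self->_upload_archive_chunk($vmid, $chunk);
> +        }
> +        close($tar_fh) or die "tar failed\n";
> +
> +        return;
> +    }
> +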
> +=cut
> +sub backup_container {
> +    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_get_mechanism>
> +
> +Tell the caller what mechanism to use for restoring the guest. The restore
> +methods for the guest, i.e. the C<restore_vm_*> methods for guest type C<qemu>,
> +or the C<restore_container_*> methods for guest type C<lxc>, will be called with
> +mechanism-specific information and their return value might also depend on the
> +mechanism. See those methods for more information. Returns
> +C<($mechanism, $vmtype)>:
> +
> +=over
> +
> +=item C<$mechanism>
> +
> +Currently, C<'qemu-img'> for guest type C<'qemu'> and either C<'tar'> or
> +C<'directory'> for type C<'lxc'> are possible.
> +
> +=item C<$vmtype>
> +
> +Either C<qemu> or C<lxc>, depending on the type of the guest in the backed-up
> +archive.
> +
> +=back
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=back
> +
> +=back
> +
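> +A sketch of a possible implementation, where C<< $self->_get_backup_metadata() >>
> +is a hypothetical helper reading metadata the provider stored at backup time:
> +
> +    sub restore_get_mechanism {
> +        my ($self, $volname, $storeid) = @_;
> +
> +        my $metadata = $self->_get_backup_metadata($volname);
> +
> +        return ('qemu-img', 'qemu') if $metadata->{vmtype} eq 'qemu';
> +        return ('tar', 'lxc') if $metadata->{vmtype} eq 'lxc';
> +
> +        die "unexpected guest type '$metadata->{vmtype}' in backup\n";
> +    }
> +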
> +=cut
> +sub restore_get_mechanism {
> +    my ($self, $volname, $storeid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_get_guest_config>
> +
> +Extract the guest configuration from the given backup. Returns the raw contents
> +of the backed-up configuration file.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub restore_get_guest_config {
> +    my ($self, $volname, $storeid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_get_firewall_config>
> +
> +Extract the guest's firewall configuration from the given backup. Returns the
> +raw contents of the backed-up configuration file. Returns C<undef> if there is
> +no firewall config in the archive, and dies if the configuration can't be
> +extracted.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub restore_get_firewall_config {
> +    my ($self, $volname, $storeid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_vm_init>
> +
> +Prepare a VM archive for restore. Returns the basic information about the
> +volumes in the backup as a hash reference with the following structure:
> +
> +    {
> +	$device_nameA => { size => $sizeA },
> +	$device_nameB => { size => $sizeB },
> +	...
> +    }
> +
> +=over
> +
> +=item C<$device_name>
> +
> +The device name that was given as an argument to the backup routine when the
> +backup was created.
> +
> +=item C<$size>
> +
> +The virtual size of the VM volume that was backed up. A volume with this size is
> +created for the restore operation. In particular, for the C<qemu-img> mechanism,
> +this should be the size of the block device referenced by the C<qemu-img-path>
> +returned by C<restore_vm_volume_init>.
> +
> +=back
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=back
> +
> +=back
> +
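> +A sketch of what such an implementation could look like, again using the
> +hypothetical C<< $self->_get_backup_metadata() >> helper from above:
> +
> +    sub restore_vm_init {
> +        my ($self, $volname, $storeid) = @_;
> +
> +        my $metadata = $self->_get_backup_metadata($volname);
> +
> +        my $devices = {};
> +        for my $device_name (keys $metadata->{devices}->%*) {
> +            my $size = $metadata->{devices}->{$device_name}->{size};
> +            $devices->{$device_name} = { size => $size };
> +        }
> +
> +        return $devices;
> +    }
> +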
> +=cut
> +sub restore_vm_init {
> +    my ($self, $volname, $storeid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_vm_cleanup>
> +
> +For VM backups, clean up after the restore. Called in both success and failure
> +scenarios.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub restore_vm_cleanup {
> +    my ($self, $volname, $storeid) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_vm_volume_init>
> +
> +Prepare a VM volume in the archive for restore. Returns a hash reference with
> +the mechanism-specific information for the restore:
> +
> +=over
> +
> +=item C<qemu-img>
> +
> +    { 'qemu-img-path' => $path }
> +
> +The volume will be restored using the C<qemu-img convert> command.
> +
> +=over
> +
> +=item C<$path>
> +
> +A path to the volume that C<qemu-img> can use as a source for the
> +C<qemu-img convert> command. E.g. this could also be an NBD URI.
> +
> +=back
> +
> +=back
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=item C<$device_name>
> +
> +The device name associated to the volume that should be prepared for the
> +restore. Same as the argument to the backup routine when the backup was created.
> +
> +=item C<$info>
> +
> +A hash reference with optional and mechanism-specific parameters. Currently
> +empty.
> +
> +=back
> +
> +=back
> +
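> +For the C<qemu-img> mechanism, this can be as simple as handing out a path or
> +NBD URI for the backed-up volume. The helper C<< $self->_export_volume_nbd() >>
> +is hypothetical:
> +
> +    sub restore_vm_volume_init {
> +        my ($self, $volname, $storeid, $device_name, $info) = @_;
> +
> +        # start an NBD export for the backed-up volume and let qemu-img read
> +        # from it, e.g. 'nbd+unix:///drive-scsi0?socket=/run/provider.sock'
> +        my $nbd_uri = $self->_export_volume_nbd($volname, $device_name);
> +
> +        return { 'qemu-img-path' => $nbd_uri };
> +    }
> +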
> +=cut
> +sub restore_vm_volume_init {
> +    my ($self, $volname, $storeid, $device_name, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_vm_volume_cleanup>
> +
> +For VM backups, clean up after the restore of a given volume. Called in both
> +success and failure scenarios.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=item C<$device_name>
> +
> +The device name associated to the volume that should be cleaned up after the
> +restore. Same as the argument to the backup routine when the backup was created.
> +
> +=item C<$info>
> +
> +A hash reference with optional and mechanism-specific parameters. Currently
> +empty.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub restore_vm_volume_cleanup {
> +    my ($self, $volname, $storeid, $device_name, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_container_init>
> +
> +Prepare a container archive for restore. Returns a hash reference with the
> +mechanism-specific information for the restore:
> +
> +=over
> +
> +=item C<tar>
> +
> +    { 'tar-path' => $path }
> +
> +The archive will be restored via the C<tar> command.
> +
> +=over
> +
> +=item C<$path>
> +
> +The path to the tar archive containing the full filesystem structure of the
> +container.
> +
> +=back
> +
> +=item C<directory>
> +
> +    { 'archive-directory' => $path }
> +
> +The archive will be restored via C<rsync> from a directory containing the full
> +filesystem structure of the container.
> +
> +=over
> +
> +=item C<$path>
> +
> +The path to the directory containing the full filesystem structure of the
> +container.
> +
> +=back
> +
> +=back
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=item C<$info>
> +
> +A hash reference with optional and mechanism-specific parameters. Currently
> +empty.
> +
> +=back
> +
> +=back
> +
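> +A provider using the C<tar> mechanism might simply make the archive locally
> +available and return its path. The helper C<< $self->_fetch_archive() >> is
> +hypothetical:
> +
> +    sub restore_container_init {
> +        my ($self, $volname, $storeid, $info) = @_;
> +
> +        # make the (potentially compressed) tar archive available locally
> +        my $path = $self->_fetch_archive($volname);
> +
> +        return { 'tar-path' => $path };
> +    }
> +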
> +=cut
> +sub restore_container_init {
> +    my ($self, $volname, $storeid, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +=pod
> +
> +=over
> +
> +=item C<restore_container_cleanup>
> +
> +For container backups, clean up after the restore. Called in both success and
> +failure scenarios.
> +
> +Parameters:
> +
> +=over
> +
> +=item C<$volname>
> +
> +The volume ID of the archive being restored.
> +
> +=item C<$storeid>
> +
> +The storage ID of the backup storage.
> +
> +=item C<$info>
> +
> +A hash reference with optional and mechanism-specific parameters. Currently
> +empty.
> +
> +=back
> +
> +=back
> +
> +=cut
> +sub restore_container_cleanup {
> +    my ($self, $volname, $storeid, $info) = @_;
> +
> +    die "implement me in subclass";
> +}
> +
> +1;
> diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
> new file mode 100644
> index 0000000..bbd7431
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Plugin/Makefile
> @@ -0,0 +1,5 @@
> +SOURCES = Base.pm
> +
> +.PHONY: install
> +install:
> +	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/BackupProvider/Plugin/$$i; done
> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
> index 0af3081..9e9f6aa 100644
> --- a/src/PVE/Makefile
> +++ b/src/PVE/Makefile
> @@ -9,6 +9,7 @@ install:
>  	make -C Storage install
>  	make -C GuestImport install
>  	make -C API2 install
> +	make -C BackupProvider install
>  	make -C CLI install
>  
>  .PHONY: test
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index d582af4..014017b 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2027,6 +2027,14 @@ sub volume_export_start {
>      PVE::Tools::run_command($cmds, %$run_command_params);
>  }
>  
> +sub new_backup_provider {
> +    my ($cfg, $storeid, $log_function) = @_;
> +
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +    return $plugin->new_backup_provider($scfg, $storeid, $log_function);
> +}
> +
>  # bash completion helper
>  
>  sub complete_storage {
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 80daeea..df2ddc5 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -1868,6 +1868,21 @@ sub rename_volume {
>      return "${storeid}:${base}${target_vmid}/${target_volname}";
>  }
>  
> +# Used by storage plugins for external backup providers. See PVE::BackupProvider::Plugin for the API
> +# the provider needs to implement.
> +#
> +# $scfg - the storage configuration
> +# $storeid - the storage ID
> +# $log_function($log_level, $message) - this log function can be used to write to the backup task
> +#   log in Proxmox VE. $log_level is 'info', 'warn' or 'err', $message is the message to be printed.
> +#
> +# Returns a blessed reference to the backup provider class.
> +sub new_backup_provider {
> +    my ($class, $scfg, $storeid, $log_function) = @_;
> +
> +    die "implement me if enabling the feature 'backup-provider' in plugindata()->{features}\n";
> +}
> +
>  sub config_aware_base_mkdir {
>      my ($class, $scfg, $path) = @_;
>  
> -- 
> 2.39.5



