[pve-devel] [PATCH v2 qemu 7/7] squash related patches

Fiona Ebner f.ebner at proxmox.com
Mon May 15 15:39:56 CEST 2023


where there is no good reason to keep them separate. It's a pain
during rebase if there are multiple patches changing the same code
over and over again. This was especially bad for the backup-related
patches. If the history of patches really is needed, it can be
extracted via git. Additionally, compilation with only part of the
series applied had been broken for a long time, because one of the
master key changes had become part of an earlier patch during a past
rebase.

If a subsequent patch only touched the same files and the changes
belonged together (obvious for later bug fixes, but also done for
features, e.g. adding master key support for PBS), the patches were
squashed together.

The PBS namespace support patch was split into the individual parts it
touches, i.e. the PBS block driver, the pbs-restore binary and the QMP
backup infrastructure, and each part was squashed into the respective
patch.

No code change is intended; a git diff in the submodule should not show
any difference between the result of applying all patches before this
commit and the result of applying all patches after this commit.

The query-proxmox-support QMP function has been left as part of the
"PVE-Backup: Proxmox backup patches for QEMU" patch, because it's
currently only used there. If it is ever used elsewhere, it can be
split out then.

The recent alloc-track and BQL-related savevm-async changes have been
left separate for now, because it's not 100% clear they are the best
approach yet. This depends on what upstream decides about the BQL
handling and on whether (and what kind of) issues with the changes turn
up.

The qemu-img dd snapshot patch has been re-ordered to after the other
qemu-img dd patches.

Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
---

New in v2.

 ...add-l-option-for-loading-a-snapshot.patch} |    4 +-
 ...irtio-balloon-improve-query-balloon.patch} |    0
 ...0015-PVE-qapi-modify-query-machines.patch} |    0
 ...=> 0016-PVE-qapi-modify-spice-query.patch} |    0
 ...nel-implementation-for-savevm-async.patch} |    0
 ...sync-for-background-state-snapshots.patch} |   34 +-
 ...dd-optional-buffer-size-to-QEMUFile.patch} |    6 +-
 ...dd-the-zeroinit-block-driver-filter.patch} |    0
 ...Add-dummy-id-command-line-parameter.patch} |    0
 ...-target-i386-disable-LINT0-after-re.patch} |    0
 ...e-posix-make-locking-optiono-on-cre.patch} |    0
 ...-PVE-monitor-disable-oob-capability.patch} |    0
 ...ed-balloon-qemu-4-0-config-size-fal.patch} |    0
 ...-Allow-version-code-in-machine-type.patch} |    0
 ...-bcs-bitmap-initialization-to-job-c.patch} |    0
 ...E-Backup-add-vma-backup-format-code.patch} |  267 ++--
 ...Backup-add-backup-dump-block-driver.patch} |    0
 ...-sequential-job-transaction-support.patch} |    0
 ...irty-bitmap-tracking-for-incremental.patch |  460 -------
 ...kup-Proxmox-backup-patches-for-QEMU.patch} | 1117 +++++++++++------
 ...store-new-command-to-restore-from-p.patch} |   29 +-
 .../pve/0032-PVE-various-PBS-fixes.patch      |  217 ----
 ...k-driver-to-map-backup-archives-into.patch |   62 +-
 ...dd-query_proxmox_support-QMP-command.patch |   74 --
 ...t-stderr-to-journal-when-daemonized.patch} |    0
 ...grate-dirty-bitmap-state-via-savevm.patch} |   24 +-
 ...E-add-query-pbs-bitmap-info-QMP-call.patch |  441 -------
 ...irty-bitmap-migrate-other-bitmaps-e.patch} |    0
 ...ll-back-to-open-iscsi-initiatorname.patch} |    0
 ...-transaction-to-synchronize-job-stat.patch |  293 -----
 ...VE-block-stream-increase-chunk-size.patch} |    0
 ...-block-on-finishing-and-cleanup-crea.patch |  499 --------
 ...ccept-NULL-qiov-in-bdrv_pad_request.patch} |    0
 ...> 0040-block-add-alloc-track-driver.patch} |   14 +-
 ...rbd-workaround-for-ceph-issue-53784.patch} |    0
 ...fix-handling-of-holes-in-.bdrv_co_b.patch} |    0
 ...routine-QMP-for-backup-cancel_backup.patch |  595 ---------
 ...-rbd-implement-bdrv_co_block_status.patch} |    0
 .../pve/0044-PBS-add-master-key-support.patch |  100 --
 ...lloc-track-fix-deadlock-during-drop.patch} |    0
 ...st-path-reads-without-allocation-if-.patch |   53 -
 ...pshots-hold-the-BQL-during-setup-ca.patch} |    0
 ...m-async-don-t-hold-BQL-during-setup.patch} |    0
 ...-register-yank-before-migration_inco.patch |   35 -
 .../pve/0051-vma-allow-partial-restore.patch  |  407 ------
 .../pve/0052-pbs-namespace-support.patch      |  235 ----
 ...e-jobs-correctly-cancel-in-error-sce.patch |   60 -
 ...nsure-jobs-in-di_list-are-referenced.patch |   73 --
 ...d-segfault-issues-upon-backup-cancel.patch |  118 --
 ...support-64KiB-unaligned-input-images.patch |   57 -
 ...d-triggering-assertion-in-error-case.patch |   25 -
 ...ck-alloc-track-avoid-premature-break.patch |   36 -
 ...-passing-max-workers-performance-set.patch |  144 ---
 ...vevm-async-optimize-querying-pending.patch |   49 -
 ...also-initialize-compression-counters.patch |   26 -
 debian/patches/series                         |   87 +-
 56 files changed, 1041 insertions(+), 4600 deletions(-)
 rename debian/patches/pve/{0050-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch => 0013-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch} (97%)
 rename debian/patches/pve/{0013-PVE-virtio-balloon-improve-query-balloon.patch => 0014-PVE-virtio-balloon-improve-query-balloon.patch} (100%)
 rename debian/patches/pve/{0014-PVE-qapi-modify-query-machines.patch => 0015-PVE-qapi-modify-query-machines.patch} (100%)
 rename debian/patches/pve/{0015-PVE-qapi-modify-spice-query.patch => 0016-PVE-qapi-modify-spice-query.patch} (100%)
 rename debian/patches/pve/{0016-PVE-add-IOChannel-implementation-for-savevm-async.patch => 0017-PVE-add-IOChannel-implementation-for-savevm-async.patch} (100%)
 rename debian/patches/pve/{0017-PVE-add-savevm-async-for-background-state-snapshots.patch => 0018-PVE-add-savevm-async-for-background-state-snapshots.patch} (96%)
 rename debian/patches/pve/{0018-PVE-add-optional-buffer-size-to-QEMUFile.patch => 0019-PVE-add-optional-buffer-size-to-QEMUFile.patch} (97%)
 rename debian/patches/pve/{0019-PVE-block-add-the-zeroinit-block-driver-filter.patch => 0020-PVE-block-add-the-zeroinit-block-driver-filter.patch} (100%)
 rename debian/patches/pve/{0020-PVE-Add-dummy-id-command-line-parameter.patch => 0021-PVE-Add-dummy-id-command-line-parameter.patch} (100%)
 rename debian/patches/pve/{0021-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch => 0022-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch} (100%)
 rename debian/patches/pve/{0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch => 0023-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch} (100%)
 rename debian/patches/pve/{0023-PVE-monitor-disable-oob-capability.patch => 0024-PVE-monitor-disable-oob-capability.patch} (100%)
 rename debian/patches/pve/{0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch => 0025-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch} (100%)
 rename debian/patches/pve/{0025-PVE-Allow-version-code-in-machine-type.patch => 0026-PVE-Allow-version-code-in-machine-type.patch} (100%)
 rename debian/patches/pve/{0026-block-backup-move-bcs-bitmap-initialization-to-job-c.patch => 0027-block-backup-move-bcs-bitmap-initialization-to-job-c.patch} (100%)
 rename debian/patches/pve/{0027-PVE-Backup-add-vma-backup-format-code.patch => 0028-PVE-Backup-add-vma-backup-format-code.patch} (91%)
 rename debian/patches/pve/{0028-PVE-Backup-add-backup-dump-block-driver.patch => 0029-PVE-Backup-add-backup-dump-block-driver.patch} (100%)
 rename debian/patches/pve/{0037-PVE-Add-sequential-job-transaction-support.patch => 0030-PVE-Add-sequential-job-transaction-support.patch} (100%)
 delete mode 100644 debian/patches/pve/0031-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
 rename debian/patches/pve/{0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch => 0031-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch} (56%)
 rename debian/patches/pve/{0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch => 0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch} (90%)
 delete mode 100644 debian/patches/pve/0032-PVE-various-PBS-fixes.patch
 delete mode 100644 debian/patches/pve/0034-PVE-add-query_proxmox_support-QMP-command.patch
 rename debian/patches/pve/{0036-PVE-redirect-stderr-to-journal-when-daemonized.patch => 0034-PVE-redirect-stderr-to-journal-when-daemonized.patch} (100%)
 rename debian/patches/pve/{0040-PVE-Migrate-dirty-bitmap-state-via-savevm.patch => 0035-PVE-Migrate-dirty-bitmap-state-via-savevm.patch} (93%)
 delete mode 100644 debian/patches/pve/0035-PVE-add-query-pbs-bitmap-info-QMP-call.patch
 rename debian/patches/pve/{0041-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch => 0036-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch} (100%)
 rename debian/patches/pve/{0042-PVE-fall-back-to-open-iscsi-initiatorname.patch => 0037-PVE-fall-back-to-open-iscsi-initiatorname.patch} (100%)
 delete mode 100644 debian/patches/pve/0038-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
 rename debian/patches/pve/{0046-PVE-block-stream-increase-chunk-size.patch => 0038-PVE-block-stream-increase-chunk-size.patch} (100%)
 delete mode 100644 debian/patches/pve/0039-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch
 rename debian/patches/pve/{0047-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch => 0039-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch} (100%)
 rename debian/patches/pve/{0048-block-add-alloc-track-driver.patch => 0040-block-add-alloc-track-driver.patch} (97%)
 rename debian/patches/pve/{0053-Revert-block-rbd-workaround-for-ceph-issue-53784.patch => 0041-Revert-block-rbd-workaround-for-ceph-issue-53784.patch} (100%)
 rename debian/patches/pve/{0054-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch => 0042-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch} (100%)
 delete mode 100644 debian/patches/pve/0043-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch
 rename debian/patches/pve/{0055-Revert-block-rbd-implement-bdrv_co_block_status.patch => 0043-Revert-block-rbd-implement-bdrv_co_block_status.patch} (100%)
 delete mode 100644 debian/patches/pve/0044-PBS-add-master-key-support.patch
 rename debian/patches/pve/{0063-alloc-track-fix-deadlock-during-drop.patch => 0044-alloc-track-fix-deadlock-during-drop.patch} (100%)
 delete mode 100644 debian/patches/pve/0045-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch
 rename debian/patches/pve/{0066-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch => 0045-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch} (100%)
 rename debian/patches/pve/{0067-savevm-async-don-t-hold-BQL-during-setup.patch => 0046-savevm-async-don-t-hold-BQL-during-setup.patch} (100%)
 delete mode 100644 debian/patches/pve/0049-PVE-savevm-async-register-yank-before-migration_inco.patch
 delete mode 100644 debian/patches/pve/0051-vma-allow-partial-restore.patch
 delete mode 100644 debian/patches/pve/0052-pbs-namespace-support.patch
 delete mode 100644 debian/patches/pve/0056-PVE-Backup-create-jobs-correctly-cancel-in-error-sce.patch
 delete mode 100644 debian/patches/pve/0057-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
 delete mode 100644 debian/patches/pve/0058-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
 delete mode 100644 debian/patches/pve/0059-vma-create-support-64KiB-unaligned-input-images.patch
 delete mode 100644 debian/patches/pve/0060-vma-create-avoid-triggering-assertion-in-error-case.patch
 delete mode 100644 debian/patches/pve/0061-block-alloc-track-avoid-premature-break.patch
 delete mode 100644 debian/patches/pve/0062-PVE-Backup-allow-passing-max-workers-performance-set.patch
 delete mode 100644 debian/patches/pve/0064-savevm-async-optimize-querying-pending.patch
 delete mode 100644 debian/patches/pve/0065-savevm-async-also-initialize-compression-counters.patch

diff --git a/debian/patches/pve/0050-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch b/debian/patches/pve/0013-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
similarity index 97%
rename from debian/patches/pve/0050-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
rename to debian/patches/pve/0013-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
index 7701b55..debfc54 100644
--- a/debian/patches/pve/0050-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
+++ b/debian/patches/pve/0013-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
@@ -1,9 +1,9 @@
 From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
 From: Fabian Ebner <f.ebner at proxmox.com>
 Date: Mon, 7 Feb 2022 14:21:01 +0100
-Subject: [PATCH] qemu-img: dd: add -l option for loading a snapshot
+Subject: [PATCH] qemu-img dd: add -l option for loading a snapshot
 
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
+Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
 ---
  docs/tools/qemu-img.rst |  6 +++---
diff --git a/debian/patches/pve/0013-PVE-virtio-balloon-improve-query-balloon.patch b/debian/patches/pve/0014-PVE-virtio-balloon-improve-query-balloon.patch
similarity index 100%
rename from debian/patches/pve/0013-PVE-virtio-balloon-improve-query-balloon.patch
rename to debian/patches/pve/0014-PVE-virtio-balloon-improve-query-balloon.patch
diff --git a/debian/patches/pve/0014-PVE-qapi-modify-query-machines.patch b/debian/patches/pve/0015-PVE-qapi-modify-query-machines.patch
similarity index 100%
rename from debian/patches/pve/0014-PVE-qapi-modify-query-machines.patch
rename to debian/patches/pve/0015-PVE-qapi-modify-query-machines.patch
diff --git a/debian/patches/pve/0015-PVE-qapi-modify-spice-query.patch b/debian/patches/pve/0016-PVE-qapi-modify-spice-query.patch
similarity index 100%
rename from debian/patches/pve/0015-PVE-qapi-modify-spice-query.patch
rename to debian/patches/pve/0016-PVE-qapi-modify-spice-query.patch
diff --git a/debian/patches/pve/0016-PVE-add-IOChannel-implementation-for-savevm-async.patch b/debian/patches/pve/0017-PVE-add-IOChannel-implementation-for-savevm-async.patch
similarity index 100%
rename from debian/patches/pve/0016-PVE-add-IOChannel-implementation-for-savevm-async.patch
rename to debian/patches/pve/0017-PVE-add-IOChannel-implementation-for-savevm-async.patch
diff --git a/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch b/debian/patches/pve/0018-PVE-add-savevm-async-for-background-state-snapshots.patch
similarity index 96%
rename from debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
rename to debian/patches/pve/0018-PVE-add-savevm-async-for-background-state-snapshots.patch
index 7c3f2ce..d42d0f0 100644
--- a/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
+++ b/debian/patches/pve/0018-PVE-add-savevm-async-for-background-state-snapshots.patch
@@ -21,7 +21,8 @@ still opened by QEMU.
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
 Signed-off-by: Dietmar Maurer <dietmar at proxmox.com>
 Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-[improve aborting]
+[SR: improve aborting
+     register yank before migration_incoming_state_destroy]
 Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
 [FE: further improve aborting
      adapt to removal of QEMUFileOps
@@ -34,13 +35,13 @@ Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
  include/migration/snapshot.h |   2 +
  include/monitor/hmp.h        |   5 +
  migration/meson.build        |   1 +
- migration/savevm-async.c     | 535 +++++++++++++++++++++++++++++++++++
+ migration/savevm-async.c     | 548 +++++++++++++++++++++++++++++++++++
  monitor/hmp-cmds.c           |  58 ++++
  qapi/migration.json          |  34 +++
- qapi/misc.json               |  32 +++
+ qapi/misc.json               |  32 ++
  qemu-options.hx              |  12 +
  softmmu/vl.c                 |  10 +
- 11 files changed, 735 insertions(+)
+ 11 files changed, 748 insertions(+)
  create mode 100644 migration/savevm-async.c
 
 diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx
@@ -156,10 +157,10 @@ index 8a142fc7a9..a7824b5266 100644
    'threadinfo.c',
 diff --git a/migration/savevm-async.c b/migration/savevm-async.c
 new file mode 100644
-index 0000000000..24660af014
+index 0000000000..c5db9e9c1e
 --- /dev/null
 +++ b/migration/savevm-async.c
-@@ -0,0 +1,535 @@
+@@ -0,0 +1,548 @@
 +#include "qemu/osdep.h"
 +#include "migration/channel-savevm-async.h"
 +#include "migration/migration.h"
@@ -182,6 +183,7 @@ index 0000000000..24660af014
 +#include "qemu/timer.h"
 +#include "qemu/main-loop.h"
 +#include "qemu/rcu.h"
++#include "qemu/yank.h"
 +
 +/* #define DEBUG_SAVEVM_STATE */
 +
@@ -403,12 +405,19 @@ index 0000000000..24660af014
 +
 +    while (snap_state.state == SAVE_STATE_ACTIVE) {
 +        uint64_t pending_size, pend_precopy, pend_postcopy;
++        uint64_t threshold = 400 * 1000;
 +
-+        /* pending is expected to be called without iothread lock */
++        /*
++         * pending_{estimate,exact} are expected to be called without iothread
++         * lock. Similar to what is done in migration.c, call the exact variant
++         * only once pend_precopy in the estimate is below the threshold.
++         */
 +        qemu_mutex_unlock_iothread();
-+        qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
++        qemu_savevm_state_pending_estimate(&pend_precopy, &pend_postcopy);
++        if (pend_precopy <= threshold) {
++            qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
++        }
 +        qemu_mutex_lock_iothread();
-+
 +        pending_size = pend_precopy + pend_postcopy;
 +
 +        /*
@@ -420,7 +429,7 @@ index 0000000000..24660af014
 +        maxlen = blk_getlength(snap_state.target) - 100*1024*1024;
 +
 +        /* Note that there is no progress for pend_postcopy when iterating */
-+        if (pending_size - pend_postcopy > 400000 && snap_state.bs_pos + pending_size < maxlen) {
++        if (pend_precopy > threshold && snap_state.bs_pos + pending_size < maxlen) {
 +            ret = qemu_savevm_state_iterate(snap_state.file, false);
 +            if (ret < 0) {
 +                save_snapshot_error("qemu_savevm_state_iterate error %d", ret);
@@ -548,6 +557,7 @@ index 0000000000..24660af014
 +     */
 +    migrate_init(ms);
 +    memset(&ram_counters, 0, sizeof(ram_counters));
++    memset(&compression_counters, 0, sizeof(compression_counters));
 +    ms->to_dst_file = snap_state.file;
 +
 +    error_setg(&snap_state.blocker, "block device is in use by savevm");
@@ -679,6 +689,10 @@ index 0000000000..24660af014
 +    dirty_bitmap_mig_before_vm_start();
 +
 +    qemu_fclose(f);
++
++    /* state_destroy assumes a real migration which would have added a yank */
++    yank_register_instance(MIGRATION_YANK_INSTANCE, &error_abort);
++
 +    migration_incoming_state_destroy();
 +    if (ret < 0) {
 +        error_setg_errno(errp, -ret, "Error while loading VM state");
diff --git a/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch b/debian/patches/pve/0019-PVE-add-optional-buffer-size-to-QEMUFile.patch
similarity index 97%
rename from debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
rename to debian/patches/pve/0019-PVE-add-optional-buffer-size-to-QEMUFile.patch
index e638f0e..cd7b508 100644
--- a/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
+++ b/debian/patches/pve/0019-PVE-add-optional-buffer-size-to-QEMUFile.patch
@@ -192,10 +192,10 @@ index 9d0155a2a1..cc06240e8d 100644
  int qemu_fclose(QEMUFile *f);
  
 diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 24660af014..70273a2996 100644
+index c5db9e9c1e..effe6d1e0d 100644
 --- a/migration/savevm-async.c
 +++ b/migration/savevm-async.c
-@@ -372,7 +372,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
+@@ -380,7 +380,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
  
      QIOChannel *ioc = QIO_CHANNEL(qio_channel_savevm_async_new(snap_state.target,
                                                                 &snap_state.bs_pos));
@@ -204,7 +204,7 @@ index 24660af014..70273a2996 100644
  
      if (!snap_state.file) {
          error_set(errp, ERROR_CLASS_GENERIC_ERROR, "failed to open '%s'", statefile);
-@@ -504,7 +504,8 @@ int load_snapshot_from_blockdev(const char *filename, Error **errp)
+@@ -513,7 +513,8 @@ int load_snapshot_from_blockdev(const char *filename, Error **errp)
      blk_op_block_all(be, blocker);
  
      /* restore the VM state */
diff --git a/debian/patches/pve/0019-PVE-block-add-the-zeroinit-block-driver-filter.patch b/debian/patches/pve/0020-PVE-block-add-the-zeroinit-block-driver-filter.patch
similarity index 100%
rename from debian/patches/pve/0019-PVE-block-add-the-zeroinit-block-driver-filter.patch
rename to debian/patches/pve/0020-PVE-block-add-the-zeroinit-block-driver-filter.patch
diff --git a/debian/patches/pve/0020-PVE-Add-dummy-id-command-line-parameter.patch b/debian/patches/pve/0021-PVE-Add-dummy-id-command-line-parameter.patch
similarity index 100%
rename from debian/patches/pve/0020-PVE-Add-dummy-id-command-line-parameter.patch
rename to debian/patches/pve/0021-PVE-Add-dummy-id-command-line-parameter.patch
diff --git a/debian/patches/pve/0021-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch b/debian/patches/pve/0022-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
similarity index 100%
rename from debian/patches/pve/0021-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
rename to debian/patches/pve/0022-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
diff --git a/debian/patches/pve/0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch b/debian/patches/pve/0023-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
similarity index 100%
rename from debian/patches/pve/0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
rename to debian/patches/pve/0023-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
diff --git a/debian/patches/pve/0023-PVE-monitor-disable-oob-capability.patch b/debian/patches/pve/0024-PVE-monitor-disable-oob-capability.patch
similarity index 100%
rename from debian/patches/pve/0023-PVE-monitor-disable-oob-capability.patch
rename to debian/patches/pve/0024-PVE-monitor-disable-oob-capability.patch
diff --git a/debian/patches/pve/0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch b/debian/patches/pve/0025-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
similarity index 100%
rename from debian/patches/pve/0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
rename to debian/patches/pve/0025-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
diff --git a/debian/patches/pve/0025-PVE-Allow-version-code-in-machine-type.patch b/debian/patches/pve/0026-PVE-Allow-version-code-in-machine-type.patch
similarity index 100%
rename from debian/patches/pve/0025-PVE-Allow-version-code-in-machine-type.patch
rename to debian/patches/pve/0026-PVE-Allow-version-code-in-machine-type.patch
diff --git a/debian/patches/pve/0026-block-backup-move-bcs-bitmap-initialization-to-job-c.patch b/debian/patches/pve/0027-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
similarity index 100%
rename from debian/patches/pve/0026-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
rename to debian/patches/pve/0027-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
diff --git a/debian/patches/pve/0027-PVE-Backup-add-vma-backup-format-code.patch b/debian/patches/pve/0028-PVE-Backup-add-vma-backup-format-code.patch
similarity index 91%
rename from debian/patches/pve/0027-PVE-Backup-add-vma-backup-format-code.patch
rename to debian/patches/pve/0028-PVE-Backup-add-vma-backup-format-code.patch
index fd0c8e6..6a343c2 100644
--- a/debian/patches/pve/0027-PVE-Backup-add-vma-backup-format-code.patch
+++ b/debian/patches/pve/0028-PVE-Backup-add-vma-backup-format-code.patch
@@ -3,17 +3,23 @@ From: Dietmar Maurer <dietmar at proxmox.com>
 Date: Mon, 6 Apr 2020 12:16:57 +0200
 Subject: [PATCH] PVE-Backup: add vma backup format code
 
+Notes about partial restoring: skipping a certain drive is done via a
+map line of the form skip=drive-scsi0. Since in PVE, most archives are
+compressed and piped to vma for restore, it's not easily possible to
+skip reads.
+
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: create: register all streams before entering coroutines]
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
+[FE: improvements during create
+     allow partial restore]
+Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
 ---
  block/meson.build |   2 +
  meson.build       |   5 +
- vma-reader.c      | 859 ++++++++++++++++++++++++++++++++++++++++++++++
- vma-writer.c      | 791 ++++++++++++++++++++++++++++++++++++++++++
- vma.c             | 849 +++++++++++++++++++++++++++++++++++++++++++++
+ vma-reader.c      | 867 +++++++++++++++++++++++++++++++++++++++++++++
+ vma-writer.c      | 793 +++++++++++++++++++++++++++++++++++++++++
+ vma.c             | 878 ++++++++++++++++++++++++++++++++++++++++++++++
  vma.h             | 150 ++++++++
- 6 files changed, 2656 insertions(+)
+ 6 files changed, 2695 insertions(+)
  create mode 100644 vma-reader.c
  create mode 100644 vma-writer.c
  create mode 100644 vma.c
@@ -57,10 +63,10 @@ index d964e741e7..603cdb97bb 100644
    subdir('contrib/elf2dmp')
 diff --git a/vma-reader.c b/vma-reader.c
 new file mode 100644
-index 0000000000..e65f1e8415
+index 0000000000..81a891c6b1
 --- /dev/null
 +++ b/vma-reader.c
-@@ -0,0 +1,859 @@
+@@ -0,0 +1,867 @@
 +/*
 + * VMA: Virtual Machine Archive
 + *
@@ -91,6 +97,7 @@ index 0000000000..e65f1e8415
 +    bool write_zeroes;
 +    unsigned long *bitmap;
 +    int bitmap_size;
++    bool skip;
 +}  VmaRestoreState;
 +
 +struct VmaReader {
@@ -488,13 +495,14 @@ index 0000000000..e65f1e8415
 +}
 +
 +static void allocate_rstate(VmaReader *vmar,  guint8 dev_id,
-+                            BlockBackend *target, bool write_zeroes)
++                            BlockBackend *target, bool write_zeroes, bool skip)
 +{
 +    assert(vmar);
 +    assert(dev_id);
 +
 +    vmar->rstate[dev_id].target = target;
 +    vmar->rstate[dev_id].write_zeroes = write_zeroes;
++    vmar->rstate[dev_id].skip = skip;
 +
 +    int64_t size = vmar->devinfo[dev_id].size;
 +
@@ -509,28 +517,30 @@ index 0000000000..e65f1e8415
 +}
 +
 +int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id, BlockBackend *target,
-+                           bool write_zeroes, Error **errp)
++                           bool write_zeroes, bool skip, Error **errp)
 +{
 +    assert(vmar);
-+    assert(target != NULL);
++    assert(target != NULL || skip);
 +    assert(dev_id);
-+    assert(vmar->rstate[dev_id].target == NULL);
-+
-+    int64_t size = blk_getlength(target);
-+    int64_t size_diff = size - vmar->devinfo[dev_id].size;
++    assert(vmar->rstate[dev_id].target == NULL && !vmar->rstate[dev_id].skip);
 +
-+    /* storage types can have different size restrictions, so it
-+     * is not always possible to create an image with exact size.
-+     * So we tolerate a size difference up to 4MB.
-+     */
-+    if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
-+        error_setg(errp, "vma_reader_register_bs for stream %s failed - "
-+                   "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
-+                   size, vmar->devinfo[dev_id].size);
-+        return -1;
++    if (target != NULL) {
++        int64_t size = blk_getlength(target);
++        int64_t size_diff = size - vmar->devinfo[dev_id].size;
++
++        /* storage types can have different size restrictions, so it
++         * is not always possible to create an image with exact size.
++         * So we tolerate a size difference up to 4MB.
++         */
++        if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
++            error_setg(errp, "vma_reader_register_bs for stream %s failed - "
++                       "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
++                       size, vmar->devinfo[dev_id].size);
++            return -1;
++        }
 +    }
 +
-+    allocate_rstate(vmar, dev_id, target, write_zeroes);
++    allocate_rstate(vmar, dev_id, target, write_zeroes, skip);
 +
 +    return 0;
 +}
@@ -623,19 +633,23 @@ index 0000000000..e65f1e8415
 +        VmaRestoreState *rstate = &vmar->rstate[dev_id];
 +        BlockBackend *target = NULL;
 +
++        bool skip = rstate->skip;
++
 +        if (dev_id != vmar->vmstate_stream) {
 +            target = rstate->target;
-+            if (!verify && !target) {
++            if (!verify && !target && !skip) {
 +                error_setg(errp, "got wrong dev id %d", dev_id);
 +                return -1;
 +            }
 +
-+            if (vma_reader_get_bitmap(rstate, cluster_num)) {
-+                error_setg(errp, "found duplicated cluster %zd for stream %s",
-+                          cluster_num, vmar->devinfo[dev_id].devname);
-+                return -1;
++            if (!skip) {
++                if (vma_reader_get_bitmap(rstate, cluster_num)) {
++                    error_setg(errp, "found duplicated cluster %zd for stream %s",
++                              cluster_num, vmar->devinfo[dev_id].devname);
++                    return -1;
++                }
++                vma_reader_set_bitmap(rstate, cluster_num, 1);
 +            }
-+            vma_reader_set_bitmap(rstate, cluster_num, 1);
 +
 +            max_sector = vmar->devinfo[dev_id].size/BDRV_SECTOR_SIZE;
 +        } else {
@@ -681,7 +695,7 @@ index 0000000000..e65f1e8415
 +                return -1;
 +            }
 +
-+            if (!verify) {
++            if (!verify && !skip) {
 +                int nb_sectors = end_sector - sector_num;
 +                if (restore_write_data(vmar, dev_id, target, vmstate_fd,
 +                                       buf + start, sector_num, nb_sectors,
@@ -717,7 +731,7 @@ index 0000000000..e65f1e8415
 +                        return -1;
 +                    }
 +
-+                    if (!verify) {
++                    if (!verify && !skip) {
 +                        int nb_sectors = end_sector - sector_num;
 +                        if (restore_write_data(vmar, dev_id, target, vmstate_fd,
 +                                               buf + start, sector_num,
@@ -742,7 +756,7 @@ index 0000000000..e65f1e8415
 +                            vmar->partial_zero_cluster_data += zero_size;
 +                        }
 +
-+                        if (rstate->write_zeroes && !verify) {
++                        if (rstate->write_zeroes && !verify && !skip) {
 +                            if (restore_write_data(vmar, dev_id, target, vmstate_fd,
 +                                                   zero_vma_block, sector_num,
 +                                                   nb_sectors, errp) < 0) {
@@ -913,7 +927,7 @@ index 0000000000..e65f1e8415
 +
 +    for (dev_id = 1; dev_id < 255; dev_id++) {
 +        if (vma_reader_get_device_info(vmar, dev_id)) {
-+            allocate_rstate(vmar, dev_id, NULL, false);
++            allocate_rstate(vmar, dev_id, NULL, false, false);
 +        }
 +    }
 +
@@ -922,10 +936,10 @@ index 0000000000..e65f1e8415
 +
 diff --git a/vma-writer.c b/vma-writer.c
 new file mode 100644
-index 0000000000..df4b20793d
+index 0000000000..ac7da237d0
 --- /dev/null
 +++ b/vma-writer.c
-@@ -0,0 +1,791 @@
+@@ -0,0 +1,793 @@
 +/*
 + * VMA: Virtual Machine Archive
 + *
@@ -1239,6 +1253,8 @@ index 0000000000..df4b20793d
 +        }
 +
 +        if (vmaw->fd < 0) {
++            error_free(*errp);
++            *errp = NULL;
 +            error_setg(errp, "can't open file %s - %s\n", filename,
 +                       g_strerror(errno));
 +            goto err;
@@ -1719,10 +1735,10 @@ index 0000000000..df4b20793d
 +}
 diff --git a/vma.c b/vma.c
 new file mode 100644
-index 0000000000..e8dffb43e0
+index 0000000000..304f02bc84
 --- /dev/null
 +++ b/vma.c
-@@ -0,0 +1,849 @@
+@@ -0,0 +1,878 @@
 +/*
 + * VMA: Virtual Machine Archive
 + *
@@ -1863,6 +1879,7 @@ index 0000000000..e8dffb43e0
 +    char *throttling_group;
 +    char *cache;
 +    bool write_zero;
++    bool skip;
 +} RestoreMap;
 +
 +static bool try_parse_option(char **line, const char *optname, char **out, const char *inbuf) {
@@ -1970,47 +1987,61 @@ index 0000000000..e8dffb43e0
 +            char *bps = NULL;
 +            char *group = NULL;
 +            char *cache = NULL;
++            char *devname = NULL;
++            bool skip = false;
++            uint64_t bps_value = 0;
++            const char *path = NULL;
++            bool write_zero = true;
++
 +            if (!line || line[0] == '\0' || !strcmp(line, "done\n")) {
 +                break;
 +            }
 +            int len = strlen(line);
 +            if (line[len - 1] == '\n') {
 +                line[len - 1] = '\0';
-+                if (len == 1) {
++                len = len - 1;
++                if (len == 0) {
 +                    break;
 +                }
 +            }
 +
-+            while (1) {
-+                if (!try_parse_option(&line, "format", &format, inbuf) &&
-+                    !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
-+                    !try_parse_option(&line, "throttling.group", &group, inbuf) &&
-+                    !try_parse_option(&line, "cache", &cache, inbuf))
-+                {
-+                    break;
++            if (strncmp(line, "skip", 4) == 0) {
++                if (len < 6 || line[4] != '=') {
++                    g_error("read map failed - option 'skip' has no value ('%s')",
++                            inbuf);
++                } else {
++                    devname = line + 5;
++                    skip = true;
++                }
++            } else {
++                while (1) {
++                    if (!try_parse_option(&line, "format", &format, inbuf) &&
++                        !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
++                        !try_parse_option(&line, "throttling.group", &group, inbuf) &&
++                        !try_parse_option(&line, "cache", &cache, inbuf))
++                    {
++                        break;
++                    }
 +                }
-+            }
 +
-+            uint64_t bps_value = 0;
-+            if (bps) {
-+                bps_value = verify_u64(bps);
-+                g_free(bps);
-+            }
++                if (bps) {
++                    bps_value = verify_u64(bps);
++                    g_free(bps);
++                }
 +
-+            const char *path;
-+            bool write_zero;
-+            if (line[0] == '0' && line[1] == ':') {
-+                path = line + 2;
-+                write_zero = false;
-+            } else if (line[0] == '1' && line[1] == ':') {
-+                path = line + 2;
-+                write_zero = true;
-+            } else {
-+                g_error("read map failed - parse error ('%s')", inbuf);
++                if (line[0] == '0' && line[1] == ':') {
++                    path = line + 2;
++                    write_zero = false;
++                } else if (line[0] == '1' && line[1] == ':') {
++                    path = line + 2;
++                    write_zero = true;
++                } else {
++                    g_error("read map failed - parse error ('%s')", inbuf);
++                }
++
++                path = extract_devname(path, &devname, -1);
 +            }
 +
-+            char *devname = NULL;
-+            path = extract_devname(path, &devname, -1);
 +            if (!devname) {
 +                g_error("read map failed - no dev name specified ('%s')",
 +                        inbuf);
@@ -2024,6 +2055,7 @@ index 0000000000..e8dffb43e0
 +            map->throttling_group = group;
 +            map->cache = cache;
 +            map->write_zero = write_zero;
++            map->skip = skip;
 +
 +            g_hash_table_insert(devmap, map->devname, map);
 +
@@ -2053,6 +2085,7 @@ index 0000000000..e8dffb43e0
 +            const char *cache = NULL;
 +            int flags = BDRV_O_RDWR;
 +            bool write_zero = true;
++            bool skip = false;
 +
 +            BlockBackend *blk = NULL;
 +
@@ -2068,6 +2101,7 @@ index 0000000000..e8dffb43e0
 +                throttling_group = map->throttling_group;
 +                cache = map->cache;
 +                write_zero = map->write_zero;
++                skip = map->skip;
 +            } else {
 +                devfn = g_strdup_printf("%s/tmp-disk-%s.raw",
 +                                        dirname, di->devname);
@@ -2086,57 +2120,60 @@ index 0000000000..e8dffb43e0
 +                write_zero = false;
 +            }
 +
-+	    size_t devlen = strlen(devfn);
-+	    QDict *options = NULL;
-+            bool writethrough;
-+            if (format) {
-+                /* explicit format from commandline */
-+                options = qdict_new();
-+                qdict_put_str(options, "driver", format);
-+            } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
-+	               strncmp(devfn, "/dev/", 5) == 0)
-+	    {
-+                /* This part is now deprecated for PVE as well (just as qemu
-+                 * deprecated not specifying an explicit raw format, too.
-+                 */
-+		/* explicit raw format */
-+		options = qdict_new();
-+		qdict_put_str(options, "driver", "raw");
-+	    }
-+            if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
-+                g_error("invalid cache option: %s\n", cache);
-+            }
++            if (!skip) {
++                size_t devlen = strlen(devfn);
++                QDict *options = NULL;
++                bool writethrough;
++                if (format) {
++                    /* explicit format from commandline */
++                    options = qdict_new();
++                    qdict_put_str(options, "driver", format);
++                } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
++                    strncmp(devfn, "/dev/", 5) == 0)
++                {
++                    /* This part is now deprecated for PVE as well (just as qemu
++                     * deprecated not specifying an explicit raw format, too.
++                     */
++                    /* explicit raw format */
++                    options = qdict_new();
++                    qdict_put_str(options, "driver", "raw");
++                }
 +
-+	    if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
-+                g_error("can't open file %s - %s", devfn,
-+                        error_get_pretty(errp));
-+            }
++                if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
++                    g_error("invalid cache option: %s\n", cache);
++                }
 +
-+            if (cache) {
-+                blk_set_enable_write_cache(blk, !writethrough);
-+            }
++                if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
++                    g_error("can't open file %s - %s", devfn,
++                            error_get_pretty(errp));
++                }
 +
-+            if (throttling_group) {
-+                blk_io_limits_enable(blk, throttling_group);
-+            }
++                if (cache) {
++                    blk_set_enable_write_cache(blk, !writethrough);
++                }
 +
-+            if (throttling_bps) {
-+                if (!throttling_group) {
-+                    blk_io_limits_enable(blk, devfn);
++                if (throttling_group) {
++                    blk_io_limits_enable(blk, throttling_group);
 +                }
 +
-+                ThrottleConfig cfg;
-+                throttle_config_init(&cfg);
-+                cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
-+                Error *err = NULL;
-+                if (!throttle_is_valid(&cfg, &err)) {
-+                    error_report_err(err);
-+                    g_error("failed to apply throttling");
++                if (throttling_bps) {
++                    if (!throttling_group) {
++                        blk_io_limits_enable(blk, devfn);
++                    }
++
++                    ThrottleConfig cfg;
++                    throttle_config_init(&cfg);
++                    cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
++                    Error *err = NULL;
++                    if (!throttle_is_valid(&cfg, &err)) {
++                        error_report_err(err);
++                        g_error("failed to apply throttling");
++                    }
++                    blk_set_io_limits(blk, &cfg);
 +                }
-+                blk_set_io_limits(blk, &cfg);
 +            }
 +
-+            if (vma_reader_register_bs(vmar, i, blk, write_zero, &errp) < 0) {
++            if (vma_reader_register_bs(vmar, i, blk, write_zero, skip, &errp) < 0) {
 +                g_error("%s", error_get_pretty(errp));
 +            }
 +
@@ -2252,7 +2289,7 @@ index 0000000000..e8dffb43e0
 +    struct iovec iov;
 +    QEMUIOVector qiov;
 +
-+    int64_t start, end;
++    int64_t start, end, readlen;
 +    int ret = 0;
 +
 +    unsigned char *buf = blk_blockalign(job->target, VMA_CLUSTER_SIZE);
@@ -2266,8 +2303,16 @@ index 0000000000..e8dffb43e0
 +        iov.iov_len = VMA_CLUSTER_SIZE;
 +        qemu_iovec_init_external(&qiov, &iov, 1);
 +
++        if (start + 1 == end) {
++            memset(buf, 0, VMA_CLUSTER_SIZE);
++            readlen = job->len - start * VMA_CLUSTER_SIZE;
++            assert(readlen > 0 && readlen <= VMA_CLUSTER_SIZE);
++        } else {
++            readlen = VMA_CLUSTER_SIZE;
++        }
++
 +        ret = blk_co_preadv(job->target, start * VMA_CLUSTER_SIZE,
-+                            VMA_CLUSTER_SIZE, &qiov, 0);
++                            readlen, &qiov, 0);
 +        if (ret < 0) {
 +            vma_writer_set_error(job->vmaw, "read error", -1);
 +            goto out;
@@ -2574,7 +2619,7 @@ index 0000000000..e8dffb43e0
 +}
 diff --git a/vma.h b/vma.h
 new file mode 100644
-index 0000000000..c895c97f6d
+index 0000000000..1b62859165
 --- /dev/null
 +++ b/vma.h
 @@ -0,0 +1,150 @@
@@ -2722,7 +2767,7 @@ index 0000000000..c895c97f6d
 +VmaDeviceInfo *vma_reader_get_device_info(VmaReader *vmar, guint8 dev_id);
 +int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id,
 +                           BlockBackend *target, bool write_zeroes,
-+                           Error **errp);
++                           bool skip, Error **errp);
 +int vma_reader_restore(VmaReader *vmar, int vmstate_fd, bool verbose,
 +                       Error **errp);
 +int vma_reader_verify(VmaReader *vmar, bool verbose, Error **errp);
diff --git a/debian/patches/pve/0028-PVE-Backup-add-backup-dump-block-driver.patch b/debian/patches/pve/0029-PVE-Backup-add-backup-dump-block-driver.patch
similarity index 100%
rename from debian/patches/pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
rename to debian/patches/pve/0029-PVE-Backup-add-backup-dump-block-driver.patch
diff --git a/debian/patches/pve/0037-PVE-Add-sequential-job-transaction-support.patch b/debian/patches/pve/0030-PVE-Add-sequential-job-transaction-support.patch
similarity index 100%
rename from debian/patches/pve/0037-PVE-Add-sequential-job-transaction-support.patch
rename to debian/patches/pve/0030-PVE-Add-sequential-job-transaction-support.patch
diff --git a/debian/patches/pve/0031-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch b/debian/patches/pve/0031-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
deleted file mode 100644
index 1708021..0000000
--- a/debian/patches/pve/0031-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
+++ /dev/null
@@ -1,460 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Mon, 29 Jun 2020 11:06:03 +0200
-Subject: [PATCH] PVE-Backup: Add dirty-bitmap tracking for incremental backups
-
-Uses QEMU's existing MIRROR_SYNC_MODE_BITMAP and a dirty-bitmap on top
-of all backed-up drives. This will only execute the data-write callback
-for any changed chunks, the PBS rust code will reuse chunks from the
-previous index for everything it doesn't receive if reuse_index is true.
-
-On error or cancellation, remove all dirty bitmaps to ensure
-consistency.
-
-Add PBS/incremental specific information to query backup info QMP and
-HMP commands.
-
-Only supported for PBS backups.
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Dietmar Maurer <dietmar at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
----
- block/monitor/block-hmp-cmds.c |   1 +
- monitor/hmp-cmds.c             |  45 ++++++++++----
- proxmox-backup-client.c        |   3 +-
- proxmox-backup-client.h        |   1 +
- pve-backup.c                   | 104 ++++++++++++++++++++++++++++++---
- qapi/block-core.json           |  12 +++-
- 6 files changed, 143 insertions(+), 23 deletions(-)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index d50e99df26..cda5de792b 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1056,6 +1056,7 @@ void hmp_backup(Monitor *mon, const QDict *qdict)
-         NULL, // PBS fingerprint
-         NULL, // PBS backup-id
-         false, 0, // PBS backup-time
-+        false, false, // PBS incremental
-         true, dir ? BACKUP_FORMAT_DIR : BACKUP_FORMAT_VMA,
-         NULL, NULL,
-         devlist, qdict_haskey(qdict, "speed"), speed, &error);
-diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
-index 9e1bd57aeb..087161a967 100644
---- a/monitor/hmp-cmds.c
-+++ b/monitor/hmp-cmds.c
-@@ -171,19 +171,42 @@ void hmp_info_backup(Monitor *mon, const QDict *qdict)
-             monitor_printf(mon, "End time: %s", ctime(&info->end_time));
-         }
- 
--        int per = (info->has_total && info->total &&
--            info->has_transferred && info->transferred) ?
--            (info->transferred * 100)/info->total : 0;
--        int zero_per = (info->has_total && info->total &&
--                        info->has_zero_bytes && info->zero_bytes) ?
--            (info->zero_bytes * 100)/info->total : 0;
-         monitor_printf(mon, "Backup file: %s\n", info->backup_file);
-         monitor_printf(mon, "Backup uuid: %s\n", info->uuid);
--        monitor_printf(mon, "Total size: %zd\n", info->total);
--        monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
--                       info->transferred, per);
--        monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
--                       info->zero_bytes, zero_per);
-+
-+        if (!(info->has_total && info->total))  {
-+            // this should not happen normally
-+            monitor_printf(mon, "Total size: %d\n", 0);
-+        } else {
-+            bool incremental = false;
-+            size_t total_or_dirty = info->total;
-+            if (info->has_transferred) {
-+                if (info->has_dirty && info->dirty) {
-+                     if (info->dirty < info->total) {
-+                        total_or_dirty = info->dirty;
-+                        incremental = true;
-+                    }
-+                }
-+            }
-+
-+            int per = (info->transferred * 100)/total_or_dirty;
-+
-+            monitor_printf(mon, "Backup mode: %s\n", incremental ? "incremental" : "full");
-+
-+            int zero_per = (info->has_zero_bytes && info->zero_bytes) ?
-+                (info->zero_bytes * 100)/info->total : 0;
-+            monitor_printf(mon, "Total size: %zd\n", info->total);
-+            monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
-+                           info->transferred, per);
-+            monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
-+                           info->zero_bytes, zero_per);
-+
-+            if (info->has_reused) {
-+                int reused_per = (info->reused * 100)/total_or_dirty;
-+                monitor_printf(mon, "Reused bytes: %zd (%d%%)\n",
-+                               info->reused, reused_per);
-+            }
-+        }
-     }
- 
-     qapi_free_BackupStatus(info);
-diff --git a/proxmox-backup-client.c b/proxmox-backup-client.c
-index a8f6653a81..4ce7bc0b5e 100644
---- a/proxmox-backup-client.c
-+++ b/proxmox-backup-client.c
-@@ -89,6 +89,7 @@ proxmox_backup_co_register_image(
-     ProxmoxBackupHandle *pbs,
-     const char *device_name,
-     uint64_t size,
-+    bool incremental,
-     Error **errp)
- {
-     Coroutine *co = qemu_coroutine_self();
-@@ -98,7 +99,7 @@ proxmox_backup_co_register_image(
-     int pbs_res = -1;
- 
-     proxmox_backup_register_image_async(
--        pbs, device_name, size ,proxmox_backup_schedule_wake, &waker, &pbs_res, &pbs_err);
-+        pbs, device_name, size, incremental, proxmox_backup_schedule_wake, &waker, &pbs_res, &pbs_err);
-     qemu_coroutine_yield();
-     if (pbs_res < 0) {
-         if (errp) error_setg(errp, "backup register image failed: %s", pbs_err ? pbs_err : "unknown error");
-diff --git a/proxmox-backup-client.h b/proxmox-backup-client.h
-index 1dda8b7d8f..8cbf645b2c 100644
---- a/proxmox-backup-client.h
-+++ b/proxmox-backup-client.h
-@@ -32,6 +32,7 @@ proxmox_backup_co_register_image(
-     ProxmoxBackupHandle *pbs,
-     const char *device_name,
-     uint64_t size,
-+    bool incremental,
-     Error **errp);
- 
- 
-diff --git a/pve-backup.c b/pve-backup.c
-index f77892a509..d9942a14a1 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -7,6 +7,7 @@
- #include "sysemu/blockdev.h"
- #include "block/block_int-global-state.h"
- #include "block/blockjob.h"
-+#include "block/dirty-bitmap.h"
- #include "qapi/qapi-commands-block.h"
- #include "qapi/qmp/qerror.h"
- 
-@@ -29,6 +30,8 @@
-  *
-  */
- 
-+const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
-+
- static struct PVEBackupState {
-     struct {
-         // Everithing accessed from qmp_backup_query command is protected using lock
-@@ -40,7 +43,9 @@ static struct PVEBackupState {
-         uuid_t uuid;
-         char uuid_str[37];
-         size_t total;
-+        size_t dirty;
-         size_t transferred;
-+        size_t reused;
-         size_t zero_bytes;
-     } stat;
-     int64_t speed;
-@@ -67,6 +72,7 @@ typedef struct PVEBackupDevInfo {
-     uint8_t dev_id;
-     bool completed;
-     char targetfile[PATH_MAX];
-+    BdrvDirtyBitmap *bitmap;
-     BlockDriverState *target;
- } PVEBackupDevInfo;
- 
-@@ -108,11 +114,12 @@ static bool pvebackup_error_or_canceled(void)
-     return error_or_canceled;
- }
- 
--static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes)
-+static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes, size_t reused)
- {
-     qemu_mutex_lock(&backup_state.stat.lock);
-     backup_state.stat.zero_bytes += zero_bytes;
-     backup_state.stat.transferred += transferred;
-+    backup_state.stat.reused += reused;
-     qemu_mutex_unlock(&backup_state.stat.lock);
- }
- 
-@@ -151,7 +158,8 @@ pvebackup_co_dump_pbs_cb(
-         pvebackup_propagate_error(local_err);
-         return pbs_res;
-     } else {
--        pvebackup_add_transfered_bytes(size, !buf ? size : 0);
-+        size_t reused = (pbs_res == 0) ? size : 0;
-+        pvebackup_add_transfered_bytes(size, !buf ? size : 0, reused);
-     }
- 
-     return size;
-@@ -211,11 +219,11 @@ pvebackup_co_dump_vma_cb(
-         } else {
-             if (remaining >= VMA_CLUSTER_SIZE) {
-                 assert(ret == VMA_CLUSTER_SIZE);
--                pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes);
-+                pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes, 0);
-                 remaining -= VMA_CLUSTER_SIZE;
-             } else {
-                 assert(ret == remaining);
--                pvebackup_add_transfered_bytes(remaining, zero_bytes);
-+                pvebackup_add_transfered_bytes(remaining, zero_bytes, 0);
-                 remaining = 0;
-             }
-         }
-@@ -251,6 +259,18 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
-             if (local_err != NULL) {
-                 pvebackup_propagate_error(local_err);
-             }
-+        } else {
-+            // on error or cancel we cannot ensure synchronization of dirty
-+            // bitmaps with backup server, so remove all and do full backup next
-+            GList *l = backup_state.di_list;
-+            while (l) {
-+                PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+                l = g_list_next(l);
-+
-+                if (di->bitmap) {
-+                    bdrv_release_dirty_bitmap(di->bitmap);
-+                }
-+            }
-         }
- 
-         proxmox_backup_disconnect(backup_state.pbs);
-@@ -306,6 +326,12 @@ static void pvebackup_complete_cb(void *opaque, int ret)
-     // remove self from job queue
-     backup_state.di_list = g_list_remove(backup_state.di_list, di);
- 
-+    if (di->bitmap && ret < 0) {
-+        // on error or cancel we cannot ensure synchronization of dirty
-+        // bitmaps with backup server, so remove all and do full backup next
-+        bdrv_release_dirty_bitmap(di->bitmap);
-+    }
-+
-     g_free(di);
- 
-     qemu_mutex_unlock(&backup_state.backup_mutex);
-@@ -470,12 +496,18 @@ static bool create_backup_jobs(void) {
- 
-         assert(di->target != NULL);
- 
-+        MirrorSyncMode sync_mode = MIRROR_SYNC_MODE_FULL;
-+        BitmapSyncMode bitmap_mode = BITMAP_SYNC_MODE_NEVER;
-+        if (di->bitmap) {
-+            sync_mode = MIRROR_SYNC_MODE_BITMAP;
-+            bitmap_mode = BITMAP_SYNC_MODE_ON_SUCCESS;
-+        }
-         AioContext *aio_context = bdrv_get_aio_context(di->bs);
-         aio_context_acquire(aio_context);
- 
-         BlockJob *job = backup_job_create(
--            NULL, di->bs, di->target, backup_state.speed, MIRROR_SYNC_MODE_FULL, NULL,
--            BITMAP_SYNC_MODE_NEVER, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
-+            NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
-+            bitmap_mode, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
-             JOB_DEFAULT, pvebackup_complete_cb, di, NULL, &local_err);
- 
-         aio_context_release(aio_context);
-@@ -521,6 +553,8 @@ typedef struct QmpBackupTask {
-     bool has_backup_time;
-     const char *fingerprint;
-     int64_t backup_time;
-+    bool has_use_dirty_bitmap;
-+    bool use_dirty_bitmap;
-     bool has_format;
-     BackupFormat format;
-     const char *config_file;
-@@ -609,6 +643,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     }
- 
-     size_t total = 0;
-+    size_t dirty = 0;
- 
-     l = di_list;
-     while (l) {
-@@ -646,6 +681,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-         int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
-         firewall_name = "fw.conf";
- 
-+        bool use_dirty_bitmap = task->has_use_dirty_bitmap && task->use_dirty_bitmap;
-+
-         char *pbs_err = NULL;
-         pbs = proxmox_backup_new(
-             task->backup_file,
-@@ -665,7 +702,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             goto err;
-         }
- 
--        if (proxmox_backup_co_connect(pbs, task->errp) < 0)
-+        int connect_result = proxmox_backup_co_connect(pbs, task->errp);
-+        if (connect_result < 0)
-             goto err;
- 
-         /* register all devices */
-@@ -676,9 +714,40 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
- 
-             const char *devname = bdrv_get_device_name(di->bs);
- 
--            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, task->errp);
--            if (dev_id < 0)
-+            BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
-+            bool expect_only_dirty = false;
-+
-+            if (use_dirty_bitmap) {
-+                if (bitmap == NULL) {
-+                    bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, task->errp);
-+                    if (!bitmap) {
-+                        goto err;
-+                    }
-+                } else {
-+                    expect_only_dirty = proxmox_backup_check_incremental(pbs, devname, di->size) != 0;
-+                }
-+
-+                if (expect_only_dirty) {
-+                    dirty += bdrv_get_dirty_count(bitmap);
-+                } else {
-+                    /* mark entire bitmap as dirty to make full backup */
-+                    bdrv_set_dirty_bitmap(bitmap, 0, di->size);
-+                    dirty += di->size;
-+                }
-+                di->bitmap = bitmap;
-+            } else {
-+                dirty += di->size;
-+
-+                /* after a full backup the old dirty bitmap is invalid anyway */
-+                if (bitmap != NULL) {
-+                    bdrv_release_dirty_bitmap(bitmap);
-+                }
-+            }
-+
-+            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, task->errp);
-+            if (dev_id < 0) {
-                 goto err;
-+            }
- 
-             if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, task->errp))) {
-                 goto err;
-@@ -687,6 +756,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             di->dev_id = dev_id;
-         }
-     } else if (format == BACKUP_FORMAT_VMA) {
-+        dirty = total;
-+
-         vmaw = vma_writer_create(task->backup_file, uuid, &local_err);
-         if (!vmaw) {
-             if (local_err) {
-@@ -714,6 +785,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             }
-         }
-     } else if (format == BACKUP_FORMAT_DIR) {
-+        dirty = total;
-+
-         if (mkdir(task->backup_file, 0640) != 0) {
-             error_setg_errno(task->errp, errno, "can't create directory '%s'\n",
-                              task->backup_file);
-@@ -786,8 +859,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
- 
-     backup_state.stat.total = total;
-+    backup_state.stat.dirty = dirty;
-     backup_state.stat.transferred = 0;
-     backup_state.stat.zero_bytes = 0;
-+    backup_state.stat.reused = format == BACKUP_FORMAT_PBS && dirty >= total ? 0 : total - dirty;
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-@@ -811,6 +886,10 @@ err:
-         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-         l = g_list_next(l);
- 
-+        if (di->bitmap) {
-+            bdrv_release_dirty_bitmap(di->bitmap);
-+        }
-+
-         if (di->target) {
-             bdrv_co_unref(di->target);
-         }
-@@ -852,6 +931,7 @@ UuidInfo *qmp_backup(
-     const char *fingerprint,
-     const char *backup_id,
-     bool has_backup_time, int64_t backup_time,
-+    bool has_use_dirty_bitmap, bool use_dirty_bitmap,
-     bool has_format, BackupFormat format,
-     const char *config_file,
-     const char *firewall_file,
-@@ -866,6 +946,8 @@ UuidInfo *qmp_backup(
-         .backup_id = backup_id,
-         .has_backup_time = has_backup_time,
-         .backup_time = backup_time,
-+        .has_use_dirty_bitmap = has_use_dirty_bitmap,
-+        .use_dirty_bitmap = use_dirty_bitmap,
-         .has_format = has_format,
-         .format = format,
-         .config_file = config_file,
-@@ -927,10 +1009,14 @@ BackupStatus *qmp_query_backup(Error **errp)
- 
-     info->has_total = true;
-     info->total = backup_state.stat.total;
-+    info->has_dirty = true;
-+    info->dirty = backup_state.stat.dirty;
-     info->has_zero_bytes = true;
-     info->zero_bytes = backup_state.stat.zero_bytes;
-     info->has_transferred = true;
-     info->transferred = backup_state.stat.transferred;
-+    info->has_reused = true;
-+    info->reused = backup_state.stat.reused;
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 16fb4c5ea0..92f90a898a 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -848,8 +848,13 @@
- #
- # @total: total amount of bytes involved in the backup process
- #
-+# @dirty: with incremental mode (PBS) this is the amount of bytes involved
-+#         in the backup process which are marked dirty.
-+#
- # @transferred: amount of bytes already backed up.
- #
-+# @reused: amount of bytes reused due to deduplication.
-+#
- # @zero-bytes: amount of 'zero' bytes detected.
- #
- # @start-time: time (epoch) when backup job started.
-@@ -862,8 +867,8 @@
- #
- ##
- { 'struct': 'BackupStatus',
--  'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int',
--           '*transferred': 'int', '*zero-bytes': 'int',
-+  'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int', '*dirty': 'int',
-+           '*transferred': 'int', '*zero-bytes': 'int', '*reused': 'int',
-            '*start-time': 'int', '*end-time': 'int',
-            '*backup-file': 'str', '*uuid': 'str' } }
- 
-@@ -906,6 +911,8 @@
- #
- # @backup-time: backup timestamp (Unix epoch, required for format 'pbs')
- #
-+# @use-dirty-bitmap: use dirty bitmap to detect incremental changes since last job (optional for format 'pbs')
-+#
- # Returns: the uuid of the backup job
- #
- ##
-@@ -916,6 +923,7 @@
-                                     '*fingerprint': 'str',
-                                     '*backup-id': 'str',
-                                     '*backup-time': 'int',
-+                                    '*use-dirty-bitmap': 'bool',
-                                     '*format': 'BackupFormat',
-                                     '*config-file': 'str',
-                                     '*firewall-file': 'str',
diff --git a/debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch b/debian/patches/pve/0031-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
similarity index 56%
rename from debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
rename to debian/patches/pve/0031-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
index 3aae91f..99b2a08 100644
--- a/debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
+++ b/debian/patches/pve/0031-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
@@ -1,32 +1,102 @@
 From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
 From: Dietmar Maurer <dietmar at proxmox.com>
 Date: Mon, 6 Apr 2020 12:16:59 +0200
-Subject: [PATCH] PVE-Backup: proxmox backup patches for qemu
+Subject: [PATCH] PVE-Backup: Proxmox backup patches for QEMU
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+For PBS, using dirty bitmaps is supported via QEMU's
+MIRROR_SYNC_MODE_BITMAP. When the feature is used, the data-write
+callback is only executed for changed chunks; the PBS Rust code
+will reuse chunks from the previous index for everything it doesn't
+receive if reuse_index is true. On error or cancellation, all dirty
+bitmaps are removed to ensure consistency.
+
+By using a JobTxn, we can sync dirty bitmaps only when *all* jobs were
+successful - meaning we don't need to remove them when the backup
+fails, since QEMU's BITMAP_SYNC_MODE_ON_SUCCESS will now handle that
+for us. A sequential transaction is used, so drives will be backed up
+one after the other.
+
+The backup and backup-cancel QMP calls are coroutines. This has the
+benefit that calls are asynchronous to the main loop, i.e. long
+running operations like connecting to a PBS server will no longer hang
+the VM.
+
+backup_job_create() and job_cancel_sync() cannot be run from a
+coroutine and require an acquired AioContext, so the job creation and
+canceling are extracted as bottom halves and called from the
+respective QMP coroutines.
+
+To communicate the finishing state, a dedicated property is used for
+query-backup: 'finishing'. A dedicated state is explicitly not used,
+since that would break compatibility with older qemu-server versions.
+
+The first call to job_cancel_sync() will cancel and free all jobs in
+the transaction, but it is necessary to pick a job that is:
+1. still referenced. For this, there is a job_ref directly after job
+creation paired with a job_unref in cleanup paths.
+2. not yet finalized. In job_cancel_bh(), the first job that's not
+completed yet is used. This is not necessarily the first job in the
+list, because pvebackup_co_complete_stream() might not yet have
+removed a completed job when job_cancel_bh() runs. Why even bother
+with the bottom half at all and not use job_cancel() in
+qmp_backup_cancel() directly? The reason is that qmp_backup_cancel()
+is a coroutine, so it will hang when reaching AIO_WAIT_WHILE() and
+job_cancel() might end up calling that.
+
+Regarding the BackupPerf performance settings: for now, only the
+max-workers setting is exposed, because:
+1. use-copy-range would need to be implemented in backup-dump and the
+feature was actually turned off by default in QEMU itself, because it
+didn't provide the expected benefit, see commit 6a30f663d4 ("qapi:
+backup: disable copy_range by default").
+2. max-chunk: enforced to be at least the backup cluster size (4 MiB
+for PBS) and otherwise the maximum of the source and target cluster sizes.
+And block-copy has a maximum buffer size of 1 MiB, so setting a larger
+max-chunk doesn't even have an effect. To make the setting sensibly
+usable, the check would need to be removed and optionally the
+block-copy max buffer size would need to be bumped. I tried doing just
+that, and tested different source/target combinations with different
+max-chunk settings, but there were no noticeable improvements over the
+default "unlimited" (resulting in 1 MiB for block-copy).
 
 Signed-off-by: Dietmar Maurer <dietmar at proxmox.com>
-[PVE-Backup: avoid coroutines to fix AIO freeze, cleanups]
+[SR: Add dirty-bitmap tracking for incremental backups
+     Add query_proxmox_support and query-pbs-bitmap-info QMP calls
+     Use a transaction to synchronize job states
+     Co-routine and async-related improvements
+     Improve finishing backups/cleanups
+     Various other improvements]
 Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
+[FG: add master key support]
+Signed-off-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
+[WB: add PBS namespace support]
+Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
 [FE: add new force parameter to job_cancel_sync calls
      adapt for new job lock mechanism replacing AioContext locks
-     adapt to QAPI changes]
+     adapt to QAPI changes
+     improve canceling
+     allow passing max-workers setting]
 Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
 ---
- block/meson.build              |   5 +
- block/monitor/block-hmp-cmds.c |  33 ++
- blockdev.c                     |   1 +
- hmp-commands-info.hx           |  14 +
- hmp-commands.hx                |  29 +
- include/monitor/hmp.h          |   3 +
- meson.build                    |   1 +
- monitor/hmp-cmds.c             |  45 ++
- proxmox-backup-client.c        | 176 +++++++
- proxmox-backup-client.h        |  59 +++
- pve-backup.c                   | 938 +++++++++++++++++++++++++++++++++
- qapi/block-core.json           | 109 ++++
- qapi/common.json               |  13 +
- qapi/machine.json              |  15 +-
- 14 files changed, 1428 insertions(+), 13 deletions(-)
+ block/meson.build              |    5 +
+ block/monitor/block-hmp-cmds.c |   40 ++
+ blockdev.c                     |    1 +
+ hmp-commands-info.hx           |   14 +
+ hmp-commands.hx                |   31 +
+ include/monitor/hmp.h          |    3 +
+ meson.build                    |    1 +
+ monitor/hmp-cmds.c             |   72 +++
+ proxmox-backup-client.c        |  146 +++++
+ proxmox-backup-client.h        |   60 ++
+ pve-backup.c                   | 1097 ++++++++++++++++++++++++++++++++
+ qapi/block-core.json           |  226 +++++++
+ qapi/common.json               |   13 +
+ qapi/machine.json              |   15 +-
+ 14 files changed, 1711 insertions(+), 13 deletions(-)
  create mode 100644 proxmox-backup-client.c
  create mode 100644 proxmox-backup-client.h
  create mode 100644 pve-backup.c
@@ -48,15 +118,15 @@ index f580f95395..5bcebb934b 100644
  softmmu_ss.add(when: 'CONFIG_TCG', if_true: files('blkreplay.c'))
  softmmu_ss.add(files('block-ram-registrar.c'))
 diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index ca2599de44..d50e99df26 100644
+index ca2599de44..636509b83e 100644
 --- a/block/monitor/block-hmp-cmds.c
 +++ b/block/monitor/block-hmp-cmds.c
-@@ -1029,3 +1029,36 @@ void hmp_change_medium(Monitor *mon, const char *device, const char *target,
+@@ -1029,3 +1029,43 @@ void hmp_change_medium(Monitor *mon, const char *device, const char *target,
      qmp_blockdev_change_medium(device, NULL, target, arg, true, force,
                                 !!read_only, read_only_mode, errp);
  }
 +
-+void hmp_backup_cancel(Monitor *mon, const QDict *qdict)
++void coroutine_fn hmp_backup_cancel(Monitor *mon, const QDict *qdict)
 +{
 +    Error *error = NULL;
 +
@@ -65,7 +135,7 @@ index ca2599de44..d50e99df26 100644
 +    hmp_handle_error(mon, error);
 +}
 +
-+void hmp_backup(Monitor *mon, const QDict *qdict)
++void coroutine_fn hmp_backup(Monitor *mon, const QDict *qdict)
 +{
 +    Error *error = NULL;
 +
@@ -79,12 +149,19 @@ index ca2599de44..d50e99df26 100644
 +        NULL, // PBS password
 +        NULL, // PBS keyfile
 +        NULL, // PBS key_password
++        NULL, // PBS master_keyfile
 +        NULL, // PBS fingerprint
++        NULL, // PBS backup-ns
 +        NULL, // PBS backup-id
 +        false, 0, // PBS backup-time
++        false, false, // PBS use-dirty-bitmap
++        false, false, // PBS compress
++        false, false, // PBS encrypt
 +        true, dir ? BACKUP_FORMAT_DIR : BACKUP_FORMAT_VMA,
 +        NULL, NULL,
-+        devlist, qdict_haskey(qdict, "speed"), speed, &error);
++        devlist, qdict_haskey(qdict, "speed"), speed,
++        false, 0, // BackupPerf max-workers
++        &error);
 +
 +    hmp_handle_error(mon, error);
 +}
@@ -126,10 +203,10 @@ index a166bff3d5..4b75966c2e 100644
      {
          .name       = "usernet",
 diff --git a/hmp-commands.hx b/hmp-commands.hx
-index b66d7fc4ab..9b6b8e2e9c 100644
+index b66d7fc4ab..896430dae8 100644
 --- a/hmp-commands.hx
 +++ b/hmp-commands.hx
-@@ -101,6 +101,35 @@ ERST
+@@ -101,6 +101,37 @@ ERST
  SRST
  ``block_stream``
    Copy data from a backing file into a block device.
@@ -143,6 +220,7 @@ index b66d7fc4ab..9b6b8e2e9c 100644
 +		    "\n\t\t\t Use -d to dump data into a directory instead"
 +		    "\n\t\t\t of using VMA format.",
 +        .cmd = hmp_backup,
++        .coroutine  = true,
 +    },
 +
 +SRST
@@ -156,6 +234,7 @@ index b66d7fc4ab..9b6b8e2e9c 100644
 +        .params     = "",
 +        .help       = "cancel the current VM backup",
 +        .cmd        = hmp_backup_cancel,
++        .coroutine  = true,
 +    },
 +
 +SRST
@@ -199,7 +278,7 @@ index 603cdb97bb..d307d8eabf 100644
  # libselinux
  selinux = dependency('libselinux',
 diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
-index 435f9334f9..9e1bd57aeb 100644
+index 435f9334f9..9a67e544ce 100644
 --- a/monitor/hmp-cmds.c
 +++ b/monitor/hmp-cmds.c
 @@ -21,6 +21,7 @@
@@ -210,13 +289,14 @@ index 435f9334f9..9e1bd57aeb 100644
  #include "qapi/qapi-commands-control.h"
  #include "qapi/qapi-commands-migration.h"
  #include "qapi/qapi-commands-misc.h"
-@@ -144,6 +145,50 @@ void hmp_sync_profile(Monitor *mon, const QDict *qdict)
+@@ -144,6 +145,77 @@ void hmp_sync_profile(Monitor *mon, const QDict *qdict)
      }
  }
  
 +void hmp_info_backup(Monitor *mon, const QDict *qdict)
 +{
 +    BackupStatus *info;
++    PBSBitmapInfoList *bitmap_info;
 +
 +    info = qmp_query_backup(NULL);
 +
@@ -240,19 +320,45 @@ index 435f9334f9..9e1bd57aeb 100644
 +            monitor_printf(mon, "End time: %s", ctime(&info->end_time));
 +        }
 +
-+        int per = (info->has_total && info->total &&
-+            info->has_transferred && info->transferred) ?
-+            (info->transferred * 100)/info->total : 0;
-+        int zero_per = (info->has_total && info->total &&
-+                        info->has_zero_bytes && info->zero_bytes) ?
-+            (info->zero_bytes * 100)/info->total : 0;
 +        monitor_printf(mon, "Backup file: %s\n", info->backup_file);
 +        monitor_printf(mon, "Backup uuid: %s\n", info->uuid);
-+        monitor_printf(mon, "Total size: %zd\n", info->total);
-+        monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
-+                       info->transferred, per);
-+        monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
-+                       info->zero_bytes, zero_per);
++
++        if (!(info->has_total && info->total))  {
++            // this should not happen normally
++            monitor_printf(mon, "Total size: %d\n", 0);
++        } else {
++            size_t total_or_dirty = info->total;
++            bitmap_info = qmp_query_pbs_bitmap_info(NULL);
++
++            while (bitmap_info) {
++                monitor_printf(mon, "Drive %s:\n",
++                        bitmap_info->value->drive);
++                monitor_printf(mon, "  bitmap action: %s\n",
++                        PBSBitmapAction_str(bitmap_info->value->action));
++                monitor_printf(mon, "  size: %zd\n",
++                        bitmap_info->value->size);
++                monitor_printf(mon, "  dirty: %zd\n",
++                        bitmap_info->value->dirty);
++                bitmap_info = bitmap_info->next;
++            }
++
++            qapi_free_PBSBitmapInfoList(bitmap_info);
++
++            int zero_per = (info->has_zero_bytes && info->zero_bytes) ?
++                (info->zero_bytes * 100)/info->total : 0;
++            monitor_printf(mon, "Total size: %zd\n", info->total);
++            int trans_per = (info->transferred * 100)/total_or_dirty;
++            monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
++                           info->transferred, trans_per);
++            monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
++                           info->zero_bytes, zero_per);
++
++            if (info->has_reused) {
++                int reused_per = (info->reused * 100)/total_or_dirty;
++                monitor_printf(mon, "Reused bytes: %zd (%d%%)\n",
++                               info->reused, reused_per);
++            }
++        }
 +    }
 +
 +    qapi_free_BackupStatus(info);
@@ -263,10 +369,10 @@ index 435f9334f9..9e1bd57aeb 100644
      Error *err = NULL;
 diff --git a/proxmox-backup-client.c b/proxmox-backup-client.c
 new file mode 100644
-index 0000000000..a8f6653a81
+index 0000000000..0923037dec
 --- /dev/null
 +++ b/proxmox-backup-client.c
-@@ -0,0 +1,176 @@
+@@ -0,0 +1,146 @@
 +#include "proxmox-backup-client.h"
 +#include "qemu/main-loop.h"
 +#include "block/aio-wait.h"
@@ -274,37 +380,6 @@ index 0000000000..a8f6653a81
 +
 +/* Proxmox Backup Server client bindings using coroutines */
 +
-+typedef struct BlockOnCoroutineWrapper {
-+    AioContext *ctx;
-+    CoroutineEntry *entry;
-+    void *entry_arg;
-+    bool finished;
-+} BlockOnCoroutineWrapper;
-+
-+static void coroutine_fn block_on_coroutine_wrapper(void *opaque)
-+{
-+    BlockOnCoroutineWrapper *wrapper = opaque;
-+    wrapper->entry(wrapper->entry_arg);
-+    wrapper->finished = true;
-+    aio_wait_kick();
-+}
-+
-+void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg)
-+{
-+    assert(!qemu_in_coroutine());
-+
-+    AioContext *ctx = qemu_get_current_aio_context();
-+    BlockOnCoroutineWrapper wrapper = {
-+        .finished = false,
-+        .entry = entry,
-+        .entry_arg = entry_arg,
-+        .ctx = ctx,
-+    };
-+    Coroutine *wrapper_co = qemu_coroutine_create(block_on_coroutine_wrapper, &wrapper);
-+    aio_co_enter(ctx, wrapper_co);
-+    AIO_WAIT_WHILE(ctx, !wrapper.finished);
-+}
-+
 +// This is called from another thread, so we use aio_co_schedule()
 +static void proxmox_backup_schedule_wake(void *data) {
 +    CoCtxData *waker = (CoCtxData *)data;
@@ -358,6 +433,7 @@ index 0000000000..a8f6653a81
 +    ProxmoxBackupHandle *pbs,
 +    const char *device_name,
 +    uint64_t size,
++    bool incremental,
 +    Error **errp)
 +{
 +    Coroutine *co = qemu_coroutine_self();
@@ -367,7 +443,7 @@ index 0000000000..a8f6653a81
 +    int pbs_res = -1;
 +
 +    proxmox_backup_register_image_async(
-+        pbs, device_name, size ,proxmox_backup_schedule_wake, &waker, &pbs_res, &pbs_err);
++        pbs, device_name, size, incremental, proxmox_backup_schedule_wake, &waker, &pbs_res, &pbs_err);
 +    qemu_coroutine_yield();
 +    if (pbs_res < 0) {
 +        if (errp) error_setg(errp, "backup register image failed: %s", pbs_err ? pbs_err : "unknown error");
@@ -445,10 +521,10 @@ index 0000000000..a8f6653a81
 +}
 diff --git a/proxmox-backup-client.h b/proxmox-backup-client.h
 new file mode 100644
-index 0000000000..1dda8b7d8f
+index 0000000000..8cbf645b2c
 --- /dev/null
 +++ b/proxmox-backup-client.h
-@@ -0,0 +1,59 @@
+@@ -0,0 +1,60 @@
 +#ifndef PROXMOX_BACKUP_CLIENT_H
 +#define PROXMOX_BACKUP_CLIENT_H
 +
@@ -483,6 +559,7 @@ index 0000000000..1dda8b7d8f
 +    ProxmoxBackupHandle *pbs,
 +    const char *device_name,
 +    uint64_t size,
++    bool incremental,
 +    Error **errp);
 +
 +
@@ -510,10 +587,10 @@ index 0000000000..1dda8b7d8f
 +#endif /* PROXMOX_BACKUP_CLIENT_H */
 diff --git a/pve-backup.c b/pve-backup.c
 new file mode 100644
-index 0000000000..f77892a509
+index 0000000000..dd72ee0ed6
 --- /dev/null
 +++ b/pve-backup.c
-@@ -0,0 +1,938 @@
+@@ -0,0 +1,1097 @@
 +#include "proxmox-backup-client.h"
 +#include "vma.h"
 +
@@ -523,8 +600,12 @@ index 0000000000..f77892a509
 +#include "sysemu/blockdev.h"
 +#include "block/block_int-global-state.h"
 +#include "block/blockjob.h"
++#include "block/dirty-bitmap.h"
 +#include "qapi/qapi-commands-block.h"
 +#include "qapi/qmp/qerror.h"
++#include "qemu/cutils.h"
++
++#include <proxmox-backup-qemu.h>
 +
 +/* PVE backup state and related function */
 +
@@ -545,9 +626,13 @@ index 0000000000..f77892a509
 + *
 + */
 +
++const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
++
 +static struct PVEBackupState {
 +    struct {
-+        // Everithing accessed from qmp_backup_query command is protected using lock
++        // Everything accessed from qmp_backup_query command is protected using
++        // this lock. Do NOT hold this lock for long times, as it is sometimes
++        // acquired from coroutines, and thus any wait time may block the guest.
 +        QemuMutex lock;
 +        Error *error;
 +        time_t start_time;
@@ -556,21 +641,28 @@ index 0000000000..f77892a509
 +        uuid_t uuid;
 +        char uuid_str[37];
 +        size_t total;
++        size_t dirty;
 +        size_t transferred;
++        size_t reused;
 +        size_t zero_bytes;
++        GList *bitmap_list;
++        bool finishing;
++        bool starting;
 +    } stat;
 +    int64_t speed;
++    BackupPerf perf;
 +    VmaWriter *vmaw;
 +    ProxmoxBackupHandle *pbs;
 +    GList *di_list;
-+    QemuMutex backup_mutex;
++    JobTxn *txn;
++    CoMutex backup_mutex;
 +    CoMutex dump_callback_mutex;
 +} backup_state;
 +
 +static void pvebackup_init(void)
 +{
 +    qemu_mutex_init(&backup_state.stat.lock);
-+    qemu_mutex_init(&backup_state.backup_mutex);
++    qemu_co_mutex_init(&backup_state.backup_mutex);
 +    qemu_co_mutex_init(&backup_state.dump_callback_mutex);
 +}
 +
@@ -580,34 +672,15 @@ index 0000000000..f77892a509
 +typedef struct PVEBackupDevInfo {
 +    BlockDriverState *bs;
 +    size_t size;
++    uint64_t block_size;
 +    uint8_t dev_id;
-+    bool completed;
++    int completed_ret; // INT_MAX if not completed
 +    char targetfile[PATH_MAX];
++    BdrvDirtyBitmap *bitmap;
 +    BlockDriverState *target;
++    BlockJob *job;
 +} PVEBackupDevInfo;
 +
-+static void pvebackup_run_next_job(void);
-+
-+static BlockJob *
-+lookup_active_block_job(PVEBackupDevInfo *di)
-+{
-+    if (!di->completed && di->bs) {
-+        WITH_JOB_LOCK_GUARD() {
-+            for (BlockJob *job = block_job_next_locked(NULL); job; job = block_job_next_locked(job)) {
-+                if (job->job.driver->job_type != JOB_TYPE_BACKUP) {
-+                    continue;
-+                }
-+
-+                BackupBlockJob *bjob = container_of(job, BackupBlockJob, common);
-+                if (bjob && bjob->source_bs == di->bs) {
-+                    return job;
-+                }
-+            }
-+        }
-+    }
-+    return NULL;
-+}
-+
 +static void pvebackup_propagate_error(Error *err)
 +{
 +    qemu_mutex_lock(&backup_state.stat.lock);
@@ -624,11 +697,12 @@ index 0000000000..f77892a509
 +    return error_or_canceled;
 +}
 +
-+static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes)
++static void pvebackup_add_transfered_bytes(size_t transferred, size_t zero_bytes, size_t reused)
 +{
 +    qemu_mutex_lock(&backup_state.stat.lock);
 +    backup_state.stat.zero_bytes += zero_bytes;
 +    backup_state.stat.transferred += transferred;
++    backup_state.stat.reused += reused;
 +    qemu_mutex_unlock(&backup_state.stat.lock);
 +}
 +
@@ -648,10 +722,13 @@ index 0000000000..f77892a509
 +    PVEBackupDevInfo *di = opaque;
 +
 +    assert(backup_state.pbs);
++    assert(buf);
 +
 +    Error *local_err = NULL;
 +    int pbs_res = -1;
 +
++    bool is_zero_block = size == di->block_size && buffer_is_zero(buf, size);
++
 +    qemu_co_mutex_lock(&backup_state.dump_callback_mutex);
 +
 +    // avoid deadlock if job is cancelled
@@ -660,16 +737,29 @@ index 0000000000..f77892a509
 +        return -1;
 +    }
 +
-+    pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id, buf, start, size, &local_err);
-+    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
++    uint64_t transferred = 0;
++    uint64_t reused = 0;
++    while (transferred < size) {
++        uint64_t left = size - transferred;
++        uint64_t to_transfer = left < di->block_size ? left : di->block_size;
 +
-+    if (pbs_res < 0) {
-+        pvebackup_propagate_error(local_err);
-+        return pbs_res;
-+    } else {
-+        pvebackup_add_transfered_bytes(size, !buf ? size : 0);
++        pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id,
++            is_zero_block ? NULL : buf + transferred, start + transferred,
++            to_transfer, &local_err);
++        transferred += to_transfer;
++
++        if (pbs_res < 0) {
++            pvebackup_propagate_error(local_err);
++            qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
++            return pbs_res;
++        }
++
++        reused += pbs_res == 0 ? to_transfer : 0;
 +    }
 +
++    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
++    pvebackup_add_transfered_bytes(size, is_zero_block ? size : 0, reused);
++
 +    return size;
 +}
 +
@@ -690,6 +780,7 @@ index 0000000000..f77892a509
 +    int ret = -1;
 +
 +    assert(backup_state.vmaw);
++    assert(buf);
 +
 +    uint64_t remaining = size;
 +
@@ -716,9 +807,7 @@ index 0000000000..f77892a509
 +        qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
 +
 +        ++cluster_num;
-+        if (buf) {
-+            buf += VMA_CLUSTER_SIZE;
-+        }
++        buf += VMA_CLUSTER_SIZE;
 +        if (ret < 0) {
 +            Error *local_err = NULL;
 +            vma_writer_error_propagate(backup_state.vmaw, &local_err);
@@ -727,11 +816,11 @@ index 0000000000..f77892a509
 +        } else {
 +            if (remaining >= VMA_CLUSTER_SIZE) {
 +                assert(ret == VMA_CLUSTER_SIZE);
-+                pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes);
++                pvebackup_add_transfered_bytes(VMA_CLUSTER_SIZE, zero_bytes, 0);
 +                remaining -= VMA_CLUSTER_SIZE;
 +            } else {
 +                assert(ret == remaining);
-+                pvebackup_add_transfered_bytes(remaining, zero_bytes);
++                pvebackup_add_transfered_bytes(remaining, zero_bytes, 0);
 +                remaining = 0;
 +            }
 +        }
@@ -741,12 +830,12 @@ index 0000000000..f77892a509
 +}
 +
 +// assumes the caller holds backup_mutex
-+static void coroutine_fn pvebackup_co_cleanup(void *unused)
++static void coroutine_fn pvebackup_co_cleanup(void)
 +{
 +    assert(qemu_in_coroutine());
 +
 +    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.end_time = time(NULL);
++    backup_state.stat.finishing = true;
 +    qemu_mutex_unlock(&backup_state.stat.lock);
 +
 +    if (backup_state.vmaw) {
@@ -775,12 +864,39 @@ index 0000000000..f77892a509
 +
 +    g_list_free(backup_state.di_list);
 +    backup_state.di_list = NULL;
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.end_time = time(NULL);
++    backup_state.stat.finishing = false;
++    qemu_mutex_unlock(&backup_state.stat.lock);
 +}
 +
-+// assumes the caller holds backup_mutex
-+static void coroutine_fn pvebackup_complete_stream(void *opaque)
++static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
 +{
 +    PVEBackupDevInfo *di = opaque;
++    int ret = di->completed_ret;
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    bool starting = backup_state.stat.starting;
++    qemu_mutex_unlock(&backup_state.stat.lock);
++    if (starting) {
++        /* in 'starting' state, no tasks have been run yet, meaning we can (and
++         * must) skip all cleanup, as we don't know what has and hasn't been
++         * initialized yet. */
++        return;
++    }
++
++    qemu_co_mutex_lock(&backup_state.backup_mutex);
++
++    if (ret < 0) {
++        Error *local_err = NULL;
++        error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
++        pvebackup_propagate_error(local_err);
++    }
++
++    di->bs = NULL;
++
++    assert(di->target == NULL);
 +
 +    bool error_or_canceled = pvebackup_error_or_canceled();
 +
@@ -795,49 +911,90 @@ index 0000000000..f77892a509
 +            pvebackup_propagate_error(local_err);
 +        }
 +    }
-+}
-+
-+static void pvebackup_complete_cb(void *opaque, int ret)
-+{
-+    assert(!qemu_in_coroutine());
-+
-+    PVEBackupDevInfo *di = opaque;
-+
-+    qemu_mutex_lock(&backup_state.backup_mutex);
 +
-+    di->completed = true;
-+
-+    if (ret < 0) {
-+        Error *local_err = NULL;
-+        error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
-+        pvebackup_propagate_error(local_err);
++    if (di->job) {
++        WITH_JOB_LOCK_GUARD() {
++            job_unref_locked(&di->job->job);
++            di->job = NULL;
++        }
 +    }
 +
-+    di->bs = NULL;
-+
-+    assert(di->target == NULL);
-+
-+    block_on_coroutine_fn(pvebackup_complete_stream, di);
-+
-+    // remove self from job queue
++    // remove self from job list
 +    backup_state.di_list = g_list_remove(backup_state.di_list, di);
 +
 +    g_free(di);
 +
-+    qemu_mutex_unlock(&backup_state.backup_mutex);
++    /* call cleanup if we're the last job */
++    if (!g_list_first(backup_state.di_list)) {
++        pvebackup_co_cleanup();
++    }
 +
-+    pvebackup_run_next_job();
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
 +}
 +
-+static void pvebackup_cancel(void)
++static void pvebackup_complete_cb(void *opaque, int ret)
 +{
-+    assert(!qemu_in_coroutine());
++    PVEBackupDevInfo *di = opaque;
++    di->completed_ret = ret;
++
++    /*
++     * Schedule stream cleanup in async coroutine. close_image and finish might
++     * take a while, so we can't block on them here. This way it also doesn't
++     * matter if we're already running in a coroutine or not.
++     * Note: di is a pointer to an entry in the global backup_state struct, so
++     * it stays valid.
++     */
++    Coroutine *co = qemu_coroutine_create(pvebackup_co_complete_stream, di);
++    aio_co_enter(qemu_get_aio_context(), co);
++}
 +
++/*
++ * job_cancel(_sync) does not like to be called from coroutines, so defer to
++ * main loop processing via a bottom half. Assumes that caller holds
++ * backup_mutex.
++ */
++static void job_cancel_bh(void *opaque) {
++    CoCtxData *data = (CoCtxData*)opaque;
++
++    /*
++     * Be careful to pick a valid job to cancel:
++     * 1. job_cancel_sync() does not expect the job to be finalized already.
++     * 2. job_exit() might run between scheduling and running job_cancel_bh()
++     *    and pvebackup_co_complete_stream() might not have removed the job from
++     *    the list yet (in fact, cannot, because it waits for the backup_mutex).
++     * Requiring !job_is_completed() ensures that no finalized job is picked.
++     */
++    GList *bdi = g_list_first(backup_state.di_list);
++    while (bdi) {
++        if (bdi->data) {
++            BlockJob *bj = ((PVEBackupDevInfo *)bdi->data)->job;
++            if (bj) {
++                Job *job = &bj->job;
++                WITH_JOB_LOCK_GUARD() {
++                    if (!job_is_completed_locked(job)) {
++                        job_cancel_sync_locked(job, true);
++                        /*
++                         * It's enough to cancel one job in the transaction, the
++                         * rest will follow automatically.
++                         */
++                        break;
++                    }
++                }
++            }
++        }
++        bdi = g_list_next(bdi);
++    }
++
++    aio_co_enter(data->ctx, data->co);
++}
++
++void coroutine_fn qmp_backup_cancel(Error **errp)
++{
 +    Error *cancel_err = NULL;
 +    error_setg(&cancel_err, "backup canceled");
 +    pvebackup_propagate_error(cancel_err);
 +
-+    qemu_mutex_lock(&backup_state.backup_mutex);
++    qemu_co_mutex_lock(&backup_state.backup_mutex);
 +
 +    if (backup_state.vmaw) {
 +        /* make sure vma writer does not block anymore */
@@ -848,39 +1005,14 @@ index 0000000000..f77892a509
 +        proxmox_backup_abort(backup_state.pbs, "backup canceled");
 +    }
 +
-+    qemu_mutex_unlock(&backup_state.backup_mutex);
-+
-+    for(;;) {
-+
-+        BlockJob *next_job = NULL;
-+
-+        qemu_mutex_lock(&backup_state.backup_mutex);
-+
-+        GList *l = backup_state.di_list;
-+        while (l) {
-+            PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+            l = g_list_next(l);
-+
-+            BlockJob *job = lookup_active_block_job(di);
-+            if (job != NULL) {
-+                next_job = job;
-+                break;
-+            }
-+        }
-+
-+        qemu_mutex_unlock(&backup_state.backup_mutex);
-+
-+        if (next_job) {
-+            job_cancel_sync(&next_job->job, true);
-+        } else {
-+            break;
-+        }
-+    }
-+}
++    CoCtxData data = {
++        .ctx = qemu_get_current_aio_context(),
++        .co = qemu_coroutine_self(),
++    };
++    aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
++    qemu_coroutine_yield();
 +
-+void qmp_backup_cancel(Error **errp)
-+{
-+    pvebackup_cancel();
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
 +}
 +
 +// assumes the caller holds backup_mutex
@@ -933,50 +1065,26 @@ index 0000000000..f77892a509
 +    goto out;
 +}
 +
-+bool job_should_pause_locked(Job *job);
++/*
++ * backup_job_create can *not* be run from a coroutine (and requires an
++ * acquired AioContext), so this can't either.
++ * The caller is responsible that backup_mutex is held nonetheless.
++ */
++static void create_backup_jobs_bh(void *opaque) {
 +
-+static void pvebackup_run_next_job(void)
-+{
 +    assert(!qemu_in_coroutine());
 +
-+    qemu_mutex_lock(&backup_state.backup_mutex);
-+
-+    GList *l = backup_state.di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        BlockJob *job = lookup_active_block_job(di);
-+
-+        if (job) {
-+            qemu_mutex_unlock(&backup_state.backup_mutex);
-+
-+            WITH_JOB_LOCK_GUARD() {
-+                if (job_should_pause_locked(&job->job)) {
-+                    bool error_or_canceled = pvebackup_error_or_canceled();
-+                    if (error_or_canceled) {
-+                        job_cancel_sync_locked(&job->job, true);
-+                    } else {
-+                        job_resume_locked(&job->job);
-+                    }
-+                }
-+            }
-+            return;
-+        }
-+    }
-+
-+    block_on_coroutine_fn(pvebackup_co_cleanup, NULL); // no more jobs, run cleanup
-+
-+    qemu_mutex_unlock(&backup_state.backup_mutex);
-+}
-+
-+static bool create_backup_jobs(void) {
-+
-+    assert(!qemu_in_coroutine());
++    CoCtxData *data = (CoCtxData*)opaque;
++    Error **errp = (Error**)data->data;
 +
 +    Error *local_err = NULL;
 +
-+    BackupPerf perf = { .max_workers = 16 };
++    /* create job transaction to synchronize bitmap commit and cancel all
++     * jobs in case one errors */
++    if (backup_state.txn) {
++        job_txn_unref(backup_state.txn);
++    }
++    backup_state.txn = job_txn_new_seq();
 +
 +    /* create and start all jobs (paused state) */
 +    GList *l =  backup_state.di_list;
@@ -986,33 +1094,46 @@ index 0000000000..f77892a509
 +
 +        assert(di->target != NULL);
 +
++        MirrorSyncMode sync_mode = MIRROR_SYNC_MODE_FULL;
++        BitmapSyncMode bitmap_mode = BITMAP_SYNC_MODE_NEVER;
++        if (di->bitmap) {
++            sync_mode = MIRROR_SYNC_MODE_BITMAP;
++            bitmap_mode = BITMAP_SYNC_MODE_ON_SUCCESS;
++        }
 +        AioContext *aio_context = bdrv_get_aio_context(di->bs);
 +        aio_context_acquire(aio_context);
 +
 +        BlockJob *job = backup_job_create(
-+            NULL, di->bs, di->target, backup_state.speed, MIRROR_SYNC_MODE_FULL, NULL,
-+            BITMAP_SYNC_MODE_NEVER, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
-+            JOB_DEFAULT, pvebackup_complete_cb, di, NULL, &local_err);
++            NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
++            bitmap_mode, false, NULL, &backup_state.perf, BLOCKDEV_ON_ERROR_REPORT,
++            BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn,
++            &local_err);
 +
 +        aio_context_release(aio_context);
 +
-+        if (!job || local_err != NULL) {
-+            Error *create_job_err = NULL;
-+            error_setg(&create_job_err, "backup_job_create failed: %s",
-+                       local_err ? error_get_pretty(local_err) : "null");
++        di->job = job;
++        if (job) {
++            WITH_JOB_LOCK_GUARD() {
++                job_ref_locked(&job->job);
++            }
++        }
 +
-+            pvebackup_propagate_error(create_job_err);
++        if (!job || local_err) {
++            error_setg(errp, "backup_job_create failed: %s",
++                       local_err ? error_get_pretty(local_err) : "null");
 +            break;
 +        }
-+        job_start(&job->job);
 +
 +        bdrv_unref(di->target);
 +        di->target = NULL;
 +    }
 +
-+    bool errors = pvebackup_error_or_canceled();
-+
-+    if (errors) {
++    if (*errp) {
++        /*
++         * It's enough to cancel one job in the transaction, the rest will
++         * follow automatically.
++         */
++        bool canceled = false;
 +        l = backup_state.di_list;
 +        while (l) {
 +            PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
@@ -1022,40 +1143,48 @@ index 0000000000..f77892a509
 +                bdrv_unref(di->target);
 +                di->target = NULL;
 +            }
++
++            if (di->job) {
++                WITH_JOB_LOCK_GUARD() {
++                    if (!canceled) {
++                        job_cancel_sync_locked(&di->job->job, true);
++                        canceled = true;
++                    }
++                    job_unref_locked(&di->job->job);
++                    di->job = NULL;
++                }
++            }
 +        }
 +    }
 +
-+    return errors;
++    /* return */
++    aio_co_enter(data->ctx, data->co);
 +}
 +
-+typedef struct QmpBackupTask {
-+    const char *backup_file;
-+    const char *password;
-+    const char *keyfile;
-+    const char *key_password;
-+    const char *backup_id;
-+    bool has_backup_time;
-+    const char *fingerprint;
-+    int64_t backup_time;
-+    bool has_format;
-+    BackupFormat format;
-+    const char *config_file;
-+    const char *firewall_file;
-+    const char *devlist;
-+    bool has_speed;
-+    int64_t speed;
-+    Error **errp;
-+    UuidInfo *result;
-+} QmpBackupTask;
-+
-+// assumes the caller holds backup_mutex
-+static void coroutine_fn pvebackup_co_prepare(void *opaque)
++UuidInfo coroutine_fn *qmp_backup(
++    const char *backup_file,
++    const char *password,
++    const char *keyfile,
++    const char *key_password,
++    const char *master_keyfile,
++    const char *fingerprint,
++    const char *backup_ns,
++    const char *backup_id,
++    bool has_backup_time, int64_t backup_time,
++    bool has_use_dirty_bitmap, bool use_dirty_bitmap,
++    bool has_compress, bool compress,
++    bool has_encrypt, bool encrypt,
++    bool has_format, BackupFormat format,
++    const char *config_file,
++    const char *firewall_file,
++    const char *devlist,
++    bool has_speed, int64_t speed,
++    bool has_max_workers, int64_t max_workers,
++    Error **errp)
 +{
 +    assert(qemu_in_coroutine());
 +
-+    QmpBackupTask *task = opaque;
-+
-+    task->result = NULL; // just to be sure
++    qemu_co_mutex_lock(&backup_state.backup_mutex);
 +
 +    BlockBackend *blk;
 +    BlockDriverState *bs = NULL;
@@ -1073,31 +1202,32 @@ index 0000000000..f77892a509
 +    const char *firewall_name = "qemu-server.fw";
 +
 +    if (backup_state.di_list) {
-+         error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
++        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
 +                  "previous backup not finished");
-+        return;
++        qemu_co_mutex_unlock(&backup_state.backup_mutex);
++        return NULL;
 +    }
 +
 +    /* Todo: try to auto-detect format based on file name */
-+    BackupFormat format = task->has_format ? task->format : BACKUP_FORMAT_VMA;
++    format = has_format ? format : BACKUP_FORMAT_VMA;
 +
-+    if (task->devlist) {
-+        devs = g_strsplit_set(task->devlist, ",;:", -1);
++    if (devlist) {
++        devs = g_strsplit_set(devlist, ",;:", -1);
 +
 +        gchar **d = devs;
 +        while (d && *d) {
 +            blk = blk_by_name(*d);
 +            if (blk) {
 +                bs = blk_bs(blk);
-+                if (!bdrv_is_inserted(bs)) {
-+                    error_setg(task->errp, QERR_DEVICE_HAS_NO_MEDIUM, *d);
++                if (!bdrv_co_is_inserted(bs)) {
++                    error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, *d);
 +                    goto err;
 +                }
 +                PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
 +                di->bs = bs;
 +                di_list = g_list_append(di_list, di);
 +            } else {
-+                error_set(task->errp, ERROR_CLASS_DEVICE_NOT_FOUND,
++                error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
 +                          "Device '%s' not found", *d);
 +                goto err;
 +            }
@@ -1109,7 +1239,7 @@ index 0000000000..f77892a509
 +
 +        bs = NULL;
 +        for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
-+            if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
++            if (!bdrv_co_is_inserted(bs) || bdrv_is_read_only(bs)) {
 +                continue;
 +            }
 +
@@ -1120,7 +1250,7 @@ index 0000000000..f77892a509
 +    }
 +
 +    if (!di_list) {
-+        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "empty device list");
++        error_set(errp, ERROR_CLASS_GENERIC_ERROR, "empty device list");
 +        goto err;
 +    }
 +
@@ -1130,59 +1260,81 @@ index 0000000000..f77892a509
 +    while (l) {
 +        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
 +        l = g_list_next(l);
-+        if (bdrv_op_is_blocked(di->bs, BLOCK_OP_TYPE_BACKUP_SOURCE, task->errp)) {
++        if (bdrv_op_is_blocked(di->bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
 +            goto err;
 +        }
 +
 +        ssize_t size = bdrv_getlength(di->bs);
 +        if (size < 0) {
-+            error_setg_errno(task->errp, -size, "bdrv_getlength failed");
++            error_setg_errno(errp, -size, "bdrv_getlength failed");
 +            goto err;
 +        }
 +        di->size = size;
 +        total += size;
++
++        di->completed_ret = INT_MAX;
 +    }
 +
 +    uuid_generate(uuid);
 +
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.reused = 0;
++
++    /* clear previous backup's bitmap_list */
++    if (backup_state.stat.bitmap_list) {
++        GList *bl = backup_state.stat.bitmap_list;
++        while (bl) {
++            g_free(((PBSBitmapInfo *)bl->data)->drive);
++            g_free(bl->data);
++            bl = g_list_next(bl);
++        }
++        g_list_free(backup_state.stat.bitmap_list);
++        backup_state.stat.bitmap_list = NULL;
++    }
++
 +    if (format == BACKUP_FORMAT_PBS) {
-+        if (!task->password) {
-+            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'password'");
-+            goto err;
++        if (!password) {
++            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'password'");
++            goto err_mutex;
 +        }
-+        if (!task->backup_id) {
-+            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-id'");
-+            goto err;
++        if (!backup_id) {
++            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-id'");
++            goto err_mutex;
 +        }
-+        if (!task->has_backup_time) {
-+            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-time'");
-+            goto err;
++        if (!has_backup_time) {
++            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-time'");
++            goto err_mutex;
 +        }
 +
 +        int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
 +        firewall_name = "fw.conf";
 +
 +        char *pbs_err = NULL;
-+        pbs = proxmox_backup_new(
-+            task->backup_file,
-+            task->backup_id,
-+            task->backup_time,
++        pbs = proxmox_backup_new_ns(
++            backup_file,
++            backup_ns,
++            backup_id,
++            backup_time,
 +            dump_cb_block_size,
-+            task->password,
-+            task->keyfile,
-+            task->key_password,
-+            task->fingerprint,
++            password,
++            keyfile,
++            key_password,
++            master_keyfile,
++            has_compress ? compress : true,
++            has_encrypt ? encrypt : !!keyfile,
++            fingerprint,
 +            &pbs_err);
 +
 +        if (!pbs) {
-+            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
++            error_set(errp, ERROR_CLASS_GENERIC_ERROR,
 +                      "proxmox_backup_new failed: %s", pbs_err);
 +            proxmox_backup_free_error(pbs_err);
-+            goto err;
++            goto err_mutex;
 +        }
 +
-+        if (proxmox_backup_co_connect(pbs, task->errp) < 0)
-+            goto err;
++        int connect_result = proxmox_backup_co_connect(pbs, errp);
++        if (connect_result < 0)
++            goto err_mutex;
 +
 +        /* register all devices */
 +        l = di_list;
@@ -1190,25 +1342,72 @@ index 0000000000..f77892a509
 +            PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
 +            l = g_list_next(l);
 +
++            di->block_size = dump_cb_block_size;
++
 +            const char *devname = bdrv_get_device_name(di->bs);
++            PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
++            size_t dirty = di->size;
 +
-+            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, task->errp);
-+            if (dev_id < 0)
-+                goto err;
++            BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
++            bool expect_only_dirty = false;
 +
-+            if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, task->errp))) {
-+                goto err;
++            if (has_use_dirty_bitmap && use_dirty_bitmap) {
++                if (bitmap == NULL) {
++                    bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, errp);
++                    if (!bitmap) {
++                        goto err_mutex;
++                    }
++                    action = PBS_BITMAP_ACTION_NEW;
++                } else {
++                    expect_only_dirty = proxmox_backup_check_incremental(pbs, devname, di->size) != 0;
++                }
++
++                if (expect_only_dirty) {
++                    /* track clean chunks as reused */
++                    dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
++                    backup_state.stat.reused += di->size - dirty;
++                    action = PBS_BITMAP_ACTION_USED;
++                } else {
++                    /* mark entire bitmap as dirty to make full backup */
++                    bdrv_set_dirty_bitmap(bitmap, 0, di->size);
++                    if (action != PBS_BITMAP_ACTION_NEW) {
++                        action = PBS_BITMAP_ACTION_INVALID;
++                    }
++                }
++                di->bitmap = bitmap;
++            } else {
++                /* after a full backup the old dirty bitmap is invalid anyway */
++                if (bitmap != NULL) {
++                    bdrv_release_dirty_bitmap(bitmap);
++                    action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
++                }
++            }
++
++            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, errp);
++            if (dev_id < 0) {
++                goto err_mutex;
++            }
++
++            if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, errp))) {
++                goto err_mutex;
 +            }
 +
 +            di->dev_id = dev_id;
++
++            PBSBitmapInfo *info = g_malloc(sizeof(*info));
++            info->drive = g_strdup(devname);
++            info->action = action;
++            info->size = di->size;
++            info->dirty = dirty;
++            backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
 +        }
 +    } else if (format == BACKUP_FORMAT_VMA) {
-+        vmaw = vma_writer_create(task->backup_file, uuid, &local_err);
++        vmaw = vma_writer_create(backup_file, uuid, &local_err);
 +        if (!vmaw) {
 +            if (local_err) {
-+                error_propagate(task->errp, local_err);
++                error_propagate(errp, local_err);
 +            }
-+            goto err;
++            goto err_mutex;
 +        }
 +
 +        /* register all devices for vma writer */
@@ -1217,25 +1416,25 @@ index 0000000000..f77892a509
 +            PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
 +            l = g_list_next(l);
 +
-+            if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, task->errp))) {
-+                goto err;
++            if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, errp))) {
++                goto err_mutex;
 +            }
 +
 +            const char *devname = bdrv_get_device_name(di->bs);
 +            di->dev_id = vma_writer_register_stream(vmaw, devname, di->size);
 +            if (di->dev_id <= 0) {
-+                error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
++                error_set(errp, ERROR_CLASS_GENERIC_ERROR,
 +                          "register_stream failed");
-+                goto err;
++                goto err_mutex;
 +            }
 +        }
 +    } else if (format == BACKUP_FORMAT_DIR) {
-+        if (mkdir(task->backup_file, 0640) != 0) {
-+            error_setg_errno(task->errp, errno, "can't create directory '%s'\n",
-+                             task->backup_file);
-+            goto err;
++        if (mkdir(backup_file, 0640) != 0) {
++            error_setg_errno(errp, errno, "can't create directory '%s'\n",
++                             backup_file);
++            goto err_mutex;
 +        }
-+        backup_dir = task->backup_file;
++        backup_dir = backup_file;
 +
 +        l = di_list;
 +        while (l) {
@@ -1249,40 +1448,39 @@ index 0000000000..f77892a509
 +            bdrv_img_create(di->targetfile, "raw", NULL, NULL, NULL,
 +                            di->size, flags, false, &local_err);
 +            if (local_err) {
-+                error_propagate(task->errp, local_err);
-+                goto err;
++                error_propagate(errp, local_err);
++                goto err_mutex;
 +            }
 +
 +            di->target = bdrv_co_open(di->targetfile, NULL, NULL, flags, &local_err);
 +            if (!di->target) {
-+                error_propagate(task->errp, local_err);
-+                goto err;
++                error_propagate(errp, local_err);
++                goto err_mutex;
 +            }
 +        }
 +    } else {
-+        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "unknown backup format");
-+        goto err;
++        error_set(errp, ERROR_CLASS_GENERIC_ERROR, "unknown backup format");
++        goto err_mutex;
 +    }
 +
 +
 +    /* add configuration file to archive */
-+    if (task->config_file) {
-+        if (pvebackup_co_add_config(task->config_file, config_name, format, backup_dir,
-+                                    vmaw, pbs, task->errp) != 0) {
-+            goto err;
++    if (config_file) {
++        if (pvebackup_co_add_config(config_file, config_name, format, backup_dir,
++                                    vmaw, pbs, errp) != 0) {
++            goto err_mutex;
 +        }
 +    }
 +
 +    /* add firewall file to archive */
-+    if (task->firewall_file) {
-+        if (pvebackup_co_add_config(task->firewall_file, firewall_name, format, backup_dir,
-+                                    vmaw, pbs, task->errp) != 0) {
-+            goto err;
++    if (firewall_file) {
++        if (pvebackup_co_add_config(firewall_file, firewall_name, format, backup_dir,
++                                    vmaw, pbs, errp) != 0) {
++            goto err_mutex;
 +        }
 +    }
 +    /* initialize global backup_state now */
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
++    /* note: 'reused' and 'bitmap_list' are initialized earlier */
 +
 +    if (backup_state.stat.error) {
 +        error_free(backup_state.stat.error);
@@ -1295,19 +1493,27 @@ index 0000000000..f77892a509
 +    if (backup_state.stat.backup_file) {
 +        g_free(backup_state.stat.backup_file);
 +    }
-+    backup_state.stat.backup_file = g_strdup(task->backup_file);
++    backup_state.stat.backup_file = g_strdup(backup_file);
 +
 +    uuid_copy(backup_state.stat.uuid, uuid);
 +    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
 +    char *uuid_str = g_strdup(backup_state.stat.uuid_str);
 +
 +    backup_state.stat.total = total;
++    backup_state.stat.dirty = total - backup_state.stat.reused;
 +    backup_state.stat.transferred = 0;
 +    backup_state.stat.zero_bytes = 0;
++    backup_state.stat.finishing = false;
++    backup_state.stat.starting = true;
 +
 +    qemu_mutex_unlock(&backup_state.stat.lock);
 +
-+    backup_state.speed = (task->has_speed && task->speed > 0) ? task->speed : 0;
++    backup_state.speed = (has_speed && speed > 0) ? speed : 0;
++
++    backup_state.perf = (BackupPerf){ .max_workers = 16 };
++    if (has_max_workers) {
++        backup_state.perf.max_workers = max_workers;
++    }
 +
 +    backup_state.vmaw = vmaw;
 +    backup_state.pbs = pbs;
@@ -1317,8 +1523,36 @@ index 0000000000..f77892a509
 +    uuid_info = g_malloc0(sizeof(*uuid_info));
 +    uuid_info->UUID = uuid_str;
 +
-+    task->result = uuid_info;
-+    return;
++    /* Run create_backup_jobs_bh outside of coroutine (in BH) but keep
++    * backup_mutex locked. This is fine, a CoMutex can be held across yield
++    * points, and we'll release it as soon as the BH reschedules us.
++    */
++    CoCtxData waker = {
++        .co = qemu_coroutine_self(),
++        .ctx = qemu_get_current_aio_context(),
++        .data = &local_err,
++    };
++    aio_bh_schedule_oneshot(waker.ctx, create_backup_jobs_bh, &waker);
++    qemu_coroutine_yield();
++
++    if (local_err) {
++        error_propagate(errp, local_err);
++        goto err;
++    }
++
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.starting = false;
++    qemu_mutex_unlock(&backup_state.stat.lock);
++
++    /* start the first job in the transaction */
++    job_txn_start_seq(backup_state.txn);
++
++    return uuid_info;
++
++err_mutex:
++    qemu_mutex_unlock(&backup_state.stat.lock);
 +
 +err:
 +
@@ -1337,6 +1571,7 @@ index 0000000000..f77892a509
 +        g_free(di);
 +    }
 +    g_list_free(di_list);
++    backup_state.di_list = NULL;
 +
 +    if (devs) {
 +        g_strfreev(devs);
@@ -1345,66 +1580,20 @@ index 0000000000..f77892a509
 +    if (vmaw) {
 +        Error *err = NULL;
 +        vma_writer_close(vmaw, &err);
-+        unlink(task->backup_file);
++        unlink(backup_file);
 +    }
 +
 +    if (pbs) {
 +        proxmox_backup_disconnect(pbs);
++        backup_state.pbs = NULL;
 +    }
 +
 +    if (backup_dir) {
 +        rmdir(backup_dir);
 +    }
 +
-+    task->result = NULL;
-+    return;
-+}
-+
-+UuidInfo *qmp_backup(
-+    const char *backup_file,
-+    const char *password,
-+    const char *keyfile,
-+    const char *key_password,
-+    const char *fingerprint,
-+    const char *backup_id,
-+    bool has_backup_time, int64_t backup_time,
-+    bool has_format, BackupFormat format,
-+    const char *config_file,
-+    const char *firewall_file,
-+    const char *devlist,
-+    bool has_speed, int64_t speed, Error **errp)
-+{
-+    QmpBackupTask task = {
-+        .backup_file = backup_file,
-+        .password = password,
-+        .key_password = key_password,
-+        .fingerprint = fingerprint,
-+        .backup_id = backup_id,
-+        .has_backup_time = has_backup_time,
-+        .backup_time = backup_time,
-+        .has_format = has_format,
-+        .format = format,
-+        .config_file = config_file,
-+        .firewall_file = firewall_file,
-+        .devlist = devlist,
-+        .has_speed = has_speed,
-+        .speed = speed,
-+        .errp = errp,
-+    };
-+
-+    qemu_mutex_lock(&backup_state.backup_mutex);
-+
-+    block_on_coroutine_fn(pvebackup_co_prepare, &task);
-+
-+    if (*errp == NULL) {
-+        create_backup_jobs();
-+        qemu_mutex_unlock(&backup_state.backup_mutex);
-+        pvebackup_run_next_job();
-+    } else {
-+        qemu_mutex_unlock(&backup_state.backup_mutex);
-+    }
-+
-+    return task.result;
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
++    return NULL;
 +}
 +
 +BackupStatus *qmp_query_backup(Error **errp)
@@ -1443,20 +1632,67 @@ index 0000000000..f77892a509
 +
 +    info->has_total = true;
 +    info->total = backup_state.stat.total;
++    info->has_dirty = true;
++    info->dirty = backup_state.stat.dirty;
 +    info->has_zero_bytes = true;
 +    info->zero_bytes = backup_state.stat.zero_bytes;
 +    info->has_transferred = true;
 +    info->transferred = backup_state.stat.transferred;
++    info->has_reused = true;
++    info->reused = backup_state.stat.reused;
++    info->finishing = backup_state.stat.finishing;
 +
 +    qemu_mutex_unlock(&backup_state.stat.lock);
 +
 +    return info;
 +}
++
++PBSBitmapInfoList *qmp_query_pbs_bitmap_info(Error **errp)
++{
++    PBSBitmapInfoList *head = NULL, **p_next = &head;
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++
++    GList *l = backup_state.stat.bitmap_list;
++    while (l) {
++        PBSBitmapInfo *info = (PBSBitmapInfo *)l->data;
++        l = g_list_next(l);
++
++        /* clone bitmap info to avoid auto free after QMP marshalling */
++        PBSBitmapInfo *info_ret = g_malloc0(sizeof(*info_ret));
++        info_ret->drive = g_strdup(info->drive);
++        info_ret->action = info->action;
++        info_ret->size = info->size;
++        info_ret->dirty = info->dirty;
++
++        PBSBitmapInfoList *info_list = g_malloc0(sizeof(*info_list));
++        info_list->value = info_ret;
++
++        *p_next = info_list;
++        p_next = &info_list->next;
++    }
++
++    qemu_mutex_unlock(&backup_state.stat.lock);
++
++    return head;
++}
++
++ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
++{
++    ProxmoxSupportStatus *ret = g_malloc0(sizeof(*ret));
++    ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
++    ret->pbs_dirty_bitmap = true;
++    ret->pbs_dirty_bitmap_savevm = true;
++    ret->query_bitmap_info = true;
++    ret->pbs_masterkey = true;
++    ret->backup_max_workers = true;
++    return ret;
++}
 diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 542add004b..16fb4c5ea0 100644
+index 542add004b..4ec70acf95 100644
 --- a/qapi/block-core.json
 +++ b/qapi/block-core.json
-@@ -835,6 +835,115 @@
+@@ -835,6 +835,232 @@
  { 'command': 'query-block', 'returns': ['BlockInfo'],
    'allow-preconfig': true }
  
@@ -1473,8 +1709,13 @@ index 542add004b..16fb4c5ea0 100644
 +#
 +# @total: total amount of bytes involved in the backup process
 +#
++# @dirty: with incremental mode (PBS) this is the amount of bytes involved
++#         in the backup process which are marked dirty.
++#
 +# @transferred: amount of bytes already backed up.
 +#
++# @reused: amount of bytes reused due to deduplication.
++#
 +# @zero-bytes: amount of 'zero' bytes detected.
 +#
 +# @start-time: time (epoch) when backup job started.
@@ -1485,12 +1726,15 @@ index 542add004b..16fb4c5ea0 100644
 +#
 +# @uuid: uuid for this backup job
 +#
++# @finishing: if status='active' and finishing=true, then the backup process is
++#             waiting for the target to finish.
++#
 +##
 +{ 'struct': 'BackupStatus',
-+  'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int',
-+           '*transferred': 'int', '*zero-bytes': 'int',
++  'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int', '*dirty': 'int',
++           '*transferred': 'int', '*zero-bytes': 'int', '*reused': 'int',
 +           '*start-time': 'int', '*end-time': 'int',
-+           '*backup-file': 'str', '*uuid': 'str' } }
++           '*backup-file': 'str', '*uuid': 'str', 'finishing': 'bool' } }
 +
 +##
 +# @BackupFormat:
@@ -1525,12 +1769,24 @@ index 542add004b..16fb4c5ea0 100644
 +#
 +# @key-password: password for keyfile (optional for format 'pbs')
 +#
++# @master-keyfile: PEM-formatted master public keyfile (optional for format 'pbs')
++#
 +# @fingerprint: server cert fingerprint (optional for format 'pbs')
 +#
++# @backup-ns: backup namespace (optional for format 'pbs')
++#
 +# @backup-id: backup ID (required for format 'pbs')
 +#
 +# @backup-time: backup timestamp (Unix epoch, required for format 'pbs')
 +#
++# @use-dirty-bitmap: use dirty bitmap to detect incremental changes since last job (optional for format 'pbs')
++#
++# @compress: use compression (optional for format 'pbs', defaults to true)
++#
++# @encrypt: use encryption (optional for format 'pbs', defaults to true if there is a keyfile)
++#
++# @max-workers: see @BackupPerf for details. Default 16.
++#
 +# Returns: the uuid of the backup job
 +#
 +##
@@ -1538,14 +1794,21 @@ index 542add004b..16fb4c5ea0 100644
 +                                    '*password': 'str',
 +                                    '*keyfile': 'str',
 +                                    '*key-password': 'str',
++                                    '*master-keyfile': 'str',
 +                                    '*fingerprint': 'str',
++                                    '*backup-ns': 'str',
 +                                    '*backup-id': 'str',
 +                                    '*backup-time': 'int',
++                                    '*use-dirty-bitmap': 'bool',
++                                    '*compress': 'bool',
++                                    '*encrypt': 'bool',
 +                                    '*format': 'BackupFormat',
 +                                    '*config-file': 'str',
 +                                    '*firewall-file': 'str',
-+                                    '*devlist': 'str', '*speed': 'int' },
-+  'returns': 'UuidInfo' }
++                                    '*devlist': 'str',
++                                    '*speed': 'int',
++                                    '*max-workers': 'int' },
++  'returns': 'UuidInfo', 'coroutine': true }
 +
 +##
 +# @query-backup:
@@ -1567,7 +1830,97 @@ index 542add004b..16fb4c5ea0 100644
 +# Notes: This command succeeds even if there is no backup process running.
 +#
 +##
-+{ 'command': 'backup-cancel' }
++{ 'command': 'backup-cancel', 'coroutine': true }
++
++##
++# @ProxmoxSupportStatus:
++#
++# Contains info about supported features added by Proxmox.
++#
++# @pbs-dirty-bitmap: True if dirty-bitmap-incremental backups to PBS are
++#                    supported.
++#
++# @query-bitmap-info: True if the 'query-pbs-bitmap-info' QMP call is supported.
++#
++# @pbs-dirty-bitmap-savevm: True if 'dirty-bitmaps' migration capability can
++#                           safely be set for savevm-async.
++#
++# @pbs-masterkey: True if the QMP backup call supports the 'master_keyfile'
++#                 parameter.
++#
++# @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
++#
++##
++{ 'struct': 'ProxmoxSupportStatus',
++  'data': { 'pbs-dirty-bitmap': 'bool',
++            'query-bitmap-info': 'bool',
++            'pbs-dirty-bitmap-savevm': 'bool',
++            'pbs-masterkey': 'bool',
++            'pbs-library-version': 'str',
++            'backup-max-workers': 'bool' } }
++
++##
++# @query-proxmox-support:
++#
++# Returns information about supported features added by Proxmox.
++#
++# Returns: @ProxmoxSupportStatus
++#
++##
++{ 'command': 'query-proxmox-support', 'returns': 'ProxmoxSupportStatus' }
++
++##
++# @PBSBitmapAction:
++#
++# An action taken on a dirty-bitmap when a backup job was started.
++#
++# @not-used: Bitmap mode was not enabled.
++#
++# @not-used-removed: Bitmap mode was not enabled, but a bitmap from a
++#                    previous backup still existed and was removed.
++#
++# @new: A new bitmap was attached to the drive for this backup.
++#
++# @used: An existing bitmap will be used to only backup changed data.
++#
++# @invalid: A bitmap existed, but had to be cleared since its associated
++#           base snapshot did not match the base given for the current job or
++#           the crypt mode has changed.
++#
++##
++{ 'enum': 'PBSBitmapAction',
++  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid'] }
++
++##
++# @PBSBitmapInfo:
++#
++# Contains information about dirty bitmaps used for each drive in a PBS backup.
++#
++# @drive: The underlying drive.
++#
++# @action: The action that was taken when the backup started.
++#
++# @size: The total size of the drive.
++#
++# @dirty: How much of the drive is considered dirty and will be backed up,
++#         or 'size' if everything will be.
++#
++##
++{ 'struct': 'PBSBitmapInfo',
++  'data': { 'drive': 'str', 'action': 'PBSBitmapAction', 'size': 'int',
++            'dirty': 'int' } }
++
++##
++# @query-pbs-bitmap-info:
++#
++# Returns information about dirty bitmaps used on the most recently started
++# backup. Returns nothing when the last backup was not using PBS or if no
++# backup occurred in this session.
++#
++# Returns: @PBSBitmapInfo
++#
++##
++{ 'command': 'query-pbs-bitmap-info', 'returns': ['PBSBitmapInfo'] }
 +
  ##
  # @BlockDeviceTimedStats:
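
For orientation, this is roughly what the extended status queries look like on
the wire once a backup with format 'pbs' is running. This is only a sketch to
illustrate the new fields; every value below (sizes, UUID, device name) is a
made-up placeholder, not something produced by this patch:

    -> { "execute": "query-backup" }
    <- { "return": { "status": "active", "uuid": "6a9a8e1c-6d0e-4e5f-9b3a-1c2d3e4f5a6b",
                     "total": 34359738368, "dirty": 2147483648, "transferred": 1073741824,
                     "reused": 32212254720, "zero-bytes": 8388608,
                     "start-time": 1684152000, "finishing": false } }

    -> { "execute": "query-pbs-bitmap-info" }
    <- { "return": [ { "drive": "drive-scsi0", "action": "used",
                       "size": 34359738368, "dirty": 2147483648 } ] }
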
diff --git a/debian/patches/pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch b/debian/patches/pve/0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
similarity index 90%
rename from debian/patches/pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
rename to debian/patches/pve/0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
index 4ee6100..1acb93d 100644
--- a/debian/patches/pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
+++ b/debian/patches/pve/0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
@@ -5,10 +5,12 @@ Subject: [PATCH] PVE-Backup: pbs-restore - new command to restore from proxmox
  backup server
 
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
+[WB: add namespace support]
+Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
 ---
  meson.build   |   4 +
- pbs-restore.c | 223 ++++++++++++++++++++++++++++++++++++++++++++++++++
- 2 files changed, 227 insertions(+)
+ pbs-restore.c | 236 ++++++++++++++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 240 insertions(+)
  create mode 100644 pbs-restore.c
 
 diff --git a/meson.build b/meson.build
@@ -28,10 +30,10 @@ index d307d8eabf..afd105001e 100644
    subdir('contrib/elf2dmp')
 diff --git a/pbs-restore.c b/pbs-restore.c
 new file mode 100644
-index 0000000000..2f834cf42e
+index 0000000000..f03d9bab8d
 --- /dev/null
 +++ b/pbs-restore.c
-@@ -0,0 +1,223 @@
+@@ -0,0 +1,236 @@
 +/*
 + * Qemu image restore helper for Proxmox Backup
 + *
@@ -63,7 +65,7 @@ index 0000000000..2f834cf42e
 +static void help(void)
 +{
 +    const char *help_msg =
-+        "usage: pbs-restore [--repository <repo>] snapshot archive-name target [command options]\n"
++        "usage: pbs-restore [--repository <repo>] [--ns namespace] snapshot archive-name target [command options]\n"
 +        ;
 +
 +    printf("%s", help_msg);
@@ -111,6 +113,7 @@ index 0000000000..2f834cf42e
 +    Error *main_loop_err = NULL;
 +    const char *format = "raw";
 +    const char *repository = NULL;
++    const char *backup_ns = NULL;
 +    const char *keyfile = NULL;
 +    int verbose = false;
 +    bool skip_zero = false;
@@ -124,6 +127,7 @@ index 0000000000..2f834cf42e
 +            {"verbose", no_argument, 0, 'v'},
 +            {"format", required_argument, 0, 'f'},
 +            {"repository", required_argument, 0, 'r'},
++            {"ns", required_argument, 0, 'n'},
 +            {"keyfile", required_argument, 0, 'k'},
 +            {0, 0, 0, 0}
 +        };
@@ -144,6 +148,9 @@ index 0000000000..2f834cf42e
 +            case 'r':
 +                repository = g_strdup(argv[optind - 1]);
 +                break;
++            case 'n':
++                backup_ns = g_strdup(argv[optind - 1]);
++                break;
 +            case 'k':
 +                keyfile = g_strdup(argv[optind - 1]);
 +                break;
@@ -194,8 +201,16 @@ index 0000000000..2f834cf42e
 +        fprintf(stderr, "connecting to repository '%s'\n", repository);
 +    }
 +    char *pbs_error = NULL;
-+    ProxmoxRestoreHandle *conn = proxmox_restore_new(
-+        repository, snapshot, password, keyfile, key_password, fingerprint, &pbs_error);
++    ProxmoxRestoreHandle *conn = proxmox_restore_new_ns(
++        repository,
++        snapshot,
++        backup_ns,
++        password,
++        keyfile,
++        key_password,
++        fingerprint,
++        &pbs_error
++    );
 +    if (conn == NULL) {
 +        fprintf(stderr, "restore failed: %s\n", pbs_error);
 +        return -1;
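
To put the new option in context, a hypothetical invocation following the usage
line above could look like this; every name (repository, namespace, snapshot,
archive, target device) is a placeholder and not part of the patch:

    pbs-restore --repository 'user@pbs@server:datastore' --ns 'dev/vm100' \
        'vm/100/2023-05-15T10:00:00Z' drive-scsi0.img.fidx /dev/vg0/vm-100-disk-0 --verbose
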
diff --git a/debian/patches/pve/0032-PVE-various-PBS-fixes.patch b/debian/patches/pve/0032-PVE-various-PBS-fixes.patch
deleted file mode 100644
index c46a307..0000000
--- a/debian/patches/pve/0032-PVE-various-PBS-fixes.patch
+++ /dev/null
@@ -1,217 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Dietmar Maurer <dietmar at proxmox.com>
-Date: Thu, 9 Jul 2020 12:53:08 +0200
-Subject: [PATCH] PVE: various PBS fixes
-
-pbs: fix crypt and compress parameters
-Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-
-PVE: handle PBS write callback with big blocks correctly
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-
-PVE: add zero block handling to PBS dump callback
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: adapt to QAPI change dropping redundant has_*]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- block/monitor/block-hmp-cmds.c |  4 ++-
- pve-backup.c                   | 54 +++++++++++++++++++++++++++-------
- qapi/block-core.json           |  6 ++++
- 3 files changed, 52 insertions(+), 12 deletions(-)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index cda5de792b..ecbebd39ac 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1056,7 +1056,9 @@ void hmp_backup(Monitor *mon, const QDict *qdict)
-         NULL, // PBS fingerprint
-         NULL, // PBS backup-id
-         false, 0, // PBS backup-time
--        false, false, // PBS incremental
-+        false, false, // PBS use-dirty-bitmap
-+        false, false, // PBS compress
-+        false, false, // PBS encrypt
-         true, dir ? BACKUP_FORMAT_DIR : BACKUP_FORMAT_VMA,
-         NULL, NULL,
-         devlist, qdict_haskey(qdict, "speed"), speed, &error);
-diff --git a/pve-backup.c b/pve-backup.c
-index d9942a14a1..8f18145255 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -10,6 +10,7 @@
- #include "block/dirty-bitmap.h"
- #include "qapi/qapi-commands-block.h"
- #include "qapi/qmp/qerror.h"
-+#include "qemu/cutils.h"
- 
- /* PVE backup state and related function */
- 
-@@ -69,6 +70,7 @@ opts_init(pvebackup_init);
- typedef struct PVEBackupDevInfo {
-     BlockDriverState *bs;
-     size_t size;
-+    uint64_t block_size;
-     uint8_t dev_id;
-     bool completed;
-     char targetfile[PATH_MAX];
-@@ -139,10 +141,13 @@ pvebackup_co_dump_pbs_cb(
-     PVEBackupDevInfo *di = opaque;
- 
-     assert(backup_state.pbs);
-+    assert(buf);
- 
-     Error *local_err = NULL;
-     int pbs_res = -1;
- 
-+    bool is_zero_block = size == di->block_size && buffer_is_zero(buf, size);
-+
-     qemu_co_mutex_lock(&backup_state.dump_callback_mutex);
- 
-     // avoid deadlock if job is cancelled
-@@ -151,17 +156,29 @@ pvebackup_co_dump_pbs_cb(
-         return -1;
-     }
- 
--    pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id, buf, start, size, &local_err);
--    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
-+    uint64_t transferred = 0;
-+    uint64_t reused = 0;
-+    while (transferred < size) {
-+        uint64_t left = size - transferred;
-+        uint64_t to_transfer = left < di->block_size ? left : di->block_size;
- 
--    if (pbs_res < 0) {
--        pvebackup_propagate_error(local_err);
--        return pbs_res;
--    } else {
--        size_t reused = (pbs_res == 0) ? size : 0;
--        pvebackup_add_transfered_bytes(size, !buf ? size : 0, reused);
-+        pbs_res = proxmox_backup_co_write_data(backup_state.pbs, di->dev_id,
-+            is_zero_block ? NULL : buf + transferred, start + transferred,
-+            to_transfer, &local_err);
-+        transferred += to_transfer;
-+
-+        if (pbs_res < 0) {
-+            pvebackup_propagate_error(local_err);
-+            qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
-+            return pbs_res;
-+        }
-+
-+        reused += pbs_res == 0 ? to_transfer : 0;
-     }
- 
-+    qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
-+    pvebackup_add_transfered_bytes(size, is_zero_block ? size : 0, reused);
-+
-     return size;
- }
- 
-@@ -182,6 +199,7 @@ pvebackup_co_dump_vma_cb(
-     int ret = -1;
- 
-     assert(backup_state.vmaw);
-+    assert(buf);
- 
-     uint64_t remaining = size;
- 
-@@ -208,9 +226,7 @@ pvebackup_co_dump_vma_cb(
-         qemu_co_mutex_unlock(&backup_state.dump_callback_mutex);
- 
-         ++cluster_num;
--        if (buf) {
--            buf += VMA_CLUSTER_SIZE;
--        }
-+        buf += VMA_CLUSTER_SIZE;
-         if (ret < 0) {
-             Error *local_err = NULL;
-             vma_writer_error_propagate(backup_state.vmaw, &local_err);
-@@ -560,6 +576,10 @@ typedef struct QmpBackupTask {
-     const char *config_file;
-     const char *firewall_file;
-     const char *devlist;
-+    bool has_compress;
-+    bool compress;
-+    bool has_encrypt;
-+    bool encrypt;
-     bool has_speed;
-     int64_t speed;
-     Error **errp;
-@@ -683,6 +703,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
- 
-         bool use_dirty_bitmap = task->has_use_dirty_bitmap && task->use_dirty_bitmap;
- 
-+
-         char *pbs_err = NULL;
-         pbs = proxmox_backup_new(
-             task->backup_file,
-@@ -692,6 +713,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             task->password,
-             task->keyfile,
-             task->key_password,
-+            task->has_compress ? task->compress : true,
-+            task->has_encrypt ? task->encrypt : !!task->keyfile,
-             task->fingerprint,
-             &pbs_err);
- 
-@@ -712,6 +735,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-             l = g_list_next(l);
- 
-+            di->block_size = dump_cb_block_size;
-+
-             const char *devname = bdrv_get_device_name(di->bs);
- 
-             BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
-@@ -932,6 +957,8 @@ UuidInfo *qmp_backup(
-     const char *backup_id,
-     bool has_backup_time, int64_t backup_time,
-     bool has_use_dirty_bitmap, bool use_dirty_bitmap,
-+    bool has_compress, bool compress,
-+    bool has_encrypt, bool encrypt,
-     bool has_format, BackupFormat format,
-     const char *config_file,
-     const char *firewall_file,
-@@ -941,6 +968,7 @@ UuidInfo *qmp_backup(
-     QmpBackupTask task = {
-         .backup_file = backup_file,
-         .password = password,
-+        .keyfile = keyfile,
-         .key_password = key_password,
-         .fingerprint = fingerprint,
-         .backup_id = backup_id,
-@@ -948,6 +976,10 @@ UuidInfo *qmp_backup(
-         .backup_time = backup_time,
-         .has_use_dirty_bitmap = has_use_dirty_bitmap,
-         .use_dirty_bitmap = use_dirty_bitmap,
-+        .has_compress = has_compress,
-+        .compress = compress,
-+        .has_encrypt = has_encrypt,
-+        .encrypt = encrypt,
-         .has_format = has_format,
-         .format = format,
-         .config_file = config_file,
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 92f90a898a..864b8ce97c 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -913,6 +913,10 @@
- #
- # @use-dirty-bitmap: use dirty bitmap to detect incremental changes since last job (optional for format 'pbs')
- #
-+# @compress: use compression (optional for format 'pbs', defaults to true)
-+#
-+# @encrypt: use encryption ((optional for format 'pbs', defaults to true if there is a keyfile)
-+#
- # Returns: the uuid of the backup job
- #
- ##
-@@ -924,6 +928,8 @@
-                                     '*backup-id': 'str',
-                                     '*backup-time': 'int',
-                                     '*use-dirty-bitmap': 'bool',
-+                                    '*compress': 'bool',
-+                                    '*encrypt': 'bool',
-                                     '*format': 'BackupFormat',
-                                     '*config-file': 'str',
-                                     '*firewall-file': 'str',
diff --git a/debian/patches/pve/0033-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch b/debian/patches/pve/0033-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
index d7868e1..9c9dc26 100644
--- a/debian/patches/pve/0033-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
+++ b/debian/patches/pve/0033-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
@@ -7,18 +7,20 @@ Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
 [error cleanups, file_open implementation]
 Signed-off-by: Dietmar Maurer <dietmar at proxmox.com>
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
+[WB: add namespace support]
+Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
 [FE: adapt to changed function signatures
      make pbs_co_preadv return values consistent with QEMU
      getlength is now a coroutine function]
 Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
 ---
  block/meson.build    |   3 +
- block/pbs.c          | 277 +++++++++++++++++++++++++++++++++++++++++++
+ block/pbs.c          | 305 +++++++++++++++++++++++++++++++++++++++++++
  configure            |   9 ++
  meson.build          |   2 +-
  qapi/block-core.json |  13 ++
  qapi/pragma.json     |   1 +
- 6 files changed, 304 insertions(+), 1 deletion(-)
+ 6 files changed, 332 insertions(+), 1 deletion(-)
  create mode 100644 block/pbs.c
 
 diff --git a/block/meson.build b/block/meson.build
@@ -37,10 +39,10 @@ index 5bcebb934b..eece0d5743 100644
  softmmu_ss.add(files('block-ram-registrar.c'))
 diff --git a/block/pbs.c b/block/pbs.c
 new file mode 100644
-index 0000000000..43e69ada46
+index 0000000000..a2211e0f3b
 --- /dev/null
 +++ b/block/pbs.c
-@@ -0,0 +1,277 @@
+@@ -0,0 +1,305 @@
 +/*
 + * Proxmox Backup Server read-only block driver
 + */
@@ -58,6 +60,7 @@ index 0000000000..43e69ada46
 +#include <proxmox-backup-qemu.h>
 +
 +#define PBS_OPT_REPOSITORY "repository"
++#define PBS_OPT_NAMESPACE "namespace"
 +#define PBS_OPT_SNAPSHOT "snapshot"
 +#define PBS_OPT_ARCHIVE "archive"
 +#define PBS_OPT_KEYFILE "keyfile"
@@ -71,6 +74,7 @@ index 0000000000..43e69ada46
 +    int64_t length;
 +
 +    char *repository;
++    char *namespace;
 +    char *snapshot;
 +    char *archive;
 +} BDRVPBSState;
@@ -85,6 +89,11 @@ index 0000000000..43e69ada46
 +            .help = "The server address and repository to connect to.",
 +        },
 +        {
++            .name = PBS_OPT_NAMESPACE,
++            .type = QEMU_OPT_STRING,
++            .help = "Optional: The snapshot's namespace.",
++        },
++        {
 +            .name = PBS_OPT_SNAPSHOT,
 +            .type = QEMU_OPT_STRING,
 +            .help = "The snapshot to read.",
@@ -120,7 +129,7 @@ index 0000000000..43e69ada46
 +
 +
 +// filename format:
-+// pbs:repository=<repo>,snapshot=<snap>,password=<pw>,key_password=<kpw>,fingerprint=<fp>,archive=<archive>
++// pbs:repository=<repo>,namespace=<ns>,snapshot=<snap>,password=<pw>,key_password=<kpw>,fingerprint=<fp>,archive=<archive>
 +static void pbs_parse_filename(const char *filename, QDict *options,
 +                                     Error **errp)
 +{
@@ -156,6 +165,7 @@ index 0000000000..43e69ada46
 +    s->archive = g_strdup(qemu_opt_get(opts, PBS_OPT_ARCHIVE));
 +    const char *keyfile = qemu_opt_get(opts, PBS_OPT_KEYFILE);
 +    const char *password = qemu_opt_get(opts, PBS_OPT_PASSWORD);
++    const char *namespace = qemu_opt_get(opts, PBS_OPT_NAMESPACE);
 +    const char *fingerprint = qemu_opt_get(opts, PBS_OPT_FINGERPRINT);
 +    const char *key_password = qemu_opt_get(opts, PBS_OPT_ENCRYPTION_PASSWORD);
 +
@@ -168,9 +178,12 @@ index 0000000000..43e69ada46
 +    if (!key_password) {
 +        key_password = getenv("PBS_ENCRYPTION_PASSWORD");
 +    }
++    if (namespace) {
++        s->namespace = g_strdup(namespace);
++    }
 +
 +    /* connect to PBS server in read mode */
-+    s->conn = proxmox_restore_new(s->repository, s->snapshot, password,
++    s->conn = proxmox_restore_new_ns(s->repository, s->snapshot, s->namespace, password,
 +        keyfile, key_password, fingerprint, &pbs_error);
 +
 +    /* invalidates qemu_opt_get char pointers from above */
@@ -215,6 +228,7 @@ index 0000000000..43e69ada46
 +static void pbs_close(BlockDriverState *bs) {
 +    BDRVPBSState *s = bs->opaque;
 +    g_free(s->repository);
++    g_free(s->namespace);
 +    g_free(s->snapshot);
 +    g_free(s->archive);
 +    proxmox_restore_disconnect(s->conn);
@@ -244,7 +258,16 @@ index 0000000000..43e69ada46
 +    BDRVPBSState *s = bs->opaque;
 +    int ret;
 +    char *pbs_error = NULL;
-+    uint8_t *buf = malloc(bytes);
++    uint8_t *buf;
++    bool inline_buf = true;
++
++    /* for single-buffer IO vectors we can fast-path the read directly into it */
++    if (qiov->niov == 1 && qiov->iov->iov_len >= bytes) {
++        buf = qiov->iov->iov_base;
++    } else {
++        inline_buf = false;
++        buf = g_malloc(bytes);
++    }
 +
 +    if (offset < 0 || bytes < 0) {
 +        fprintf(stderr, "unexpected negative 'offset' or 'bytes' value!\n");
@@ -267,8 +290,10 @@ index 0000000000..43e69ada46
 +        return -EIO;
 +    }
 +
-+    qemu_iovec_from_buf(qiov, 0, buf, bytes);
-+    free(buf);
++    if (!inline_buf) {
++        qemu_iovec_from_buf(qiov, 0, buf, bytes);
++        g_free(buf);
++    }
 +
 +    return 0;
 +}
@@ -285,8 +310,13 @@ index 0000000000..43e69ada46
 +static void pbs_refresh_filename(BlockDriverState *bs)
 +{
 +    BDRVPBSState *s = bs->opaque;
-+    snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s(%s)",
-+             s->repository, s->snapshot, s->archive);
++    if (s->namespace) {
++        snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s:%s(%s)",
++                 s->repository, s->namespace, s->snapshot, s->archive);
++    } else {
++        snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s(%s)",
++                 s->repository, s->snapshot, s->archive);
++    }
 +}
 +
 +static const char *const pbs_strong_runtime_opts[] = {
@@ -373,10 +403,10 @@ index afd105001e..d01ee5d489 100644
  summary_info += {'libdaxctl support': libdaxctl}
  summary_info += {'libudev':           libudev}
 diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 864b8ce97c..705a65ab1a 100644
+index 4ec70acf95..47118bf83e 100644
 --- a/qapi/block-core.json
 +++ b/qapi/block-core.json
-@@ -3198,6 +3198,7 @@
+@@ -3301,6 +3301,7 @@
              'parallels', 'preallocate', 'qcow', 'qcow2', 'qed', 'quorum',
              'raw', 'rbd',
              { 'name': 'replication', 'if': 'CONFIG_REPLICATION' },
@@ -384,7 +414,7 @@ index 864b8ce97c..705a65ab1a 100644
              'ssh', 'throttle', 'vdi', 'vhdx',
              { 'name': 'virtio-blk-vfio-pci', 'if': 'CONFIG_BLKIO' },
              { 'name': 'virtio-blk-vhost-user', 'if': 'CONFIG_BLKIO' },
-@@ -3274,6 +3275,17 @@
+@@ -3377,6 +3378,17 @@
  { 'struct': 'BlockdevOptionsNull',
    'data': { '*size': 'int', '*latency-ns': 'uint64', '*read-zeroes': 'bool' } }
  
@@ -397,12 +427,12 @@ index 864b8ce97c..705a65ab1a 100644
 +{ 'struct': 'BlockdevOptionsPbs',
 +  'data': { 'repository': 'str', 'snapshot': 'str', 'archive': 'str',
 +            '*keyfile': 'str', '*password': 'str', '*fingerprint': 'str',
-+            '*key_password': 'str' } }
++            '*key_password': 'str', '*namespace': 'str' } }
 +
  ##
  # @BlockdevOptionsNVMe:
  #
-@@ -4647,6 +4659,7 @@
+@@ -4750,6 +4762,7 @@
        'nfs':        'BlockdevOptionsNfs',
        'null-aio':   'BlockdevOptionsNull',
        'null-co':    'BlockdevOptionsNull',
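
To make the new blockdev options concrete: besides the pbs: filename syntax in
the driver comment above, an archive can be attached read-only via regular
blockdev options. A hypothetical QMP call is sketched below; all values are
placeholders, and the 'pbs' driver entry in BlockdevOptions is assumed from the
enum addition above:

    -> { "execute": "blockdev-add",
         "arguments": { "driver": "pbs", "node-name": "pbs-map-0",
                        "repository": "user@pbs@server:datastore",
                        "namespace": "dev/vm100",
                        "snapshot": "vm/100/2023-05-15T10:00:00Z",
                        "archive": "drive-scsi0.img.fidx" } }
    <- { "return": {} }

The password and key_password options are omitted here; as the driver code
above shows, key_password can also be supplied via the PBS_ENCRYPTION_PASSWORD
environment variable.
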
diff --git a/debian/patches/pve/0034-PVE-add-query_proxmox_support-QMP-command.patch b/debian/patches/pve/0034-PVE-add-query_proxmox_support-QMP-command.patch
deleted file mode 100644
index 0b14e6f..0000000
--- a/debian/patches/pve/0034-PVE-add-query_proxmox_support-QMP-command.patch
+++ /dev/null
@@ -1,74 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Wed, 8 Jul 2020 11:57:53 +0200
-Subject: [PATCH] PVE: add query_proxmox_support QMP command
-
-Generic interface for future use, currently used for PBS dirty-bitmap
-backup support.
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[PVE: query-proxmox-support: include library version]
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
----
- pve-backup.c         |  9 +++++++++
- qapi/block-core.json | 29 +++++++++++++++++++++++++++++
- 2 files changed, 38 insertions(+)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 8f18145255..1400c21c49 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -1054,3 +1054,12 @@ BackupStatus *qmp_query_backup(Error **errp)
- 
-     return info;
- }
-+
-+ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
-+{
-+    ProxmoxSupportStatus *ret = g_malloc0(sizeof(*ret));
-+    ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
-+    ret->pbs_dirty_bitmap = true;
-+    ret->pbs_dirty_bitmap_savevm = true;
-+    return ret;
-+}
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 705a65ab1a..1ac535fcf2 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -958,6 +958,35 @@
- ##
- { 'command': 'backup-cancel' }
- 
-+##
-+# @ProxmoxSupportStatus:
-+#
-+# Contains info about supported features added by Proxmox.
-+#
-+# @pbs-dirty-bitmap: True if dirty-bitmap-incremental backups to PBS are
-+#                    supported.
-+#
-+# @pbs-dirty-bitmap-savevm: True if 'dirty-bitmaps' migration capability can
-+#                           safely be set for savevm-async.
-+#
-+# @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
-+#
-+##
-+{ 'struct': 'ProxmoxSupportStatus',
-+  'data': { 'pbs-dirty-bitmap': 'bool',
-+            'pbs-dirty-bitmap-savevm': 'bool',
-+            'pbs-library-version': 'str' } }
-+
-+##
-+# @query-proxmox-support:
-+#
-+# Returns information about supported features added by Proxmox.
-+#
-+# Returns: @ProxmoxSupportStatus
-+#
-+##
-+{ 'command': 'query-proxmox-support', 'returns': 'ProxmoxSupportStatus' }
-+
- ##
- # @BlockDeviceTimedStats:
- #
diff --git a/debian/patches/pve/0036-PVE-redirect-stderr-to-journal-when-daemonized.patch b/debian/patches/pve/0034-PVE-redirect-stderr-to-journal-when-daemonized.patch
similarity index 100%
rename from debian/patches/pve/0036-PVE-redirect-stderr-to-journal-when-daemonized.patch
rename to debian/patches/pve/0034-PVE-redirect-stderr-to-journal-when-daemonized.patch
diff --git a/debian/patches/pve/0040-PVE-Migrate-dirty-bitmap-state-via-savevm.patch b/debian/patches/pve/0035-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
similarity index 93%
rename from debian/patches/pve/0040-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
rename to debian/patches/pve/0035-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
index 644afc2..a3d7386 100644
--- a/debian/patches/pve/0040-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
+++ b/debian/patches/pve/0035-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
@@ -175,22 +175,22 @@ index 0000000000..887e998b9e
 +                         NULL);
 +}
 diff --git a/pve-backup.c b/pve-backup.c
-index 1da9dd9edc..e0e38063a8 100644
+index dd72ee0ed6..cb5312fff3 100644
 --- a/pve-backup.c
 +++ b/pve-backup.c
-@@ -1110,6 +1110,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
+@@ -1090,6 +1090,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
      ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
      ret->pbs_dirty_bitmap = true;
      ret->pbs_dirty_bitmap_savevm = true;
 +    ret->pbs_dirty_bitmap_migration = true;
      ret->query_bitmap_info = true;
-     return ret;
- }
+     ret->pbs_masterkey = true;
+     ret->backup_max_workers = true;
 diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 43838212e3..e7412f6322 100644
+index 47118bf83e..809f3c61bc 100644
 --- a/qapi/block-core.json
 +++ b/qapi/block-core.json
-@@ -974,6 +974,11 @@
+@@ -984,6 +984,11 @@
  # @pbs-dirty-bitmap-savevm: True if 'dirty-bitmaps' migration capability can
  #                           safely be set for savevm-async.
  #
@@ -199,14 +199,14 @@ index 43838212e3..e7412f6322 100644
 +#                              migration cap if this is false/unset may lead
 +#                              to crashes on migration!
 +#
- # @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
+ # @pbs-masterkey: True if the QMP backup call supports the 'master_keyfile'
+ #                 parameter.
  #
- ##
-@@ -981,6 +986,7 @@
+@@ -994,6 +999,7 @@
    'data': { 'pbs-dirty-bitmap': 'bool',
              'query-bitmap-info': 'bool',
              'pbs-dirty-bitmap-savevm': 'bool',
 +            'pbs-dirty-bitmap-migration': 'bool',
-             'pbs-library-version': 'str' } }
- 
- ##
+             'pbs-masterkey': 'bool',
+             'pbs-library-version': 'str',
+             'backup-max-workers': 'bool' } }
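
To illustrate the intended use of the new flag: a management client would check
query-proxmox-support first and only enable the capability when the flag is
present and true. A short sketch (the reply values are placeholders; the second
command is regular QMP, not added by this patch):

    -> { "execute": "query-proxmox-support" }
    <- { "return": { "pbs-dirty-bitmap": true, "query-bitmap-info": true,
                     "pbs-dirty-bitmap-savevm": true, "pbs-dirty-bitmap-migration": true,
                     "pbs-masterkey": true, "backup-max-workers": true,
                     "pbs-library-version": "1.4.0" } }

    -> { "execute": "migrate-set-capabilities",
         "arguments": { "capabilities": [ { "capability": "dirty-bitmaps", "state": true } ] } }
    <- { "return": {} }
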
diff --git a/debian/patches/pve/0035-PVE-add-query-pbs-bitmap-info-QMP-call.patch b/debian/patches/pve/0035-PVE-add-query-pbs-bitmap-info-QMP-call.patch
deleted file mode 100644
index 2e0f235..0000000
--- a/debian/patches/pve/0035-PVE-add-query-pbs-bitmap-info-QMP-call.patch
+++ /dev/null
@@ -1,441 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Wed, 19 Aug 2020 17:02:00 +0200
-Subject: [PATCH] PVE: add query-pbs-bitmap-info QMP call
-
-Returns advanced information about dirty bitmaps used (or not used) for
-the latest PBS backup.
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
----
- monitor/hmp-cmds.c   |  28 ++++++-----
- pve-backup.c         | 117 ++++++++++++++++++++++++++++++++-----------
- qapi/block-core.json |  56 +++++++++++++++++++++
- 3 files changed, 159 insertions(+), 42 deletions(-)
-
-diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
-index 087161a967..9a67e544ce 100644
---- a/monitor/hmp-cmds.c
-+++ b/monitor/hmp-cmds.c
-@@ -148,6 +148,7 @@ void hmp_sync_profile(Monitor *mon, const QDict *qdict)
- void hmp_info_backup(Monitor *mon, const QDict *qdict)
- {
-     BackupStatus *info;
-+    PBSBitmapInfoList *bitmap_info;
- 
-     info = qmp_query_backup(NULL);
- 
-@@ -178,26 +179,29 @@ void hmp_info_backup(Monitor *mon, const QDict *qdict)
-             // this should not happen normally
-             monitor_printf(mon, "Total size: %d\n", 0);
-         } else {
--            bool incremental = false;
-             size_t total_or_dirty = info->total;
--            if (info->has_transferred) {
--                if (info->has_dirty && info->dirty) {
--                     if (info->dirty < info->total) {
--                        total_or_dirty = info->dirty;
--                        incremental = true;
--                    }
--                }
-+            bitmap_info = qmp_query_pbs_bitmap_info(NULL);
-+
-+            while (bitmap_info) {
-+                monitor_printf(mon, "Drive %s:\n",
-+                        bitmap_info->value->drive);
-+                monitor_printf(mon, "  bitmap action: %s\n",
-+                        PBSBitmapAction_str(bitmap_info->value->action));
-+                monitor_printf(mon, "  size: %zd\n",
-+                        bitmap_info->value->size);
-+                monitor_printf(mon, "  dirty: %zd\n",
-+                        bitmap_info->value->dirty);
-+                bitmap_info = bitmap_info->next;
-             }
- 
--            int per = (info->transferred * 100)/total_or_dirty;
--
--            monitor_printf(mon, "Backup mode: %s\n", incremental ? "incremental" : "full");
-+            qapi_free_PBSBitmapInfoList(bitmap_info);
- 
-             int zero_per = (info->has_zero_bytes && info->zero_bytes) ?
-                 (info->zero_bytes * 100)/info->total : 0;
-             monitor_printf(mon, "Total size: %zd\n", info->total);
-+            int trans_per = (info->transferred * 100)/total_or_dirty;
-             monitor_printf(mon, "Transferred bytes: %zd (%d%%)\n",
--                           info->transferred, per);
-+                           info->transferred, trans_per);
-             monitor_printf(mon, "Zero bytes: %zd (%d%%)\n",
-                            info->zero_bytes, zero_per);
- 
-diff --git a/pve-backup.c b/pve-backup.c
-index 1400c21c49..0a0996b971 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -48,6 +48,7 @@ static struct PVEBackupState {
-         size_t transferred;
-         size_t reused;
-         size_t zero_bytes;
-+        GList *bitmap_list;
-     } stat;
-     int64_t speed;
-     VmaWriter *vmaw;
-@@ -663,7 +664,6 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     }
- 
-     size_t total = 0;
--    size_t dirty = 0;
- 
-     l = di_list;
-     while (l) {
-@@ -684,18 +684,33 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
- 
-     uuid_generate(uuid);
- 
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.reused = 0;
-+
-+    /* clear previous backup's bitmap_list */
-+    if (backup_state.stat.bitmap_list) {
-+        GList *bl = backup_state.stat.bitmap_list;
-+        while (bl) {
-+            g_free(((PBSBitmapInfo *)bl->data)->drive);
-+            g_free(bl->data);
-+            bl = g_list_next(bl);
-+        }
-+        g_list_free(backup_state.stat.bitmap_list);
-+        backup_state.stat.bitmap_list = NULL;
-+    }
-+
-     if (format == BACKUP_FORMAT_PBS) {
-         if (!task->password) {
-             error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'password'");
--            goto err;
-+            goto err_mutex;
-         }
-         if (!task->backup_id) {
-             error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-id'");
--            goto err;
-+            goto err_mutex;
-         }
-         if (!task->has_backup_time) {
-             error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-time'");
--            goto err;
-+            goto err_mutex;
-         }
- 
-         int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
-@@ -722,12 +737,12 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-                       "proxmox_backup_new failed: %s", pbs_err);
-             proxmox_backup_free_error(pbs_err);
--            goto err;
-+            goto err_mutex;
-         }
- 
-         int connect_result = proxmox_backup_co_connect(pbs, task->errp);
-         if (connect_result < 0)
--            goto err;
-+            goto err_mutex;
- 
-         /* register all devices */
-         l = di_list;
-@@ -738,6 +753,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             di->block_size = dump_cb_block_size;
- 
-             const char *devname = bdrv_get_device_name(di->bs);
-+            PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
-+            size_t dirty = di->size;
- 
-             BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
-             bool expect_only_dirty = false;
-@@ -746,49 +763,59 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-                 if (bitmap == NULL) {
-                     bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, task->errp);
-                     if (!bitmap) {
--                        goto err;
-+                        goto err_mutex;
-                     }
-+                    action = PBS_BITMAP_ACTION_NEW;
-                 } else {
-                     expect_only_dirty = proxmox_backup_check_incremental(pbs, devname, di->size) != 0;
-                 }
- 
-                 if (expect_only_dirty) {
--                    dirty += bdrv_get_dirty_count(bitmap);
-+                    /* track clean chunks as reused */
-+                    dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
-+                    backup_state.stat.reused += di->size - dirty;
-+                    action = PBS_BITMAP_ACTION_USED;
-                 } else {
-                     /* mark entire bitmap as dirty to make full backup */
-                     bdrv_set_dirty_bitmap(bitmap, 0, di->size);
--                    dirty += di->size;
-+                    if (action != PBS_BITMAP_ACTION_NEW) {
-+                        action = PBS_BITMAP_ACTION_INVALID;
-+                    }
-                 }
-                 di->bitmap = bitmap;
-             } else {
--                dirty += di->size;
--
-                 /* after a full backup the old dirty bitmap is invalid anyway */
-                 if (bitmap != NULL) {
-                     bdrv_release_dirty_bitmap(bitmap);
-+                    action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
-                 }
-             }
- 
-             int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, task->errp);
-             if (dev_id < 0) {
--                goto err;
-+                goto err_mutex;
-             }
- 
-             if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, task->errp))) {
--                goto err;
-+                goto err_mutex;
-             }
- 
-             di->dev_id = dev_id;
-+
-+            PBSBitmapInfo *info = g_malloc(sizeof(*info));
-+            info->drive = g_strdup(devname);
-+            info->action = action;
-+            info->size = di->size;
-+            info->dirty = dirty;
-+            backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
-         }
-     } else if (format == BACKUP_FORMAT_VMA) {
--        dirty = total;
--
-         vmaw = vma_writer_create(task->backup_file, uuid, &local_err);
-         if (!vmaw) {
-             if (local_err) {
-                 error_propagate(task->errp, local_err);
-             }
--            goto err;
-+            goto err_mutex;
-         }
- 
-         /* register all devices for vma writer */
-@@ -798,7 +825,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             l = g_list_next(l);
- 
-             if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, task->errp))) {
--                goto err;
-+                goto err_mutex;
-             }
- 
-             const char *devname = bdrv_get_device_name(di->bs);
-@@ -806,16 +833,14 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             if (di->dev_id <= 0) {
-                 error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-                           "register_stream failed");
--                goto err;
-+                goto err_mutex;
-             }
-         }
-     } else if (format == BACKUP_FORMAT_DIR) {
--        dirty = total;
--
-         if (mkdir(task->backup_file, 0640) != 0) {
-             error_setg_errno(task->errp, errno, "can't create directory '%s'\n",
-                              task->backup_file);
--            goto err;
-+            goto err_mutex;
-         }
-         backup_dir = task->backup_file;
- 
-@@ -832,18 +857,18 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-                             di->size, flags, false, &local_err);
-             if (local_err) {
-                 error_propagate(task->errp, local_err);
--                goto err;
-+                goto err_mutex;
-             }
- 
-             di->target = bdrv_co_open(di->targetfile, NULL, NULL, flags, &local_err);
-             if (!di->target) {
-                 error_propagate(task->errp, local_err);
--                goto err;
-+                goto err_mutex;
-             }
-         }
-     } else {
-         error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "unknown backup format");
--        goto err;
-+        goto err_mutex;
-     }
- 
- 
-@@ -851,7 +876,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     if (task->config_file) {
-         if (pvebackup_co_add_config(task->config_file, config_name, format, backup_dir,
-                                     vmaw, pbs, task->errp) != 0) {
--            goto err;
-+            goto err_mutex;
-         }
-     }
- 
-@@ -859,12 +884,11 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     if (task->firewall_file) {
-         if (pvebackup_co_add_config(task->firewall_file, firewall_name, format, backup_dir,
-                                     vmaw, pbs, task->errp) != 0) {
--            goto err;
-+            goto err_mutex;
-         }
-     }
-     /* initialize global backup_state now */
--
--    qemu_mutex_lock(&backup_state.stat.lock);
-+    /* note: 'reused' and 'bitmap_list' are initialized earlier */
- 
-     if (backup_state.stat.error) {
-         error_free(backup_state.stat.error);
-@@ -884,10 +908,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
- 
-     backup_state.stat.total = total;
--    backup_state.stat.dirty = dirty;
-+    backup_state.stat.dirty = total - backup_state.stat.reused;
-     backup_state.stat.transferred = 0;
-     backup_state.stat.zero_bytes = 0;
--    backup_state.stat.reused = format == BACKUP_FORMAT_PBS && dirty >= total ? 0 : total - dirty;
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-@@ -904,6 +927,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     task->result = uuid_info;
-     return;
- 
-+err_mutex:
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+
- err:
- 
-     l = di_list;
-@@ -1055,11 +1081,42 @@ BackupStatus *qmp_query_backup(Error **errp)
-     return info;
- }
- 
-+PBSBitmapInfoList *qmp_query_pbs_bitmap_info(Error **errp)
-+{
-+    PBSBitmapInfoList *head = NULL, **p_next = &head;
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+
-+    GList *l = backup_state.stat.bitmap_list;
-+    while (l) {
-+        PBSBitmapInfo *info = (PBSBitmapInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        /* clone bitmap info to avoid auto free after QMP marshalling */
-+        PBSBitmapInfo *info_ret = g_malloc0(sizeof(*info_ret));
-+        info_ret->drive = g_strdup(info->drive);
-+        info_ret->action = info->action;
-+        info_ret->size = info->size;
-+        info_ret->dirty = info->dirty;
-+
-+        PBSBitmapInfoList *info_list = g_malloc0(sizeof(*info_list));
-+        info_list->value = info_ret;
-+
-+        *p_next = info_list;
-+        p_next = &info_list->next;
-+    }
-+
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+
-+    return head;
-+}
-+
- ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
- {
-     ProxmoxSupportStatus *ret = g_malloc0(sizeof(*ret));
-     ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
-     ret->pbs_dirty_bitmap = true;
-     ret->pbs_dirty_bitmap_savevm = true;
-+    ret->query_bitmap_info = true;
-     return ret;
- }
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 1ac535fcf2..130d5f065f 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -966,6 +966,8 @@
- # @pbs-dirty-bitmap: True if dirty-bitmap-incremental backups to PBS are
- #                    supported.
- #
-+# @query-bitmap-info: True if the 'query-pbs-bitmap-info' QMP call is supported.
-+#
- # @pbs-dirty-bitmap-savevm: True if 'dirty-bitmaps' migration capability can
- #                           safely be set for savevm-async.
- #
-@@ -974,6 +976,7 @@
- ##
- { 'struct': 'ProxmoxSupportStatus',
-   'data': { 'pbs-dirty-bitmap': 'bool',
-+            'query-bitmap-info': 'bool',
-             'pbs-dirty-bitmap-savevm': 'bool',
-             'pbs-library-version': 'str' } }
- 
-@@ -987,6 +990,59 @@
- ##
- { 'command': 'query-proxmox-support', 'returns': 'ProxmoxSupportStatus' }
- 
-+##
-+# @PBSBitmapAction:
-+#
-+# An action taken on a dirty-bitmap when a backup job was started.
-+#
-+# @not-used: Bitmap mode was not enabled.
-+#
-+# @not-used-removed: Bitmap mode was not enabled, but a bitmap from a
-+#                    previous backup still existed and was removed.
-+#
-+# @new: A new bitmap was attached to the drive for this backup.
-+#
-+# @used: An existing bitmap will be used to only backup changed data.
-+#
-+# @invalid: A bitmap existed, but had to be cleared since it's associated
-+#           base snapshot did not match the base given for the current job or
-+#           the crypt mode has changed.
-+#
-+##
-+{ 'enum': 'PBSBitmapAction',
-+  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid'] }
-+
-+##
-+# @PBSBitmapInfo:
-+#
-+# Contains information about dirty bitmaps used for each drive in a PBS backup.
-+#
-+# @drive: The underlying drive.
-+#
-+# @action: The action that was taken when the backup started.
-+#
-+# @size: The total size of the drive.
-+#
-+# @dirty: How much of the drive is considered dirty and will be backed up,
-+#         or 'size' if everything will be.
-+#
-+##
-+{ 'struct': 'PBSBitmapInfo',
-+  'data': { 'drive': 'str', 'action': 'PBSBitmapAction', 'size': 'int',
-+            'dirty': 'int' } }
-+
-+##
-+# @query-pbs-bitmap-info:
-+#
-+# Returns information about dirty bitmaps used on the most recently started
-+# backup. Returns nothing when the last backup was not using PBS or if no
-+# backup occured in this session.
-+#
-+# Returns: @PBSBitmapInfo
-+#
-+##
-+{ 'command': 'query-pbs-bitmap-info', 'returns': ['PBSBitmapInfo'] }
-+
- ##
- # @BlockDeviceTimedStats:
- #
diff --git a/debian/patches/pve/0041-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch b/debian/patches/pve/0036-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
similarity index 100%
rename from debian/patches/pve/0041-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
rename to debian/patches/pve/0036-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
diff --git a/debian/patches/pve/0042-PVE-fall-back-to-open-iscsi-initiatorname.patch b/debian/patches/pve/0037-PVE-fall-back-to-open-iscsi-initiatorname.patch
similarity index 100%
rename from debian/patches/pve/0042-PVE-fall-back-to-open-iscsi-initiatorname.patch
rename to debian/patches/pve/0037-PVE-fall-back-to-open-iscsi-initiatorname.patch
diff --git a/debian/patches/pve/0038-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch b/debian/patches/pve/0038-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
deleted file mode 100644
index 7a5cda5..0000000
--- a/debian/patches/pve/0038-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
+++ /dev/null
@@ -1,293 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Thu, 20 Aug 2020 14:25:00 +0200
-Subject: [PATCH] PVE-Backup: Use a transaction to synchronize job states
-
-By using a JobTxn, we can sync dirty bitmaps only when *all* jobs were
-successful - meaning we don't need to remove them when the backup fails,
-since QEMU's BITMAP_SYNC_MODE_ON_SUCCESS will now handle that for us.
-
-To keep the rate-limiting and IO impact from before, we use a sequential
-transaction, so drives will still be backed up one after the other.
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: add new force parameter to job_cancel_sync calls
-     adapt for new job lock mechanism replacing AioContext locks]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- pve-backup.c | 163 ++++++++++++++++-----------------------------------
- 1 file changed, 50 insertions(+), 113 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 0a0996b971..629da3e6c7 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -54,6 +54,7 @@ static struct PVEBackupState {
-     VmaWriter *vmaw;
-     ProxmoxBackupHandle *pbs;
-     GList *di_list;
-+    JobTxn *txn;
-     QemuMutex backup_mutex;
-     CoMutex dump_callback_mutex;
- } backup_state;
-@@ -73,34 +74,12 @@ typedef struct PVEBackupDevInfo {
-     size_t size;
-     uint64_t block_size;
-     uint8_t dev_id;
--    bool completed;
-     char targetfile[PATH_MAX];
-     BdrvDirtyBitmap *bitmap;
-     BlockDriverState *target;
-+    BlockJob *job;
- } PVEBackupDevInfo;
- 
--static void pvebackup_run_next_job(void);
--
--static BlockJob *
--lookup_active_block_job(PVEBackupDevInfo *di)
--{
--    if (!di->completed && di->bs) {
--        WITH_JOB_LOCK_GUARD() {
--            for (BlockJob *job = block_job_next_locked(NULL); job; job = block_job_next_locked(job)) {
--                if (job->job.driver->job_type != JOB_TYPE_BACKUP) {
--                    continue;
--                }
--
--                BackupBlockJob *bjob = container_of(job, BackupBlockJob, common);
--                if (bjob && bjob->source_bs == di->bs) {
--                    return job;
--                }
--            }
--        }
--    }
--    return NULL;
--}
--
- static void pvebackup_propagate_error(Error *err)
- {
-     qemu_mutex_lock(&backup_state.stat.lock);
-@@ -276,18 +255,6 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
-             if (local_err != NULL) {
-                 pvebackup_propagate_error(local_err);
-             }
--        } else {
--            // on error or cancel we cannot ensure synchronization of dirty
--            // bitmaps with backup server, so remove all and do full backup next
--            GList *l = backup_state.di_list;
--            while (l) {
--                PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
--                l = g_list_next(l);
--
--                if (di->bitmap) {
--                    bdrv_release_dirty_bitmap(di->bitmap);
--                }
--            }
-         }
- 
-         proxmox_backup_disconnect(backup_state.pbs);
-@@ -326,8 +293,6 @@ static void pvebackup_complete_cb(void *opaque, int ret)
- 
-     qemu_mutex_lock(&backup_state.backup_mutex);
- 
--    di->completed = true;
--
-     if (ret < 0) {
-         Error *local_err = NULL;
-         error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
-@@ -340,20 +305,17 @@ static void pvebackup_complete_cb(void *opaque, int ret)
- 
-     block_on_coroutine_fn(pvebackup_complete_stream, di);
- 
--    // remove self from job queue
-+    // remove self from job list
-     backup_state.di_list = g_list_remove(backup_state.di_list, di);
- 
--    if (di->bitmap && ret < 0) {
--        // on error or cancel we cannot ensure synchronization of dirty
--        // bitmaps with backup server, so remove all and do full backup next
--        bdrv_release_dirty_bitmap(di->bitmap);
--    }
--
-     g_free(di);
- 
--    qemu_mutex_unlock(&backup_state.backup_mutex);
-+    /* call cleanup if we're the last job */
-+    if (!g_list_first(backup_state.di_list)) {
-+        block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
-+    }
- 
--    pvebackup_run_next_job();
-+    qemu_mutex_unlock(&backup_state.backup_mutex);
- }
- 
- static void pvebackup_cancel(void)
-@@ -375,32 +337,28 @@ static void pvebackup_cancel(void)
-         proxmox_backup_abort(backup_state.pbs, "backup canceled");
-     }
- 
--    qemu_mutex_unlock(&backup_state.backup_mutex);
--
--    for(;;) {
--
--        BlockJob *next_job = NULL;
-+    /* it's enough to cancel one job in the transaction, the rest will follow
-+     * automatically */
-+    GList *bdi = g_list_first(backup_state.di_list);
-+    BlockJob *cancel_job = bdi && bdi->data ?
-+        ((PVEBackupDevInfo *)bdi->data)->job :
-+        NULL;
- 
--        qemu_mutex_lock(&backup_state.backup_mutex);
--
--        GList *l = backup_state.di_list;
--        while (l) {
--            PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
--            l = g_list_next(l);
--
--            BlockJob *job = lookup_active_block_job(di);
--            if (job != NULL) {
--                next_job = job;
--                break;
--            }
-+    /* ref the job before releasing the mutex, just to be safe */
-+    if (cancel_job) {
-+        WITH_JOB_LOCK_GUARD() {
-+            job_ref_locked(&cancel_job->job);
-         }
-+    }
- 
--        qemu_mutex_unlock(&backup_state.backup_mutex);
-+    /* job_cancel_sync may enter the job, so we need to release the
-+     * backup_mutex to avoid deadlock */
-+    qemu_mutex_unlock(&backup_state.backup_mutex);
- 
--        if (next_job) {
--            job_cancel_sync(&next_job->job, true);
--        } else {
--            break;
-+    if (cancel_job) {
-+        WITH_JOB_LOCK_GUARD() {
-+            job_cancel_sync_locked(&cancel_job->job, true);
-+            job_unref_locked(&cancel_job->job);
-         }
-     }
- }
-@@ -460,49 +418,19 @@ static int coroutine_fn pvebackup_co_add_config(
-     goto out;
- }
- 
--bool job_should_pause_locked(Job *job);
--
--static void pvebackup_run_next_job(void)
--{
--    assert(!qemu_in_coroutine());
--
--    qemu_mutex_lock(&backup_state.backup_mutex);
--
--    GList *l = backup_state.di_list;
--    while (l) {
--        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
--        l = g_list_next(l);
--
--        BlockJob *job = lookup_active_block_job(di);
--
--        if (job) {
--            qemu_mutex_unlock(&backup_state.backup_mutex);
--
--            WITH_JOB_LOCK_GUARD() {
--                if (job_should_pause_locked(&job->job)) {
--                    bool error_or_canceled = pvebackup_error_or_canceled();
--                    if (error_or_canceled) {
--                        job_cancel_sync_locked(&job->job, true);
--                    } else {
--                        job_resume_locked(&job->job);
--                    }
--                }
--            }
--            return;
--        }
--    }
--
--    block_on_coroutine_fn(pvebackup_co_cleanup, NULL); // no more jobs, run cleanup
--
--    qemu_mutex_unlock(&backup_state.backup_mutex);
--}
--
- static bool create_backup_jobs(void) {
- 
-     assert(!qemu_in_coroutine());
- 
-     Error *local_err = NULL;
- 
-+    /* create job transaction to synchronize bitmap commit and cancel all
-+     * jobs in case one errors */
-+    if (backup_state.txn) {
-+        job_txn_unref(backup_state.txn);
-+    }
-+    backup_state.txn = job_txn_new_seq();
-+
-     BackupPerf perf = { .max_workers = 16 };
- 
-     /* create and start all jobs (paused state) */
-@@ -525,7 +453,7 @@ static bool create_backup_jobs(void) {
-         BlockJob *job = backup_job_create(
-             NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
-             bitmap_mode, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
--            JOB_DEFAULT, pvebackup_complete_cb, di, NULL, &local_err);
-+            JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
- 
-         aio_context_release(aio_context);
- 
-@@ -537,7 +465,8 @@ static bool create_backup_jobs(void) {
-             pvebackup_propagate_error(create_job_err);
-             break;
-         }
--        job_start(&job->job);
-+
-+        di->job = job;
- 
-         bdrv_unref(di->target);
-         di->target = NULL;
-@@ -555,6 +484,12 @@ static bool create_backup_jobs(void) {
-                 bdrv_unref(di->target);
-                 di->target = NULL;
-             }
-+
-+            if (di->job) {
-+                WITH_JOB_LOCK_GUARD() {
-+                    job_unref_locked(&di->job->job);
-+                }
-+            }
-         }
-     }
- 
-@@ -937,10 +872,6 @@ err:
-         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-         l = g_list_next(l);
- 
--        if (di->bitmap) {
--            bdrv_release_dirty_bitmap(di->bitmap);
--        }
--
-         if (di->target) {
-             bdrv_co_unref(di->target);
-         }
-@@ -1021,9 +952,15 @@ UuidInfo *qmp_backup(
-     block_on_coroutine_fn(pvebackup_co_prepare, &task);
- 
-     if (*errp == NULL) {
--        create_backup_jobs();
-+        bool errors = create_backup_jobs();
-         qemu_mutex_unlock(&backup_state.backup_mutex);
--        pvebackup_run_next_job();
-+
-+        if (!errors) {
-+            /* start the first job in the transaction
-+             * note: this might directly enter the job, so we need to do this
-+             * after unlocking the backup_mutex */
-+            job_txn_start_seq(backup_state.txn);
-+        }
-     } else {
-         qemu_mutex_unlock(&backup_state.backup_mutex);
-     }
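
For reference, the job-transaction pattern from the patch removed above boils down to the
following sketch. It uses the same helpers as the hunks above and only compiles inside the
QEMU tree; setup_one_backup_job() is a placeholder rather than a function from the patch,
and the real code additionally keeps the transaction in backup_state.txn instead of dropping
its reference right away:

    /* Create all backup jobs paused inside one sequential transaction, then
     * start the first one. The JobTxn runs the jobs one after the other and
     * only commits (i.e. syncs the dirty bitmaps) if every job succeeds;
     * cancelling or failing one job aborts the whole transaction. */
    Error *local_err = NULL;
    JobTxn *txn = job_txn_new_seq();

    for (GList *l = di_list; l; l = g_list_next(l)) {
        PVEBackupDevInfo *di = l->data;
        /* placeholder wrapper around backup_job_create(..., txn, &local_err) */
        BlockJob *job = setup_one_backup_job(di, txn, &local_err);
        if (!job || local_err) {
            break;              /* error path cancels/unrefs the created jobs */
        }
        di->job = job;
    }

    if (!local_err) {
        job_txn_start_seq(txn); /* kicks off the first job; the rest follow */
    }
    job_txn_unref(txn);         /* each job holds its own transaction reference */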
diff --git a/debian/patches/pve/0046-PVE-block-stream-increase-chunk-size.patch b/debian/patches/pve/0038-PVE-block-stream-increase-chunk-size.patch
similarity index 100%
rename from debian/patches/pve/0046-PVE-block-stream-increase-chunk-size.patch
rename to debian/patches/pve/0038-PVE-block-stream-increase-chunk-size.patch
diff --git a/debian/patches/pve/0039-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch b/debian/patches/pve/0039-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch
deleted file mode 100644
index 15eb190..0000000
--- a/debian/patches/pve/0039-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch
+++ /dev/null
@@ -1,499 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Mon, 28 Sep 2020 13:40:51 +0200
-Subject: [PATCH] PVE-Backup: Don't block on finishing and cleanup
- create_backup_jobs
-
-proxmox_backup_co_finish is already async, but previously we would wait
-for the coroutine using block_on_coroutine_fn(). Avoid this by
-scheduling pvebackup_co_complete_stream (and thus pvebackup_co_cleanup)
-as a real coroutine when calling from pvebackup_complete_cb. This is ok,
-since complete_stream uses the backup_mutex internally to synchronize,
-and other streams can happily continue writing in the meantime anyway.
-
-To accommodate, backup_mutex is converted to a CoMutex. This means
-converting every user to a coroutine. This is not just useful here, but
-will come in handy once this series[0] is merged, and QMP calls can be
-yield-able coroutines too. Then we can also finally get rid of
-block_on_coroutine_fn.
-
-Cases of aio_context_acquire/release from within what is now a coroutine
-are changed to aio_co_reschedule_self, which works since a running
-coroutine always holds the aio lock for the context it is running in.
-
-job_cancel_sync is called from a BH since it can't be run from a
-coroutine (uses AIO_WAIT_WHILE internally).
-
-Same thing for create_backup_jobs, which is converted to a BH too.
-
-To communicate the finishing state, a new property is introduced to
-query-backup: 'finishing'. A new state is explicitly not used, since
-that would break compatibility with older qemu-server versions.
-
-Also fix create_backup_jobs:
-
-No more weird bool returns, just the standard "errp" format used
-everywhere else too. With this, if backup_job_create fails, the error
-message is actually returned over QMP and can be shown to the user.
-
-To facilitate correct cleanup on such an error, we call
-create_backup_jobs as a bottom half directly from pvebackup_co_prepare.
-This additionally allows us to actually hold the backup_mutex during
-operation.
-
-Also add a job_cancel_sync before job_unref, since a job must be in
-STATUS_NULL to be deleted by unref, which could trigger an assert
-before.
-
-[0] https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg03515.html
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: add new force parameter to job_cancel_sync calls]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- pve-backup.c         | 212 +++++++++++++++++++++++++++----------------
- qapi/block-core.json |   5 +-
- 2 files changed, 138 insertions(+), 79 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 629da3e6c7..1da9dd9edc 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -35,7 +35,9 @@ const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
- 
- static struct PVEBackupState {
-     struct {
--        // Everithing accessed from qmp_backup_query command is protected using lock
-+        // Everything accessed from qmp_backup_query command is protected using
-+        // this lock. Do NOT hold this lock for long times, as it is sometimes
-+        // acquired from coroutines, and thus any wait time may block the guest.
-         QemuMutex lock;
-         Error *error;
-         time_t start_time;
-@@ -49,20 +51,22 @@ static struct PVEBackupState {
-         size_t reused;
-         size_t zero_bytes;
-         GList *bitmap_list;
-+        bool finishing;
-+        bool starting;
-     } stat;
-     int64_t speed;
-     VmaWriter *vmaw;
-     ProxmoxBackupHandle *pbs;
-     GList *di_list;
-     JobTxn *txn;
--    QemuMutex backup_mutex;
-+    CoMutex backup_mutex;
-     CoMutex dump_callback_mutex;
- } backup_state;
- 
- static void pvebackup_init(void)
- {
-     qemu_mutex_init(&backup_state.stat.lock);
--    qemu_mutex_init(&backup_state.backup_mutex);
-+    qemu_co_mutex_init(&backup_state.backup_mutex);
-     qemu_co_mutex_init(&backup_state.dump_callback_mutex);
- }
- 
-@@ -74,6 +78,7 @@ typedef struct PVEBackupDevInfo {
-     size_t size;
-     uint64_t block_size;
-     uint8_t dev_id;
-+    int completed_ret; // INT_MAX if not completed
-     char targetfile[PATH_MAX];
-     BdrvDirtyBitmap *bitmap;
-     BlockDriverState *target;
-@@ -229,12 +234,12 @@ pvebackup_co_dump_vma_cb(
- }
- 
- // assumes the caller holds backup_mutex
--static void coroutine_fn pvebackup_co_cleanup(void *unused)
-+static void coroutine_fn pvebackup_co_cleanup(void)
- {
-     assert(qemu_in_coroutine());
- 
-     qemu_mutex_lock(&backup_state.stat.lock);
--    backup_state.stat.end_time = time(NULL);
-+    backup_state.stat.finishing = true;
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-     if (backup_state.vmaw) {
-@@ -263,35 +268,29 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
- 
-     g_list_free(backup_state.di_list);
-     backup_state.di_list = NULL;
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.end_time = time(NULL);
-+    backup_state.stat.finishing = false;
-+    qemu_mutex_unlock(&backup_state.stat.lock);
- }
- 
--// assumes the caller holds backup_mutex
--static void coroutine_fn pvebackup_complete_stream(void *opaque)
-+static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
- {
-     PVEBackupDevInfo *di = opaque;
-+    int ret = di->completed_ret;
- 
--    bool error_or_canceled = pvebackup_error_or_canceled();
--
--    if (backup_state.vmaw) {
--        vma_writer_close_stream(backup_state.vmaw, di->dev_id);
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    bool starting = backup_state.stat.starting;
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+    if (starting) {
-+        /* in 'starting' state, no tasks have been run yet, meaning we can (and
-+         * must) skip all cleanup, as we don't know what has and hasn't been
-+         * initialized yet. */
-+        return;
-     }
- 
--    if (backup_state.pbs && !error_or_canceled) {
--        Error *local_err = NULL;
--        proxmox_backup_co_close_image(backup_state.pbs, di->dev_id, &local_err);
--        if (local_err != NULL) {
--            pvebackup_propagate_error(local_err);
--        }
--    }
--}
--
--static void pvebackup_complete_cb(void *opaque, int ret)
--{
--    assert(!qemu_in_coroutine());
--
--    PVEBackupDevInfo *di = opaque;
--
--    qemu_mutex_lock(&backup_state.backup_mutex);
-+    qemu_co_mutex_lock(&backup_state.backup_mutex);
- 
-     if (ret < 0) {
-         Error *local_err = NULL;
-@@ -303,7 +302,19 @@ static void pvebackup_complete_cb(void *opaque, int ret)
- 
-     assert(di->target == NULL);
- 
--    block_on_coroutine_fn(pvebackup_complete_stream, di);
-+    bool error_or_canceled = pvebackup_error_or_canceled();
-+
-+    if (backup_state.vmaw) {
-+        vma_writer_close_stream(backup_state.vmaw, di->dev_id);
-+    }
-+
-+    if (backup_state.pbs && !error_or_canceled) {
-+        Error *local_err = NULL;
-+        proxmox_backup_co_close_image(backup_state.pbs, di->dev_id, &local_err);
-+        if (local_err != NULL) {
-+            pvebackup_propagate_error(local_err);
-+        }
-+    }
- 
-     // remove self from job list
-     backup_state.di_list = g_list_remove(backup_state.di_list, di);
-@@ -312,21 +323,46 @@ static void pvebackup_complete_cb(void *opaque, int ret)
- 
-     /* call cleanup if we're the last job */
-     if (!g_list_first(backup_state.di_list)) {
--        block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
-+        pvebackup_co_cleanup();
-     }
- 
--    qemu_mutex_unlock(&backup_state.backup_mutex);
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
- 
--static void pvebackup_cancel(void)
-+static void pvebackup_complete_cb(void *opaque, int ret)
- {
--    assert(!qemu_in_coroutine());
-+    PVEBackupDevInfo *di = opaque;
-+    di->completed_ret = ret;
-+
-+    /*
-+     * Schedule stream cleanup in async coroutine. close_image and finish might
-+     * take a while, so we can't block on them here. This way it also doesn't
-+     * matter if we're already running in a coroutine or not.
-+     * Note: di is a pointer to an entry in the global backup_state struct, so
-+     * it stays valid.
-+     */
-+    Coroutine *co = qemu_coroutine_create(pvebackup_co_complete_stream, di);
-+    aio_co_enter(qemu_get_aio_context(), co);
-+}
-+
-+/*
-+ * job_cancel(_sync) does not like to be called from coroutines, so defer to
-+ * main loop processing via a bottom half.
-+ */
-+static void job_cancel_bh(void *opaque) {
-+    CoCtxData *data = (CoCtxData*)opaque;
-+    Job *job = (Job*)data->data;
-+    job_cancel_sync(job, true);
-+    aio_co_enter(data->ctx, data->co);
-+}
- 
-+static void coroutine_fn pvebackup_co_cancel(void *opaque)
-+{
-     Error *cancel_err = NULL;
-     error_setg(&cancel_err, "backup canceled");
-     pvebackup_propagate_error(cancel_err);
- 
--    qemu_mutex_lock(&backup_state.backup_mutex);
-+    qemu_co_mutex_lock(&backup_state.backup_mutex);
- 
-     if (backup_state.vmaw) {
-         /* make sure vma writer does not block anymore */
-@@ -344,28 +380,22 @@ static void pvebackup_cancel(void)
-         ((PVEBackupDevInfo *)bdi->data)->job :
-         NULL;
- 
--    /* ref the job before releasing the mutex, just to be safe */
-     if (cancel_job) {
--        WITH_JOB_LOCK_GUARD() {
--            job_ref_locked(&cancel_job->job);
--        }
-+        CoCtxData data = {
-+            .ctx = qemu_get_current_aio_context(),
-+            .co = qemu_coroutine_self(),
-+            .data = &cancel_job->job,
-+        };
-+        aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
-+        qemu_coroutine_yield();
-     }
- 
--    /* job_cancel_sync may enter the job, so we need to release the
--     * backup_mutex to avoid deadlock */
--    qemu_mutex_unlock(&backup_state.backup_mutex);
--
--    if (cancel_job) {
--        WITH_JOB_LOCK_GUARD() {
--            job_cancel_sync_locked(&cancel_job->job, true);
--            job_unref_locked(&cancel_job->job);
--        }
--    }
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
- 
- void qmp_backup_cancel(Error **errp)
- {
--    pvebackup_cancel();
-+    block_on_coroutine_fn(pvebackup_co_cancel, NULL);
- }
- 
- // assumes the caller holds backup_mutex
-@@ -418,10 +448,18 @@ static int coroutine_fn pvebackup_co_add_config(
-     goto out;
- }
- 
--static bool create_backup_jobs(void) {
-+/*
-+ * backup_job_create can *not* be run from a coroutine (and requires an
-+ * acquired AioContext), so this can't either.
-+ * The caller is responsible that backup_mutex is held nonetheless.
-+ */
-+static void create_backup_jobs_bh(void *opaque) {
- 
-     assert(!qemu_in_coroutine());
- 
-+    CoCtxData *data = (CoCtxData*)opaque;
-+    Error **errp = (Error**)data->data;
-+
-     Error *local_err = NULL;
- 
-     /* create job transaction to synchronize bitmap commit and cancel all
-@@ -457,24 +495,19 @@ static bool create_backup_jobs(void) {
- 
-         aio_context_release(aio_context);
- 
--        if (!job || local_err != NULL) {
--            Error *create_job_err = NULL;
--            error_setg(&create_job_err, "backup_job_create failed: %s",
--                       local_err ? error_get_pretty(local_err) : "null");
-+        di->job = job;
- 
--            pvebackup_propagate_error(create_job_err);
-+        if (!job || local_err) {
-+            error_setg(errp, "backup_job_create failed: %s",
-+                       local_err ? error_get_pretty(local_err) : "null");
-             break;
-         }
- 
--        di->job = job;
--
-         bdrv_unref(di->target);
-         di->target = NULL;
-     }
- 
--    bool errors = pvebackup_error_or_canceled();
--
--    if (errors) {
-+    if (*errp) {
-         l = backup_state.di_list;
-         while (l) {
-             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-@@ -487,13 +520,15 @@ static bool create_backup_jobs(void) {
- 
-             if (di->job) {
-                 WITH_JOB_LOCK_GUARD() {
-+                    job_cancel_sync_locked(&di->job->job, true);
-                     job_unref_locked(&di->job->job);
-                 }
-             }
-         }
-     }
- 
--    return errors;
-+    /* return */
-+    aio_co_enter(data->ctx, data->co);
- }
- 
- typedef struct QmpBackupTask {
-@@ -522,11 +557,12 @@ typedef struct QmpBackupTask {
-     UuidInfo *result;
- } QmpBackupTask;
- 
--// assumes the caller holds backup_mutex
- static void coroutine_fn pvebackup_co_prepare(void *opaque)
- {
-     assert(qemu_in_coroutine());
- 
-+    qemu_co_mutex_lock(&backup_state.backup_mutex);
-+
-     QmpBackupTask *task = opaque;
- 
-     task->result = NULL; // just to be sure
-@@ -547,8 +583,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     const char *firewall_name = "qemu-server.fw";
- 
-     if (backup_state.di_list) {
--         error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-+        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-                   "previous backup not finished");
-+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
-         return;
-     }
- 
-@@ -615,6 +652,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-         }
-         di->size = size;
-         total += size;
-+
-+        di->completed_ret = INT_MAX;
-     }
- 
-     uuid_generate(uuid);
-@@ -846,6 +885,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     backup_state.stat.dirty = total - backup_state.stat.reused;
-     backup_state.stat.transferred = 0;
-     backup_state.stat.zero_bytes = 0;
-+    backup_state.stat.finishing = false;
-+    backup_state.stat.starting = true;
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-@@ -860,6 +901,33 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     uuid_info->UUID = uuid_str;
- 
-     task->result = uuid_info;
-+
-+    /* Run create_backup_jobs_bh outside of coroutine (in BH) but keep
-+    * backup_mutex locked. This is fine, a CoMutex can be held across yield
-+    * points, and we'll release it as soon as the BH reschedules us.
-+    */
-+    CoCtxData waker = {
-+        .co = qemu_coroutine_self(),
-+        .ctx = qemu_get_current_aio_context(),
-+        .data = &local_err,
-+    };
-+    aio_bh_schedule_oneshot(waker.ctx, create_backup_jobs_bh, &waker);
-+    qemu_coroutine_yield();
-+
-+    if (local_err) {
-+        error_propagate(task->errp, local_err);
-+        goto err;
-+    }
-+
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.starting = false;
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+
-+    /* start the first job in the transaction */
-+    job_txn_start_seq(backup_state.txn);
-+
-     return;
- 
- err_mutex:
-@@ -882,6 +950,7 @@ err:
-         g_free(di);
-     }
-     g_list_free(di_list);
-+    backup_state.di_list = NULL;
- 
-     if (devs) {
-         g_strfreev(devs);
-@@ -902,6 +971,8 @@ err:
-     }
- 
-     task->result = NULL;
-+
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
-     return;
- }
- 
-@@ -947,24 +1018,8 @@ UuidInfo *qmp_backup(
-         .errp = errp,
-     };
- 
--    qemu_mutex_lock(&backup_state.backup_mutex);
--
-     block_on_coroutine_fn(pvebackup_co_prepare, &task);
- 
--    if (*errp == NULL) {
--        bool errors = create_backup_jobs();
--        qemu_mutex_unlock(&backup_state.backup_mutex);
--
--        if (!errors) {
--            /* start the first job in the transaction
--             * note: this might directly enter the job, so we need to do this
--             * after unlocking the backup_mutex */
--            job_txn_start_seq(backup_state.txn);
--        }
--    } else {
--        qemu_mutex_unlock(&backup_state.backup_mutex);
--    }
--
-     return task.result;
- }
- 
-@@ -1012,6 +1067,7 @@ BackupStatus *qmp_query_backup(Error **errp)
-     info->transferred = backup_state.stat.transferred;
-     info->has_reused = true;
-     info->reused = backup_state.stat.reused;
-+    info->finishing = backup_state.stat.finishing;
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 130d5f065f..43838212e3 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -865,12 +865,15 @@
- #
- # @uuid: uuid for this backup job
- #
-+# @finishing: if status='active' and finishing=true, then the backup process is
-+#             waiting for the target to finish.
-+#
- ##
- { 'struct': 'BackupStatus',
-   'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int', '*dirty': 'int',
-            '*transferred': 'int', '*zero-bytes': 'int', '*reused': 'int',
-            '*start-time': 'int', '*end-time': 'int',
--           '*backup-file': 'str', '*uuid': 'str' } }
-+           '*backup-file': 'str', '*uuid': 'str', 'finishing': 'bool' } }
- 
- ##
- # @BackupFormat:
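
The coroutine-to-bottom-half handoff used throughout the hunk above is one small pattern;
here it is in isolation, as an illustrative sketch only (QEMU-internal APIs, so it does not
build outside the QEMU tree; do_blocking_work() is a placeholder for e.g. job_cancel_sync(),
and CoCtxData mirrors the struct the patches already use):

    /* A coroutine must not call blocking helpers (job_cancel_sync() uses
     * AIO_WAIT_WHILE internally), so it schedules a bottom half in its own
     * AioContext, yields, and lets the BH do the blocking call and then
     * re-enter the coroutine. */
    typedef struct CoCtxData {
        Coroutine *co;
        AioContext *ctx;
        void *data;
    } CoCtxData;

    static void blocking_work_bh(void *opaque)
    {
        CoCtxData *data = opaque;
        do_blocking_work(data->data);      /* runs outside coroutine context */
        aio_co_enter(data->ctx, data->co); /* wake the waiting coroutine */
    }

    static void coroutine_fn caller_co(void *arg)
    {
        CoCtxData data = {
            .co   = qemu_coroutine_self(),
            .ctx  = qemu_get_current_aio_context(),
            .data = arg,
        };
        aio_bh_schedule_oneshot(data.ctx, blocking_work_bh, &data);
        qemu_coroutine_yield(); /* resumed by blocking_work_bh() */
    }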
diff --git a/debian/patches/pve/0047-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch b/debian/patches/pve/0039-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
similarity index 100%
rename from debian/patches/pve/0047-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
rename to debian/patches/pve/0039-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
diff --git a/debian/patches/pve/0048-block-add-alloc-track-driver.patch b/debian/patches/pve/0040-block-add-alloc-track-driver.patch
similarity index 97%
rename from debian/patches/pve/0048-block-add-alloc-track-driver.patch
rename to debian/patches/pve/0040-block-add-alloc-track-driver.patch
index fc724c9..823be87 100644
--- a/debian/patches/pve/0048-block-add-alloc-track-driver.patch
+++ b/debian/patches/pve/0040-block-add-alloc-track-driver.patch
@@ -25,20 +25,21 @@ once the backing image is removed. It will be replaced by 'file'.
 Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
 Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
 [FE: adapt to changed function signatures
-     make error return value consistent with QEMU]
+     make error return value consistent with QEMU
+     avoid premature break during read]
 Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
 ---
- block/alloc-track.c | 351 ++++++++++++++++++++++++++++++++++++++++++++
+ block/alloc-track.c | 352 ++++++++++++++++++++++++++++++++++++++++++++
  block/meson.build   |   1 +
- 2 files changed, 352 insertions(+)
+ 2 files changed, 353 insertions(+)
  create mode 100644 block/alloc-track.c
 
 diff --git a/block/alloc-track.c b/block/alloc-track.c
 new file mode 100644
-index 0000000000..113bbd7058
+index 0000000000..b75d7c6460
 --- /dev/null
 +++ b/block/alloc-track.c
-@@ -0,0 +1,351 @@
+@@ -0,0 +1,352 @@
 +/*
 + * Node to allow backing images to be applied to any node. Assumes a blank
 + * image to begin with, only new writes are tracked as allocated, thus this
@@ -216,7 +217,8 @@ index 0000000000..113bbd7058
 +            ret = bdrv_co_preadv(bs->backing, local_offset, local_bytes,
 +                                 &local_qiov, flags);
 +        } else {
-+            ret = qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
++            qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
++            ret = 0;
 +        }
 +
 +        if (ret != 0) {
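
The "avoid premature break during read" fix folded into the hunk above comes down to
qemu_iovec_memset() returning the number of bytes it filled rather than an error code (my
reading of the current QEMU API, the diff itself does not spell this out). A reduced sketch
of the corrected branch, not the literal alloc-track code (have_backing_data stands in for
the real allocation check, surrounding loop omitted):

    if (have_backing_data) {
        ret = bdrv_co_preadv(bs->backing, local_offset, local_bytes,
                             &local_qiov, flags);
    } else {
        /* qemu_iovec_memset() returns bytes filled, not 0-on-success, so the
         * old "ret = qemu_iovec_memset(...)" made the error check below
         * break out of the loop after the first zero-filled chunk */
        qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
        ret = 0;
    }

    if (ret != 0) {
        break;                  /* only genuine read errors end the loop now */
    }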
diff --git a/debian/patches/pve/0053-Revert-block-rbd-workaround-for-ceph-issue-53784.patch b/debian/patches/pve/0041-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
similarity index 100%
rename from debian/patches/pve/0053-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
rename to debian/patches/pve/0041-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
diff --git a/debian/patches/pve/0054-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch b/debian/patches/pve/0042-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
similarity index 100%
rename from debian/patches/pve/0054-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
rename to debian/patches/pve/0042-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
diff --git a/debian/patches/pve/0043-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch b/debian/patches/pve/0043-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch
deleted file mode 100644
index b38c633..0000000
--- a/debian/patches/pve/0043-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch
+++ /dev/null
@@ -1,595 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Tue, 26 Jan 2021 15:45:30 +0100
-Subject: [PATCH] PVE: Use coroutine QMP for backup/cancel_backup
-
-Finally turn backup QMP calls into coroutines, now that it's possible.
-This has the benefit that calls are asynchronous to the main loop, i.e.
-long running operations like connecting to a PBS server will no longer
-hang the VM.
-
-Additionally, it allows us to get rid of block_on_coroutine_fn, which
-was always a hacky workaround.
-
-While we're already spring cleaning, also remove the QmpBackupTask
-struct, since we can now put the 'prepare' function directly into
-qmp_backup and thus no longer need those giant walls of text.
-
-(Note that for our patches to work with 5.2.0 this change is actually
-required, otherwise monitor_get_fd() fails as we're not in a QMP
-coroutine, but one we start ourselves - we could of course set the
-monitor for that coroutine ourselves, but let's just fix it the right
-way instead)
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: adapt to QAPI changes
-     call coroutine version of is_inserted()]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- block/monitor/block-hmp-cmds.c |   4 +-
- hmp-commands.hx                |   2 +
- proxmox-backup-client.c        |  31 -----
- pve-backup.c                   | 220 +++++++++++----------------------
- qapi/block-core.json           |   4 +-
- 5 files changed, 79 insertions(+), 182 deletions(-)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index ecbebd39ac..56f39b14d4 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1030,7 +1030,7 @@ void hmp_change_medium(Monitor *mon, const char *device, const char *target,
-                                !!read_only, read_only_mode, errp);
- }
- 
--void hmp_backup_cancel(Monitor *mon, const QDict *qdict)
-+void coroutine_fn hmp_backup_cancel(Monitor *mon, const QDict *qdict)
- {
-     Error *error = NULL;
- 
-@@ -1039,7 +1039,7 @@ void hmp_backup_cancel(Monitor *mon, const QDict *qdict)
-     hmp_handle_error(mon, error);
- }
- 
--void hmp_backup(Monitor *mon, const QDict *qdict)
-+void coroutine_fn hmp_backup(Monitor *mon, const QDict *qdict)
- {
-     Error *error = NULL;
- 
-diff --git a/hmp-commands.hx b/hmp-commands.hx
-index 9b6b8e2e9c..896430dae8 100644
---- a/hmp-commands.hx
-+++ b/hmp-commands.hx
-@@ -111,6 +111,7 @@ ERST
- 		    "\n\t\t\t Use -d to dump data into a directory instead"
- 		    "\n\t\t\t of using VMA format.",
-         .cmd = hmp_backup,
-+        .coroutine  = true,
-     },
- 
- SRST
-@@ -124,6 +125,7 @@ ERST
-         .params     = "",
-         .help       = "cancel the current VM backup",
-         .cmd        = hmp_backup_cancel,
-+        .coroutine  = true,
-     },
- 
- SRST
-diff --git a/proxmox-backup-client.c b/proxmox-backup-client.c
-index 4ce7bc0b5e..0923037dec 100644
---- a/proxmox-backup-client.c
-+++ b/proxmox-backup-client.c
-@@ -5,37 +5,6 @@
- 
- /* Proxmox Backup Server client bindings using coroutines */
- 
--typedef struct BlockOnCoroutineWrapper {
--    AioContext *ctx;
--    CoroutineEntry *entry;
--    void *entry_arg;
--    bool finished;
--} BlockOnCoroutineWrapper;
--
--static void coroutine_fn block_on_coroutine_wrapper(void *opaque)
--{
--    BlockOnCoroutineWrapper *wrapper = opaque;
--    wrapper->entry(wrapper->entry_arg);
--    wrapper->finished = true;
--    aio_wait_kick();
--}
--
--void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg)
--{
--    assert(!qemu_in_coroutine());
--
--    AioContext *ctx = qemu_get_current_aio_context();
--    BlockOnCoroutineWrapper wrapper = {
--        .finished = false,
--        .entry = entry,
--        .entry_arg = entry_arg,
--        .ctx = ctx,
--    };
--    Coroutine *wrapper_co = qemu_coroutine_create(block_on_coroutine_wrapper, &wrapper);
--    aio_co_enter(ctx, wrapper_co);
--    AIO_WAIT_WHILE(ctx, !wrapper.finished);
--}
--
- // This is called from another thread, so we use aio_co_schedule()
- static void proxmox_backup_schedule_wake(void *data) {
-     CoCtxData *waker = (CoCtxData *)data;
-diff --git a/pve-backup.c b/pve-backup.c
-index e0e38063a8..88e507b3c2 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -356,7 +356,7 @@ static void job_cancel_bh(void *opaque) {
-     aio_co_enter(data->ctx, data->co);
- }
- 
--static void coroutine_fn pvebackup_co_cancel(void *opaque)
-+void coroutine_fn qmp_backup_cancel(Error **errp)
- {
-     Error *cancel_err = NULL;
-     error_setg(&cancel_err, "backup canceled");
-@@ -393,11 +393,6 @@ static void coroutine_fn pvebackup_co_cancel(void *opaque)
-     qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
- 
--void qmp_backup_cancel(Error **errp)
--{
--    block_on_coroutine_fn(pvebackup_co_cancel, NULL);
--}
--
- // assumes the caller holds backup_mutex
- static int coroutine_fn pvebackup_co_add_config(
-     const char *file,
-@@ -531,42 +526,27 @@ static void create_backup_jobs_bh(void *opaque) {
-     aio_co_enter(data->ctx, data->co);
- }
- 
--typedef struct QmpBackupTask {
--    const char *backup_file;
--    const char *password;
--    const char *keyfile;
--    const char *key_password;
--    const char *backup_id;
--    bool has_backup_time;
--    const char *fingerprint;
--    int64_t backup_time;
--    bool has_use_dirty_bitmap;
--    bool use_dirty_bitmap;
--    bool has_format;
--    BackupFormat format;
--    const char *config_file;
--    const char *firewall_file;
--    const char *devlist;
--    bool has_compress;
--    bool compress;
--    bool has_encrypt;
--    bool encrypt;
--    bool has_speed;
--    int64_t speed;
--    Error **errp;
--    UuidInfo *result;
--} QmpBackupTask;
--
--static void coroutine_fn pvebackup_co_prepare(void *opaque)
-+UuidInfo coroutine_fn *qmp_backup(
-+    const char *backup_file,
-+    const char *password,
-+    const char *keyfile,
-+    const char *key_password,
-+    const char *fingerprint,
-+    const char *backup_id,
-+    bool has_backup_time, int64_t backup_time,
-+    bool has_use_dirty_bitmap, bool use_dirty_bitmap,
-+    bool has_compress, bool compress,
-+    bool has_encrypt, bool encrypt,
-+    bool has_format, BackupFormat format,
-+    const char *config_file,
-+    const char *firewall_file,
-+    const char *devlist,
-+    bool has_speed, int64_t speed, Error **errp)
- {
-     assert(qemu_in_coroutine());
- 
-     qemu_co_mutex_lock(&backup_state.backup_mutex);
- 
--    QmpBackupTask *task = opaque;
--
--    task->result = NULL; // just to be sure
--
-     BlockBackend *blk;
-     BlockDriverState *bs = NULL;
-     const char *backup_dir = NULL;
-@@ -583,32 +563,32 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     const char *firewall_name = "qemu-server.fw";
- 
-     if (backup_state.di_list) {
--        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-                   "previous backup not finished");
-         qemu_co_mutex_unlock(&backup_state.backup_mutex);
--        return;
-+        return NULL;
-     }
- 
-     /* Todo: try to auto-detect format based on file name */
--    BackupFormat format = task->has_format ? task->format : BACKUP_FORMAT_VMA;
-+    format = has_format ? format : BACKUP_FORMAT_VMA;
- 
--    if (task->devlist) {
--        devs = g_strsplit_set(task->devlist, ",;:", -1);
-+    if (devlist) {
-+        devs = g_strsplit_set(devlist, ",;:", -1);
- 
-         gchar **d = devs;
-         while (d && *d) {
-             blk = blk_by_name(*d);
-             if (blk) {
-                 bs = blk_bs(blk);
--                if (!bdrv_is_inserted(bs)) {
--                    error_setg(task->errp, QERR_DEVICE_HAS_NO_MEDIUM, *d);
-+                if (!bdrv_co_is_inserted(bs)) {
-+                    error_setg(errp, QERR_DEVICE_HAS_NO_MEDIUM, *d);
-                     goto err;
-                 }
-                 PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
-                 di->bs = bs;
-                 di_list = g_list_append(di_list, di);
-             } else {
--                error_set(task->errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-+                error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-                           "Device '%s' not found", *d);
-                 goto err;
-             }
-@@ -620,7 +600,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
- 
-         bs = NULL;
-         for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
--            if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
-+            if (!bdrv_co_is_inserted(bs) || bdrv_is_read_only(bs)) {
-                 continue;
-             }
- 
-@@ -631,7 +611,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     }
- 
-     if (!di_list) {
--        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "empty device list");
-+        error_set(errp, ERROR_CLASS_GENERIC_ERROR, "empty device list");
-         goto err;
-     }
- 
-@@ -641,13 +621,13 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     while (l) {
-         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-         l = g_list_next(l);
--        if (bdrv_op_is_blocked(di->bs, BLOCK_OP_TYPE_BACKUP_SOURCE, task->errp)) {
-+        if (bdrv_op_is_blocked(di->bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
-             goto err;
-         }
- 
-         ssize_t size = bdrv_getlength(di->bs);
-         if (size < 0) {
--            error_setg_errno(task->errp, -size, "bdrv_getlength failed");
-+            error_setg_errno(errp, -size, "bdrv_getlength failed");
-             goto err;
-         }
-         di->size = size;
-@@ -674,47 +654,44 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     }
- 
-     if (format == BACKUP_FORMAT_PBS) {
--        if (!task->password) {
--            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'password'");
-+        if (!password) {
-+            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'password'");
-             goto err_mutex;
-         }
--        if (!task->backup_id) {
--            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-id'");
-+        if (!backup_id) {
-+            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-id'");
-             goto err_mutex;
-         }
--        if (!task->has_backup_time) {
--            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-time'");
-+        if (!has_backup_time) {
-+            error_set(errp, ERROR_CLASS_GENERIC_ERROR, "missing parameter 'backup-time'");
-             goto err_mutex;
-         }
- 
-         int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
-         firewall_name = "fw.conf";
- 
--        bool use_dirty_bitmap = task->has_use_dirty_bitmap && task->use_dirty_bitmap;
--
--
-         char *pbs_err = NULL;
-         pbs = proxmox_backup_new(
--            task->backup_file,
--            task->backup_id,
--            task->backup_time,
-+            backup_file,
-+            backup_id,
-+            backup_time,
-             dump_cb_block_size,
--            task->password,
--            task->keyfile,
--            task->key_password,
--            task->has_compress ? task->compress : true,
--            task->has_encrypt ? task->encrypt : !!task->keyfile,
--            task->fingerprint,
-+            password,
-+            keyfile,
-+            key_password,
-+            has_compress ? compress : true,
-+            has_encrypt ? encrypt : !!keyfile,
-+            fingerprint,
-             &pbs_err);
- 
-         if (!pbs) {
--            error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-+            error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-                       "proxmox_backup_new failed: %s", pbs_err);
-             proxmox_backup_free_error(pbs_err);
-             goto err_mutex;
-         }
- 
--        int connect_result = proxmox_backup_co_connect(pbs, task->errp);
-+        int connect_result = proxmox_backup_co_connect(pbs, errp);
-         if (connect_result < 0)
-             goto err_mutex;
- 
-@@ -733,9 +710,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
-             bool expect_only_dirty = false;
- 
--            if (use_dirty_bitmap) {
-+            if (has_use_dirty_bitmap && use_dirty_bitmap) {
-                 if (bitmap == NULL) {
--                    bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, task->errp);
-+                    bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, errp);
-                     if (!bitmap) {
-                         goto err_mutex;
-                     }
-@@ -765,12 +742,12 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-                 }
-             }
- 
--            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, task->errp);
-+            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, errp);
-             if (dev_id < 0) {
-                 goto err_mutex;
-             }
- 
--            if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, task->errp))) {
-+            if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, errp))) {
-                 goto err_mutex;
-             }
- 
-@@ -784,10 +761,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
-         }
-     } else if (format == BACKUP_FORMAT_VMA) {
--        vmaw = vma_writer_create(task->backup_file, uuid, &local_err);
-+        vmaw = vma_writer_create(backup_file, uuid, &local_err);
-         if (!vmaw) {
-             if (local_err) {
--                error_propagate(task->errp, local_err);
-+                error_propagate(errp, local_err);
-             }
-             goto err_mutex;
-         }
-@@ -798,25 +775,25 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-             l = g_list_next(l);
- 
--            if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, task->errp))) {
-+            if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, errp))) {
-                 goto err_mutex;
-             }
- 
-             const char *devname = bdrv_get_device_name(di->bs);
-             di->dev_id = vma_writer_register_stream(vmaw, devname, di->size);
-             if (di->dev_id <= 0) {
--                error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-+                error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-                           "register_stream failed");
-                 goto err_mutex;
-             }
-         }
-     } else if (format == BACKUP_FORMAT_DIR) {
--        if (mkdir(task->backup_file, 0640) != 0) {
--            error_setg_errno(task->errp, errno, "can't create directory '%s'\n",
--                             task->backup_file);
-+        if (mkdir(backup_file, 0640) != 0) {
-+            error_setg_errno(errp, errno, "can't create directory '%s'\n",
-+                             backup_file);
-             goto err_mutex;
-         }
--        backup_dir = task->backup_file;
-+        backup_dir = backup_file;
- 
-         l = di_list;
-         while (l) {
-@@ -830,34 +807,34 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-             bdrv_img_create(di->targetfile, "raw", NULL, NULL, NULL,
-                             di->size, flags, false, &local_err);
-             if (local_err) {
--                error_propagate(task->errp, local_err);
-+                error_propagate(errp, local_err);
-                 goto err_mutex;
-             }
- 
-             di->target = bdrv_co_open(di->targetfile, NULL, NULL, flags, &local_err);
-             if (!di->target) {
--                error_propagate(task->errp, local_err);
-+                error_propagate(errp, local_err);
-                 goto err_mutex;
-             }
-         }
-     } else {
--        error_set(task->errp, ERROR_CLASS_GENERIC_ERROR, "unknown backup format");
-+        error_set(errp, ERROR_CLASS_GENERIC_ERROR, "unknown backup format");
-         goto err_mutex;
-     }
- 
- 
-     /* add configuration file to archive */
--    if (task->config_file) {
--        if (pvebackup_co_add_config(task->config_file, config_name, format, backup_dir,
--                                    vmaw, pbs, task->errp) != 0) {
-+    if (config_file) {
-+        if (pvebackup_co_add_config(config_file, config_name, format, backup_dir,
-+                                    vmaw, pbs, errp) != 0) {
-             goto err_mutex;
-         }
-     }
- 
-     /* add firewall file to archive */
--    if (task->firewall_file) {
--        if (pvebackup_co_add_config(task->firewall_file, firewall_name, format, backup_dir,
--                                    vmaw, pbs, task->errp) != 0) {
-+    if (firewall_file) {
-+        if (pvebackup_co_add_config(firewall_file, firewall_name, format, backup_dir,
-+                                    vmaw, pbs, errp) != 0) {
-             goto err_mutex;
-         }
-     }
-@@ -875,7 +852,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     if (backup_state.stat.backup_file) {
-         g_free(backup_state.stat.backup_file);
-     }
--    backup_state.stat.backup_file = g_strdup(task->backup_file);
-+    backup_state.stat.backup_file = g_strdup(backup_file);
- 
-     uuid_copy(backup_state.stat.uuid, uuid);
-     uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
-@@ -890,7 +867,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
- 
--    backup_state.speed = (task->has_speed && task->speed > 0) ? task->speed : 0;
-+    backup_state.speed = (has_speed && speed > 0) ? speed : 0;
- 
-     backup_state.vmaw = vmaw;
-     backup_state.pbs = pbs;
-@@ -900,8 +877,6 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     uuid_info = g_malloc0(sizeof(*uuid_info));
-     uuid_info->UUID = uuid_str;
- 
--    task->result = uuid_info;
--
-     /* Run create_backup_jobs_bh outside of coroutine (in BH) but keep
-     * backup_mutex locked. This is fine, a CoMutex can be held across yield
-     * points, and we'll release it as soon as the BH reschedules us.
-@@ -915,7 +890,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     qemu_coroutine_yield();
- 
-     if (local_err) {
--        error_propagate(task->errp, local_err);
-+        error_propagate(errp, local_err);
-         goto err;
-     }
- 
-@@ -928,7 +903,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
-     /* start the first job in the transaction */
-     job_txn_start_seq(backup_state.txn);
- 
--    return;
-+    return uuid_info;
- 
- err_mutex:
-     qemu_mutex_unlock(&backup_state.stat.lock);
-@@ -959,7 +934,7 @@ err:
-     if (vmaw) {
-         Error *err = NULL;
-         vma_writer_close(vmaw, &err);
--        unlink(task->backup_file);
-+        unlink(backup_file);
-     }
- 
-     if (pbs) {
-@@ -970,57 +945,8 @@ err:
-         rmdir(backup_dir);
-     }
- 
--    task->result = NULL;
--
-     qemu_co_mutex_unlock(&backup_state.backup_mutex);
--    return;
--}
--
--UuidInfo *qmp_backup(
--    const char *backup_file,
--    const char *password,
--    const char *keyfile,
--    const char *key_password,
--    const char *fingerprint,
--    const char *backup_id,
--    bool has_backup_time, int64_t backup_time,
--    bool has_use_dirty_bitmap, bool use_dirty_bitmap,
--    bool has_compress, bool compress,
--    bool has_encrypt, bool encrypt,
--    bool has_format, BackupFormat format,
--    const char *config_file,
--    const char *firewall_file,
--    const char *devlist,
--    bool has_speed, int64_t speed, Error **errp)
--{
--    QmpBackupTask task = {
--        .backup_file = backup_file,
--        .password = password,
--        .keyfile = keyfile,
--        .key_password = key_password,
--        .fingerprint = fingerprint,
--        .backup_id = backup_id,
--        .has_backup_time = has_backup_time,
--        .backup_time = backup_time,
--        .has_use_dirty_bitmap = has_use_dirty_bitmap,
--        .use_dirty_bitmap = use_dirty_bitmap,
--        .has_compress = has_compress,
--        .compress = compress,
--        .has_encrypt = has_encrypt,
--        .encrypt = encrypt,
--        .has_format = has_format,
--        .format = format,
--        .config_file = config_file,
--        .firewall_file = firewall_file,
--        .devlist = devlist,
--        .has_speed = has_speed,
--        .speed = speed,
--        .errp = errp,
--    };
--
--    block_on_coroutine_fn(pvebackup_co_prepare, &task);
--
--    return task.result;
-+    return NULL;
- }
- 
- BackupStatus *qmp_query_backup(Error **errp)
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index e7412f6322..93d924ef79 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -937,7 +937,7 @@
-                                     '*config-file': 'str',
-                                     '*firewall-file': 'str',
-                                     '*devlist': 'str', '*speed': 'int' },
--  'returns': 'UuidInfo' }
-+  'returns': 'UuidInfo', 'coroutine': true }
- 
- ##
- # @query-backup:
-@@ -959,7 +959,7 @@
- # Notes: This command succeeds even if there is no backup process running.
- #
- ##
--{ 'command': 'backup-cancel' }
-+{ 'command': 'backup-cancel', 'coroutine': true }
- 
- ##
- # @ProxmoxSupportStatus:
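
The mechanics behind the coroutine QMP conversion removed above are compact: the QAPI schema
marks a command with 'coroutine': true, the generated dispatcher then invokes the handler
from coroutine context, and the handler is declared coroutine_fn so it may yield (e.g. on a
CoMutex) instead of blocking the main loop. A minimal sketch with a hypothetical command
name (state_mutex is a placeholder, not from the patch):

    /* qapi/block-core.json (sketch):
     *   { 'command': 'example-cancel', 'coroutine': true }
     * hmp-commands.hx additionally sets ".coroutine = true" for HMP wrappers. */
    static CoMutex state_mutex;

    void coroutine_fn qmp_example_cancel(Error **errp)
    {
        /* taking a CoMutex may yield; the main loop keeps running meanwhile,
         * so a slow operation here no longer hangs the VM */
        qemu_co_mutex_lock(&state_mutex);
        /* ... long-running cancel logic using coroutine-aware helpers ... */
        qemu_co_mutex_unlock(&state_mutex);
    }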
diff --git a/debian/patches/pve/0055-Revert-block-rbd-implement-bdrv_co_block_status.patch b/debian/patches/pve/0043-Revert-block-rbd-implement-bdrv_co_block_status.patch
similarity index 100%
rename from debian/patches/pve/0055-Revert-block-rbd-implement-bdrv_co_block_status.patch
rename to debian/patches/pve/0043-Revert-block-rbd-implement-bdrv_co_block_status.patch
diff --git a/debian/patches/pve/0044-PBS-add-master-key-support.patch b/debian/patches/pve/0044-PBS-add-master-key-support.patch
deleted file mode 100644
index 4498723..0000000
--- a/debian/patches/pve/0044-PBS-add-master-key-support.patch
+++ /dev/null
@@ -1,100 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Wed, 10 Feb 2021 11:07:06 +0100
-Subject: [PATCH] PBS: add master key support
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-this requires a new enough libproxmox-backup-qemu0, and allows querying
-from the PVE side to avoid QMP calls with unsupported parameters.
-
-Signed-off-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
-[FE: adapt to QAPI change dropping redundant has_*]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- block/monitor/block-hmp-cmds.c | 1 +
- pve-backup.c                   | 3 +++
- qapi/block-core.json           | 7 +++++++
- 3 files changed, 11 insertions(+)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index 56f39b14d4..f852c70611 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1053,6 +1053,7 @@ void coroutine_fn hmp_backup(Monitor *mon, const QDict *qdict)
-         NULL, // PBS password
-         NULL, // PBS keyfile
-         NULL, // PBS key_password
-+        NULL, // PBS master_keyfile
-         NULL, // PBS fingerprint
-         NULL, // PBS backup-id
-         false, 0, // PBS backup-time
-diff --git a/pve-backup.c b/pve-backup.c
-index 88e507b3c2..04c5f561cd 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -531,6 +531,7 @@ UuidInfo coroutine_fn *qmp_backup(
-     const char *password,
-     const char *keyfile,
-     const char *key_password,
-+    const char *master_keyfile,
-     const char *fingerprint,
-     const char *backup_id,
-     bool has_backup_time, int64_t backup_time,
-@@ -679,6 +680,7 @@ UuidInfo coroutine_fn *qmp_backup(
-             password,
-             keyfile,
-             key_password,
-+            master_keyfile,
-             has_compress ? compress : true,
-             has_encrypt ? encrypt : !!keyfile,
-             fingerprint,
-@@ -1038,5 +1040,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
-     ret->pbs_dirty_bitmap_savevm = true;
-     ret->pbs_dirty_bitmap_migration = true;
-     ret->query_bitmap_info = true;
-+    ret->pbs_masterkey = true;
-     return ret;
- }
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 93d924ef79..568feb63ad 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -908,6 +908,8 @@
- #
- # @key-password: password for keyfile (optional for format 'pbs')
- #
-+# @master-keyfile: PEM-formatted master public keyfile (optional for format 'pbs')
-+#
- # @fingerprint: server cert fingerprint (optional for format 'pbs')
- #
- # @backup-id: backup ID (required for format 'pbs')
-@@ -927,6 +929,7 @@
-                                     '*password': 'str',
-                                     '*keyfile': 'str',
-                                     '*key-password': 'str',
-+                                    '*master-keyfile': 'str',
-                                     '*fingerprint': 'str',
-                                     '*backup-id': 'str',
-                                     '*backup-time': 'int',
-@@ -979,6 +982,9 @@
- #                              migration cap if this is false/unset may lead
- #                              to crashes on migration!
- #
-+# @pbs-masterkey: True if the QMP backup call supports the 'master_keyfile'
-+#                 parameter.
-+#
- # @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
- #
- ##
-@@ -987,6 +993,7 @@
-             'query-bitmap-info': 'bool',
-             'pbs-dirty-bitmap-savevm': 'bool',
-             'pbs-dirty-bitmap-migration': 'bool',
-+            'pbs-masterkey': 'bool',
-             'pbs-library-version': 'str' } }
- 
- ##
diff --git a/debian/patches/pve/0063-alloc-track-fix-deadlock-during-drop.patch b/debian/patches/pve/0044-alloc-track-fix-deadlock-during-drop.patch
similarity index 100%
rename from debian/patches/pve/0063-alloc-track-fix-deadlock-during-drop.patch
rename to debian/patches/pve/0044-alloc-track-fix-deadlock-during-drop.patch
diff --git a/debian/patches/pve/0045-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch b/debian/patches/pve/0045-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch
deleted file mode 100644
index 6b18def..0000000
--- a/debian/patches/pve/0045-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Wed, 9 Dec 2020 11:46:57 +0100
-Subject: [PATCH] PVE: block/pbs: fast-path reads without allocation if
- possible
-
-...and switch over to g_malloc/g_free while at it to align with other
-QEMU code.
-
-Tracing shows the fast-path is taken almost all the time, though not
-100% so the slow one is still necessary.
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
----
- block/pbs.c | 17 ++++++++++++++---
- 1 file changed, 14 insertions(+), 3 deletions(-)
-
-diff --git a/block/pbs.c b/block/pbs.c
-index 43e69ada46..5d20789084 100644
---- a/block/pbs.c
-+++ b/block/pbs.c
-@@ -201,7 +201,16 @@ static coroutine_fn int pbs_co_preadv(BlockDriverState *bs,
-     BDRVPBSState *s = bs->opaque;
-     int ret;
-     char *pbs_error = NULL;
--    uint8_t *buf = malloc(bytes);
-+    uint8_t *buf;
-+    bool inline_buf = true;
-+
-+    /* for single-buffer IO vectors we can fast-path the write directly to it */
-+    if (qiov->niov == 1 && qiov->iov->iov_len >= bytes) {
-+        buf = qiov->iov->iov_base;
-+    } else {
-+        inline_buf = false;
-+        buf = g_malloc(bytes);
-+    }
- 
-     if (offset < 0 || bytes < 0) {
-         fprintf(stderr, "unexpected negative 'offset' or 'bytes' value!\n");
-@@ -224,8 +233,10 @@ static coroutine_fn int pbs_co_preadv(BlockDriverState *bs,
-         return -EIO;
-     }
- 
--    qemu_iovec_from_buf(qiov, 0, buf, bytes);
--    free(buf);
-+    if (!inline_buf) {
-+        qemu_iovec_from_buf(qiov, 0, buf, bytes);
-+        g_free(buf);
-+    }
- 
-     return 0;
- }
diff --git a/debian/patches/pve/0066-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch b/debian/patches/pve/0045-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
similarity index 100%
rename from debian/patches/pve/0066-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
rename to debian/patches/pve/0045-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
diff --git a/debian/patches/pve/0067-savevm-async-don-t-hold-BQL-during-setup.patch b/debian/patches/pve/0046-savevm-async-don-t-hold-BQL-during-setup.patch
similarity index 100%
rename from debian/patches/pve/0067-savevm-async-don-t-hold-BQL-during-setup.patch
rename to debian/patches/pve/0046-savevm-async-don-t-hold-BQL-during-setup.patch
diff --git a/debian/patches/pve/0049-PVE-savevm-async-register-yank-before-migration_inco.patch b/debian/patches/pve/0049-PVE-savevm-async-register-yank-before-migration_inco.patch
deleted file mode 100644
index 6646d97..0000000
--- a/debian/patches/pve/0049-PVE-savevm-async-register-yank-before-migration_inco.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Stefan Reiter <s.reiter at proxmox.com>
-Date: Wed, 26 May 2021 17:36:55 +0200
-Subject: [PATCH] PVE: savevm-async: register yank before
- migration_incoming_state_destroy
-
-Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
----
- migration/savevm-async.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 70273a2996..fab1dc32d2 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -20,6 +20,7 @@
- #include "qemu/timer.h"
- #include "qemu/main-loop.h"
- #include "qemu/rcu.h"
-+#include "qemu/yank.h"
- 
- /* #define DEBUG_SAVEVM_STATE */
- 
-@@ -518,6 +519,10 @@ int load_snapshot_from_blockdev(const char *filename, Error **errp)
-     dirty_bitmap_mig_before_vm_start();
- 
-     qemu_fclose(f);
-+
-+    /* state_destroy assumes a real migration which would have added a yank */
-+    yank_register_instance(MIGRATION_YANK_INSTANCE, &error_abort);
-+
-     migration_incoming_state_destroy();
-     if (ret < 0) {
-         error_setg_errno(errp, -ret, "Error while loading VM state");
diff --git a/debian/patches/pve/0051-vma-allow-partial-restore.patch b/debian/patches/pve/0051-vma-allow-partial-restore.patch
deleted file mode 100644
index f1fd7cb..0000000
--- a/debian/patches/pve/0051-vma-allow-partial-restore.patch
+++ /dev/null
@@ -1,407 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Thu, 21 Apr 2022 13:26:48 +0200
-Subject: [PATCH] vma: allow partial restore
-
-Introduce a new map line for skipping a certain drive, of the form
-skip=drive-scsi0
-
-Since in PVE, most archives are compressed and piped to vma for
-restore, it's not easily possible to skip reads.
-
-For the reader, a new skip flag for VmaRestoreState is added and the
-target is allowed to be NULL if skip is specified when registering. If
-the skip flag is set, no writes will be made as well as no check for
-duplicate clusters. Therefore, the flag is not set for verify.
-
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
-Acked-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
----
- vma-reader.c |  64 ++++++++++++---------
- vma.c        | 157 +++++++++++++++++++++++++++++----------------------
- vma.h        |   2 +-
- 3 files changed, 126 insertions(+), 97 deletions(-)
-
-diff --git a/vma-reader.c b/vma-reader.c
-index e65f1e8415..81a891c6b1 100644
---- a/vma-reader.c
-+++ b/vma-reader.c
-@@ -28,6 +28,7 @@ typedef struct VmaRestoreState {
-     bool write_zeroes;
-     unsigned long *bitmap;
-     int bitmap_size;
-+    bool skip;
- }  VmaRestoreState;
- 
- struct VmaReader {
-@@ -425,13 +426,14 @@ VmaDeviceInfo *vma_reader_get_device_info(VmaReader *vmar, guint8 dev_id)
- }
- 
- static void allocate_rstate(VmaReader *vmar,  guint8 dev_id,
--                            BlockBackend *target, bool write_zeroes)
-+                            BlockBackend *target, bool write_zeroes, bool skip)
- {
-     assert(vmar);
-     assert(dev_id);
- 
-     vmar->rstate[dev_id].target = target;
-     vmar->rstate[dev_id].write_zeroes = write_zeroes;
-+    vmar->rstate[dev_id].skip = skip;
- 
-     int64_t size = vmar->devinfo[dev_id].size;
- 
-@@ -446,28 +448,30 @@ static void allocate_rstate(VmaReader *vmar,  guint8 dev_id,
- }
- 
- int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id, BlockBackend *target,
--                           bool write_zeroes, Error **errp)
-+                           bool write_zeroes, bool skip, Error **errp)
- {
-     assert(vmar);
--    assert(target != NULL);
-+    assert(target != NULL || skip);
-     assert(dev_id);
--    assert(vmar->rstate[dev_id].target == NULL);
--
--    int64_t size = blk_getlength(target);
--    int64_t size_diff = size - vmar->devinfo[dev_id].size;
--
--    /* storage types can have different size restrictions, so it
--     * is not always possible to create an image with exact size.
--     * So we tolerate a size difference up to 4MB.
--     */
--    if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
--        error_setg(errp, "vma_reader_register_bs for stream %s failed - "
--                   "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
--                   size, vmar->devinfo[dev_id].size);
--        return -1;
-+    assert(vmar->rstate[dev_id].target == NULL && !vmar->rstate[dev_id].skip);
-+
-+    if (target != NULL) {
-+        int64_t size = blk_getlength(target);
-+        int64_t size_diff = size - vmar->devinfo[dev_id].size;
-+
-+        /* storage types can have different size restrictions, so it
-+         * is not always possible to create an image with exact size.
-+         * So we tolerate a size difference up to 4MB.
-+         */
-+        if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
-+            error_setg(errp, "vma_reader_register_bs for stream %s failed - "
-+                       "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
-+                       size, vmar->devinfo[dev_id].size);
-+            return -1;
-+        }
-     }
- 
--    allocate_rstate(vmar, dev_id, target, write_zeroes);
-+    allocate_rstate(vmar, dev_id, target, write_zeroes, skip);
- 
-     return 0;
- }
-@@ -560,19 +564,23 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
-         VmaRestoreState *rstate = &vmar->rstate[dev_id];
-         BlockBackend *target = NULL;
- 
-+        bool skip = rstate->skip;
-+
-         if (dev_id != vmar->vmstate_stream) {
-             target = rstate->target;
--            if (!verify && !target) {
-+            if (!verify && !target && !skip) {
-                 error_setg(errp, "got wrong dev id %d", dev_id);
-                 return -1;
-             }
- 
--            if (vma_reader_get_bitmap(rstate, cluster_num)) {
--                error_setg(errp, "found duplicated cluster %zd for stream %s",
--                          cluster_num, vmar->devinfo[dev_id].devname);
--                return -1;
-+            if (!skip) {
-+                if (vma_reader_get_bitmap(rstate, cluster_num)) {
-+                    error_setg(errp, "found duplicated cluster %zd for stream %s",
-+                              cluster_num, vmar->devinfo[dev_id].devname);
-+                    return -1;
-+                }
-+                vma_reader_set_bitmap(rstate, cluster_num, 1);
-             }
--            vma_reader_set_bitmap(rstate, cluster_num, 1);
- 
-             max_sector = vmar->devinfo[dev_id].size/BDRV_SECTOR_SIZE;
-         } else {
-@@ -618,7 +626,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
-                 return -1;
-             }
- 
--            if (!verify) {
-+            if (!verify && !skip) {
-                 int nb_sectors = end_sector - sector_num;
-                 if (restore_write_data(vmar, dev_id, target, vmstate_fd,
-                                        buf + start, sector_num, nb_sectors,
-@@ -654,7 +662,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
-                         return -1;
-                     }
- 
--                    if (!verify) {
-+                    if (!verify && !skip) {
-                         int nb_sectors = end_sector - sector_num;
-                         if (restore_write_data(vmar, dev_id, target, vmstate_fd,
-                                                buf + start, sector_num,
-@@ -679,7 +687,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
-                             vmar->partial_zero_cluster_data += zero_size;
-                         }
- 
--                        if (rstate->write_zeroes && !verify) {
-+                        if (rstate->write_zeroes && !verify && !skip) {
-                             if (restore_write_data(vmar, dev_id, target, vmstate_fd,
-                                                    zero_vma_block, sector_num,
-                                                    nb_sectors, errp) < 0) {
-@@ -850,7 +858,7 @@ int vma_reader_verify(VmaReader *vmar, bool verbose, Error **errp)
- 
-     for (dev_id = 1; dev_id < 255; dev_id++) {
-         if (vma_reader_get_device_info(vmar, dev_id)) {
--            allocate_rstate(vmar, dev_id, NULL, false);
-+            allocate_rstate(vmar, dev_id, NULL, false, false);
-         }
-     }
- 
-diff --git a/vma.c b/vma.c
-index e8dffb43e0..e6e9ffc7fe 100644
---- a/vma.c
-+++ b/vma.c
-@@ -138,6 +138,7 @@ typedef struct RestoreMap {
-     char *throttling_group;
-     char *cache;
-     bool write_zero;
-+    bool skip;
- } RestoreMap;
- 
- static bool try_parse_option(char **line, const char *optname, char **out, const char *inbuf) {
-@@ -245,47 +246,61 @@ static int extract_content(int argc, char **argv)
-             char *bps = NULL;
-             char *group = NULL;
-             char *cache = NULL;
-+            char *devname = NULL;
-+            bool skip = false;
-+            uint64_t bps_value = 0;
-+            const char *path = NULL;
-+            bool write_zero = true;
-+
-             if (!line || line[0] == '\0' || !strcmp(line, "done\n")) {
-                 break;
-             }
-             int len = strlen(line);
-             if (line[len - 1] == '\n') {
-                 line[len - 1] = '\0';
--                if (len == 1) {
-+                len = len - 1;
-+                if (len == 0) {
-                     break;
-                 }
-             }
- 
--            while (1) {
--                if (!try_parse_option(&line, "format", &format, inbuf) &&
--                    !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
--                    !try_parse_option(&line, "throttling.group", &group, inbuf) &&
--                    !try_parse_option(&line, "cache", &cache, inbuf))
--                {
--                    break;
-+            if (strncmp(line, "skip", 4) == 0) {
-+                if (len < 6 || line[4] != '=') {
-+                    g_error("read map failed - option 'skip' has no value ('%s')",
-+                            inbuf);
-+                } else {
-+                    devname = line + 5;
-+                    skip = true;
-+                }
-+            } else {
-+                while (1) {
-+                    if (!try_parse_option(&line, "format", &format, inbuf) &&
-+                        !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
-+                        !try_parse_option(&line, "throttling.group", &group, inbuf) &&
-+                        !try_parse_option(&line, "cache", &cache, inbuf))
-+                    {
-+                        break;
-+                    }
-                 }
--            }
- 
--            uint64_t bps_value = 0;
--            if (bps) {
--                bps_value = verify_u64(bps);
--                g_free(bps);
--            }
-+                if (bps) {
-+                    bps_value = verify_u64(bps);
-+                    g_free(bps);
-+                }
- 
--            const char *path;
--            bool write_zero;
--            if (line[0] == '0' && line[1] == ':') {
--                path = line + 2;
--                write_zero = false;
--            } else if (line[0] == '1' && line[1] == ':') {
--                path = line + 2;
--                write_zero = true;
--            } else {
--                g_error("read map failed - parse error ('%s')", inbuf);
-+                if (line[0] == '0' && line[1] == ':') {
-+                    path = line + 2;
-+                    write_zero = false;
-+                } else if (line[0] == '1' && line[1] == ':') {
-+                    path = line + 2;
-+                    write_zero = true;
-+                } else {
-+                    g_error("read map failed - parse error ('%s')", inbuf);
-+                }
-+
-+                path = extract_devname(path, &devname, -1);
-             }
- 
--            char *devname = NULL;
--            path = extract_devname(path, &devname, -1);
-             if (!devname) {
-                 g_error("read map failed - no dev name specified ('%s')",
-                         inbuf);
-@@ -299,6 +314,7 @@ static int extract_content(int argc, char **argv)
-             map->throttling_group = group;
-             map->cache = cache;
-             map->write_zero = write_zero;
-+            map->skip = skip;
- 
-             g_hash_table_insert(devmap, map->devname, map);
- 
-@@ -328,6 +344,7 @@ static int extract_content(int argc, char **argv)
-             const char *cache = NULL;
-             int flags = BDRV_O_RDWR;
-             bool write_zero = true;
-+            bool skip = false;
- 
-             BlockBackend *blk = NULL;
- 
-@@ -343,6 +360,7 @@ static int extract_content(int argc, char **argv)
-                 throttling_group = map->throttling_group;
-                 cache = map->cache;
-                 write_zero = map->write_zero;
-+                skip = map->skip;
-             } else {
-                 devfn = g_strdup_printf("%s/tmp-disk-%s.raw",
-                                         dirname, di->devname);
-@@ -361,57 +379,60 @@ static int extract_content(int argc, char **argv)
-                 write_zero = false;
-             }
- 
--	    size_t devlen = strlen(devfn);
--	    QDict *options = NULL;
--            bool writethrough;
--            if (format) {
--                /* explicit format from commandline */
--                options = qdict_new();
--                qdict_put_str(options, "driver", format);
--            } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
--	               strncmp(devfn, "/dev/", 5) == 0)
--	    {
--                /* This part is now deprecated for PVE as well (just as qemu
--                 * deprecated not specifying an explicit raw format, too.
--                 */
--		/* explicit raw format */
--		options = qdict_new();
--		qdict_put_str(options, "driver", "raw");
--	    }
--            if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
--                g_error("invalid cache option: %s\n", cache);
--            }
-+            if (!skip) {
-+                size_t devlen = strlen(devfn);
-+                QDict *options = NULL;
-+                bool writethrough;
-+                if (format) {
-+                    /* explicit format from commandline */
-+                    options = qdict_new();
-+                    qdict_put_str(options, "driver", format);
-+                } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
-+                    strncmp(devfn, "/dev/", 5) == 0)
-+                {
-+                    /* This part is now deprecated for PVE as well (just as qemu
-+                     * deprecated not specifying an explicit raw format, too.
-+                     */
-+                    /* explicit raw format */
-+                    options = qdict_new();
-+                    qdict_put_str(options, "driver", "raw");
-+                }
- 
--	    if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
--                g_error("can't open file %s - %s", devfn,
--                        error_get_pretty(errp));
--            }
-+                if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
-+                    g_error("invalid cache option: %s\n", cache);
-+                }
- 
--            if (cache) {
--                blk_set_enable_write_cache(blk, !writethrough);
--            }
-+                if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
-+                    g_error("can't open file %s - %s", devfn,
-+                            error_get_pretty(errp));
-+                }
- 
--            if (throttling_group) {
--                blk_io_limits_enable(blk, throttling_group);
--            }
-+                if (cache) {
-+                    blk_set_enable_write_cache(blk, !writethrough);
-+                }
- 
--            if (throttling_bps) {
--                if (!throttling_group) {
--                    blk_io_limits_enable(blk, devfn);
-+                if (throttling_group) {
-+                    blk_io_limits_enable(blk, throttling_group);
-                 }
- 
--                ThrottleConfig cfg;
--                throttle_config_init(&cfg);
--                cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
--                Error *err = NULL;
--                if (!throttle_is_valid(&cfg, &err)) {
--                    error_report_err(err);
--                    g_error("failed to apply throttling");
-+                if (throttling_bps) {
-+                    if (!throttling_group) {
-+                        blk_io_limits_enable(blk, devfn);
-+                    }
-+
-+                    ThrottleConfig cfg;
-+                    throttle_config_init(&cfg);
-+                    cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
-+                    Error *err = NULL;
-+                    if (!throttle_is_valid(&cfg, &err)) {
-+                        error_report_err(err);
-+                        g_error("failed to apply throttling");
-+                    }
-+                    blk_set_io_limits(blk, &cfg);
-                 }
--                blk_set_io_limits(blk, &cfg);
-             }
- 
--            if (vma_reader_register_bs(vmar, i, blk, write_zero, &errp) < 0) {
-+            if (vma_reader_register_bs(vmar, i, blk, write_zero, skip, &errp) < 0) {
-                 g_error("%s", error_get_pretty(errp));
-             }
- 
-diff --git a/vma.h b/vma.h
-index c895c97f6d..1b62859165 100644
---- a/vma.h
-+++ b/vma.h
-@@ -142,7 +142,7 @@ GList *vma_reader_get_config_data(VmaReader *vmar);
- VmaDeviceInfo *vma_reader_get_device_info(VmaReader *vmar, guint8 dev_id);
- int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id,
-                            BlockBackend *target, bool write_zeroes,
--                           Error **errp);
-+                           bool skip, Error **errp);
- int vma_reader_restore(VmaReader *vmar, int vmstate_fd, bool verbose,
-                        Error **errp);
- int vma_reader_verify(VmaReader *vmar, bool verbose, Error **errp);
diff --git a/debian/patches/pve/0052-pbs-namespace-support.patch b/debian/patches/pve/0052-pbs-namespace-support.patch
deleted file mode 100644
index b0bb9a6..0000000
--- a/debian/patches/pve/0052-pbs-namespace-support.patch
+++ /dev/null
@@ -1,235 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Wolfgang Bumiller <w.bumiller at proxmox.com>
-Date: Tue, 26 Apr 2022 16:06:28 +0200
-Subject: [PATCH] pbs: namespace support
-
-Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-[FE: adapt to QAPI change dropping redundant has_*]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- block/monitor/block-hmp-cmds.c |  1 +
- block/pbs.c                    | 25 +++++++++++++++++++++----
- pbs-restore.c                  | 19 ++++++++++++++++---
- pve-backup.c                   |  6 +++++-
- qapi/block-core.json           |  5 ++++-
- 5 files changed, 47 insertions(+), 9 deletions(-)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index f852c70611..ac23f21eef 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1055,6 +1055,7 @@ void coroutine_fn hmp_backup(Monitor *mon, const QDict *qdict)
-         NULL, // PBS key_password
-         NULL, // PBS master_keyfile
-         NULL, // PBS fingerprint
-+        NULL, // PBS backup-ns
-         NULL, // PBS backup-id
-         false, 0, // PBS backup-time
-         false, false, // PBS use-dirty-bitmap
-diff --git a/block/pbs.c b/block/pbs.c
-index 5d20789084..a2211e0f3b 100644
---- a/block/pbs.c
-+++ b/block/pbs.c
-@@ -15,6 +15,7 @@
- #include <proxmox-backup-qemu.h>
- 
- #define PBS_OPT_REPOSITORY "repository"
-+#define PBS_OPT_NAMESPACE "namespace"
- #define PBS_OPT_SNAPSHOT "snapshot"
- #define PBS_OPT_ARCHIVE "archive"
- #define PBS_OPT_KEYFILE "keyfile"
-@@ -28,6 +29,7 @@ typedef struct {
-     int64_t length;
- 
-     char *repository;
-+    char *namespace;
-     char *snapshot;
-     char *archive;
- } BDRVPBSState;
-@@ -41,6 +43,11 @@ static QemuOptsList runtime_opts = {
-             .type = QEMU_OPT_STRING,
-             .help = "The server address and repository to connect to.",
-         },
-+        {
-+            .name = PBS_OPT_NAMESPACE,
-+            .type = QEMU_OPT_STRING,
-+            .help = "Optional: The snapshot's namespace.",
-+        },
-         {
-             .name = PBS_OPT_SNAPSHOT,
-             .type = QEMU_OPT_STRING,
-@@ -77,7 +84,7 @@ static QemuOptsList runtime_opts = {
- 
- 
- // filename format:
--// pbs:repository=<repo>,snapshot=<snap>,password=<pw>,key_password=<kpw>,fingerprint=<fp>,archive=<archive>
-+// pbs:repository=<repo>,namespace=<ns>,snapshot=<snap>,password=<pw>,key_password=<kpw>,fingerprint=<fp>,archive=<archive>
- static void pbs_parse_filename(const char *filename, QDict *options,
-                                      Error **errp)
- {
-@@ -113,6 +120,7 @@ static int pbs_open(BlockDriverState *bs, QDict *options, int flags,
-     s->archive = g_strdup(qemu_opt_get(opts, PBS_OPT_ARCHIVE));
-     const char *keyfile = qemu_opt_get(opts, PBS_OPT_KEYFILE);
-     const char *password = qemu_opt_get(opts, PBS_OPT_PASSWORD);
-+    const char *namespace = qemu_opt_get(opts, PBS_OPT_NAMESPACE);
-     const char *fingerprint = qemu_opt_get(opts, PBS_OPT_FINGERPRINT);
-     const char *key_password = qemu_opt_get(opts, PBS_OPT_ENCRYPTION_PASSWORD);
- 
-@@ -125,9 +133,12 @@ static int pbs_open(BlockDriverState *bs, QDict *options, int flags,
-     if (!key_password) {
-         key_password = getenv("PBS_ENCRYPTION_PASSWORD");
-     }
-+    if (namespace) {
-+        s->namespace = g_strdup(namespace);
-+    }
- 
-     /* connect to PBS server in read mode */
--    s->conn = proxmox_restore_new(s->repository, s->snapshot, password,
-+    s->conn = proxmox_restore_new_ns(s->repository, s->snapshot, s->namespace, password,
-         keyfile, key_password, fingerprint, &pbs_error);
- 
-     /* invalidates qemu_opt_get char pointers from above */
-@@ -172,6 +183,7 @@ static int pbs_file_open(BlockDriverState *bs, QDict *options, int flags,
- static void pbs_close(BlockDriverState *bs) {
-     BDRVPBSState *s = bs->opaque;
-     g_free(s->repository);
-+    g_free(s->namespace);
-     g_free(s->snapshot);
-     g_free(s->archive);
-     proxmox_restore_disconnect(s->conn);
-@@ -253,8 +265,13 @@ static coroutine_fn int pbs_co_pwritev(BlockDriverState *bs,
- static void pbs_refresh_filename(BlockDriverState *bs)
- {
-     BDRVPBSState *s = bs->opaque;
--    snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s(%s)",
--             s->repository, s->snapshot, s->archive);
-+    if (s->namespace) {
-+        snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s:%s(%s)",
-+                 s->repository, s->namespace, s->snapshot, s->archive);
-+    } else {
-+        snprintf(bs->exact_filename, sizeof(bs->exact_filename), "%s/%s(%s)",
-+                 s->repository, s->snapshot, s->archive);
-+    }
- }
- 
- static const char *const pbs_strong_runtime_opts[] = {
-diff --git a/pbs-restore.c b/pbs-restore.c
-index 2f834cf42e..f03d9bab8d 100644
---- a/pbs-restore.c
-+++ b/pbs-restore.c
-@@ -29,7 +29,7 @@
- static void help(void)
- {
-     const char *help_msg =
--        "usage: pbs-restore [--repository <repo>] snapshot archive-name target [command options]\n"
-+        "usage: pbs-restore [--repository <repo>] [--ns namespace] snapshot archive-name target [command options]\n"
-         ;
- 
-     printf("%s", help_msg);
-@@ -77,6 +77,7 @@ int main(int argc, char **argv)
-     Error *main_loop_err = NULL;
-     const char *format = "raw";
-     const char *repository = NULL;
-+    const char *backup_ns = NULL;
-     const char *keyfile = NULL;
-     int verbose = false;
-     bool skip_zero = false;
-@@ -90,6 +91,7 @@ int main(int argc, char **argv)
-             {"verbose", no_argument, 0, 'v'},
-             {"format", required_argument, 0, 'f'},
-             {"repository", required_argument, 0, 'r'},
-+            {"ns", required_argument, 0, 'n'},
-             {"keyfile", required_argument, 0, 'k'},
-             {0, 0, 0, 0}
-         };
-@@ -110,6 +112,9 @@ int main(int argc, char **argv)
-             case 'r':
-                 repository = g_strdup(argv[optind - 1]);
-                 break;
-+            case 'n':
-+                backup_ns = g_strdup(argv[optind - 1]);
-+                break;
-             case 'k':
-                 keyfile = g_strdup(argv[optind - 1]);
-                 break;
-@@ -160,8 +165,16 @@ int main(int argc, char **argv)
-         fprintf(stderr, "connecting to repository '%s'\n", repository);
-     }
-     char *pbs_error = NULL;
--    ProxmoxRestoreHandle *conn = proxmox_restore_new(
--        repository, snapshot, password, keyfile, key_password, fingerprint, &pbs_error);
-+    ProxmoxRestoreHandle *conn = proxmox_restore_new_ns(
-+        repository,
-+        snapshot,
-+        backup_ns,
-+        password,
-+        keyfile,
-+        key_password,
-+        fingerprint,
-+        &pbs_error
-+    );
-     if (conn == NULL) {
-         fprintf(stderr, "restore failed: %s\n", pbs_error);
-         return -1;
-diff --git a/pve-backup.c b/pve-backup.c
-index 04c5f561cd..08dfb9cbda 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -12,6 +12,8 @@
- #include "qapi/qmp/qerror.h"
- #include "qemu/cutils.h"
- 
-+#include <proxmox-backup-qemu.h>
-+
- /* PVE backup state and related function */
- 
- /*
-@@ -533,6 +535,7 @@ UuidInfo coroutine_fn *qmp_backup(
-     const char *key_password,
-     const char *master_keyfile,
-     const char *fingerprint,
-+    const char *backup_ns,
-     const char *backup_id,
-     bool has_backup_time, int64_t backup_time,
-     bool has_use_dirty_bitmap, bool use_dirty_bitmap,
-@@ -672,8 +675,9 @@ UuidInfo coroutine_fn *qmp_backup(
-         firewall_name = "fw.conf";
- 
-         char *pbs_err = NULL;
--        pbs = proxmox_backup_new(
-+        pbs = proxmox_backup_new_ns(
-             backup_file,
-+            backup_ns,
-             backup_id,
-             backup_time,
-             dump_cb_block_size,
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 568feb63ad..9edeb33d82 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -912,6 +912,8 @@
- #
- # @fingerprint: server cert fingerprint (optional for format 'pbs')
- #
-+# @backup-ns: backup namespace (required for format 'pbs')
-+#
- # @backup-id: backup ID (required for format 'pbs')
- #
- # @backup-time: backup timestamp (Unix epoch, required for format 'pbs')
-@@ -931,6 +933,7 @@
-                                     '*key-password': 'str',
-                                     '*master-keyfile': 'str',
-                                     '*fingerprint': 'str',
-+                                    '*backup-ns': 'str',
-                                     '*backup-id': 'str',
-                                     '*backup-time': 'int',
-                                     '*use-dirty-bitmap': 'bool',
-@@ -3385,7 +3388,7 @@
- { 'struct': 'BlockdevOptionsPbs',
-   'data': { 'repository': 'str', 'snapshot': 'str', 'archive': 'str',
-             '*keyfile': 'str', '*password': 'str', '*fingerprint': 'str',
--            '*key_password': 'str' } }
-+            '*key_password': 'str', '*namespace': 'str' } }
- 
- ##
- # @BlockdevOptionsNVMe:
diff --git a/debian/patches/pve/0056-PVE-Backup-create-jobs-correctly-cancel-in-error-sce.patch b/debian/patches/pve/0056-PVE-Backup-create-jobs-correctly-cancel-in-error-sce.patch
deleted file mode 100644
index 5dee746..0000000
--- a/debian/patches/pve/0056-PVE-Backup-create-jobs-correctly-cancel-in-error-sce.patch
+++ /dev/null
@@ -1,60 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 25 May 2022 13:59:37 +0200
-Subject: [PATCH] PVE-Backup: create jobs: correctly cancel in error scenario
-
-The first call to job_cancel_sync() will cancel and free all jobs in
-the transaction, so ensure that it's called only once and get rid of
-the job_unref() that would operate on freed memory.
-
-It's also necessary to NULL backup_state.pbs in the error scenario,
-because a subsequent backup_cancel QMP call (as happens in PVE when
-the backup QMP command fails) would try to call proxmox_backup_abort()
-and run into a segfault.
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
-Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-[FE: adapt for new job lock mechanism replacing AioContext locks]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- pve-backup.c | 10 ++++++++--
- 1 file changed, 8 insertions(+), 2 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 08dfb9cbda..79d14d6a0b 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -505,6 +505,11 @@ static void create_backup_jobs_bh(void *opaque) {
-     }
- 
-     if (*errp) {
-+        /*
-+         * It's enough to cancel one job in the transaction, the rest will
-+         * follow automatically.
-+         */
-+        bool canceled = false;
-         l = backup_state.di_list;
-         while (l) {
-             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-@@ -515,11 +520,11 @@ static void create_backup_jobs_bh(void *opaque) {
-                 di->target = NULL;
-             }
- 
--            if (di->job) {
-+            if (!canceled && di->job) {
-                 WITH_JOB_LOCK_GUARD() {
-                     job_cancel_sync_locked(&di->job->job, true);
--                    job_unref_locked(&di->job->job);
-                 }
-+                canceled = true;
-             }
-         }
-     }
-@@ -945,6 +950,7 @@ err:
- 
-     if (pbs) {
-         proxmox_backup_disconnect(pbs);
-+        backup_state.pbs = NULL;
-     }
- 
-     if (backup_dir) {
diff --git a/debian/patches/pve/0057-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch b/debian/patches/pve/0057-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
deleted file mode 100644
index 4beed97..0000000
--- a/debian/patches/pve/0057-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
+++ /dev/null
@@ -1,73 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 25 May 2022 13:59:38 +0200
-Subject: [PATCH] PVE-Backup: ensure jobs in di_list are referenced
-
-Ensures that qmp_backup_cancel doesn't pick a job that's already been
-freed. With unlucky timings it seems possible that:
-1. job_exit -> job_completed -> job_finalize_single starts
-2. pvebackup_co_complete_stream gets spawned in completion callback
-3. job finalize_single finishes -> job's refcount hits zero -> job is
-   freed
-4. qmp_backup_cancel comes in and locks backup_state.backup_mutex
-   before pvebackup_co_complete_stream can remove the job from the
-   di_list
-5. qmp_backup_cancel will pick a job that's already been freed
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
-Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-[FE: adapt for new job lock mechanism replacing AioContext locks]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- pve-backup.c | 22 +++++++++++++++++++---
- 1 file changed, 19 insertions(+), 3 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 79d14d6a0b..67e2b99d74 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -318,6 +318,13 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
-         }
-     }
- 
-+    if (di->job) {
-+        WITH_JOB_LOCK_GUARD() {
-+            job_unref_locked(&di->job->job);
-+            di->job = NULL;
-+        }
-+    }
-+
-     // remove self from job list
-     backup_state.di_list = g_list_remove(backup_state.di_list, di);
- 
-@@ -493,6 +500,11 @@ static void create_backup_jobs_bh(void *opaque) {
-         aio_context_release(aio_context);
- 
-         di->job = job;
-+        if (job) {
-+            WITH_JOB_LOCK_GUARD() {
-+                job_ref_locked(&job->job);
-+            }
-+        }
- 
-         if (!job || local_err) {
-             error_setg(errp, "backup_job_create failed: %s",
-@@ -520,11 +532,15 @@ static void create_backup_jobs_bh(void *opaque) {
-                 di->target = NULL;
-             }
- 
--            if (!canceled && di->job) {
-+            if (di->job) {
-                 WITH_JOB_LOCK_GUARD() {
--                    job_cancel_sync_locked(&di->job->job, true);
-+                    if (!canceled) {
-+                        job_cancel_sync_locked(&di->job->job, true);
-+                        canceled = true;
-+                    }
-+                    job_unref_locked(&di->job->job);
-+                    di->job = NULL;
-                 }
--                canceled = true;
-             }
-         }
-     }
diff --git a/debian/patches/pve/0058-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch b/debian/patches/pve/0058-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
deleted file mode 100644
index 68c261e..0000000
--- a/debian/patches/pve/0058-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
+++ /dev/null
@@ -1,118 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 25 May 2022 13:59:39 +0200
-Subject: [PATCH] PVE-Backup: avoid segfault issues upon backup-cancel
-
-When canceling a backup in PVE via a signal it's easy to run into a
-situation where the job is already failing when the backup_cancel QMP
-command comes in. With a bit of unlucky timing on top, it can happen
-that job_exit() runs between schedulung of job_cancel_bh() and
-execution of job_cancel_bh(). But job_cancel_sync() does not expect
-that the job is already finalized (in fact, the job might've been
-freed already, but even if it isn't, job_cancel_sync() would try to
-deref job->txn which would be NULL at that point).
-
-It is not possible to simply use the job_cancel() (which is advertised
-as being async but isn't in all cases) in qmp_backup_cancel() for the
-same reason job_cancel_sync() cannot be used. Namely, because it can
-invoke job_finish_sync() (which uses AIO_WAIT_WHILE and thus hangs if
-called from a coroutine). This happens when there's multiple jobs in
-the transaction and job->deferred_to_main_loop is true (is set before
-scheduling job_exit()) or if the job was not started yet.
-
-Fix the issue by selecting the job to cancel in job_cancel_bh() itself
-using the first job that's not completed yet. This is not necessarily
-the first job in the list, because pvebackup_co_complete_stream()
-might not yet have removed a completed job when job_cancel_bh() runs.
-
-An alternative would be to continue using only the first job and
-checking against JOB_STATUS_CONCLUDED or JOB_STATUS_NULL to decide if
-it's still necessary and possible to cancel, but the approach with
-using the first non-completed job seemed more robust.
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
-Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
-[FE: adapt for new job lock mechanism replacing AioContext locks]
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- pve-backup.c | 57 ++++++++++++++++++++++++++++++++++------------------
- 1 file changed, 38 insertions(+), 19 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 67e2b99d74..7a8240363d 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -356,12 +356,41 @@ static void pvebackup_complete_cb(void *opaque, int ret)
- 
- /*
-  * job_cancel(_sync) does not like to be called from coroutines, so defer to
-- * main loop processing via a bottom half.
-+ * main loop processing via a bottom half. Assumes that caller holds
-+ * backup_mutex.
-  */
- static void job_cancel_bh(void *opaque) {
-     CoCtxData *data = (CoCtxData*)opaque;
--    Job *job = (Job*)data->data;
--    job_cancel_sync(job, true);
-+
-+    /*
-+     * Be careful to pick a valid job to cancel:
-+     * 1. job_cancel_sync() does not expect the job to be finalized already.
-+     * 2. job_exit() might run between scheduling and running job_cancel_bh()
-+     *    and pvebackup_co_complete_stream() might not have removed the job from
-+     *    the list yet (in fact, cannot, because it waits for the backup_mutex).
-+     * Requiring !job_is_completed() ensures that no finalized job is picked.
-+     */
-+    GList *bdi = g_list_first(backup_state.di_list);
-+    while (bdi) {
-+        if (bdi->data) {
-+            BlockJob *bj = ((PVEBackupDevInfo *)bdi->data)->job;
-+            if (bj) {
-+                Job *job = &bj->job;
-+                WITH_JOB_LOCK_GUARD() {
-+                    if (!job_is_completed_locked(job)) {
-+                        job_cancel_sync_locked(job, true);
-+                        /*
-+                         * It's enough to cancel one job in the transaction, the
-+                         * rest will follow automatically.
-+                         */
-+                        break;
-+                    }
-+                }
-+            }
-+        }
-+        bdi = g_list_next(bdi);
-+    }
-+
-     aio_co_enter(data->ctx, data->co);
- }
- 
-@@ -382,22 +411,12 @@ void coroutine_fn qmp_backup_cancel(Error **errp)
-         proxmox_backup_abort(backup_state.pbs, "backup canceled");
-     }
- 
--    /* it's enough to cancel one job in the transaction, the rest will follow
--     * automatically */
--    GList *bdi = g_list_first(backup_state.di_list);
--    BlockJob *cancel_job = bdi && bdi->data ?
--        ((PVEBackupDevInfo *)bdi->data)->job :
--        NULL;
--
--    if (cancel_job) {
--        CoCtxData data = {
--            .ctx = qemu_get_current_aio_context(),
--            .co = qemu_coroutine_self(),
--            .data = &cancel_job->job,
--        };
--        aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
--        qemu_coroutine_yield();
--    }
-+    CoCtxData data = {
-+        .ctx = qemu_get_current_aio_context(),
-+        .co = qemu_coroutine_self(),
-+    };
-+    aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
-+    qemu_coroutine_yield();
- 
-     qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
diff --git a/debian/patches/pve/0059-vma-create-support-64KiB-unaligned-input-images.patch b/debian/patches/pve/0059-vma-create-support-64KiB-unaligned-input-images.patch
deleted file mode 100644
index ffa8d0e..0000000
--- a/debian/patches/pve/0059-vma-create-support-64KiB-unaligned-input-images.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 22 Jun 2022 10:45:11 +0200
-Subject: [PATCH] vma: create: support 64KiB-unaligned input images
-
-which fixes backing up templates with such disks in PVE, for example
-efitype=4m EFI disks on a file-based storage (size = 540672).
-
-If there is not enough left to read, blk_co_preadv will return -EIO,
-so limit the size in the last iteration.
-
-For writing, an unaligned end is already handled correctly.
-
-The call to memset is not strictly necessary, because writing also
-checks that it doesn't write data beyond the end of the image. But
-there are two reasons to do it:
-1. It's cleaner that way.
-2. It allows detecting when the final piece is all zeroes, which might
-   not happen if the buffer still contains data from the previous
-   iteration.
-
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
----
- vma.c | 12 ++++++++++--
- 1 file changed, 10 insertions(+), 2 deletions(-)
-
-diff --git a/vma.c b/vma.c
-index e6e9ffc7fe..304f02bc84 100644
---- a/vma.c
-+++ b/vma.c
-@@ -548,7 +548,7 @@ static void coroutine_fn backup_run(void *opaque)
-     struct iovec iov;
-     QEMUIOVector qiov;
- 
--    int64_t start, end;
-+    int64_t start, end, readlen;
-     int ret = 0;
- 
-     unsigned char *buf = blk_blockalign(job->target, VMA_CLUSTER_SIZE);
-@@ -562,8 +562,16 @@ static void coroutine_fn backup_run(void *opaque)
-         iov.iov_len = VMA_CLUSTER_SIZE;
-         qemu_iovec_init_external(&qiov, &iov, 1);
- 
-+        if (start + 1 == end) {
-+            memset(buf, 0, VMA_CLUSTER_SIZE);
-+            readlen = job->len - start * VMA_CLUSTER_SIZE;
-+            assert(readlen > 0 && readlen <= VMA_CLUSTER_SIZE);
-+        } else {
-+            readlen = VMA_CLUSTER_SIZE;
-+        }
-+
-         ret = blk_co_preadv(job->target, start * VMA_CLUSTER_SIZE,
--                            VMA_CLUSTER_SIZE, &qiov, 0);
-+                            readlen, &qiov, 0);
-         if (ret < 0) {
-             vma_writer_set_error(job->vmaw, "read error", -1);
-             goto out;
diff --git a/debian/patches/pve/0060-vma-create-avoid-triggering-assertion-in-error-case.patch b/debian/patches/pve/0060-vma-create-avoid-triggering-assertion-in-error-case.patch
deleted file mode 100644
index c7961ae..0000000
--- a/debian/patches/pve/0060-vma-create-avoid-triggering-assertion-in-error-case.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 22 Jun 2022 10:45:12 +0200
-Subject: [PATCH] vma: create: avoid triggering assertion in error case
-
-error_setg expects its argument to not be initialized yet.
-
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
----
- vma-writer.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/vma-writer.c b/vma-writer.c
-index df4b20793d..ac7da237d0 100644
---- a/vma-writer.c
-+++ b/vma-writer.c
-@@ -311,6 +311,8 @@ VmaWriter *vma_writer_create(const char *filename, uuid_t uuid, Error **errp)
-         }
- 
-         if (vmaw->fd < 0) {
-+            error_free(*errp);
-+            *errp = NULL;
-             error_setg(errp, "can't open file %s - %s\n", filename,
-                        g_strerror(errno));
-             goto err;
diff --git a/debian/patches/pve/0061-block-alloc-track-avoid-premature-break.patch b/debian/patches/pve/0061-block-alloc-track-avoid-premature-break.patch
deleted file mode 100644
index a1c3d11..0000000
--- a/debian/patches/pve/0061-block-alloc-track-avoid-premature-break.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fabian Ebner <f.ebner at proxmox.com>
-Date: Wed, 22 Jun 2022 10:45:13 +0200
-Subject: [PATCH] block: alloc-track: avoid premature break
-
-While the bdrv_co_preadv() calls are expected to return 0 on success,
-qemu_iovec_memset() will return the number of bytes set (will be
-local_bytes, because the slice with that size was just initialized).
-
-Don't break out of the loop after the branch with qemu_iovec_memset(),
-because there might still be work to do. Additionally, ret is an int,
-which on 64-bit platforms is too small to hold the size_t returned by
-qemu_iovec_memset().
-
-The branch seems to be difficult to reach in practice, because the
-whole point of alloc-track is to be used with a backing device.
-
-Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
----
- block/alloc-track.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/block/alloc-track.c b/block/alloc-track.c
-index 113bbd7058..b75d7c6460 100644
---- a/block/alloc-track.c
-+++ b/block/alloc-track.c
-@@ -175,7 +175,8 @@ static int coroutine_fn track_co_preadv(BlockDriverState *bs,
-             ret = bdrv_co_preadv(bs->backing, local_offset, local_bytes,
-                                  &local_qiov, flags);
-         } else {
--            ret = qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
-+            qemu_iovec_memset(&local_qiov, cur_offset, 0, local_bytes);
-+            ret = 0;
-         }
- 
-         if (ret != 0) {
diff --git a/debian/patches/pve/0062-PVE-Backup-allow-passing-max-workers-performance-set.patch b/debian/patches/pve/0062-PVE-Backup-allow-passing-max-workers-performance-set.patch
deleted file mode 100644
index 8fd63c0..0000000
--- a/debian/patches/pve/0062-PVE-Backup-allow-passing-max-workers-performance-set.patch
+++ /dev/null
@@ -1,144 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner at proxmox.com>
-Date: Mon, 3 Oct 2022 15:52:04 +0200
-Subject: [PATCH] PVE Backup: allow passing max-workers performance setting
-
-For query-proxmox-support, add an indication that it's possible to use
-the setting.
-
-For now, the other two BackupPerf settings are not exposed:
-
-* use-copy-range: would need to be implemented by the backup-dump
-block driver first, and in fact, the default for backup was changed,
-because it wasn't as fast for backup in QEMU, see commit
-6a30f663d4c0b3c45a544d541e0c4e214b2473a1.
-
-* max-chunk: enforced to be at least the backup cluster size, which is
-4 MiB for PBS and otherwise maximum of source and target cluster size.
-And block-copy has a maximum buffer size of 1 MiB, so setting a larger
-max-chunk doesn't even have an effect. To make the setting sensibly
-usable the check would need to be removed and optionally the
-block-copy max buffer size would need to be bumped. I tried doing just
-that, and tested different source/target combinations with different
-max-chunk settings, but there were no noticable improvements over the
-default "unlimited" (resulting in 1 MiB for block-copy).
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- block/monitor/block-hmp-cmds.c |  4 +++-
- pve-backup.c                   | 18 +++++++++++++-----
- qapi/block-core.json           |  9 +++++++--
- 3 files changed, 23 insertions(+), 8 deletions(-)
-
-diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
-index ac23f21eef..636509b83e 100644
---- a/block/monitor/block-hmp-cmds.c
-+++ b/block/monitor/block-hmp-cmds.c
-@@ -1063,7 +1063,9 @@ void coroutine_fn hmp_backup(Monitor *mon, const QDict *qdict)
-         false, false, // PBS encrypt
-         true, dir ? BACKUP_FORMAT_DIR : BACKUP_FORMAT_VMA,
-         NULL, NULL,
--        devlist, qdict_haskey(qdict, "speed"), speed, &error);
-+        devlist, qdict_haskey(qdict, "speed"), speed,
-+        false, 0, // BackupPerf max-workers
-+        &error);
- 
-     hmp_handle_error(mon, error);
- }
-diff --git a/pve-backup.c b/pve-backup.c
-index 7a8240363d..cb5312fff3 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -57,6 +57,7 @@ static struct PVEBackupState {
-         bool starting;
-     } stat;
-     int64_t speed;
-+    BackupPerf perf;
-     VmaWriter *vmaw;
-     ProxmoxBackupHandle *pbs;
-     GList *di_list;
-@@ -492,8 +493,6 @@ static void create_backup_jobs_bh(void *opaque) {
-     }
-     backup_state.txn = job_txn_new_seq();
- 
--    BackupPerf perf = { .max_workers = 16 };
--
-     /* create and start all jobs (paused state) */
-     GList *l =  backup_state.di_list;
-     while (l) {
-@@ -513,8 +512,9 @@ static void create_backup_jobs_bh(void *opaque) {
- 
-         BlockJob *job = backup_job_create(
-             NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
--            bitmap_mode, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
--            JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
-+            bitmap_mode, false, NULL, &backup_state.perf, BLOCKDEV_ON_ERROR_REPORT,
-+            BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn,
-+            &local_err);
- 
-         aio_context_release(aio_context);
- 
-@@ -585,7 +585,9 @@ UuidInfo coroutine_fn *qmp_backup(
-     const char *config_file,
-     const char *firewall_file,
-     const char *devlist,
--    bool has_speed, int64_t speed, Error **errp)
-+    bool has_speed, int64_t speed,
-+    bool has_max_workers, int64_t max_workers,
-+    Error **errp)
- {
-     assert(qemu_in_coroutine());
- 
-@@ -915,6 +917,11 @@ UuidInfo coroutine_fn *qmp_backup(
- 
-     backup_state.speed = (has_speed && speed > 0) ? speed : 0;
- 
-+    backup_state.perf = (BackupPerf){ .max_workers = 16 };
-+    if (has_max_workers) {
-+        backup_state.perf.max_workers = max_workers;
-+    }
-+
-     backup_state.vmaw = vmaw;
-     backup_state.pbs = pbs;
- 
-@@ -1086,5 +1093,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
-     ret->pbs_dirty_bitmap_migration = true;
-     ret->query_bitmap_info = true;
-     ret->pbs_masterkey = true;
-+    ret->backup_max_workers = true;
-     return ret;
- }
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 9edeb33d82..809f3c61bc 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -924,6 +924,8 @@
- #
- # @encrypt: use encryption ((optional for format 'pbs', defaults to true if there is a keyfile)
- #
-+# @max-workers: see @BackupPerf for details. Default 16.
-+#
- # Returns: the uuid of the backup job
- #
- ##
-@@ -942,7 +944,9 @@
-                                     '*format': 'BackupFormat',
-                                     '*config-file': 'str',
-                                     '*firewall-file': 'str',
--                                    '*devlist': 'str', '*speed': 'int' },
-+                                    '*devlist': 'str',
-+                                    '*speed': 'int',
-+                                    '*max-workers': 'int' },
-   'returns': 'UuidInfo', 'coroutine': true }
- 
- ##
-@@ -997,7 +1001,8 @@
-             'pbs-dirty-bitmap-savevm': 'bool',
-             'pbs-dirty-bitmap-migration': 'bool',
-             'pbs-masterkey': 'bool',
--            'pbs-library-version': 'str' } }
-+            'pbs-library-version': 'str',
-+            'backup-max-workers': 'bool' } }
- 
- ##
- # @query-proxmox-support:
diff --git a/debian/patches/pve/0064-savevm-async-optimize-querying-pending.patch b/debian/patches/pve/0064-savevm-async-optimize-querying-pending.patch
deleted file mode 100644
index fb99782..0000000
--- a/debian/patches/pve/0064-savevm-async-optimize-querying-pending.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner at proxmox.com>
-Date: Fri, 31 Mar 2023 14:13:01 +0200
-Subject: [PATCH] savevm-async: optimize querying pending
-
-by using the estimate variant until the precopy estimate is below the
-threshold. This is similar to what is done in migration.c
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- migration/savevm-async.c | 15 +++++++++++----
- 1 file changed, 11 insertions(+), 4 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index fab1dc32d2..de91506821 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -242,12 +242,19 @@ static void coroutine_fn process_savevm_co(void *opaque)
- 
-     while (snap_state.state == SAVE_STATE_ACTIVE) {
-         uint64_t pending_size, pend_precopy, pend_postcopy;
-+        uint64_t threshold = 400 * 1000;
- 
--        /* pending is expected to be called without iothread lock */
-+        /*
-+         * pending_{estimate,exact} are expected to be called without iothread
-+         * lock. Similar to what is done in migration.c, call the exact variant
-+         * only once pend_precopy in the estimate is below the threshold.
-+         */
-         qemu_mutex_unlock_iothread();
--        qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
-+        qemu_savevm_state_pending_estimate(&pend_precopy, &pend_postcopy);
-+        if (pend_precopy <= threshold) {
-+            qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
-+        }
-         qemu_mutex_lock_iothread();
--
-         pending_size = pend_precopy + pend_postcopy;
- 
-         /*
-@@ -259,7 +266,7 @@ static void coroutine_fn process_savevm_co(void *opaque)
-         maxlen = blk_getlength(snap_state.target) - 100*1024*1024;
- 
-         /* Note that there is no progress for pend_postcopy when iterating */
--        if (pending_size - pend_postcopy > 400000 && snap_state.bs_pos + pending_size < maxlen) {
-+        if (pend_precopy > threshold && snap_state.bs_pos + pending_size < maxlen) {
-             ret = qemu_savevm_state_iterate(snap_state.file, false);
-             if (ret < 0) {
-                 save_snapshot_error("qemu_savevm_state_iterate error %d", ret);
diff --git a/debian/patches/pve/0065-savevm-async-also-initialize-compression-counters.patch b/debian/patches/pve/0065-savevm-async-also-initialize-compression-counters.patch
deleted file mode 100644
index ae2b463..0000000
--- a/debian/patches/pve/0065-savevm-async-also-initialize-compression-counters.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner at proxmox.com>
-Date: Thu, 27 Apr 2023 10:28:08 +0200
-Subject: [PATCH] savevm-async: also initialize compression counters
-
-Like is done in the migration code. While the compression migration
-capability is not used by Proxmox VE currently, it's still worth to
-be prepared for the future.
-
-Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
----
- migration/savevm-async.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index de91506821..effe6d1e0d 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -394,6 +394,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-      */
-     migrate_init(ms);
-     memset(&ram_counters, 0, sizeof(ram_counters));
-+    memset(&compression_counters, 0, sizeof(compression_counters));
-     ms->to_dst_file = snap_state.file;
- 
-     error_setg(&snap_state.blocker, "block device is in use by savevm");
diff --git a/debian/patches/series b/debian/patches/series
index 90d7943..375b2db 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -36,58 +36,37 @@ pve/0009-PVE-Up-qemu-img-return-success-on-info-without-snaps.patch
 pve/0010-PVE-Up-qemu-img-dd-add-osize-and-read-from-to-stdin-.patch
 pve/0011-PVE-Up-qemu-img-dd-add-isize-parameter.patch
 pve/0012-PVE-Up-qemu-img-dd-add-n-skip_create.patch
-pve/0013-PVE-virtio-balloon-improve-query-balloon.patch
-pve/0014-PVE-qapi-modify-query-machines.patch
-pve/0015-PVE-qapi-modify-spice-query.patch
-pve/0016-PVE-add-IOChannel-implementation-for-savevm-async.patch
-pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
-pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
-pve/0019-PVE-block-add-the-zeroinit-block-driver-filter.patch
-pve/0020-PVE-Add-dummy-id-command-line-parameter.patch
-pve/0021-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
-pve/0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
-pve/0023-PVE-monitor-disable-oob-capability.patch
-pve/0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
-pve/0025-PVE-Allow-version-code-in-machine-type.patch
-pve/0026-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
-pve/0027-PVE-Backup-add-vma-backup-format-code.patch
-pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
-pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
-pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
-pve/0031-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
-pve/0032-PVE-various-PBS-fixes.patch
+pve/0013-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
+pve/0014-PVE-virtio-balloon-improve-query-balloon.patch
+pve/0015-PVE-qapi-modify-query-machines.patch
+pve/0016-PVE-qapi-modify-spice-query.patch
+pve/0017-PVE-add-IOChannel-implementation-for-savevm-async.patch
+pve/0018-PVE-add-savevm-async-for-background-state-snapshots.patch
+pve/0019-PVE-add-optional-buffer-size-to-QEMUFile.patch
+pve/0020-PVE-block-add-the-zeroinit-block-driver-filter.patch
+pve/0021-PVE-Add-dummy-id-command-line-parameter.patch
+pve/0022-PVE-Config-Revert-target-i386-disable-LINT0-after-re.patch
+pve/0023-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
+pve/0024-PVE-monitor-disable-oob-capability.patch
+pve/0025-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
+pve/0026-PVE-Allow-version-code-in-machine-type.patch
+pve/0027-block-backup-move-bcs-bitmap-initialization-to-job-c.patch
+pve/0028-PVE-Backup-add-vma-backup-format-code.patch
+pve/0029-PVE-Backup-add-backup-dump-block-driver.patch
+pve/0030-PVE-Add-sequential-job-transaction-support.patch
+pve/0031-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
+pve/0032-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
 pve/0033-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
-pve/0034-PVE-add-query_proxmox_support-QMP-command.patch
-pve/0035-PVE-add-query-pbs-bitmap-info-QMP-call.patch
-pve/0036-PVE-redirect-stderr-to-journal-when-daemonized.patch
-pve/0037-PVE-Add-sequential-job-transaction-support.patch
-pve/0038-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
-pve/0039-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch
-pve/0040-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
-pve/0041-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
-pve/0042-PVE-fall-back-to-open-iscsi-initiatorname.patch
-pve/0043-PVE-Use-coroutine-QMP-for-backup-cancel_backup.patch
-pve/0044-PBS-add-master-key-support.patch
-pve/0045-PVE-block-pbs-fast-path-reads-without-allocation-if-.patch
-pve/0046-PVE-block-stream-increase-chunk-size.patch
-pve/0047-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
-pve/0048-block-add-alloc-track-driver.patch
-pve/0049-PVE-savevm-async-register-yank-before-migration_inco.patch
-pve/0050-qemu-img-dd-add-l-option-for-loading-a-snapshot.patch
-pve/0051-vma-allow-partial-restore.patch
-pve/0052-pbs-namespace-support.patch
-pve/0053-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
-pve/0054-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
-pve/0055-Revert-block-rbd-implement-bdrv_co_block_status.patch
-pve/0056-PVE-Backup-create-jobs-correctly-cancel-in-error-sce.patch
-pve/0057-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
-pve/0058-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
-pve/0059-vma-create-support-64KiB-unaligned-input-images.patch
-pve/0060-vma-create-avoid-triggering-assertion-in-error-case.patch
-pve/0061-block-alloc-track-avoid-premature-break.patch
-pve/0062-PVE-Backup-allow-passing-max-workers-performance-set.patch
-pve/0063-alloc-track-fix-deadlock-during-drop.patch
-pve/0064-savevm-async-optimize-querying-pending.patch
-pve/0065-savevm-async-also-initialize-compression-counters.patch
-pve/0066-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
-pve/0067-savevm-async-don-t-hold-BQL-during-setup.patch
+pve/0034-PVE-redirect-stderr-to-journal-when-daemonized.patch
+pve/0035-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
+pve/0036-migration-block-dirty-bitmap-migrate-other-bitmaps-e.patch
+pve/0037-PVE-fall-back-to-open-iscsi-initiatorname.patch
+pve/0038-PVE-block-stream-increase-chunk-size.patch
+pve/0039-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
+pve/0040-block-add-alloc-track-driver.patch
+pve/0041-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
+pve/0042-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
+pve/0043-Revert-block-rbd-implement-bdrv_co_block_status.patch
+pve/0044-alloc-track-fix-deadlock-during-drop.patch
+pve/0045-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
+pve/0046-savevm-async-don-t-hold-BQL-during-setup.patch
-- 
2.39.2