[pve-devel] [PATCH qemu 2/2] backup: trim heap after finishing
Wolfgang Bumiller
w.bumiller at proxmox.com
Mon Aug 14 14:55:49 CEST 2023
On Mon, Aug 14, 2023 at 11:21:33AM +0200, Fiona Ebner wrote:
> Reported in the community forum [0]. By default, large amounts of
> memory can remain assigned to the QEMU process after backup. Likely
> because of fragmentation, it's necessary to explicitly call
> malloc_trim() to tell glibc not to keep all that memory resident for
> the process.
>
> QEMU itself already calls malloc_trim() in the RCU thread, but that
> code path might not be reached (or not for a long time) under usual
> operation. The value of 4 MiB for the argument was also copied from
> there.
>
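Side note, not part of the patch: glibc's malloc_trim(pad) asks the
allocator to return free heap memory to the kernel, leaving at least
pad bytes of free space untrimmed at the top of the heap. Purely for
illustration, a minimal standalone sketch of the call after a
fragmented allocation pattern:

    #include <malloc.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Allocate many blocks, then free every other one, so the
         * heap stays fragmented instead of shrinking on its own. */
        enum { COUNT = 64 * 1024 };
        static void *blocks[COUNT];

        for (int i = 0; i < COUNT; i++)
            blocks[i] = malloc(4096);
        for (int i = 0; i < COUNT; i += 2)
            free(blocks[i]);

        /* Freed chunks are not necessarily handed back to the
         * kernel; explicitly ask glibc to release what it can,
         * keeping a 4 MiB pad (the same value the patch and QEMU's
         * RCU thread use). */
        malloc_trim(4 * 1024 * 1024);

        for (int i = 1; i < COUNT; i += 2)
            free(blocks[i]);
        return 0;
    }

Comparing VmRSS in /proc/self/status right before and after the
malloc_trim() call shows how much of the freed memory was actually
returned to the kernel.
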
> Example with the following configuration:
> > agent: 1
> > boot: order=scsi0
> > cores: 4
> > cpu: x86-64-v2-AES
> > ide2: none,media=cdrom
> > memory: 1024
> > name: backup-mem
> > net0: virtio=DA:58:18:26:59:9F,bridge=vmbr0,firewall=1
> > numa: 0
> > ostype: l26
> > scsi0: rbd:base-107-disk-0/vm-106-disk-1,size=4302M
> > scsihw: virtio-scsi-pci
> > smbios1: uuid=b2d4511e-8d01-44f1-afd6-9581b30c24a6
> > sockets: 2
> > startup: order=2
> > virtio0: lvmthin:vm-106-disk-1,iothread=1,size=1G
> > virtio1: lvmthin:vm-106-disk-2,iothread=1,size=1G
> > virtio2: lvmthin:vm-106-disk-3,iothread=1,size=1G
> > vmgenid: 0a1d8751-5e02-449d-977e-c0160e900231
>
> Before the change:
>
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS: 370948 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS: 2114964 kB
>
> After the change:
>
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS: 398788 kB
> > root@pve8a1 ~ # vzdump 106 --storage pbs
> > (...)
> > INFO: Backup job finished successfully
> > root@pve8a1 ~ # grep VmRSS /proc/$(cat /var/run/qemu-server/106.pid)/status
> > VmRSS: 424356 kB
>
> [0]: https://forum.proxmox.com/threads/131339/
>
> Co-diagnosed-by: Friedrich Weber <f.weber at proxmox.com>
> Co-diagnosed-by: Dominik Csapak <d.csapak at proxmox.com>
> Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
Both patches
Acked-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
> ---
> ...ckup-Proxmox-backup-patches-for-QEMU.patch | 23 +++++++++++++++----
> ...igrate-dirty-bitmap-state-via-savevm.patch | 4 ++--
> 2 files changed, 20 insertions(+), 7 deletions(-)
>
> diff --git a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> index 3753eff..d873601 100644
> --- a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> +++ b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
> @@ -79,7 +79,8 @@ Signed-off-by: Wolfgang Bumiller <w.bumiller at proxmox.com>
> adapt for new job lock mechanism replacing AioContext locks
> adapt to QAPI changes
> improve canceling
> - allow passing max-workers setting]
> + allow passing max-workers setting
> + use malloc_trim after backup]
> Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
> ---
> block/meson.build | 5 +
> @@ -92,11 +93,11 @@ Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
> monitor/hmp-cmds.c | 72 +++
> proxmox-backup-client.c | 146 +++++
> proxmox-backup-client.h | 60 ++
> - pve-backup.c | 1097 ++++++++++++++++++++++++++++++++
> + pve-backup.c | 1109 ++++++++++++++++++++++++++++++++
> qapi/block-core.json | 226 +++++++
> qapi/common.json | 13 +
> qapi/machine.json | 15 +-
> - 14 files changed, 1711 insertions(+), 13 deletions(-)
> + 14 files changed, 1723 insertions(+), 13 deletions(-)
> create mode 100644 proxmox-backup-client.c
> create mode 100644 proxmox-backup-client.h
> create mode 100644 pve-backup.c
> @@ -587,10 +588,10 @@ index 0000000000..8cbf645b2c
> +#endif /* PROXMOX_BACKUP_CLIENT_H */
> diff --git a/pve-backup.c b/pve-backup.c
> new file mode 100644
> -index 0000000000..dd72ee0ed6
> +index 0000000000..10ca8a0b1d
> --- /dev/null
> +++ b/pve-backup.c
> -@@ -0,0 +1,1097 @@
> +@@ -0,0 +1,1109 @@
> +#include "proxmox-backup-client.h"
> +#include "vma.h"
> +
> @@ -605,6 +606,10 @@ index 0000000000..dd72ee0ed6
> +#include "qapi/qmp/qerror.h"
> +#include "qemu/cutils.h"
> +
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++#include <malloc.h>
> ++#endif
> ++
> +#include <proxmox-backup-qemu.h>
> +
> +/* PVE backup state and related function */
> @@ -869,6 +874,14 @@ index 0000000000..dd72ee0ed6
> + backup_state.stat.end_time = time(NULL);
> + backup_state.stat.finishing = false;
> + qemu_mutex_unlock(&backup_state.stat.lock);
> ++
> ++#if defined(CONFIG_MALLOC_TRIM)
> ++ /*
> ++ * Try to reclaim memory for buffers (and, in case of PBS, Rust futures), etc.
> ++ * Won't happen by default if there is fragmentation.
> ++ */
> ++ malloc_trim(4 * 1024 * 1024);
> ++#endif
> +}
> +
> +static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
> diff --git a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> index 9437869..7a906e9 100644
> --- a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> +++ b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
> @@ -175,10 +175,10 @@ index 0000000000..887e998b9e
> + NULL);
> +}
> diff --git a/pve-backup.c b/pve-backup.c
> -index dd72ee0ed6..cb5312fff3 100644
> +index 10ca8a0b1d..0a5ce2cab8 100644
> --- a/pve-backup.c
> +++ b/pve-backup.c
> -@@ -1090,6 +1090,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
> +@@ -1102,6 +1102,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
> ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
> ret->pbs_dirty_bitmap = true;
> ret->pbs_dirty_bitmap_savevm = true;
> --
> 2.39.2
>
>
>