[pve-devel] [PATCH many v2 00/23] Expand and migrate RRD data (excluding GUI)

Aaron Lauterer a.lauterer at proxmox.com
Wed Jul 9 18:36:40 CEST 2025


This patch series expands the RRD format for nodes and VMs. For all three types
(nodes, VMs, storage) we adjust the aggregation to align it with how it is done
on the Backup Server. Therefore, there are new RRD definitions for all three
types.
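
As a rough illustration of what such a definition looks like at the rrdtool
level, here is a minimal sketch; the DS names, step and RRA windows are made up
for illustration and do not claim to match the actual pve-{type}-9.0
definitions in pmxcfs:
```
# illustrative only: DS names and RRA layout are NOT the real pve-node-9.0 schema
rrdtool create node-example.rrd --step 60 \
    DS:memfree:GAUGE:120:0:U \
    DS:arcsize:GAUGE:120:0:U \
    DS:pressurecpusome:GAUGE:120:0:U \
    RRA:AVERAGE:0.5:1:1440 \
    RRA:AVERAGE:0.5:30:1440 \
    RRA:AVERAGE:0.5:360:1460 \
    RRA:MAX:0.5:1:1440 \
    RRA:MAX:0.5:30:1440 \
    RRA:MAX:0.5:360:1460
```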

New values are added for nodes and VMs (see the sketch after the lists for
where the pressure values come from). In particular:

Nodes:
* memfree
* arcsize
* pressures:
  * cpu some
  * io some
  * io full
  * mem some
  * mem full

VMs:
* memhost (memory consumption of all processes in the guest's cgroup, host view)
* pressures:
  * cpu some
  * cpu full
  * io some
  * io full
  * mem some
  * mem full
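
The pressure values correspond to the kernel's PSI interface: the node values
can be read from /proc/pressure/*, the guest values from the pressure files of
the guest's cgroup. A quick way to look at the raw data; the cgroup path below
is an assumption for illustration and depends on how the guest was started:
```
# node view (host-wide CPU pressure has no meaningful "full" value)
cat /proc/pressure/cpu /proc/pressure/io /proc/pressure/memory

# per-guest view via cgroup v2, e.g. for VM 100 started as a systemd scope
# (path is an assumption for illustration)
cat /sys/fs/cgroup/qemu.slice/100.scope/cpu.pressure

# each file prints lines like:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=123456
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=123456
```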

To avoid losing old RRD data, we need to migrate the old RRD files to ones with
the new schema. Initial performance tests showed that migrating 10k VM RRD
files took ~2m40s single-threaded (roughly 16 ms per file). This is way too
long to do within pmxcfs itself, therefore this will be a dedicated step. I
wrote a small Rust tool that binds to librrd to do the migration.

We could include it in a post-install step when upgrading to PVE 9.

To avoid missing data and key errors in the journal, we need to ship some
changes to PVE 8 that can handle the new format sent out by pvestatd. Those
patches are the first in the series and are marked with a "-pve8" suffix in the
repo name. They are present twice, since we want to keep the same change
history on the PVE 9 branches as well.

This series so far only handles migration and any changes needed for the new
fields. It does not yet include any GUI patches to add additional graphs to the
summary pages of nodes and guests. Those are currently being worked on, but are
not yet in a state where I want to send patches.

Plans:
* Add GUI parts:
  * additional graphs, mostly for the pressures
  * add more info to the memory graph, e.g. ZFS ARC
  * add the host memory view of guests to graphs and gauges

* pve8to9:
  * add a check that counts the present RRD files and verifies that there is
    enough space on the root FS (a rough sketch of such a check follows below)
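
A very rough sketch of such a check, assuming the RRD files live under
/var/lib/rrdcached/db as on current installations:
```
# how many RRD files would need to be migrated?
find /var/lib/rrdcached/db -type f | wc -l

# current on-disk size of the RRD database vs. free space on the root FS;
# the migrated files will be larger due to the added data sources and RRAs
du -sh /var/lib/rrdcached/db
df -h /
```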


How to test:
1. build pve-cluster on PVE8
2. build the -pve8 patches (cluster & manager) and install them on all PVE8 nodes
3. Upgrade the first node to PVE9/trixie and install all the other patches:
    build all the other repositories, copy the .deb files over and then ideally
    use something like the following to make sure that any dependency is
    satisfied from the local .deb files and not from the apt repositories:
    ```
    apt install ./*.deb --reinstall --allow-downgrades -y
    ```
4. build the migration tool with cargo and copy the binary to the nodes for now.
5. run the migration tool on the first host
6. continue running the migration tool on the other nodes, one by one (a short
    verification sketch follows below)
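
To sanity-check a migrated node, something like the following can be used; the
new directory names are assumptions derived from the pve-{type}-9.0 schema
mentioned above:
```
# the new per-type directories should show up next to the old pve2-* ones
ls /var/lib/rrdcached/db/ | grep -E 'pve-(node|vm|storage)-9\.0'

# inspect one migrated VM file (VM ID 100 as an example): the DS list and
# RRA layout should reflect the new schema
rrdtool info /var/lib/rrdcached/db/pve-vm-9.0/100 | grep -E '^(ds|rra)' | head -n 20
```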


High level changes since:
v1:
* refactored the patches, as they were a bit of a mess in v1, sorry for that;
  now we have distinct patches for PVE 8 for both affected repos (cluster & manager)

RFC:
* drop membuffer and memcached in favor of already present memused and memavailable
* switch from pve9-{type} to pve-{type}-9.0 schema in all places
* add a patch for PVE 8 & 9 that handles different keys in the live status to
  avoid question marks in the UI

cluster-pve8:

Aaron Lauterer (2):
  cfs status.c: drop old pve2-vm rrd schema support
  status: handle new metrics update data

 src/pmxcfs/status.c | 85 ++++++++++++++++++++++++++++-----------------
 1 file changed, 53 insertions(+), 32 deletions(-)


manager-pve8:

Aaron Lauterer (2):
  api2tools: drop old VM rrd schema
  api2tools: extract stats: handle existence of new pve-{type}-9.0 data

 PVE/API2Tools.pm | 44 ++++++++++++++++++++++++--------------------
 1 file changed, 24 insertions(+), 20 deletions(-)


pve9-rrd-migration-tool:

Aaron Lauterer (1):
  introduce rrd migration tool for pve8 -> pve9


cluster:

Aaron Lauterer (4):
  cfs status.c: drop old pve2-vm rrd schema support
  status: handle new metrics update data
  status: introduce new pve-{type}- rrd and metric format
  rrd: adapt to new RRD format with different aggregation windows

 src/PVE/RRD.pm      |  52 ++++++--
 src/pmxcfs/status.c | 318 ++++++++++++++++++++++++++++++++++++++------
 2 files changed, 317 insertions(+), 53 deletions(-)


common:

Folke Gleumes (2):
  fix error in pressure parsing
  add functions to retrieve pressures for vm/ct

 src/PVE/ProcFSTools.pm | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)


manager:

Aaron Lauterer (5):
  api2tools: drop old VM rrd schema
  api2tools: extract stats: handle existence of new pve-{type}-9.0 data
  pvestatd: collect and distribute new pve-{type}-9.0 metrics
  api: nodes: rrd and rrddata add decade option and use new pve-node-9.0
    rrd files
  api2tools: extract_vm_status add new vm memhost column

 PVE/API2/Cluster.pm     |   7 +
 PVE/API2/Nodes.pm       |  16 +-
 PVE/API2Tools.pm        |  47 +++---
 PVE/Service/pvestatd.pm | 342 +++++++++++++++++++++++++++++-----------
 4 files changed, 293 insertions(+), 119 deletions(-)


storage:

Aaron Lauterer (1):
  status: rrddata: use new pve-storage-9.0 rrd location if file is
    present

 src/PVE/API2/Storage/Status.pm | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)


qemu-server:

Aaron Lauterer (3):
  vmstatus: add memhost for host view of vm mem consumption
  vmstatus: switch mem stat to PSS of VM cgroup
  rrddata: use new pve-vm-9.0 rrd location if file is present

Folke Gleumes (1):
  metrics: add pressure to metrics

 src/PVE/API2/Qemu.pm  | 11 ++++++-----
 src/PVE/QemuServer.pm | 33 +++++++++++++++++++++++++++++----
 2 files changed, 35 insertions(+), 9 deletions(-)


container:

Aaron Lauterer (1):
  rrddata: use new pve-vm-9.0 rrd location if file is present

Folke Gleumes (1):
  metrics: add pressures to metrics

 src/PVE/API2/LXC.pm | 11 ++++++-----
 src/PVE/LXC.pm      |  8 ++++++++
 2 files changed, 14 insertions(+), 5 deletions(-)


Summary over all repositories:
  14 files changed, 752 insertions(+), 244 deletions(-)

-- 
Generated by git-murpp 0.8.1