[pbs-devel] [PATCH sys 2/2] sys: procfs: read_meminfo: use MemAvailable from kernel to compute memused
Fabian Grünbichler
f.gruenbichler at proxmox.com
Tue Mar 18 12:09:35 CET 2025
On March 13, 2025 12:45 pm, Dietmar Maurer wrote:
> The current code does not consider "SReclaimable" as cached memory,
> which can lead to totally wrong values for memfree/memused.
>
> This fix uses MemAvailable to compute memused, and returns MemFree
> without adding buffers and caches. This corresponds to the
> values displayed by the "free" command line tool (total, used, free, available),
> where used+available=total.
>
> The value for buffers+cache can be estimated with: available - free
>
> Also adds a simple test case for the parser.
the change itself is a step in the right direction for sure, as the
value previously returned for free was a bogus middle ground between
actually free and available..
some nits/small comments below..
> Signed-off-by: Dietmar Maurer <dietmar at proxmox.com>
> ---
> proxmox-sys/src/linux/procfs/mod.rs | 85 +++++++++++++++++++++++++++--
> 1 file changed, 80 insertions(+), 5 deletions(-)
>
> diff --git a/proxmox-sys/src/linux/procfs/mod.rs b/proxmox-sys/src/linux/procfs/mod.rs
> index f9dbb5bc..46452d8f 100644
> --- a/proxmox-sys/src/linux/procfs/mod.rs
> +++ b/proxmox-sys/src/linux/procfs/mod.rs
> @@ -432,24 +432,29 @@ fn parse_proc_meminfo(text: &str) -> Result<ProcFsMemInfo, Error> {
> swapused: 0,
> };
>
> - let (mut buffers, mut cached) = (0, 0);
> + let mut available = 0;
> +
> for line in text.lines() {
> let mut content_iter = line.split_whitespace();
> if let (Some(key), Some(value)) = (content_iter.next(), content_iter.next()) {
> match key {
> "MemTotal:" => meminfo.memtotal = value.parse::<u64>()? * 1024,
> "MemFree:" => meminfo.memfree = value.parse::<u64>()? * 1024,
> + "MemAvailable:" => available = value.parse::<u64>()? * 1024,
> "SwapTotal:" => meminfo.swaptotal = value.parse::<u64>()? * 1024,
> "SwapFree:" => meminfo.swapfree = value.parse::<u64>()? * 1024,
> - "Buffers:" => buffers = value.parse::<u64>()? * 1024,
> - "Cached:" => cached = value.parse::<u64>()? * 1024,
> _ => continue,
> }
> }
> }
>
> - meminfo.memfree += buffers + cached;
> - meminfo.memused = meminfo.memtotal - meminfo.memfree;
> + // see free.c and meminfo.h from https://gitlab.com/procps-ng/procps
> + // Note: USED+FREE != TOTAL (USED+FREE+BUFFER+CACHED+SRECLAIMABLE == TOTAL)
> +
> + meminfo.memused = meminfo.memtotal - available;
> +
> + // Available memory is: meminfo.memtotal - meminfo.memused
this is already contained in the code line directly above it, so that
comment doesn't really give the reader any new information?
> + // Buffers + CACHE = meminfo.memtotal - meminfo.memused - meminfo.free
and this is wrong as it is missing SRECLAIMABLE (and potentially other, similar categories in the future)
maybe we could just document `ProcFsMemInfo` and its fields properly,
which should make the semantics of each of them clear?
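e.g. something along these lines - just a rough sketch re-using the field names
from this patch, doc text is only a suggestion and derives/attributes are left out:

    /// Memory information parsed from `/proc/meminfo`, all values in bytes.
    pub struct ProcFsMemInfo {
        /// Total usable RAM (`MemTotal`).
        pub memtotal: u64,
        /// Completely unused RAM (`MemFree`), not counting buffers or caches.
        pub memfree: u64,
        /// Memory the kernel cannot make available for new allocations,
        /// i.e. `MemTotal - MemAvailable`.
        pub memused: u64,
        /// Total swap space (`SwapTotal`).
        pub swaptotal: u64,
        /// Unused swap space (`SwapFree`).
        pub swapfree: u64,
        /// Used swap space, `SwapTotal - SwapFree`.
        pub swapused: u64,
    }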
previously, we returned three values for memory usage that were actually
just two:
- memfree
- memtotal
- (memused, which was just the difference of those)
now we return three values that are actually three values and omit the
fourth one:
- memfree (with different semantics than before)
- memtotal
- memused, which is the difference of memtotal and a fourth value
  (memavailable) that is not directly returned
should we add memavailable as well? it basically has the same semantics
as the old memfree, but correctly counting all the relevant memory..
in PBS we actually only use memtotal and memused for the RRD graph and
pull metrics, but all three values for the status API return value and
"push" (external) metrics - returning available there as well saves
consumers of that data from having to invert the calculation we did in
the backend if they are interested in that number.
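for illustration, a reduced, self-contained sketch of what keeping the value
around could look like - the `memavailable` field and the `*_sketch` names are
made up for this mail, and I'm assuming anyhow's Error like the existing parser
seems to use:

    use anyhow::Error;

    // reduced sketch only, mirroring parse_proc_meminfo() from this patch
    pub struct MemInfoSketch {
        pub memtotal: u64,
        pub memfree: u64,
        pub memavailable: u64, // hypothetical new field
        pub memused: u64,
    }

    fn parse_meminfo_sketch(text: &str) -> Result<MemInfoSketch, Error> {
        let mut info = MemInfoSketch { memtotal: 0, memfree: 0, memavailable: 0, memused: 0 };
        for line in text.lines() {
            let mut iter = line.split_whitespace();
            if let (Some(key), Some(value)) = (iter.next(), iter.next()) {
                match key {
                    "MemTotal:" => info.memtotal = value.parse::<u64>()? * 1024,
                    "MemFree:" => info.memfree = value.parse::<u64>()? * 1024,
                    "MemAvailable:" => info.memavailable = value.parse::<u64>()? * 1024,
                    _ => continue,
                }
            }
        }
        // used = total - available, as in this patch, but available itself is
        // kept so API/metrics consumers don't have to re-derive it
        info.memused = info.memtotal - info.memavailable;
        Ok(info)
    }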
comparing old and new:
- memfree will go down by Buf+Cached
- used will go down by SReclaimable (and potentially other memory usage
counted as available)
so I expect users will notice in any case ;)
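to put some numbers on that, a quick back-of-the-envelope comparison with the
sample values from the test added below (the test here is just for this mail,
not something to commit):

    #[test]
    fn compare_old_and_new_memory_accounting() {
        // sample values in kB, taken from the /proc/meminfo test data below
        let total: u64 = 32_752_584;
        let free: u64 = 2_106_048;
        let available: u64 = 13_301_592;
        let buffers: u64 = 0;
        let cached: u64 = 490_072;
        let sreclaimable: u64 = 11_525_744;

        // old code: memfree included Buffers + Cached, memused was the remainder
        let old_free = free + buffers + cached; // 2_596_120 kB
        let old_used = total - old_free; // 30_156_464 kB

        // new code: memfree is plain MemFree, memused is MemTotal - MemAvailable
        let new_used = total - available; // 19_450_992 kB

        // memfree drops by exactly Buffers + Cached ...
        assert_eq!(old_free - free, buffers + cached);
        // ... and memused drops by roughly SReclaimable (MemAvailable also
        // accounts for watermarks etc., so it is not an exact match)
        assert!(old_used - new_used <= sreclaimable); // ~10.2 GiB vs ~11.0 GiB
    }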
>
> meminfo.swapused = meminfo.swaptotal - meminfo.swapfree;
>
> @@ -462,6 +467,76 @@ fn parse_proc_meminfo(text: &str) -> Result<ProcFsMemInfo, Error> {
> Ok(meminfo)
> }
>
> +#[test]
> +fn test_read_proc_meminfo() {
> + let meminfo = parse_proc_meminfo(
> + "MemTotal: 32752584 kB
> +MemFree: 2106048 kB
> +MemAvailable: 13301592 kB
> +Buffers: 0 kB
> +Cached: 490072 kB
> +SwapCached: 0 kB
> +Active: 658700 kB
> +Inactive: 59528 kB
> +Active(anon): 191996 kB
> +Inactive(anon): 49880 kB
> +Active(file): 466704 kB
> +Inactive(file): 9648 kB
> +Unevictable: 16008 kB
> +Mlocked: 12936 kB
> +SwapTotal: 3 kB
> +SwapFree: 2 kB
> +Zswap: 0 kB
> +Zswapped: 0 kB
> +Dirty: 0 kB
> +Writeback: 0 kB
> +AnonPages: 244204 kB
> +Mapped: 66032 kB
> +Shmem: 9960 kB
> +KReclaimable: 11525744 kB
> +Slab: 21002876 kB
> +SReclaimable: 11525744 kB
> +SUnreclaim: 9477132 kB
> +KernelStack: 6816 kB
> +PageTables: 4812 kB
> +SecPageTables: 0 kB
> +NFS_Unstable: 0 kB
> +Bounce: 0 kB
> +WritebackTmp: 0 kB
> +CommitLimit: 16376292 kB
> +Committed_AS: 316368 kB
> +VmallocTotal: 34359738367 kB
> +VmallocUsed: 983836 kB
> +VmallocChunk: 0 kB
> +Percpu: 12096 kB
> +HardwareCorrupted: 0 kB
> +AnonHugePages: 0 kB
> +ShmemHugePages: 0 kB
> +ShmemPmdMapped: 0 kB
> +FileHugePages: 0 kB
> +FilePmdMapped: 0 kB
> +Unaccepted: 0 kB
> +HugePages_Total: 0
> +HugePages_Free: 0
> +HugePages_Rsvd: 0
> +HugePages_Surp: 0
> +Hugepagesize: 2048 kB
> +Hugetlb: 0 kB
> +DirectMap4k: 237284 kB
> +DirectMap2M: 13281280 kB
> +DirectMap1G: 22020096 kB
> +",
> + )
> + .expect("successfully parsed a sample /proc/meminfo entry");
> +
> + assert_eq!(meminfo.memtotal, 33538646016);
> + assert_eq!(meminfo.memused, 19917815808);
> + assert_eq!(meminfo.memfree, 2156593152);
> + assert_eq!(meminfo.swapfree, 2048);
> + assert_eq!(meminfo.swaptotal, 3072);
> + assert_eq!(meminfo.swapused, 1024);
> +}
> +
> #[derive(Clone, Debug)]
> pub struct ProcFsCPUInfo {
> pub user_hz: f64,
> --
> 2.39.5
>
>