[pbs-devel] [PATCH v4 proxmox-backup 5/5] fix #5331: garbage collection: avoid multiple chunk atime updates
Thomas Lamprecht
t.lamprecht at proxmox.com
Tue Mar 25 13:07:58 CET 2025
Am 25.03.25 um 12:56 schrieb Thomas Lamprecht:
> Am 21.03.25 um 10:32 schrieb Christian Ebner:
>> To reduce the number of atime updates, keep track of the recently
>> marked chunks in phase 1 of garbage collection, avoiding repeated
>> atime updates via expensive utimensat() calls.
>>
>> Recently touched chunks are tracked by storing their digests in an
>> LRU cache of fixed capacity. Inserting a digest makes the chunk the
>> most recently touched one; if the digest was already present in the
>> cache before the insert, the atime update can be skipped.
>
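The scheme described above can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the type and method names (`RecentChunks`, `touch`) are made up here, and the actual patch uses the existing LRU cache implementation from the proxmox crates rather than this hand-rolled structure.

```rust
use std::collections::{HashSet, VecDeque};

/// Hypothetical sketch of a fixed-capacity LRU set of chunk digests.
/// `touch()` returns true if the chunk was not cached yet (so its atime
/// must be updated) and false if it was already present (update skipped).
struct RecentChunks {
    capacity: usize,
    order: VecDeque<[u8; 32]>, // front = most recently touched
    present: HashSet<[u8; 32]>,
}

impl RecentChunks {
    fn new(capacity: usize) -> Self {
        assert!(capacity > 0);
        Self {
            capacity,
            order: VecDeque::new(),
            present: HashSet::new(),
        }
    }

    /// Mark a chunk digest as touched; returns whether the (expensive)
    /// utimensat() call is still required for this chunk.
    fn touch(&mut self, digest: [u8; 32]) -> bool {
        if self.present.contains(&digest) {
            // Recently touched already: move to front, skip the update.
            if let Some(pos) = self.order.iter().position(|d| *d == digest) {
                self.order.remove(pos);
            }
            self.order.push_front(digest);
            return false;
        }
        // Evict the least recently touched digest when at capacity.
        if self.order.len() == self.capacity {
            if let Some(old) = self.order.pop_back() {
                self.present.remove(&old);
            }
        }
        self.order.push_front(digest);
        self.present.insert(digest);
        true
    }
}

fn main() {
    let mut cache = RecentChunks::new(2);
    // First touch requires an atime update, a repeated one does not.
    assert!(cache.touch([1u8; 32]));
    assert!(!cache.touch([1u8; 32]));
}
```

Note that the `position()` scan makes this sketch O(capacity) per repeated touch; a production LRU cache keeps a digest-to-node map for O(1) moves.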
> Code-wise this looks alright to me, albeit I did not look at it in depth.
> What I'd be interested in is documenting some more thoughts on how the
> cache size was chosen; even if it was mostly arbitrary, stating so can
> help a lot when rethinking this in the future, as one then doesn't have
> to guess whether there was more reasoning behind it.
>
> Also, some basic benchmarks would be great, even if from some randomly
> grown setup, as long as it is described: the overall pool data usage,
> deduplication factor, number of backup groups, number of snapshots and
> their rough age (distribution), plus basic system characteristics like
> the CPU and the parameters of the underlying storage (filesystem type
> and the (block) device type backing it). With that one can classify the
> change well enough.
Oh, and it would naturally be nice to repeat that for a few different LRU
cache sizes to see how that changes things. Here the actual datastore used
matters a bit more; maybe we can use one of our more production-like PBS
instances in the testlab for that and pass the LRU size through an
environment variable or the like in a test build.
Mentioning that the LRU cache should profit from the recent change to
process snapshots in a more logical order would also be good IMO.