[pbs-devel] [PATCH v3 proxmox-backup 49/58] client: backup: increase average chunk size for metadata
Fabian Grünbichler
f.gruenbichler at proxmox.com
Mon Apr 8 10:28:26 CEST 2024
On April 5, 2024 12:49 pm, Dietmar Maurer wrote:
>> for the payload stream, simply accumulating 1..N files (or rather, their
>> contents) in a chunk until a certain size threshold is reached might perform
>> better (as in, both be faster than the current chunker, and give us more/better
>> re-usable chunks).
>
> Sorry, but that way you would never reuse any chunks! How is
> that supposed to work?
the chunk re-use would be moved to the metadata-based caching,
basically:
- big files get a sequence of chunks according to some splitting rules,
those chunks belong exclusively to that file (so if you just modify
a bit at the front, only the first chunk would be new and the rest
still re-used, but with a read penalty)
- smaller files are aggregated into a single chunk; that chunk would not
be re-used if too many of the contained files changed (payload threshold)
it might just trade one set of issues for another (higher padding vs.
less deduplication), not sure.
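
to illustrate the idea (purely a sketch, not the actual proxmox-backup
chunker code): a hypothetical PayloadChunker that buffers small files
until a size threshold is reached and gives big files their own chunk
sequence could look roughly like this. The type name, the 1 MiB small
file cutoff, the 4 MiB target size and the fixed-size split for big
files are all assumptions standing in for "some splitting rules":

// Sketch only: names and thresholds are made up, not the real
// proxmox-backup chunker API.

const SMALL_FILE_LIMIT: usize = 1024 * 1024; // assumed cutoff: 1 MiB
const CHUNK_TARGET_SIZE: usize = 4 * 1024 * 1024; // assumed threshold: 4 MiB

struct PayloadChunker {
    buffer: Vec<u8>,      // accumulates contents of small files
    chunks: Vec<Vec<u8>>, // finished chunks (stand-in for upload)
}

impl PayloadChunker {
    fn new() -> Self {
        Self { buffer: Vec::new(), chunks: Vec::new() }
    }

    /// Feed one file's payload into the chunker.
    fn add_file(&mut self, contents: &[u8]) {
        if contents.len() >= SMALL_FILE_LIMIT {
            // big file: flush pending small files first, then emit a
            // chunk sequence belonging to this file alone, so a change
            // at the front only invalidates the first chunk(s)
            self.flush();
            for part in contents.chunks(CHUNK_TARGET_SIZE) {
                self.chunks.push(part.to_vec());
            }
        } else {
            // small file: aggregate until the threshold is reached
            self.buffer.extend_from_slice(contents);
            if self.buffer.len() >= CHUNK_TARGET_SIZE {
                self.flush();
            }
        }
    }

    /// Emit the currently accumulated small-file data as one chunk.
    fn flush(&mut self) {
        if !self.buffer.is_empty() {
            self.chunks.push(std::mem::take(&mut self.buffer));
        }
    }
}

fn main() {
    let mut chunker = PayloadChunker::new();
    chunker.add_file(&vec![0u8; 100 * 1024]);      // small file, buffered
    chunker.add_file(&vec![1u8; 8 * 1024 * 1024]); // big file, own chunks
    chunker.add_file(&vec![2u8; 200 * 1024]);      // small file, buffered
    chunker.flush();
    println!("produced {} chunks", chunker.chunks.len());
}

flushing before a big file keeps small-file chunks and big-file chunks
separate, which is what would make the per-file re-use in the first
bullet possible at all; the padding/deduplication trade-off above comes
from the aggregated small-file chunks.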