[pbs-devel] [PATCH proxmox-backup v3 4/5] datastore: data blob: increase compression throughput
Dominik Csapak
d.csapak at proxmox.com
Mon Aug 5 11:32:55 CEST 2024
On 8/5/24 11:24, Dominik Csapak wrote:
> by not using `zstd::stream::copy_encode`, because that has an allocation
> pattern that reduces throughput if the target/source storage and the
> network are faster than the chunk creation.
>
> instead use `zstd_safe::compress` which shouldn't do any big
> allocations, since we provide the target buffer.
>
> To handle the case that the target buffer is too small, we now ignore
> all zstd errors and continue with the uncompressed data, logging the
> error unless it indicates that the target buffer was too small.
>
> Some benchmarks on my machine from tmpfs to a datastore on tmpfs:
>
> Type               without patches (MiB/s)   with patches (MiB/s)
> .img file          ~614                      ~767
> pxar one big file  ~657                      ~807
> pxar small files   ~576                      ~627
>
> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
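The error-handling pattern described above can be sketched roughly as follows. This is a self-contained illustration, not the actual patch: the enum, function, and closure names are hypothetical, and a stand-in closure takes the place of the real `zstd_safe::compress` call into a preallocated buffer.

```rust
// Hypothetical error type standing in for the zstd error codes.
#[derive(Debug)]
enum CompressError {
    DstBufferTooSmall,
    Other(String),
}

/// Try to compress `data` into a buffer no larger than the input;
/// on any compression error, fall back to the uncompressed data.
fn compress_or_passthrough<F>(data: &[u8], compress: F) -> Vec<u8>
where
    F: Fn(&[u8], &mut [u8]) -> Result<usize, CompressError>,
{
    // Target buffer capped at the input size: if compression cannot
    // shrink the data, storing it uncompressed is no worse.
    let mut target = vec![0u8; data.len()];
    match compress(data, &mut target) {
        Ok(written) => {
            target.truncate(written);
            target
        }
        Err(err) => {
            // Only log unexpected errors; a too-small target buffer
            // just means the chunk is incompressible.
            if !matches!(err, CompressError::DstBufferTooSmall) {
                eprintln!("zstd compression failed: {err:?}");
            }
            data.to_vec()
        }
    }
}

fn main() {
    // Stand-in "compressor" that halves the input when it fits.
    let fake_compress = |src: &[u8], dst: &mut [u8]| {
        let n = src.len() / 2;
        if n > dst.len() {
            return Err(CompressError::DstBufferTooSmall);
        }
        dst[..n].copy_from_slice(&src[..n]);
        Ok(n)
    };

    // Compression succeeds: output is the (shorter) compressed form.
    let out = compress_or_passthrough(&[1u8; 8], fake_compress);
    assert_eq!(out.len(), 4);

    // Compression fails: input is passed through unchanged.
    let failing = |_src: &[u8], _dst: &mut [u8]| -> Result<usize, CompressError> {
        Err(CompressError::DstBufferTooSmall)
    };
    let out = compress_or_passthrough(&[2u8; 8], failing);
    assert_eq!(out, vec![2u8; 8]);
}
```

The key point is that a failed compression is never fatal: the chunk is simply stored uncompressed, avoiding both large intermediate allocations and hard errors on incompressible data.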
@thomas
sorry, i forgot to adapt the commit message to your (or a similar) suggestion.
feel free to either adapt it to your liking or tell me i should send a v4 for this ;)
(which i'll happily do if you want)