[pbs-devel] [PATCH proxmox-backup 1/2] datastore: data blob: increase compression throughput
Dominik Csapak
d.csapak at proxmox.com
Tue Jul 23 12:10:36 CEST 2024
by not using `zstd::stream::copy_encode`, because it has an allocation
pattern that reduces throughput when the target/source storage and the
network are faster than the chunk creation.
Instead, use `zstd::bulk::compress_to_buffer`, which shouldn't do any big
allocations, since we provide the target buffer.
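For illustration, the difference boils down to roughly the following
(a simplified sketch, not the patch code itself; the function names are
made up, only the two `zstd` crate calls are the real API):

    // old approach: stream through an encoder with its own internal buffering
    fn compress_streaming(data: &[u8], out: &mut Vec<u8>) -> std::io::Result<()> {
        zstd::stream::copy_encode(data, out, 1)
    }

    // new approach: compress directly into a pre-sized, caller-provided slice
    // and get back the number of compressed bytes written
    fn compress_bulk(data: &[u8], out: &mut [u8]) -> std::io::Result<usize> {
        zstd::bulk::compress_to_buffer(data, out, 1)
    }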
To handle the case where the target buffer is too small, we now ignore
all zstd errors and continue with the uncompressed data, logging the
error unless it is the 'target buffer too small' one.
For now, we have to parse the error string for that, as `zstd` maps all
errors to `io::ErrorKind::Other`. Until that gets changed, there is no
other way to differentiate between the different kinds of errors.
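Roughly, that check looks like this (a sketch with a made-up helper name;
the matched string is simply what current zstd versions happen to produce,
so it is admittedly fragile):

    // zstd reports every failure as io::ErrorKind::Other, so the message text
    // is currently the only way to single out the 'destination too small' case
    fn is_target_buffer_too_small(err: &std::io::Error) -> bool {
        err.to_string().contains("Destination buffer is too small")
    }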
In my local benchmarks from tmpfs to tmpfs on localhost, where I
previously maxed out at ~450MiB/s, I now get ~625MiB/s throughput.
Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
---
Note: if we want different behavior for the errors, that's also OK with
me, but I guess zstd errors should be rare (except the 'target buffer too
small' one), and in that case I find it better to continue with
uncompressed data. If it was a transient error, the next upload of the
chunk will replace the uncompressed one anyway, provided the compressed
version is smaller.
pbs-datastore/src/data_blob.rs | 31 +++++++++++++++++++++----------
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/pbs-datastore/src/data_blob.rs b/pbs-datastore/src/data_blob.rs
index a7a55fb7..92242076 100644
--- a/pbs-datastore/src/data_blob.rs
+++ b/pbs-datastore/src/data_blob.rs
@@ -136,7 +136,8 @@ impl DataBlob {
             DataBlob { raw_data }
         } else {
-            let max_data_len = data.len() + std::mem::size_of::<DataBlobHeader>();
+            let header_len = std::mem::size_of::<DataBlobHeader>();
+            let max_data_len = data.len() + header_len;
 
             if compress {
                 let mut comp_data = Vec::with_capacity(max_data_len);
@@ -147,15 +148,25 @@ impl DataBlob {
                 unsafe {
                     comp_data.write_le_value(head)?;
                 }
-
-                zstd::stream::copy_encode(data, &mut comp_data, 1)?;
-
-                if comp_data.len() < max_data_len {
-                    let mut blob = DataBlob {
-                        raw_data: comp_data,
-                    };
-                    blob.set_crc(blob.compute_crc());
-                    return Ok(blob);
+                comp_data.resize(max_data_len, 0u8);
+
+                match zstd::bulk::compress_to_buffer(data, &mut comp_data[header_len..], 1) {
+                    Ok(size) if size <= data.len() => {
+                        comp_data.resize(header_len + size, 0u8);
+                        let mut blob = DataBlob {
+                            raw_data: comp_data,
+                        };
+                        blob.set_crc(blob.compute_crc());
+                        return Ok(blob);
+                    }
+                    // if the size is bigger than the data, or any error is returned, continue
+                    // with the non-compressed archive, but log all errors besides 'buffer too small'
+                    Ok(_) => {}
+                    Err(err) => {
+                        if !err.to_string().contains("Destination buffer is too small") {
+                            log::error!("zstd compression error: {err}");
+                        }
+                    }
                 }
             }
--
2.39.2