[pbs-devel] [PATCH proxmox-backup v3 1/8] api/pull: avoid failing on concurrent conditional chunk uploads

Christian Ebner c.ebner at proxmox.com
Wed Oct 15 18:40:01 CEST 2025


Chunks are currently uploaded conditionally by setting the
`If-None-Match` header on the PUT request (unless disabled by
provider quirks).
In that case, an upload to the s3 backend while a concurrent upload
to the same object is still in progress causes the request to return
HTTP status code 409 [0]. Although retry logic with a backoff time is
used, the concurrent upload might still not be finished once the
retries are exhausted.

Therefore, use the `upload_replace_on_final_retry` method instead,
which does not set the `If-None-Match` header on the last retry,
effectively re-uploading the object in that case. While it is not
specified which of the concurrent uploads ends up as the resulting
object version, this is fine, as chunks with the same digest encode
the same data (modulo compression).

[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax
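
For illustration only (not part of this patch), the sketch below shows
the retry shape described above, using a hypothetical `put_object`
closure, a `PutOutcome` enum and a synchronous backoff; the actual
client method is async and its signature differs:

    use std::time::Duration;

    /// Hypothetical outcome of a single PUT attempt against the backend.
    enum PutOutcome {
        /// The object was stored, or already existed with the same content.
        Stored { is_duplicate: bool },
        /// HTTP 409: a concurrent conditional upload to the same key is in flight.
        ConcurrentConflict,
    }

    /// Every attempt except the last sends `If-None-Match: *`; the final
    /// attempt drops the header and unconditionally replaces the object,
    /// which is safe because equal digests imply equal chunk data.
    fn upload_replace_on_final_retry_sketch<F>(
        mut put_object: F, // hypothetical single-attempt upload, takes `if_none_match`
        max_retries: usize,
        backoff: Duration,
    ) -> Result<bool, String>
    where
        F: FnMut(bool) -> PutOutcome,
    {
        for attempt in 0..=max_retries {
            let is_last_attempt = attempt == max_retries;
            match put_object(!is_last_attempt) {
                PutOutcome::Stored { is_duplicate } => return Ok(is_duplicate),
                PutOutcome::ConcurrentConflict if is_last_attempt => {
                    // Should not happen: the last attempt is unconditional.
                    return Err("conflict on unconditional upload".to_string());
                }
                PutOutcome::ConcurrentConflict => std::thread::sleep(backoff),
            }
        }
        unreachable!("the final iteration always returns")
    }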

Signed-off-by: Christian Ebner <c.ebner at proxmox.com>
---
 src/api2/backup/upload_chunk.rs | 4 ++--
 src/server/pull.rs              | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/api2/backup/upload_chunk.rs b/src/api2/backup/upload_chunk.rs
index 8dd7e4d52..64e8d6e63 100644
--- a/src/api2/backup/upload_chunk.rs
+++ b/src/api2/backup/upload_chunk.rs
@@ -263,7 +263,7 @@ async fn upload_to_backend(
             if env.no_cache {
                 let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
                 let is_duplicate = s3_client
-                    .upload_no_replace_with_retry(object_key, data)
+                    .upload_replace_on_final_retry(object_key, data)
                     .await
                     .map_err(|err| format_err!("failed to upload chunk to s3 backend - {err:#}"))?;
                 return Ok((digest, size, encoded_size, is_duplicate));
@@ -287,7 +287,7 @@ async fn upload_to_backend(
             tracing::info!("Upload of new chunk {}", hex::encode(digest));
             let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
             let is_duplicate = s3_client
-                .upload_no_replace_with_retry(object_key, data.clone())
+                .upload_replace_on_final_retry(object_key, data.clone())
                 .await
                 .map_err(|err| format_err!("failed to upload chunk to s3 backend - {err:#}"))?;
 
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 817b57ac5..c0b6fef7c 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -181,7 +181,7 @@ async fn pull_index_chunks<I: IndexFile>(
                     let upload_data = hyper::body::Bytes::from(data);
                     let object_key = pbs_datastore::s3::object_key_from_digest(&digest)?;
                     let _is_duplicate = proxmox_async::runtime::block_on(
-                        s3_client.upload_no_replace_with_retry(object_key, upload_data),
+                        s3_client.upload_replace_on_final_retry(object_key, upload_data),
                     )
                     .context("failed to upload chunk to s3 backend")?;
                 }
-- 
2.47.3
