[pbs-devel] [PATCH proxmox v3 2/2] s3-client: add helper method to force final unconditional upload on

Fabian Grünbichler f.gruenbichler at proxmox.com
Mon Oct 27 15:10:32 CET 2025


On October 15, 2025 6:40 pm, Christian Ebner wrote:
> Extend the currently implemented conditional/unconditional upload
> helpers with an additional variant which performs conditional
> upload requests up until the last one. The last request is sent
> unconditionally, without setting the If-None-Match header. The use
> case for this is to not fail in PBS during chunk upload if a
> concurrent upload of the same chunk is in progress and does not
> finish within the upload retries with backoff time.
> 
> Which PUT request results in the final object is, however, not
> clearly specified in that case; the AWS docs state contradictory
> behaviour [0]. Quotes from different parts of the docs:
> 
>> If two PUT requests are simultaneously made to the same key, the
>> request with the latest timestamp wins.
>> [...]
>> Amazon S3 internally uses last-writer-wins semantics to determine
>> which write takes precedence.
> 
> [0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel
> 
> Signed-off-by: Christian Ebner <c.ebner at proxmox.com>
> ---
>  proxmox-s3-client/src/client.rs | 32 ++++++++++++++++++++++++++++----
>  1 file changed, 28 insertions(+), 4 deletions(-)
> 
> diff --git a/proxmox-s3-client/src/client.rs b/proxmox-s3-client/src/client.rs
> index 4ebd8c4b..fae8a56f 100644
> --- a/proxmox-s3-client/src/client.rs
> +++ b/proxmox-s3-client/src/client.rs
> @@ -684,7 +684,26 @@ impl S3Client {
>          object_data: Bytes,
>      ) -> Result<bool, Error> {
>          let replace = false;
> -        self.do_upload_with_retry(object_key, object_data, replace)
> +        let finally_replace = false;
> +        self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
> +            .await
> +    }
> +
> +    /// Upload the given object via the S3 api, not replacing it if already present in the object
> +    /// store. If a conditional upload leads to repeated failures with status code 409, do not set
> +    /// the `If-None-Match` header for the final retry.
> +    /// Retrying up to 3 times in case of error.
> +    ///
> +    /// Note: Which object results in the final version is not clearly specified.
> +    #[inline(always)]
> +    pub async fn upload_replace_on_final_retry(
> +        &self,
> +        object_key: S3ObjectKey,
> +        object_data: Bytes,
> +    ) -> Result<bool, Error> {
> +        let replace = false;
> +        let finally_replace = true;
> +        self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
>              .await
>      }
>  
> @@ -697,17 +716,19 @@ impl S3Client {
>          object_data: Bytes,
>      ) -> Result<bool, Error> {
>          let replace = true;
> -        self.do_upload_with_retry(object_key, object_data, replace)
> +        let finally_replace = false;
> +        self.do_upload_with_retry(object_key, object_data, replace, finally_replace)
>              .await
>      }
>  
>      /// Helper to perform the object upload and retry, wrapped by the corresponding methods
> -    /// to mask the `replace` flag.
> +    /// to mask the `replace` and `finally_replace` flag.
>      async fn do_upload_with_retry(
>          &self,
>          object_key: S3ObjectKey,
>          object_data: Bytes,
> -        replace: bool,
> +        mut replace: bool,
> +        finally_replace: bool,
>      ) -> Result<bool, Error> {
>          let content_size = object_data.len() as u64;
>          let timeout_secs = content_size
> @@ -719,6 +740,9 @@ impl S3Client {
>                  let backoff_secs = S3_HTTP_REQUEST_RETRY_BACKOFF_DEFAULT * 3_u32.pow(retry as u32);
>                  tokio::time::sleep(backoff_secs).await;
>              }
> +            if retry == MAX_S3_UPLOAD_RETRY - 1 {
> +                replace = finally_replace;
> +            }

same question here as with the previous patch - the description above
says the finally-replace logic triggers if the earlier attempts
returned 409.. but here it now happens unconditionally, even if the
retries were caused by other errors?

so either the replace fallback should move into the NeedsRetry handling
below, or the documentation above needs to be adapted to match the code
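to illustrate the first option, a rough standalone sketch of moving the
fallback into the conflict path - note that `upload_with_retry` and the
scripted status codes here are invented for the example, only
`MAX_S3_UPLOAD_RETRY` and the `replace`/`finally_replace` flags mirror
the patch:

```rust
const MAX_S3_UPLOAD_RETRY: usize = 3;

/// Sketch of the suggested control flow: the If-None-Match condition is
/// only dropped on the final attempt if the *previous* attempt failed
/// with 409, not for arbitrary retryable errors.
///
/// `responses` scripts the HTTP status each put_object attempt would
/// return; the function records which `replace` flag each attempt used.
fn upload_with_retry(responses: &[u16], finally_replace: bool) -> Vec<bool> {
    let mut replace = false;
    let mut last_status = 0u16;
    let mut flags_used = Vec::new();

    for retry in 0..MAX_S3_UPLOAD_RETRY {
        // fallback gated on the previous attempt having hit the
        // conditional-upload conflict, instead of unconditionally
        if retry == MAX_S3_UPLOAD_RETRY - 1 && last_status == 409 {
            replace = finally_replace;
        }
        flags_used.push(replace);

        last_status = responses[retry];
        if last_status == 200 {
            break; // upload succeeded
        }
        // real code would sleep with backoff here before retrying
    }
    flags_used
}

fn main() {
    // concurrent-upload case: repeated 409s relax the final attempt
    assert_eq!(upload_with_retry(&[409, 409, 200], true), vec![false, false, true]);
    // other transient errors keep the conditional semantics throughout
    assert_eq!(upload_with_retry(&[500, 500, 200], true), vec![false, false, false]);
    println!("ok");
}
```

with that shape the documented behaviour ("repeated failures with
status code 409") and the code would agree.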

>              let body = Body::from(object_data.clone());
>              match self
>                  .put_object(object_key.clone(), body, timeout, replace)
> -- 
> 2.47.3
> 
> 
> 
> _______________________________________________
> pbs-devel mailing list
> pbs-devel at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
> 
> 
> 



