[pbs-devel] [PATCH proxmox-backup v3] etc: raise nofile soft limit to hard limit for proxmox-backup-proxy
Fabian Grünbichler
f.gruenbichler at proxmox.com
Fri Nov 21 10:07:21 CET 2025
On November 21, 2025 9:00 am, Christian Ebner wrote:
> On 11/21/25 8:42 AM, Fabian Grünbichler wrote:
>> - reduce the batch size (which determines the number of concurrently
>> held locks in GC) for S3 deletion
>
> exactly what came to my mind as well :)
>
>>
>> the latter would be a fairly simple patch, but would make GC potentially
>> a bit more expensive (more delete requests to S3):
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index 0a5179230..20372190c 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
>> @@ -1716,6 +1716,24 @@ impl DataStore {
>>          }
>>
>>          chunk_count += 1;
>> +
>> +        drop(_guard);
>> +
>> +        if delete_list.len() > 100 {
>> +            let delete_objects_result = proxmox_async::runtime::block_on(
>> +                s3_client.delete_objects(
>> +                    &delete_list
>> +                        .iter()
>> +                        .map(|(key, _)| key.clone())
>> +                        .collect::<Vec<S3ObjectKey>>(),
>> +                ),
>> +            )?;
>> +            if let Some(_err) = delete_objects_result.error {
>> +                bail!("failed to delete some objects");
>> +            }
>> +            // release all chunk guards
>> +            delete_list.clear();
>> +        }
>>      }
>>
>>      if !delete_list.is_empty() {
>
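(for a rough sense of the extra cost: S3's DeleteObjects API accepts at
most 1,000 keys per request, so a batch limit of 100 means roughly 10x
the delete requests compared to flushing once per full 1,000-object
listing page, e.g. on the order of 10,000 requests instead of 1,000 for
a million dead chunks. this assumes the client sends one DeleteObjects
request per flushed batch.)
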
> Since you already have it in place, do you want to send this patch?
>
> My initial draft was a bit less efficient than this, as I only batched
> per list-objects response.
>
> The only thing I still see missing in your patch is making the 100 a
> named constant, and also using it to set the delete list's capacity on
> instantiation.
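agreed, something along these lines should do (just a sketch, untested;
the GC_S3_DELETE_BATCH_LIMIT constant and the ChunkGuard type name are
made up here, the calls mirror the snippet above):

    const GC_S3_DELETE_BATCH_LIMIT: usize = 100;

    // on instantiation, reserve the full batch capacity up front so
    // the list never reallocates:
    let mut delete_list: Vec<(S3ObjectKey, ChunkGuard)> =
        Vec::with_capacity(GC_S3_DELETE_BATCH_LIMIT);

    // in the sweep loop, flush once the limit is reached; clearing
    // the list drops the guards and releases the chunk locks:
    if delete_list.len() >= GC_S3_DELETE_BATCH_LIMIT {
        let keys: Vec<S3ObjectKey> =
            delete_list.iter().map(|(key, _)| key.clone()).collect();
        let result =
            proxmox_async::runtime::block_on(s3_client.delete_objects(&keys))?;
        if result.error.is_some() {
            bail!("failed to delete some objects");
        }
        delete_list.clear();
    }
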
sent as a three-patch series:
https://lore.proxmox.com/pbs-devel/20251121090605.262675-1-f.gruenbichler@proxmox.com/T/#t