[pbs-devel] [PATCH proxmox-backup 2/2] GC: raise nofile soft limit to the hard limit on s3 backed stores

Christian Ebner c.ebner at proxmox.com
Tue Nov 18 11:45:29 CET 2025


Since commit 86d5d073 ("GC: fix race with chunk upload/insert on s3
backends"), per-chunk file locks are acquired during phase 2 of
garbage collection for datastores backed by s3 object stores. This,
however, means that up to 1000 file locks might be held at once, which
can cause the limit of open file handles to be reached.

Therefore, bump the nofile soft limit to the hard limit.

Signed-off-by: Christian Ebner <c.ebner at proxmox.com>
---
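Note: raise_nofile_limit() itself is introduced by patch 1/2 of this
series. As a rough sketch of the intended behavior, assuming the helper
wraps the raw libc getrlimit/setrlimit calls and returns the previous
limits (the actual implementation in patch 1/2 may differ):

    use anyhow::{bail, Error};

    /// Raise the RLIMIT_NOFILE soft limit to the hard limit and return
    /// the previous limits. Sketch only, see patch 1/2 for the helper.
    pub fn raise_nofile_limit() -> Result<libc::rlimit, Error> {
        let mut rlimit = libc::rlimit {
            rlim_cur: 0,
            rlim_max: 0,
        };
        // SAFETY: rlimit points to a valid, writable struct
        if unsafe { libc::getrlimit(libc::RLIMIT_NOFILE, &mut rlimit) } != 0 {
            bail!("getrlimit failed - {}", std::io::Error::last_os_error());
        }
        let old_rlimit = rlimit;
        if rlimit.rlim_cur < rlimit.rlim_max {
            // bump the soft limit up to the hard limit, which is the most
            // an unprivileged process may request
            rlimit.rlim_cur = rlimit.rlim_max;
            if unsafe { libc::setrlimit(libc::RLIMIT_NOFILE, &rlimit) } != 0 {
                bail!("setrlimit failed - {}", std::io::Error::last_os_error());
            }
        }
        Ok(old_rlimit)
    }

Only the soft limit is raised; the hard limit is left untouched, staying
within what an unprivileged process is allowed to request.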
 pbs-datastore/src/datastore.rs | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 0a5179230..ac22c10c5 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -11,6 +11,7 @@ use http_body_util::BodyExt;
 use hyper::body::Bytes;
 use nix::unistd::{unlinkat, UnlinkatFlags};
 use pbs_tools::lru_cache::LruCache;
+use pbs_tools::raise_nofile_limit;
 use tokio::io::AsyncWriteExt;
 use tracing::{info, warn};
 
@@ -1589,6 +1590,12 @@ impl DataStore {
         let s3_client = match self.backend()? {
             DatastoreBackend::Filesystem => None,
             DatastoreBackend::S3(s3_client) => {
+                // required for per-chunk file locks in GC phase 2 on S3 backed stores
+                let old_rlimit =
+                    raise_nofile_limit().context("failed to raise open file handle limit")?;
+                if old_rlimit.rlim_max <= 4096 {
+                    info!("limit for open file handles low: {}", old_rlimit.rlim_max);
+                }
                 proxmox_async::runtime::block_on(s3_client.head_bucket())
                     .context("failed to reach bucket")?;
                 Some(s3_client)
-- 
2.47.3
