[pbs-devel] [PATCH proxmox-backup 3/3] GC: S3: phase2: delete last partial batch of objects at the very end

Christian Ebner c.ebner at proxmox.com
Fri Nov 21 10:53:30 CET 2025


On 11/21/25 10:45 AM, Fabian Grünbichler wrote:
> On November 21, 2025 10:31 am, Christian Ebner wrote:
>> While going trough the rest of the series in detail now, one idea right
>> away.
>>
>> On 11/21/25 10:06 AM, Fabian Grünbichler wrote:
>>> instead of after processing every batch of 1000 listed objects. This
>>> reduces the number of delete calls made to the backend, making regular
>>> garbage collections that do not delete most objects cheaper, but means
>>> holding the flocks for garbage chunks/objects longer.
>>
>> We could avoid holding the flock for too long (e.g. GC running over
>> several days because of a super slow local datastore cache, S3 backend,
>> ...) by setting (or resetting) a timer on each delete list insert, and
>> deciding whether to perform the deleteObjects() call not only based on
>> the batch size, but also on whether a timeout has elapsed.
>>
>> This would safeguard us against locking some chunks way too long,
>> causing potential issues with concurrent backups, without throwing out
>> all the benefits this patch brings.
>>
>> What do you think? I could send that as followup if you like.
> 
> considerations like this were why I split this out as separate patch ;)
> 
> the loop here basically does:
> - one S3 call to (continue to) list objects in the bucket
>    (potentially expensive), then for each object:
> -- maps each object back to a chunk (free)
> -- does some local operations
>     (stat, empty marker handling - these should not take too long?)
> -- remove from cache if garbage
>     (might take a bit if local storage is very slow?)
> 
> so yeah, we should probably cap the max. number of list calls before we
> trigger the deletion, to avoid locking a garbage chunk from the first 1000
> objects until the end of GC if there is no further garbage to fill up
> the batch of 1000 - either by the number of iterations since the first
> not-yet-processed delete insertion, or via a timestamp?

Maybe it is best to reuse a fraction of the already defined 
CHUNK_LOCK_TIMEOUT, so that even with a lot of list objects 
iterations (many chunks), we don't lose out on batching the delete list?
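To illustrate, here is a rough sketch of how such a combined batch-size /
timeout flush condition could look. The names (CHUNK_LOCK_TIMEOUT,
DELETE_BATCH_SIZE, DeleteBatch) and values are only stand-ins for this
discussion, not the actual constants or types in proxmox-backup:

```rust
use std::time::{Duration, Instant};

// assumed values for illustration only; the real ones may differ
const CHUNK_LOCK_TIMEOUT: Duration = Duration::from_secs(3 * 3600);
const DELETE_BATCH_SIZE: usize = 1000;
// flush after a fraction of the lock timeout, so flocks on garbage
// chunks are never held close to the full timeout just because the
// batch did not fill up
const FLUSH_TIMEOUT: Duration = Duration::from_secs(CHUNK_LOCK_TIMEOUT.as_secs() / 4);

struct DeleteBatch {
    keys: Vec<String>,
    // timestamp of the first not-yet-processed delete insertion
    first_insert: Option<Instant>,
}

impl DeleteBatch {
    fn new() -> Self {
        Self { keys: Vec::new(), first_insert: None }
    }

    fn push(&mut self, key: String) {
        if self.first_insert.is_none() {
            self.first_insert = Some(Instant::now());
        }
        self.keys.push(key);
    }

    // decide whether to trigger the deleteObjects() call now:
    // either the batch is full, or the oldest pending entry has been
    // waiting for longer than the flush timeout
    fn should_flush(&self) -> bool {
        self.keys.len() >= DELETE_BATCH_SIZE
            || self
                .first_insert
                .map(|t| t.elapsed() >= FLUSH_TIMEOUT)
                .unwrap_or(false)
    }

    // hand out the pending keys and reset the timer
    fn take(&mut self) -> Vec<String> {
        self.first_insert = None;
        std::mem::take(&mut self.keys)
    }
}

fn main() {
    let mut batch = DeleteBatch::new();
    for i in 0..DELETE_BATCH_SIZE {
        batch.push(format!("chunk-{i}"));
    }
    // batch is full -> flush regardless of elapsed time
    assert!(batch.should_flush());
    let keys = batch.take();
    assert_eq!(keys.len(), DELETE_BATCH_SIZE);
    // after taking the batch, nothing is pending anymore
    assert!(!batch.should_flush());
    println!("flushed {} keys", keys.len());
}
```

The timer is only armed on the first insert after a flush, so a steady
trickle of garbage still gets batched up to the timeout, while a single
early garbage chunk is not locked until the end of GC.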
