[pbs-devel] [PATCH proxmox{, -backup} v7 00/47] fix #2943: S3 storage backend for datastores
Christian Ebner
c.ebner at proxmox.com
Mon Jul 14 17:40:30 CEST 2025
On 7/14/25 16:33, Lukas Wagner wrote:
> On 2025-07-10 19:06, Christian Ebner wrote:
>> Disclaimer: These patches are still in an experimental state and not
>> intended for production use.
>>
>> This patch series aims to add S3 compatible object stores as storage
>> backend for PBS datastores. A PBS local cache store using the regular
>> datastore layout is used for faster operation, bypassing requests to
>> the S3 API when possible. Further, the local cache store allows keeping
>> frequently used chunks and is used to avoid expensive metadata
>> updates on the object store, e.g. by using local marker files during
>> garbage collection.
>>
>> Backups are created by uploading chunks to the corresponding S3 bucket,
>> while keeping the index files in the local cache store; on backup
>> finish, the snapshot metadata is persisted to the S3 storage backend.
>>
>> Snapshot restores read chunks preferably from the local cache store,
>> downloading and inserting them from the S3 object store if not present.
>> Listing and snapshot metadata operations currently rely solely on
>> the local cache store.
>>
>> Currently chunks use a 1:1 mapping to S3 objects. An advanced packing
>> mechanism for chunks, to significantly reduce the number of API
>> requests and therefore be more cost effective, will be implemented in
>> follow-up patches.
>>
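Side note for readers following along: the restore path described above boils down to roughly the following lookup order. This is only a simplified sketch with made-up helper names, not the actual implementation:

use std::io::Error;

// Illustrative stubs only, not the actual datastore or S3 client API.
fn load_chunk_from_cache(_digest: &[u8; 32]) -> Option<Vec<u8>> { None }
fn download_chunk_from_s3(_digest: &[u8; 32]) -> Result<Vec<u8>, Error> { Ok(Vec::new()) }
fn insert_chunk_into_cache(_digest: &[u8; 32], _data: &[u8]) -> Result<(), Error> { Ok(()) }

/// Chunk lookup on restore: prefer the local cache store and only fall back
/// to the S3 object store, inserting the downloaded chunk into the cache so
/// subsequent reads are served locally.
fn fetch_chunk(digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
    if let Some(data) = load_chunk_from_cache(digest) {
        return Ok(data);
    }
    let data = download_chunk_from_s3(digest)?;
    insert_chunk_into_cache(digest, &data)?;
    Ok(data)
}

fn main() {
    let _ = fetch_chunk(&[0u8; 32]);
}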
>
> Applied these patches on top of the latest proxmox and proxmox-backup master branches and
> tried to thoroughly test this new feature.
>
> Here's what I tested:
> - Backup
> - Restore
> - Prune jobs
> - GC
> - Local sync from/to the S3 datastore with some namespace variations
> - Delete datastore
> - Tried to add the same S3 bucket as a new datastore
>
> I ran into an issue when I attempted to run a verify job, which Chris and I already
> debugged off-list:
>
> - An all-zero, 4MB chunk (hash: bb9f...) will not be uploaded to S3 due to its special usage
> in the atime check during datastore creation.
> This can be easily triggered by backing up a VM with some amount of unused disk space
> to an *unencrypted* S3 datastore. The error surfaces once attempting to do a
> verification job.
> If the chunk is uploaded manually (e.g. using some kind of S3 client CLI), the verification
> job goes through without any problems.
Thanks a lot for testing and for your debugging efforts, I was able to fix
this for the upcoming version of the patches!
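For context: the chunk in question is the well-known all-zero 4 MiB chunk, which shows up for basically any guest disk with unused or zeroed regions. A quick way to reproduce its digest locally is a plain SHA-256 over the uncompressed 4 MiB of zeroes; just a small sketch using the sha2 and hex crates, not PBS code:

use sha2::{Digest, Sha256};

fn main() {
    // 4 MiB of zeroes, as produced for unused/zeroed regions of a guest disk
    let zero_chunk = vec![0u8; 4 * 1024 * 1024];
    let digest = Sha256::digest(&zero_chunk);
    // should print the digest starting with "bb9f..." mentioned above
    println!("{}", hex::encode(digest.as_slice()));
}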
> Some UI/UX observations:
> - Minor: It would be easier to understand if "Unique Identifier" in the S3 client view
> and "S3 Client ID" when adding the datastore were unified (I prefer the latter, as it seems clearer to me)
Okay, adapted this as well for the S3 client view and create window.
Also added the still missing CLI commands for S3 client manipulation.
> - Minor: The "Host" column in the "Add Datastore" -> S3 Client ID picker does not show
> anything for me.
Ah, the field here got renamed from host to endpoint, as that is a better
fit. Fixed this as well, thanks.
> - It might make sense to make it a bit easier to re-add an existing S3 bucket that was already
> used as a datastore before - right now, it is a bit unintuitive.
> Right now, to "import" an existing bucket, one has to:
> - Use the same datastore name (since it is used in the object key)
> - Enter the same bucket name (makes sense)
> - Make sure that "reuse existing datastore" is *not* ticked (confusing)
> - Press "S3 sync" after adding the datastore (could be automatic)
>
> I think we might be able to reuse the 'reuse datastore' flag and change its behavior
> for S3 datastores to do the right thing automatically, which would be to
> recreate a local cache and then do the S3 sync to get the list of snapshots
> from the bucket.
Okay, I will have a go at this tomorrow and see if I manage to adapt this
as well. I agree that reusing the "reuse existing datastore" flag and an
automatic s3-refresh might be more intuitive here.
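Roughly what I have in mind, as a sketch of the intended flow only (all function names are made up, not actual PBS API):

use std::io::Error;

// Illustrative stubs, not the actual PBS datastore API.
fn create_local_cache_layout(_name: &str) -> Result<(), Error> { Ok(()) }
fn trigger_s3_refresh(_name: &str) -> Result<(), Error> { Ok(()) }

/// Create path for an S3 backed datastore: the local cache store is always
/// (re)created, and with the reuse flag set the snapshot metadata already
/// present on the S3 backend is pulled in instead of requiring an empty bucket.
fn create_s3_datastore(name: &str, reuse_existing: bool) -> Result<(), Error> {
    create_local_cache_layout(name)?;
    if reuse_existing {
        trigger_s3_refresh(name)?;
    }
    Ok(())
}

fn main() -> Result<(), Error> {
    create_s3_datastore("somestore", true)
}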
> In the long term it could be nice to actually try to list the contents of
> a bucket and use some heuristics to "find" existing datastores in the bucket
> (could be as easy as trying to find some key that contains ".chunks" in the
> second level, e.g. somestore/.chunks/...)
> and showing them in some drop-down in the dialog.
Keeping this in mind, but it is out of scope for this series; I would
rather focus on consolidating the current patches for now.
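That said, the heuristic you describe should be rather simple, something along these lines (rough sketch only, using the key layout from your example):

/// Guess which top-level prefixes in a bucket listing look like PBS
/// datastores by checking for a ".chunks" component at the second level,
/// e.g. somestore/.chunks/... (sketch only, example keys are made up).
fn find_datastore_candidates(keys: &[&str]) -> Vec<String> {
    let mut stores = Vec::new();
    for key in keys {
        let mut parts = key.split('/');
        if let (Some(store), Some(".chunks")) = (parts.next(), parts.next()) {
            if !store.is_empty() && !stores.iter().any(|s| s == store) {
                stores.push(store.to_string());
            }
        }
    }
    stores
}

fn main() {
    let keys = [
        "somestore/.chunks/bb9f/bb9f...",
        "somestore/some-other-object",
        "otherstore/.chunks/0000/0000...",
    ];
    println!("{:?}", find_datastore_candidates(&keys));
}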
> Keeping the use case of 'reusing' an S3 bucket in mind, maybe it would make
> sense to mark 'ownership' of a datastore in the bucket, e.g. in some special marker
> object (could contain the host name, host key fingerprint, machine-id, etc.),
> so as to make it harder to accidentally use the same datastore from multiple PBS servers.
> There could be an "export" mechanism, effectively giving up the ownership by clearing
> the marker, signalling it to be safe to re-add it to another PBS server.
> Just capturing some thoughts here. :)
Hmm, I will keep this in mind as well, although I do not see the benefit
of storing the ownership per se.
Ownership and permissions on the bucket and sub-objects are best handled
by the provider and their ACLs on tokens.
But adding a marker which flags the store as in use seems a good idea
and I will see if it makes sense to add this already. If the user wants
to reuse a datastore for a PBS instance which is no longer available or
failed, removing the marker by some other means (e.g. provider tooling)
first should be acceptable as a fail-safe, I think.
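To sketch the direction I have in mind, the marker could be a small serializable object along these lines (field names purely illustrative, nothing decided yet; sketch uses serde/serde_json):

use serde::{Deserialize, Serialize};

/// Possible shape of an "in use" marker object stored alongside the
/// datastore contents in the bucket. All field names are illustrative only.
#[derive(Serialize, Deserialize)]
struct InUseMarker {
    /// hostname of the PBS instance currently using the datastore
    hostname: String,
    /// machine-id of that instance
    machine_id: String,
    /// host key fingerprint, as suggested above
    host_key_fingerprint: String,
    /// epoch timestamp of when the marker was written
    created: i64,
}

fn main() -> Result<(), serde_json::Error> {
    let marker = InUseMarker {
        hostname: "pbs.example.com".into(),
        machine_id: "example-machine-id".into(),
        host_key_fingerprint: "aa:bb:cc:...".into(),
        created: 0, // example value
    };
    println!("{}", serde_json::to_string_pretty(&marker)?);
    Ok(())
}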