[pbs-devel] [PATCH proxmox-backup v3 1/3] fix #6195: api: datastore: add endpoint for moving namespaces
Hannes Laimer
h.laimer at proxmox.com
Mon Sep 15 10:27:49 CEST 2025
On 15.09.25 10:15, Christian Ebner wrote:
> Thanks for having a go at this issue. I have not yet had an in-depth
> look at it, but unfortunately I am afraid the current implementation
> approach will not work for the S3 backend (and might also cause issues
> for local datastores).
>
> Copying the S3 objects is not an atomic operation and will take some
> time, which leaves you open to race conditions. E.g. while you copy
> contents, a new backup snapshot might be created in one of the already
> copied backup groups, and it would then be deleted afterwards. The
> same is true for pruning and for other metadata-editing operations
> such as adding notes, backup task logs, etc.
>
Yes, but not really. We lock the `active_operations` tracking file, so
no new read or write operations can be started once the move begins;
there is a short comment about this in the API endpoint function.
I am not sure there is much value in more granular locking: is half of
a successful move worth much? Unless we add some kind of rollback, but,
to be honest, I do not think that would be worth the effort.
> So IMO this must be tackled on a group level, making sure to get an
> exclusive lock for each group (on the source as well as the target of
> the move operation) before doing any manipulation. Only then is it
> okay to do any non-atomic operations.
>
> The moving of the namespace must then be implemented as batch operations
> on the groups and sub-namespaces.
>
> This should be handled the same way for regular datastores as well,
> to avoid any races there too.