[pbs-devel] [PATCH proxmox-backup 02/17] api/datastore: move group notes setting to the datastore
Christian Ebner
c.ebner at proxmox.com
Tue Nov 4 10:37:05 CET 2025
On 11/4/25 10:13 AM, Fabian Grünbichler wrote:
> On November 4, 2025 9:51 am, Christian Ebner wrote:
>> On 11/3/25 3:51 PM, Fabian Grünbichler wrote:
>>> On November 3, 2025 12:31 pm, Christian Ebner wrote:
>>>> In an effort to abstract away the datastore backend related logic
>>>> from the api, move the group notes setting into a new
>>>> set_group_notes helper method on the datastore.
>>>>
>>>> The new helper method now also ensures that the exclusive lock on
>>>> the backup group is acquired before updating the notes, in order to
>>>> avoid possible race conditions, e.g. with backup group destruction.
>>>>
>>>> Signed-off-by: Christian Ebner <c.ebner at proxmox.com>
>>>> ---
>>>>  pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++++
>>>>  src/api2/admin/datastore.rs    | 17 ++++-------------
>>>>  2 files changed, 30 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>>>> index 127ba1c81..45f315aeb 100644
>>>> --- a/pbs-datastore/src/datastore.rs
>>>> +++ b/pbs-datastore/src/datastore.rs
>>>> @@ -2418,4 +2418,30 @@ impl DataStore {
>>>>              .map_err(|err| format_err!("{err:#}"))?;
>>>>          Ok((backend_type, Some(s3_client)))
>>>>      }
>>>> +
>>>> +    /// Creates or updates the notes associated with a backup group.
>>>> +    /// Acquires exclusive lock on the backup group.
>>>> +    pub fn set_group_notes(
>>>> +        self: &Arc<Self>,
>>>> +        notes: String,
>>>> +        backup_group: BackupGroup,
>>>> +    ) -> Result<(), Error> {
>>>> +        let _lock = backup_group.lock().context("failed to lock backup group")?;
>>>
>>> this takes an exclusive lock on the group, which means all sorts of
>>> other operations (including creating new snapshots?) are blocked
>>> while it is held
>>>
>>>> +
>>>> +        if let DatastoreBackend::S3(s3_client) = self.backend()? {
>>>> +            let mut path = backup_group.backup_ns().path();
>>>> +            path.push(backup_group.group().to_string());
>>>> +            let object_key = crate::s3::object_key_from_path(&path, "notes")
>>>> +                .context("invalid notes object key")?;
>>>> +            let data = hyper::body::Bytes::copy_from_slice(notes.as_bytes());
>>>> +            let _is_duplicate = proxmox_async::runtime::block_on(
>>>> +                s3_client.upload_replace_with_retry(object_key, data),
>>>
>>> but this can take a while, right? FWIW, the same is true of setting the
>>> backup owner..
>>
>> Yes, but I do not see an alternative here. This needs to be locked,
>> as otherwise one might run into consistency issues again, which is
>> especially important when setting the backup owner. Or is there a way
>> around this that I'm failing to see?
>
> either not lock here, or introduce a notes-specific lock file that is
> held for the full upload+local replace, and lock the group only for the
> local part
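If I understand you correctly, that would look roughly like the
following sketch (lock_group_notes is a made-up helper name for the
notes-specific lock file, and the local write mirrors what the api2
code does today via group_notes_path + replace_file, so the exact
signatures are approximate):

pub fn set_group_notes(
    self: &Arc<Self>,
    notes: String,
    backup_group: BackupGroup,
) -> Result<(), Error> {
    // Hypothetical notes-specific lock, held for the full S3 upload
    // plus the local replace; name and semantics are illustrative only.
    let _notes_lock = lock_group_notes(&backup_group)?;

    if let DatastoreBackend::S3(s3_client) = self.backend()? {
        let mut path = backup_group.backup_ns().path();
        path.push(backup_group.group().to_string());
        let object_key = crate::s3::object_key_from_path(&path, "notes")
            .context("invalid notes object key")?;
        let data = hyper::body::Bytes::copy_from_slice(notes.as_bytes());
        let _is_duplicate = proxmox_async::runtime::block_on(
            s3_client.upload_replace_with_retry(object_key, data),
        )?;
    }

    // Exclusive group lock only around the short local update.
    let _lock = backup_group.lock().context("failed to lock backup group")?;
    let notes_path = self.group_notes_path(backup_group.backup_ns(), backup_group.group());
    replace_file(&notes_path, notes.as_bytes(), CreateOptions::new(), false)?;
    Ok(())
}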
But with that I will run into consistency issues again: e.g. the group
might get pruned while the notes/owner are being updated on the S3
object store. This then leads to the notes/owner (and all the directory
prefixes of the object key) being present on the S3 object store, but
not in the local store cache. And on a subsequent S3 refresh the group
will be present again...
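
Roughly the interleaving I'm worried about (simplified):

set notes task                     prune task
--------------                     ----------
acquire notes lock
                                   acquire group lock
                                   destroy group (S3 + local cache)
                                   release group lock
upload notes object to S3
  -> notes object and its key
     prefixes exist again on S3,
     with no group in the local
     store cache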