[pbs-devel] [PATCH proxmox-backup v2 2/4] api: datastore: unmount datastore after sync if configured

Hannes Laimer h.laimer at proxmox.com
Tue Nov 11 14:03:17 CET 2025


On 11/11/25 13:55, Fabian Grünbichler wrote:
> On November 11, 2025 1:24 pm, Hannes Laimer wrote:
>> On 11/11/25 13:08, Fabian Grünbichler wrote:
>>> On October 29, 2025 5:01 pm, Hannes Laimer wrote:
>>>> When a sync job is triggered by the mounting of a datastore, we now check
>>>> whether it should also be unmounted automatically afterwards. This is only
>>>> done for jobs triggered by mounting.
>>>>
>>>> We do not do this for manually started or scheduled sync jobs, as those
>>>> run in the proxy process and therefore cannot call the privileged API
>>>> endpoint for unmounting.
>>>>
>>>> The task that starts sync jobs on mount runs in the API process (where the
>>>> mounting occurs), so in that privileged context, we can also perform the
>>>> unmounting.
>>>>
>>>> Tested-by: Robert Obkircher <r.obkircher at proxmox.com>
>>>> Signed-off-by: Hannes Laimer <h.laimer at proxmox.com>
>>>> ---
>>>>    src/api2/admin/datastore.rs | 21 +++++++++++++++++++--
>>>>    1 file changed, 19 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>>>> index 643d1694..75122260 100644
>>>> --- a/src/api2/admin/datastore.rs
>>>> +++ b/src/api2/admin/datastore.rs
>>>> @@ -2430,6 +2430,7 @@ pub fn do_mount_device(datastore: DataStoreConfig) -> Result<bool, Error> {
>>>>    
>>>>    async fn do_sync_jobs(
>>>>        jobs_to_run: Vec<SyncJobConfig>,
>>>> +    store: String,
>>>
>>> instead of this, the helper could also return
>>>
>>
>> can do
>>
>>>>        worker: Arc<WorkerTask>,
>>>>    ) -> Result<(), Error> {
>>>>        let count = jobs_to_run.len();
>>>> @@ -2442,6 +2443,8 @@ async fn do_sync_jobs(
>>>>                .join(", ")
>>>>        );
>>>>    
>>>> +    let mut unmount_on_done = false;
>>>> +
>>>>        let client = crate::client_helpers::connect_to_localhost()
>>>>            .context("Failed to connect to localhost for starting sync jobs")?;
>>>>        for (i, job_config) in jobs_to_run.into_iter().enumerate() {
>>>> @@ -2484,7 +2487,21 @@ async fn do_sync_jobs(
>>>>                    }
>>>>                }
>>>>            }
>>>> +        unmount_on_done |= job_config.unmount_on_done.unwrap_or_default();
>>>> +    }
>>>> +    if unmount_on_done {
>>>
>>> whether unmounting is necessary/desired, and then the caller could
>>> handle the unmounting.. or even better, the unmount handling could live
>>> in the caller entirely, because right now if anything here fails, there
>>> won't be an unmount..
>>>
>>
>> hmm, yes. I guess there is an argument to be made for not keeping it
>> mounted in case anything goes wrong. My thinking was that if something
>> went wrong, somebody would want to look at it, so we should just leave
>> everything as it was when the failure occurred.
> 
> the failure might also have been transient, and if you don't unmount
> here, you need to do an excursion over the API, as opposed to just doing
> an unplug/plug cycle, like you would normally do (that's the purpose of
> this feature after all, to streamline automated syncs that are plug and
> play (and unplug ;)).
> 
> investigating the error requires somebody in front of the screen anyway,
> and they can just issue a manual mount call if desired?
> 

yes, you're right. Unmounting makes more sense than keeping it mounted;
I'll change it in v3.
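
Rough sketch of what I have in mind for v3 (untested;
do_unmount_device() below is just a stand-in for however we end up
calling the unmount inline):

    // in the caller: compute this up front from the job configs, so a
    // failure inside do_sync_jobs() cannot skip the unmount
    let unmount_on_done = jobs_to_run
        .iter()
        .any(|job| job.unmount_on_done.unwrap_or_default());

    let res = do_sync_jobs(jobs_to_run, worker).await;

    if unmount_on_done {
        // we are already in the privileged daemon here, so no need to
        // go through the proxy
        if let Err(err) = do_unmount_device(&store) {
            warn!("could not unmount: {err}");
        }
    }

    res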

thanks for taking a look :)

>>>> +        match client
>>>> +            .post(
>>>> +                format!("api2/json/admin/datastore/{store}/unmount").as_str(),
>>>> +                None,
>>>> +            )
>>>> +            .await
>>>> +        {
>>>> +            Ok(_) => info!("triggered unmounting successfully"),
>>>> +            Err(err) => warn!("could not unmount: {err}"),
>>>> +        };
>>>>        }
>>>
>>> we are already in the privileged api daemon here, so we don't need to
>>> connect to the proxy which forwards to the privileged api daemon again,
>>> we can just call the unmount inline directly, right?
>>>
>>
>> yes, but I wanted the unmounting task/thread to be owned by the API
>> process, not this one. The idea was to have this one only trigger stuff.
> 
> you end up in the same process (unless a reload happened in between, I
> guess) anyway? there is no ownership of tasks/threads other than by the
> "main" process, is there? and even a `proxmox-backup-manager datastore
> mount ..` or the systemd-triggered `... uuid-mount ..` will do an API
> call with the actual mount handling being done by the privileged API
> daemon..

I may be wrong, but I think if we do a new_thread() inside a spawn(),
the thread dies if the spawn() dies... Actually, now that I'm thinking
about it, it may be the other way around.
tl;dr: didn't test, may be fine
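
for reference, this is what I'd expect for plain tokio tasks at least
(minimal sketch, untested against our code; a detached OS thread from
new_thread() should behave the same way, it only dies with the process):

    use std::time::Duration;

    #[tokio::main]
    async fn main() {
        let outer = tokio::spawn(async {
            // the inner task is handed off to the runtime, it is not
            // owned by the outer task
            tokio::spawn(async {
                tokio::time::sleep(Duration::from_millis(100)).await;
                println!("inner still running after outer finished");
            });
        });
        outer.await.unwrap(); // the outer task is done at this point
        tokio::time::sleep(Duration::from_millis(200)).await;
        // the inner line does get printed: spawned tasks are detached
        // from their spawner and only stop when they finish (or the
        // runtime shuts down)
    }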
