[pbs-devel] [PATCH proxmox-backup v2 2/4] api: datastore: unmount datastore after sync if configured
Hannes Laimer
h.laimer at proxmox.com
Tue Nov 11 13:24:37 CET 2025
On 11/11/25 13:08, Fabian Grünbichler wrote:
> On October 29, 2025 5:01 pm, Hannes Laimer wrote:
>> When a sync job is triggered by the mounting of a datastore, we now check
>> whether it should also be unmounted automatically afterwards. This is only
>> done for jobs triggered by mounting.
>>
>> We do not do this for manually started or scheduled sync jobs, as those
>> run in the proxy process and therefore cannot call the privileged API
>> endpoint for unmounting.
>>
>> The task that starts sync jobs on mount runs in the API process (where the
>> mounting occurs), so in that privileged context, we can also perform the
>> unmounting.
>>
>> Tested-by: Robert Obkircher <r.obkircher at proxmox.com>
>> Signed-off-by: Hannes Laimer <h.laimer at proxmox.com>
>> ---
>> src/api2/admin/datastore.rs | 21 +++++++++++++++++++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>> index 643d1694..75122260 100644
>> --- a/src/api2/admin/datastore.rs
>> +++ b/src/api2/admin/datastore.rs
>> @@ -2430,6 +2430,7 @@ pub fn do_mount_device(datastore: DataStoreConfig) -> Result<bool, Error> {
>>
>> async fn do_sync_jobs(
>> jobs_to_run: Vec<SyncJobConfig>,
>> + store: String,
>
> instead of this, the helper could also return
>
can do
>> worker: Arc<WorkerTask>,
>> ) -> Result<(), Error> {
>> let count = jobs_to_run.len();
>> @@ -2442,6 +2443,8 @@ async fn do_sync_jobs(
>> .join(", ")
>> );
>>
>> + let mut unmount_on_done = false;
>> +
>> let client = crate::client_helpers::connect_to_localhost()
>> .context("Failed to connect to localhost for starting sync jobs")?;
>> for (i, job_config) in jobs_to_run.into_iter().enumerate() {
>> @@ -2484,7 +2487,21 @@ async fn do_sync_jobs(
>> }
>> }
>> }
>> + unmount_on_done |= job_config.unmount_on_done.unwrap_or_default();
>> + }
>> + if unmount_on_done {
>
> whether unmounting is necessary/desired, and then the caller could
> handle the unmounting.. or even better, the unmount handling could live
> in the caller entirely, because right now if anything here fails, there
> won't be an unmount..
>
hmm, yes. I guess there is an argument to be made for not keeping it
mounted in case anything goes wrong. I thought about it the other way:
if something goes wrong, somebody will want to look at it, so we just
leave everything as it was when the failure occurred.
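To make the suggestion above concrete: a minimal, self-contained sketch (stub types only, not the actual PBS code) of a helper that merely *reports* whether an unmount is desired, so the caller performs it even when a job failed:

```rust
// Hypothetical sketch of the reviewer's suggestion. `SyncJobConfig` is a
// simplified stand-in for the real config type; actual job execution is
// elided. Per-job failures are logged inside the helper, so the caller
// always learns whether it should unmount afterwards.

#[derive(Default)]
struct SyncJobConfig {
    unmount_on_done: Option<bool>,
}

/// Runs all jobs; returns whether any job requested unmount-on-done.
fn do_sync_jobs(jobs: &[SyncJobConfig]) -> bool {
    let mut unmount_on_done = false;
    for job in jobs {
        // real code would run the sync job here and log any failure
        unmount_on_done |= job.unmount_on_done.unwrap_or_default();
    }
    unmount_on_done
}

fn main() {
    let jobs = vec![
        SyncJobConfig { unmount_on_done: None },
        SyncJobConfig { unmount_on_done: Some(true) },
    ];
    // The caller decides about unmounting, independent of job failures.
    if do_sync_jobs(&jobs) {
        println!("would trigger unmount of the datastore here");
    }
}
```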
>> + match client
>> + .post(
>> + format!("api2/json/admin/datastore/{store}/unmount").as_str(),
>> + None,
>> + )
>> + .await
>> + {
>> + Ok(_) => info!("triggered unmounting successfully"),
>> + Err(err) => warn!("could not unmount: {err}"),
>> + };
>> }
>
> we are already in the privileged api daemon here, so we don't need to
> connect to the proxy which forwards to the privileged api daemon again,
> we can just call the unmount inline directly, right?
>
yes, but I wanted the unmounting task/thread to be owned by the API
process, not this worker. The idea was to have this worker only trigger
the unmount, not perform it itself.
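For comparison, a minimal sketch of the inline alternative the review suggests, with `unmount_datastore` as a hypothetical stand-in for the real unmount handler (not the actual PBS API):

```rust
// Hypothetical sketch: since this code already runs in the privileged API
// daemon, the unmount could be a direct call instead of a round trip
// through the proxy via a localhost HTTP client.

fn unmount_datastore(store: &str) -> Result<(), String> {
    // the real handler would spawn the unmount worker task here
    println!("unmounting datastore '{store}'");
    Ok(())
}

fn trigger_unmount_inline(store: &str) {
    // direct call -- no localhost client, no proxy forwarding
    match unmount_datastore(store) {
        Ok(()) => println!("triggered unmounting successfully"),
        Err(err) => eprintln!("could not unmount: {err}"),
    }
}

fn main() {
    trigger_unmount_inline("removable-store");
}
```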
>> +
>> Ok(())
>> }
>>
>> @@ -2566,10 +2583,10 @@ pub fn mount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Er
>> info!("starting {} sync jobs", jobs_to_run.len());
>> let _ = WorkerTask::spawn(
>> "mount-sync-jobs",
>> - Some(store),
>> + Some(store.clone()),
>> auth_id.to_string(),
>> false,
>> - move |worker| async move { do_sync_jobs(jobs_to_run, worker).await },
>> + move |worker| async move { do_sync_jobs(jobs_to_run, store, worker).await },
>> );
>> }
>> Ok(())
>> --
>> 2.47.3
>>
>>
>>
>> _______________________________________________
>> pbs-devel mailing list
>> pbs-devel at lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>>
>>
>>