[pbs-devel] [PATCH proxmox-backup v5 1/6] api: make Remote for SyncJob optional

Hannes Laimer h.laimer at proxmox.com
Wed Nov 8 14:26:08 CET 2023



On 11/8/23 11:53, Thomas Lamprecht wrote:
> Am 06/10/2023 um 16:05 schrieb Hannes Laimer:
>> diff --git a/src/api2/pull.rs b/src/api2/pull.rs
>> index daeba7cf..9ed83046 100644
>> --- a/src/api2/pull.rs
>> +++ b/src/api2/pull.rs
> 
>> @@ -65,7 +75,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
>>           PullParameters::new(
>>               &sync_job.store,
>>               sync_job.ns.clone().unwrap_or_default(),
>> -            &sync_job.remote,
>> +            sync_job.remote.as_deref().unwrap_or("local"),
> 
> isn't this still the same issue that Wolfgang asked about in his reviews of
> v2 and v3? (Please be a bit more communicative and reply to reviews on the
> list too, otherwise it's hard to know whether you overlooked something or
> whether it's actually OK the way it is...)
> 
> faking a remote ID is probably never a good idea, should this stay None here?
> 
> But I actually checked the remaining code, and it fixes this in commit 4/6
> "pull: refactor pulling from a datastore", so the question is: why do we add
> API support before it can even work correctly? Shouldn't this commit be
> ordered after the aforementioned refactor commit and the "pull: add support
> for pulling from local datastore" one, avoiding such intermediate "bugs" and
> making review easier thanks to a more causal order?
> 
Makes sense; making it optional felt like the first logical
step, which is why the commits are ordered this way. But sure, I can
re-order them.
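
The concern about faking a remote ID could also be addressed by keeping the
source optional all the way down instead of substituting a placeholder string.
The following is only an illustrative sketch; `PullSource` and `from_config`
are hypothetical names, not the actual PBS `PullParameters` API:

```rust
// Illustrative only: model the sync source as an enum instead of
// substituting a fake remote ID like "local".
#[derive(Debug, PartialEq)]
enum PullSource {
    Remote(String), // sync from a configured remote
    Local,          // sync between datastores on the same node
}

impl PullSource {
    fn from_config(remote: Option<&str>) -> Self {
        match remote {
            Some(id) => PullSource::Remote(id.to_string()),
            None => PullSource::Local,
        }
    }
}

fn main() {
    assert_eq!(
        PullSource::from_config(Some("pbs2")),
        PullSource::Remote("pbs2".to_string())
    );
    assert_eq!(PullSource::from_config(None), PullSource::Local);
    println!("ok");
}
```

This way no caller ever sees a made-up remote name, and match exhaustiveness
forces every code path to handle the local case explicitly.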
> 
>>               &sync_job.remote_store,
>>               sync_job.remote_ns.clone().unwrap_or_default(),
>>               sync_job
>> @@ -91,7 +101,7 @@ pub fn do_sync_job(
>>   ) -> Result<String, Error> {
>>       let job_id = format!(
>>           "{}:{}:{}:{}:{}",
>> -        sync_job.remote,
>> +        sync_job.remote.as_deref().unwrap_or("-"),
>>           sync_job.remote_store,
>>           sync_job.store,
>>           sync_job.ns.clone().unwrap_or_default(),
>> @@ -99,6 +109,13 @@ pub fn do_sync_job(
>>       );
>>       let worker_type = job.jobtype().to_string();
>>   
>> +    if sync_job.remote.is_none()
>> +        && sync_job.store == sync_job.remote_store
>> +        && sync_job.ns == sync_job.remote_ns
> 
> I'd disallow syncing to the same store even with a different NS, as they
> still might overlap, and it's IMO just an odd use-case.
> If anything, we should allow moving groups and namespaces around inside the
> same datastore as separate functionality, without any jobs or syncing
> involved, but that would be unrelated to this series.

It is kinda odd; the idea was that it's currently not possible to move
backups within a datastore (using the UI). But yes, a sync job is
probably not the right place for that.
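
The stricter rule suggested above (reject any local sync where source and
target datastore match, regardless of namespace) could be sketched roughly
like this; `validate_sync_target` is a hypothetical helper, not code from
the series:

```rust
// Hedged sketch: for a local sync (no remote), source and target datastore
// must differ; namespaces are deliberately not considered, since sub-
// namespaces of the same store can still overlap.
fn validate_sync_target(
    remote: Option<&str>,
    store: &str,
    remote_store: &str,
) -> Result<(), String> {
    if remote.is_none() && store == remote_store {
        return Err("can't sync, source and target datastore are the same".to_string());
    }
    Ok(())
}

fn main() {
    assert!(validate_sync_target(None, "a", "b").is_ok());
    // Rejected even if the namespaces differ.
    assert!(validate_sync_target(None, "a", "a").is_err());
    // A remote store may legitimately share its name with the local one.
    assert!(validate_sync_target(Some("pbs2"), "a", "a").is_ok());
    println!("ok");
}
```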

> 
>> +    {
>> +        bail!("can't sync, source equals the target");
>> +    }
>> +
>>       let (email, notify) = crate::server::lookup_datastore_notify_settings(&sync_job.store);
>>   
>>       let upid_str = WorkerTask::spawn(
>> @@ -122,13 +139,33 @@ pub fn do_sync_job(
>>                   }
>>                   task_log!(
>>                       worker,
>> -                    "sync datastore '{}' from '{}/{}'",
>> +                    "sync datastore '{}' from '{}{}'",
>>                       sync_job.store,
>> -                    sync_job.remote,
>> +                    sync_job
>> +                        .remote
>> +                        .as_deref()
>> +                        .map_or(String::new(), |remote| format!("{remote}/")),
> 
> nit: might be nicer to pull this out via a `let source_prefix = match ...`
> statement up front.
> 
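
The nit about pulling the prefix out up front could look roughly like the
following; this is just a sketch of the suggested restructuring, not the
committed code:

```rust
// Compute the log prefix once, instead of building it inline in task_log!.
fn source_prefix(remote: Option<&str>) -> String {
    match remote {
        Some(remote) => format!("{remote}/"),
        None => String::new(),
    }
}

fn main() {
    // "remote/store" for a remote sync, bare "store" for a local one.
    assert_eq!(source_prefix(Some("pbs2")), "pbs2/");
    assert_eq!(source_prefix(None), "");
    println!(
        "sync datastore 'target' from '{}source-store'",
        source_prefix(Some("pbs2"))
    );
}
```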
>>                       sync_job.remote_store,
>>                   );
>>   
>> -                pull_store(&worker, &client, pull_params).await?;
>> +                if sync_job.remote.is_some() {
>> +                    pull_store(&worker, &client, pull_params).await?;
>> +                } else {
>> +                    if let (Some(target_ns), Some(source_ns)) = (sync_job.ns, sync_job.remote_ns) {
>> +                        if target_ns.path().starts_with(source_ns.path())
>> +                            && sync_job.store == sync_job.remote_store
>> +                            && sync_job.max_depth.map_or(true, |sync_depth| {
>> +                            target_ns.depth() + sync_depth > MAX_NAMESPACE_DEPTH
>> +                        }) {
>> +                            task_log!(
>> +                                worker,
>> +                                "Can't sync namespace into one of its sub-namespaces, would exceed maximum namespace depth, skipping"
>> +                            );
>> +                        }
>> +                    } else {
>> +                        pull_store(&worker, &client, pull_params).await?;
>> +                    }
>> +                }
>>   
>>                   task_log!(worker, "sync job '{}' end", &job_id);
>>   
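
The guard in the hunk above — refusing to sync a namespace into one of its
own sub-namespaces when that would exceed the maximum depth — can be
illustrated with plain slash-separated paths. This is a sketch only: the real
code works on `BackupNamespace` values, and `MAX_NAMESPACE_DEPTH = 7` is an
assumption here:

```rust
// Assumed value; in PBS this constant lives in pbs-api-types.
const MAX_NAMESPACE_DEPTH: usize = 7;

fn depth(ns: &str) -> usize {
    if ns.is_empty() { 0 } else { ns.split('/').count() }
}

// Sketch of the quoted check: syncing a source namespace into one of its own
// sub-namespaces in the same store would recurse, so it is only acceptable
// if an explicit max_depth keeps the result within the depth limit.
fn would_exceed_depth(target_ns: &str, source_ns: &str, max_depth: Option<usize>) -> bool {
    target_ns.starts_with(source_ns)
        && max_depth.map_or(true, |d| depth(target_ns) + d > MAX_NAMESPACE_DEPTH)
}

fn main() {
    // "a/b" is a sub-namespace of "a": with unbounded depth this is rejected.
    assert!(would_exceed_depth("a/b", "a", None));
    // With a small explicit max_depth the sync still fits within the limit.
    assert!(!would_exceed_depth("a/b", "a", Some(2)));
    // Unrelated namespaces are fine either way.
    assert!(!would_exceed_depth("x", "a", None));
    println!("ok");
}
```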
> 
>> @@ -256,7 +294,7 @@ async fn pull(
>>       let pull_params = PullParameters::new(
>>           &store,
>>           ns,
>> -        &remote,
>> +        remote.as_deref().unwrap_or("local"),
> 
> same here w.r.t. intermediate bug (or at least hard to follow commit order)
> 
>>           &remote_store,
>>           remote_ns.unwrap_or_default(),
>>           auth_id.clone(),
> 

More information about the pbs-devel mailing list