[pve-devel] [PATCH manager] Jobs: fix scheduling when updating on unrelated nodes
Fabian Ebner
f.ebner at proxmox.com
Fri Jul 15 11:20:38 CEST 2022
On 15.07.22 at 11:01, Dominik Csapak wrote:
> On 7/15/22 10:51, Fabian Ebner wrote:
>>
>>> so that we don't read the file multiple times each round.
>>>
Could add
Fixes: 530b0a71 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
for completeness.
>>> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
>>
>> What about starting_job and started_job? The saved_props are lost when
>> that function writes its new state. Maybe there should be a wrapper for
>> updating the job state that always preserves certain properties.
>
> I guess you're right, but currently that makes no difference, since
> we're only concerned with not running too early, which is irrelevant
> for the starting/started case
> (and it'll be synced up again after the next iteration)
>
Hmm, it won't work for (at least) a minutely job. The job will run, the
state will lose the saved_props, and then
synchronize_job_states_with_config will update the last runtime in the
next run_jobs, so the job won't run that iteration.
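
To illustrate the wrapper idea, a rough sketch (untested; the helper
name save_job_state and the exact list of preserved properties are just
placeholders here):

    sub update_job_state_preserving_props {
        my ($jobid, $type, $update) = @_;

        my $old_state = read_job_state($jobid, $type) // {};
        my $new_state = $update->($old_state);

        # carry the props saved from the job config over into the new
        # state, so that starting_job/started_job don't drop them
        for my $prop (qw(enabled schedule)) {
            $new_state->{$prop} = $old_state->{$prop}
                if defined($old_state->{$prop});
        }

        save_job_state($jobid, $type, $new_state); # placeholder helper
    }

Then starting_job, started_job and update_last_runtime could all go
through that instead of writing the state directly.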
>>> @@ -192,6 +206,39 @@ sub update_last_runtime {
>>> });
>>> }
>>> +# saves some properties of the jobcfg into the jobstate so we can track
>>> +# them on different nodes (where the update was not done)
>>> +# and update the last runtime when they change
>>> +sub update_job_props {
>>
>> update_saved_props or detect_changed_runtime_props might be a bit more
>> telling
>
> Then I'd opt for 'detect_changed_runtime_props', since it's a bit more
> descriptive IMHO
>
While the logic for updating the last run time in PVE/API2/Backup.pm's
update_job call is slightly different (it won't update when going from
enabled to disabled), I feel like we could switch to (unconditionally)
calling update_job_props there too?
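
Roughly like this (sketch, untested; I'm assuming the helper takes
($jobid, $type, $jobcfg) like in this patch, and the variable names in
update_job might not match):

    # after the job config was written, let the helper from this patch
    # (update_job_props, or detect_changed_runtime_props after the
    # rename) compare the new config against the saved state and update
    # the last runtime if a runtime-relevant property changed
    PVE::Jobs::update_job_props($id, 'vzdump', $job);

That would also keep the two code paths from diverging further.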