[pve-devel] [PATCH cluster/manager v2] add scheduling daemon for pvesr + vzdump (and more)
Aaron Lauterer
a.lauterer at proxmox.com
Tue Nov 9 17:55:28 CET 2021
Gave it a quick try converting the single backup job that I already had.
Created a new job.
Changed the schedule for the existing job to every two minutes, knowing that a run takes longer than that. It behaves as expected: checking the start and stop times in the task logs, a run takes about 5 minutes to complete and the next run is scheduled within the same minute in which the previous one stops. Note that the start/stop times are only shown with minute precision here (seconds are always 00).
So, a (lightly):
Tested-By: Aaron Lauterer <a.lauterer at proxmox.com>
On 11/8/21 14:07, Dominik Csapak wrote:
> with this series, we implement a new daemon (pvescheduler) that takes
> over from pvesr's systemd timer (original patch from Thomas [0]) and
> extends it with a generic job handling mechanism
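To make the mechanism easier to picture, here is a very rough Perl sketch of such a daemon main loop; the function names are placeholders and this is not the actual pvescheduler code, it just mirrors the described idea (including the minute re-alignment from the v2 changelog below):

    #!/usr/bin/perl
    use strict;
    use warnings;

    sub run_replication { }   # placeholder, e.g. for the pvesr run
    sub run_due_jobs    { }   # placeholder, e.g. for the jobs.cfg handling

    my $count = 0;
    while (1) {
        run_replication();
        run_due_jobs();

        # every 1000 iterations, re-align the sleep to the next full
        # minute so the wakeups do not slowly drift
        if (++$count >= 1000) {
            $count = 0;
            sleep(60 - (time() % 60));
        } else {
            sleep(60);
        }
    }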
>
> then I convert the vzdump cron jobs to these jobs; the immediate
> gain is that users can use calendar event schedules instead of
> dow + starttime
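To illustrate the gain: the old vzdump.cron entries are plain crontab lines, roughly like

    30 2 * * sat  root vzdump --all --quiet 1 --mode snapshot --storage local

whereas a jobs.cfg entry is a section-config style block; the section name and property keys below are only illustrative, not necessarily the final format:

    vzdump: backup-daily
            schedule mon..fri 02:30
            storage local
            enabled 1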
>
> for now, this 'jobs.cfg' only handles vzdump jobs, but should be easily
> extendable for other types of recurring jobs (like auth realm sync, etc.)
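Just to sketch what 'easily extendable' could look like in practice: a new job type would presumably follow the usual PVE::SectionConfig plugin pattern, roughly like the following (package and method names are illustrative, not taken from the series):

    package PVE::Jobs::MyJob;   # hypothetical additional job type
    use strict;
    use warnings;
    use base qw(PVE::Jobs::Plugin);

    # section type used as the prefix in jobs.cfg
    sub type { return 'myjob' }

    sub run {
        my ($class, $conf) = @_;
        # perform the actual work for one scheduled run of this job
    }

    PVE::Jobs::MyJob->register();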
>
> also, I did not yet convert the replication jobs to this job system,
> but that could probably be done without too much effort (though
> I did not look too deeply into it)
>
> if some version of this gets applied, the further plan would be
> to remove the vzdump.cron part completely with 8.0, but until then
> we must at least be able to list/parse it
>
> what's currently missing, but not too hard to add, is a calculated
> 'next-run' column in the GUI
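Computing that column should mostly come down to the existing calendar event helpers from pve-common; a hedged sketch (the exact signatures may differ):

    use PVE::CalendarEvent;

    # next scheduled run (unix timestamp) for a job, given its schedule
    # string and the time of its last run
    sub next_run {
        my ($schedule, $last_run) = @_;
        my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
        return PVE::CalendarEvent::compute_next_event($calspec, $last_run // time());
    }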
>
> changes from v1:
> * do not log replication into the syslog
> * readjust the loop to re-align to the full minute every 1000 iterations
> * rework job state locking/handling:
> - introduce a new 'starting' state that is set before we start the
> worker and is set to 'started' after the start.
> sadly, we cannot start the job while we hold the lock, since the open
> file descriptor would still be open in the worker and we could then
> not get the flock again. it is now modeled more after how we handle
> long-running qm/ct locks (by writing 'starting' into the state while
> locked); a rough sketch follows below this list
> - the stop check is now its own call at the beginning of the job handling
> - handle created/removed jobs properly:
> I did not think of state handling on other nodes in my previous
> iteration. now, on every loop, I sync the state files with the config
> (create/remove) so that the file gets created/removed on all nodes
> * incorporated Fabian's feedback for the API (thanks!)
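Regarding the 'starting' state above, here is a rough sketch of the intended flow; lock_state_file and update_state are hypothetical helpers, not the actual PVE::Jobs code:

    sub start_job {
        my ($jobid, $start_worker) = @_;

        # mark the job as starting while holding the state-file lock
        lock_state_file($jobid, sub {
            update_state($jobid, { state => 'starting' });
        });

        # start the worker outside the lock: the inherited open file
        # descriptor would otherwise block taking the flock again later
        my $upid = $start_worker->();

        # record the worker's UPID in a second, short locked section
        lock_state_file($jobid, sub {
            update_state($jobid, { state => 'started', upid => $upid });
        });
    }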
>
> 0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html
>
> pve-cluster:
>
> Dominik Csapak (1):
> add 'jobs.cfg' to observed files
>
> data/PVE/Cluster.pm | 1 +
> data/src/status.c | 1 +
> 2 files changed, 2 insertions(+)
>
> pve-manager:
>
> Dominik Csapak (5):
> add PVE/Jobs to handle VZDump jobs
> pvescheduler: run jobs from jobs.cfg
> api/backup: refactor string for all days
> api/backup: handle new vzdump jobs
> ui: dc/backup: show id+schedule instead of dow+starttime
>
> Thomas Lamprecht (1):
> replace systemd timer with pvescheduler daemon
>
> PVE/API2/Backup.pm | 235 +++++++++++++++++++-----
> PVE/API2/Cluster/BackupInfo.pm | 9 +
> PVE/Jobs.pm | 286 +++++++++++++++++++++++++++++
> PVE/Jobs/Makefile | 16 ++
> PVE/Jobs/Plugin.pm | 61 ++++++
> PVE/Jobs/VZDump.pm | 54 ++++++
> PVE/Makefile | 3 +-
> PVE/Service/Makefile | 2 +-
> PVE/Service/pvescheduler.pm | 131 +++++++++++++
> bin/Makefile | 6 +-
> bin/pvescheduler | 28 +++
> debian/postinst | 3 +-
> services/Makefile | 3 +-
> services/pvescheduler.service | 16 ++
> services/pvesr.service | 8 -
> services/pvesr.timer | 12 --
> www/manager6/dc/Backup.js | 46 +++--
> www/manager6/dc/BackupJobDetail.js | 10 +-
> 18 files changed, 823 insertions(+), 106 deletions(-)
> create mode 100644 PVE/Jobs.pm
> create mode 100644 PVE/Jobs/Makefile
> create mode 100644 PVE/Jobs/Plugin.pm
> create mode 100644 PVE/Jobs/VZDump.pm
> create mode 100755 PVE/Service/pvescheduler.pm
> create mode 100755 bin/pvescheduler
> create mode 100644 services/pvescheduler.service
> delete mode 100644 services/pvesr.service
> delete mode 100644 services/pvesr.timer
>