[pve-devel] [PATCH cluster/manager] add scheduling daemon for pvesr + vzdump (and more)
Dominik Csapak
d.csapak at proxmox.com
Thu Oct 7 10:27:19 CEST 2021
with this series, we implement a new daemon (pvescheduler) that takes
over from pvesr's systemd timer (original patch from thomas [0]) and
extends it with a generic job handling mechanism
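the basic idea is that the daemon does little more than wake up
periodically and hand over to the job handling code; a minimal sketch
of that loop (the real daemon builds on PVE::Daemon and forks off the
actual work, and run_jobs is the entry point added by this series):

    # minimal sketch of the scheduler loop; not the actual
    # implementation, signal handling and forking are simplified
    use strict;
    use warnings;

    use PVE::Jobs;    # job handling added by this series

    my $running = 1;
    $SIG{TERM} = sub { $running = 0 };

    while ($running) {
        eval { PVE::Jobs::run_jobs(); };   # start all jobs that are due
        warn "running jobs failed: $@" if $@;
        sleep(60);    # wake up once a minute, like the old pvesr.timer
    }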
then i convert the vzdump cron jobs to these new jobs; the immediate
gain is that users can use calendar event schedules instead of
dow + starttime
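to give a rough idea, a vzdump entry in jobs.cfg is a section-config
style entry with a schedule property (the exact option names below are
only illustrative, see the actual patches):

    vzdump: backup-4feb9b1a
        schedule sat 01:00
        storage local
        vmid 100,101
        enabled 1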
for now, this 'jobs.cfg' only handles vzdump jobs, but should be easily
extendable for other type of recurring jobs (like auth realm sync, etc.)
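to illustrate what 'easily extendable' means here: a new job type would
essentially just be another plugin next to the vzdump one. the method
names below follow the vzdump plugin from this series, but the
realm-sync plugin itself is purely hypothetical:

    # hypothetical sketch of a second job type plugin; only vzdump
    # exists in this series
    package PVE::Jobs::RealmSync;

    use strict;
    use warnings;

    use base qw(PVE::Jobs::Plugin);

    sub type {
        return 'realm-sync';
    }

    sub run {
        my ($class, $conf) = @_;
        # fork a worker that does the actual realm sync and return its
        # UPID, analogous to the vzdump job plugin
    }

    1;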
also, i have not yet converted the replication jobs to this job system,
but that could probably be done without too much effort (though
i did not look too deeply into it)
patch 2/7 in manager could probably be squashed into the first,
but since it does not only concern the new daemon, i left it
as a separate patch
if some version of this gets applied, the further plan would be
to remove the vzdump.cron part completely with 8.0, but until then
we must at least keep listing/parsing it
what's currently missing, but not too hard to add, is a calculated
'next-run' column in the gui
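computing that should mostly boil down to reusing the calendar event
helpers from pve-common that pvesr already uses for its schedules; a
rough sketch (not part of this series):

    # sketch of a next-run calculation from a schedule string and the
    # last run time, using the existing PVE::CalendarEvent helpers
    use PVE::CalendarEvent;

    sub next_run {
        my ($schedule, $last_run) = @_;
        my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
        return PVE::CalendarEvent::compute_next_event($calspec, $last_run // time());
    }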
a few things that are probably discussion-worthy:
* not sure if a state file per job is the way to go, or if we want
to go the direction of pvesr and use a single state file for all jobs.
since we only have a single entry point (most of the time) for that,
it should not make much of a difference either way
* the locking in general. i lock on every update of the state file,
but cannot start the worker while the lock is held, since that lock
would stay held across the fork_worker call. i am sure there are ways
around this, but i did not find an easy one. i am also questioning
whether we need that much locking, since we have that single point
where jobs get started (we should still lock on create/update/delete)
* there is currently no way to handle scheduling on different nodes.
basically each plugin is responsible for running on the correct node
and doing nothing on the others (this works out for the vzdump api,
since it only picks up the local vms on each node)
* the auto-generation of ids. it does not have to be a uuid, but it
should prevent id collisions when backup jobs are created in parallel
via the api (in the gui an explicit id is enforced); see the sketch
below the list for one possible approach
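as an example, one way to get collision-safe ids without going full
uuid (purely a sketch, not what the series does):

    # sketch: derive the id from a prefix, the creation time and some
    # randomness, so parallel creations are very unlikely to collide
    sub generate_job_id {
        my ($prefix) = @_;
        return sprintf("%s-%d-%08x", $prefix, time(), int(rand(0xffffffff)));
    }

    # e.g. generate_job_id('backup') -> something like
    # 'backup-1633593600-1a2b3c4d'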
0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html
pve-cluster:
Dominik Csapak (1):
add 'jobs.cfg' to observed files
data/PVE/Cluster.pm | 1 +
data/src/status.c | 1 +
2 files changed, 2 insertions(+)
pve-manager:
Dominik Csapak (6):
postinst: use reload-or-restart instead of reload-or-try-restart
api/backup: refactor string for all days
add PVE/Jobs to handle VZDump jobs
pvescheduler: run jobs from jobs.cfg
api/backup: handle new vzdump jobs
ui: dc/backup: show id+schedule instead of dow+starttime
Thomas Lamprecht (1):
replace systemd timer with pvescheduler daemon
PVE/API2/Backup.pm | 247 +++++++++++++++++++++++------
PVE/API2/Cluster/BackupInfo.pm | 10 ++
PVE/Jobs.pm | 210 ++++++++++++++++++++++++
PVE/Jobs/Makefile | 16 ++
PVE/Jobs/Plugin.pm | 61 +++++++
PVE/Jobs/VZDump.pm | 54 +++++++
PVE/Makefile | 3 +-
PVE/Service/Makefile | 2 +-
PVE/Service/pvescheduler.pm | 117 ++++++++++++++
bin/Makefile | 6 +-
bin/pvescheduler | 28 ++++
debian/postinst | 5 +-
services/Makefile | 3 +-
services/pvescheduler.service | 16 ++
services/pvesr.service | 8 -
services/pvesr.timer | 12 --
www/manager6/dc/Backup.js | 47 +++---
www/manager6/dc/BackupJobDetail.js | 10 +-
18 files changed, 749 insertions(+), 106 deletions(-)
create mode 100644 PVE/Jobs.pm
create mode 100644 PVE/Jobs/Makefile
create mode 100644 PVE/Jobs/Plugin.pm
create mode 100644 PVE/Jobs/VZDump.pm
create mode 100644 PVE/Service/pvescheduler.pm
create mode 100755 bin/pvescheduler
create mode 100644 services/pvescheduler.service
delete mode 100644 services/pvesr.service
delete mode 100644 services/pvesr.timer
--
2.30.2