[pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
Thomas Lamprecht
t.lamprecht at proxmox.com
Mon Nov 7 18:05:10 CET 2022
mid- to high-level review; note that some points may be slightly out of date,
since this was sent ~7 months ago (sorry for the delay...)
On 04/04/2022 10:54, Dominik Csapak wrote:
> to be able to define automated jobs that sync ldap/ad
>
> The jobs plugin contains special handling when no node is given, since
> we only want it to run on a single node when that triggers. For that,
> we save a statefile in /etc/pve/priv/jobs/ which contains the
> node/time/upid of the node that runs the job. The first node that
> is able to lock the realm (via cfs_lock_domain) "wins" and may
> sync from the ldap.
>
> in case a specific node was selected, this is omitted and the Jobs
> handling will not let it run on other nodes anyway
>
> the API part is our usual sectionconfig CRUD api, but specialized
> for the specific type of job.
>
> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
> ---
> src/PVE/API2/AccessControl.pm | 6 +
> src/PVE/API2/Domainsync.pm | 278 ++++++++++++++++++++++++++++++++++
> src/PVE/API2/Makefile | 1 +
> src/PVE/Jobs/Makefile | 6 +
> src/PVE/Jobs/RealmSync.pm | 192 +++++++++++++++++++++++
> src/PVE/Makefile | 1 +
> 6 files changed, 484 insertions(+)
> create mode 100644 src/PVE/API2/Domainsync.pm
> create mode 100644 src/PVE/Jobs/Makefile
> create mode 100644 src/PVE/Jobs/RealmSync.pm
>
> diff --git a/src/PVE/API2/AccessControl.pm b/src/PVE/API2/AccessControl.pm
> index 5d78c6f..6f32bf2 100644
> --- a/src/PVE/API2/AccessControl.pm
> +++ b/src/PVE/API2/AccessControl.pm
> @@ -15,6 +15,7 @@ use PVE::RESTHandler;
> use PVE::AccessControl;
> use PVE::JSONSchema qw(get_standard_option);
> use PVE::API2::Domains;
> +use PVE::API2::Domainsync;
> use PVE::API2::User;
> use PVE::API2::Group;
> use PVE::API2::Role;
> @@ -57,6 +58,11 @@ __PACKAGE__->register_method ({
> path => 'domains',
> });
>
> +__PACKAGE__->register_method ({
> + subclass => "PVE::API2::Domainsync",
> + path => 'domainsyncjobs',
> +});
I'd place this in a `jobs` sub-directory, i.e., the above would then become
/access/jobs/domain-sync
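i.e., roughly (untested sketch, module name just as an example, see also
below; the jobs index module would then register the domain-sync subclass
itself):

__PACKAGE__->register_method ({
    subclass => "PVE::API2::AccessControl::Job", # or however the jobs index module ends up being called
    path => 'jobs',
});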
> +
> __PACKAGE__->register_method ({
> subclass => "PVE::API2::OpenId",
> path => 'openid',
> diff --git a/src/PVE/API2/Domainsync.pm b/src/PVE/API2/Domainsync.pm
> new file mode 100644
> index 0000000..22333bf
> --- /dev/null
> +++ b/src/PVE/API2/Domainsync.pm
> @@ -0,0 +1,278 @@
> +package PVE::API2::Domainsync;
As "Domainsync" is rather generall and not so telling outside of the access
control context, I'd change module name and file location to:
PVE::API2::AccessControl::Job::DomainSync;
while maybe a bit unwieldy we only need to refer this about one time on
registering it in the manager.
If you want you could place the PVE::API2::AccessControl::Job; (which will be
just a short index endpoint and subdir registering) and this in the same file.
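the subdir/index module could then look roughly like this (untested sketch;
a small GET index method listing the sub-directories omitted for brevity):

package PVE::API2::AccessControl::Job;

use strict;
use warnings;

use PVE::RESTHandler;

use PVE::API2::AccessControl::Job::DomainSync;

use base qw(PVE::RESTHandler);

__PACKAGE__->register_method ({
    subclass => "PVE::API2::AccessControl::Job::DomainSync",
    path => 'domain-sync',
});

1;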
> +
> +use strict;
> +use warnings;
> +
> +use JSON;
> +
> +use PVE::Exception qw(raise_param_exc);
> +use PVE::Tools qw(extract_param);
> +use PVE::Cluster qw (cfs_read_file cfs_write_file cfs_lock_file);
> +use PVE::AccessControl;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Jobs::RealmSync;
please group by
- perl module
- proxmox but not in repo
- in repo
and sort each group, thx!
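i.e., roughly (module origins from the top of my head, may need double
checking):

use strict;
use warnings;

# perl core / generic CPAN modules
use JSON;

# proxmox modules from other repos
use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
use PVE::Exception qw(raise_param_exc);
use PVE::JSONSchema qw(get_standard_option);
use PVE::RESTHandler;
use PVE::SafeSyslog;
use PVE::Tools qw(extract_param);

# modules from this repo
use PVE::AccessControl;
use PVE::Auth::Plugin;
use PVE::Jobs::RealmSync;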
> +
> +use PVE::SafeSyslog;
> +use PVE::RESTHandler;
> +use PVE::Auth::Plugin;
> +
> +use base qw(PVE::RESTHandler);
> +
> +my $get_cluster_last_run = sub {
> + my ($jobid) = @_;
> +
> + my $raw = PVE::Tools::file_get_contents("/etc/pve/priv/jobs/realmsync-$jobid.json");
> + my $state = decode_json($raw);
maybe wrap this in an eval and add some error context on $@, as decode_json
errors are pretty ugly and hard to relate to, since there's no file name info
or the like available.
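e.g., something like (untested; $statefile holding the path from above, just
for readability here):

my $raw = PVE::Tools::file_get_contents($statefile);
my $state = eval { decode_json($raw) };
die "failed to parse state file '$statefile' - $@" if $@;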
> + if (my $upid = $state->{upid}) {
> + if (my $decoded = PVE::Tools::upid_decode($upid)) {
> + return $decoded->{starttime};
> + }
> + } else {
> + return $state->{time};
code-golfy, so no hard feelings, but this could be written as:
my $upid = $state->{upid} or return $state->{time};
> + }
> +
> + return undef;
> +};
> +
> +__PACKAGE__->register_method ({
> + name => 'syncjob_index',
> + path => '',
> + method => 'GET',
> + description => "List configured domain-sync-jobs.",
> + permissions => {
> + check => ['perm', '/', ['Sys.Audit']],
> + },
> + parameters => {
> + additionalProperties => 0,
> + properties => {},
> + },
> + returns => {
> + type => 'array',
> + items => {
> + type => "object",
> + properties => {
> + id => {
> + description => "The ID of the entry.",
> + type => 'string'
> + },
> + disable => {
> + description => "Flag to disable the plugin.",
> + type => 'boolean',
> + },
> + # TODO ADD missing properties
> + },
> + },
> + links => [ { rel => 'child', href => "{id}" } ],
> + },
> + code => sub {
> + my ($param) = @_;
> +
> + my $rpcenv = PVE::RPCEnvironment::get();
> + my $user = $rpcenv->get_user();
> +
> + my $jobs_data = cfs_read_file('jobs.cfg');
> + my $order = $jobs_data->{order};
> + my $jobs = $jobs_data->{ids};
> +
> + my $res = [];
> + foreach my $jobid (sort { $order->{$a} <=> $order->{$b} } keys %$jobs) {
nit: we prefer the shorter `for` for new code; it has been equivalent to
foreach for a while now, just, well, shorter.
> + my $job = $jobs->{$jobid};
> + next if $job->{type} ne 'realmsync';
> +
> + if (my $schedule = $job->{schedule}) {
> + $job->{'last-run'} = eval { $get_cluster_last_run->($jobid) };
> + my $last_run = $job->{'last-run'} // time(); # current time as fallback
> + my $calspec = Proxmox::RS::CalendarEvent->new($schedule);
mixed variable naming (snake_case before and below, but none here).
> + my $next_run = $calspec->compute_next_event($last_run);
> + $job->{'next-run'} = $next_run if defined($next_run);
> + }
> +
> + push @$res, $job;
> + }
> +
> + return $res;
> + }});
> +
> +__PACKAGE__->register_method({
> + name => 'read_job',
> + path => '{id}',
> + method => 'GET',
> + description => "Read domain-sync job definition.",
> + permissions => {
> + check => ['perm', '/', ['Sys.Audit']],
> + },
> + parameters => {
> + additionalProperties => 0,
> + properties => {
> + id => {
> + type => 'string',
> + format => 'pve-configid',
> + },
> + },
> + },
> + returns => {
> + type => 'object',
> + },
> + code => sub {
> + my ($param) = @_;
> +
> + my $jobs = cfs_read_file('jobs.cfg');
> + my $job = $jobs->{ids}->{$param->{id}};
code style nit: please avoid using a hash access directly as a hash key;
better to pull it out into a variable first.
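i.e., something like (same behavior, just extracted):

my $id = $param->{id};
my $job = $jobs->{ids}->{$id};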
> + return $job if $job && $job->{type} eq 'realmsync';
is this outdated due to being from April, or do we really share the ID
namespace between all plugin types?
> +
> + raise_param_exc({ id => "No such job '$param->{id}'" });
> +
> + }});
> +
> +__PACKAGE__->register_method({
> + name => 'create_job',
> + path => '{id}',
> + method => 'POST',
> + protected => 1,
> + description => "Create new domain-sync job.",
> + permissions => {
> + description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
> + ." 'User.Modify' permissions to '/access/groups/'.",
results in a double space at "and  'User.Modify'"
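i.e., drop one of the spaces:

description => "'Realm.AllocateUser' on '/access/realm/<realm>' and"
    ." 'User.Modify' permissions to '/access/groups/'.",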
> + check => [ 'and',
> + ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
> + ['perm', '/access/groups', ['User.Modify']],
hmm, is this the same as for a manual sync? I guess it's OK then, just
seems a bit lax.
> + ],
> + },
> + parameters => PVE::Jobs::RealmSync->createSchema(),
> + returns => { type => 'null' },
> + code => sub {
> + my ($param) = @_;
> +
> + my $id = extract_param($param, 'id');
> +
> + cfs_lock_file('jobs.cfg', undef, sub {
> + my $data = cfs_read_file('jobs.cfg');
> +
> + die "Job '$id' already exists\n"
> + if $data->{ids}->{$id};
> +
> + my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
> + my $opts = $plugin->check_config($id, $param, 1, 1);
> +
> + $data->{ids}->{$id} = $opts;
> +
> + PVE::Jobs::create_job($id, 'realmsync');
> +
> + cfs_write_file('jobs.cfg', $data);
> + });
> + die "$@" if ($@);
> +
> + return undef;
> + }});
> +
> +__PACKAGE__->register_method({
> + name => 'update_job',
> + path => '{id}',
> + method => 'PUT',
> + protected => 1,
> + description => "Update domain-sync job definition.",
> + permissions => {
> + description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
> + ." 'User.Modify' permissions to '/access/groups/'.",
> + check => [ 'and',
> + ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
> + ['perm', '/access/groups', ['User.Modify']],
> + ],
> + },
> + parameters => PVE::Jobs::RealmSync->updateSchema(),
> + returns => { type => 'null' },
> + code => sub {
> + my ($param) = @_;
> +
> + my $id = extract_param($param, 'id');
> + my $delete = extract_param($param, 'delete');
> + if ($delete) {
> + $delete = [PVE::Tools::split_list($delete)];
> + }
> +
> + my $update_job = sub {
> + my $jobs = cfs_read_file('jobs.cfg');
> +
> + die "no options specified\n" if !scalar(keys %$param);
> +
> + my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
> + my $opts = $plugin->check_config($id, $param, 0, 1);
> +
> + my $job = $jobs->{ids}->{$id};
> + die "no such realmsync job\n" if !$job || $job->{type} ne 'realmsync';
> +
> + my $options = $plugin->options();
> + foreach my $k (@$delete) {
> + my $d = $options->{$k} || die "no such option '$k'\n";
> + die "unable to delete required option '$k'\n" if !$d->{optional};
> + die "unable to delete fixed option '$k'\n" if $d->{fixed};
> + die "cannot set and delete property '$k' at the same time!\n"
> + if defined($opts->{$k});
don't we have that repeated a few times (didn't actively check, but it rings
some bells)? isn't this something a schema helper would/could enforce?
(also @wolfgang for the latter)
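could maybe be factored into a small shared helper until then; just a rough
idea, name and location purely hypothetical:

# hypothetical helper, e.g., in the job plugin base or a schema module
sub assert_option_deletions {
    my ($options, $delete, $new_opts) = @_;

    for my $k (@$delete) {
	my $d = $options->{$k} || die "no such option '$k'\n";
	die "unable to delete required option '$k'\n" if !$d->{optional};
	die "unable to delete fixed option '$k'\n" if $d->{fixed};
	die "cannot set and delete property '$k' at the same time!\n"
	    if defined($new_opts->{$k});
    }
}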
> + delete $job->{$k};
> + }
> +
> + my $schedule_updated = 0;
> + if ($param->{schedule} ne $job->{schedule}) {
> + $schedule_updated = 1;
> + }
erm, why not?:
my $schedule_updated = $param->{schedule} ne $job->{schedule};
> +
> + foreach my $k (keys %$param) {
probably just a copy-paste "error", but please: s/foreach/for/, or even:
$job->{$_} = $param->{$_} for keys $param->%*;
> + $job->{$k} = $param->{$k};
> + }
> +
> + if ($schedule_updated) {
> + PVE::Jobs::updated_job_schedule($id, 'realmsync');
> + }
> +
> + cfs_write_file('jobs.cfg', $jobs);
> + return;
> + };
> + cfs_lock_file('jobs.cfg', undef, $update_job);
could use a direct sub {} instead of a named $code_ref, like the first
cfs_lock_file usage here already does.
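i.e., roughly:

cfs_lock_file('jobs.cfg', undef, sub {
    my $jobs = cfs_read_file('jobs.cfg');
    # ... update logic from above ...
    cfs_write_file('jobs.cfg', $jobs);
});
die "$@" if $@;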
> + die "$@" if ($@);
> + }});
> +
> +
> +__PACKAGE__->register_method({
> + name => 'delete_job',
> + path => '{id}',
> + method => 'DELETE',
> + description => "Delete domain-sync job definition.",
> + permissions => {
> + check => ['perm', '/', ['Sys.Modify']],
> + },
> + protected => 1,
> + parameters => {
> + additionalProperties => 0,
> + properties => {
> + id => {
> + type => 'string',
> + format => 'pve-configid',
> + },
> + },
> + },
> + returns => { type => 'null' },
> + code => sub {
> + my ($param) = @_;
> +
> + my $id = $param->{id};
> +
> + my $delete_job = sub {
> + my $jobs = cfs_read_file('jobs.cfg');
> +
> + if (!defined($jobs->{ids}->{$id})) {
why not check the job type here too (like on get/update)?
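e.g.:

my $job = $jobs->{ids}->{$id};
if (!defined($job) || $job->{type} ne 'realmsync') {
    raise_param_exc({ id => "No such job '$id'" });
}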
> + raise_param_exc({ id => "No such job '$id'" });
> + }
> + delete $jobs->{ids}->{$id};
> +
> + PVE::Jobs::remove_job($id, 'realmsync');
> +
> + cfs_write_file('jobs.cfg', $jobs);
> + unlink "/etc/pve/priv/jobs/realmsync-$id.json";
> + };
> +
> + cfs_lock_file('jobs.cfg', undef, $delete_job);
same as above: could use a direct sub {} instead of a named $code_ref, like
the first cfs_lock_file usage already does.
> + die "$@" if $@;
> +
> + return undef;
> + }});
> +1;
> diff --git a/src/PVE/API2/Makefile b/src/PVE/API2/Makefile
> index 2817f48..9aeb047 100644
> --- a/src/PVE/API2/Makefile
> +++ b/src/PVE/API2/Makefile
> @@ -2,6 +2,7 @@
> API2_SOURCES= \
> AccessControl.pm \
> Domains.pm \
> + Domainsync.pm \
> ACL.pm \
> Role.pm \
> Group.pm \
> diff --git a/src/PVE/Jobs/Makefile b/src/PVE/Jobs/Makefile
> new file mode 100644
> index 0000000..9eed1b2
> --- /dev/null
> +++ b/src/PVE/Jobs/Makefile
> @@ -0,0 +1,6 @@
> +SOURCES=RealmSync.pm
> +
> +.PHONY: install
> +install: ${SOURCES}
> + install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/Jobs
> + for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Jobs/$$i; done
> diff --git a/src/PVE/Jobs/RealmSync.pm b/src/PVE/Jobs/RealmSync.pm
> new file mode 100644
> index 0000000..f19f4a8
> --- /dev/null
> +++ b/src/PVE/Jobs/RealmSync.pm
> @@ -0,0 +1,192 @@
> +package PVE::Jobs::RealmSync;
Hmm, why not DomainSync, or why is the other module called Domainsync? no
biggie, but the inconsistent naming is slightly odd.
> +
> +use strict;
> +use warnings;
missing empty line between strict/warnings and the other use statements
> +use JSON;
> +
> +use PVE::API2::Domains;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Cluster;
> +use PVE::CalendarEvent;
> +use PVE::Tools;
pls sort
> +
> +use base qw(PVE::Jobs::Plugin);
isn't this in pve-manager? that would add a cyclic dependency and thus add
to our (well, most of the time my ;P) bootstrap PITA.
> +
> +sub type {
> + return 'realmsync';
> +}
> +
> +my $props = get_standard_option('realm-sync-options', {
> + realm => get_standard_option('realm'),
> +});
> +
> +sub properties {
> + return $props;
> +}
> +
> +sub options {
> + my $options = {
> + enabled => { optional => 1 },
> + schedule => {},
> + comment => { optional => 1 },
> + node => { optional => 1 },
> + scope => {},
> + };
> + foreach my $opt (keys %$props) {
s/foreach/for/
> + next if defined($options->{$opt});
> + if ($props->{$opt}->{optional}) {
> + $options->{$opt} = { optional => 1 };
> + } else {
> + $options->{$opt} = {};
> + }
> + }
> + $options->{realm}->{fixed} = 1;
> +
> + return $options;
> +}
> +
> +sub decode_value {
> + my ($class, $type, $key, $value) = @_;
> + return $value;
> +}
> +
> +sub encode_value {
> + my ($class, $type, $key, $value) = @_;
> + return $value;
> +}
> +
> +sub createSchema {
> + my ($class, $skip_type) = @_;
> + my $schema = $class->SUPER::createSchema($skip_type);
> +
> + my $opts = $class->options();
> + for my $opt (keys $schema->{properties}->%*) {
> + next if defined($opts->{$opt}) || $opt eq 'id';
> + delete $schema->{properties}->{$opt};
> + }
> +
> + # remove legacy props
> + delete $schema->{properties}->{full};
> + delete $schema->{properties}->{purge};
> +
> + return $schema;
> +}
> +
> +sub updateSchema {
> + my ($class, $skip_type) = @_;
> + my $schema = $class->SUPER::updateSchema($skip_type);
> +
> + my $opts = $class->options();
> + for my $opt (keys $schema->{properties}->%*) {
> + next if defined($opts->{$opt});
> + next if $opt eq 'id' || $opt eq 'delete';
> + delete $schema->{properties}->{$opt};
> + }
> +
> + # remove legacy props
> + delete $schema->{properties}->{full};
> + delete $schema->{properties}->{purge};
> +
> + return $schema;
> +}
> +
> +sub run {
> + my ($class, $conf, $id, $schedule) = @_;
> +
> + my $node = $conf->{node};
> + foreach my $opt (keys %$conf) {
> + delete $conf->{$opt} if !defined($props->{$opt});
> + }
> +
> + my $realm = $conf->{realm};
> + my $statedir = "/etc/pve/priv/jobs";
> + my $statefile = "$statedir/realmsync-$id.json";
> +
> + if (!defined($node)) {
> + # cluster synced
> + my $curtime = time();
use $now if you want it short; IMO that's a bit better than somewhat
randomly cut-off & directly concatenated words.
> + my $nodename = PVE::INotify::nodename();
> + PVE::Cluster::cfs_update();
doesn't the job infra do this for us?
> + # check statefile in pmxcfs if we should start
> + my $shouldrun = PVE::Cluster::cfs_lock_domain($realm, undef, sub {
> + mkdir $statedir;
> + my $raw = eval { PVE::Tools::file_get_contents($statefile) } // '';
> + my $members = PVE::Cluster::get_members();
> +
> + my $state = ($raw =~ m/^(\{.*\})$/) ? decode_json($1) : {};
> + my $lastnode = $state->{node} // $nodename;
> + my $lastupid = $state->{upid};
> + my $lasttime = $state->{time};
> +
> + my $node_online = 0;
> + if ($lastnode ne $nodename) {
> + if (defined(my $member = $members->{$lastnode})) {
> + $node_online = $member->{online};
> + }
> + } else {
> + $node_online = 1;
> + }
> +
> + if (defined($lastupid)) {
> + # first check if the next run is scheduled
> + if (my $parsed = PVE::Tools::upid_decode($lastupid, 1)) {
> + my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
> + my $next_sync = PVE::CalendarEvent::compute_next_event($calspec, $parsed->{starttime});
> + return 0 if !defined($next_sync) || $curtime < $next_sync; # not yet its (next) turn
> + }
> + # check if still running and node is online
> + my $tasks = PVE::Cluster::get_tasklist();
> + for my $task (@$tasks) {
> + next if $task->{upid} ne $lastupid;
> + last if defined($task->{endtime}); # it's already finished
> + last if !$node_online; # it's not finished but the node is offline
> + return 0; # not finished and online
> + }
> + } elsif (defined($lasttime) && ($lasttime+60) > $curtime && $node_online) {
> + # another node started in the last 60 seconds and is online
> + return 0;
> + }
> + # any of the following coniditions should be true here:
s/coniditions/conditions/
> + # * it was started on another node but that node is offline now
> + # * it was started but either too long ago, or with an error
> + # * the started task finished
> +
> + PVE::Tools::file_set_contents($statefile, encode_json({
> + node => $nodename,
> + time => $curtime,
> + }));
> + return 1;
> + });
> + die $@ if $@;
> +
> + if ($shouldrun) {
> + my $err = undef;
drop that and..
> + my $upid = eval { PVE::API2::Domains->sync($conf) };
> + $err = $@ if $@;
just save the error with:
my $err = $@;
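i.e., combined:

my $upid = eval { PVE::API2::Domains->sync($conf) };
my $err = $@;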
> + PVE::Cluster::cfs_lock_domain($realm, undef, sub {
> + if ($err && !$upid) {
> + PVE::Tools::file_set_contents($statefile, encode_json({
> + node => $nodename,
> + time => $curtime,
> + error => $err,
> + }));
> + die "$err\n";
> + }
> +
> + PVE::Tools::file_set_contents($statefile, encode_json({
> + node => $nodename,
> + upid => $upid,
> + }));
> + });
> + die $@ if $@;
> + return $upid;
> + } else {
> + return "OK";
> + }
> + }
> +
> + return PVE::API2::Domains->sync($conf);
> +}
> +
> +1;
> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
> index c839d8f..dfc5314 100644
> --- a/src/PVE/Makefile
> +++ b/src/PVE/Makefile
> @@ -8,3 +8,4 @@ install:
> install -D -m 0644 TokenConfig.pm ${DESTDIR}${PERLDIR}/PVE/TokenConfig.pm
> make -C API2 install
> make -C CLI install
> + make -C Jobs install