[pbs-devel] [PATCH v3 proxmox-backup] ui: warn of missing gc-schedule, prune/verify jobs
Christian Ebner
c.ebner at proxmox.com
Mon Dec 11 10:44:16 CET 2023
> On 11.12.2023 10:09 CET Thomas Lamprecht <t.lamprecht at proxmox.com> wrote:
>
>
> Am 06/12/2023 um 15:43 schrieb Christian Ebner:
> > True, and I am aware of that. That is why I opted for a warning rather than
> > showing these as errors. But of course an informational state might be even
> > more fitting here. Still, I would like to not hide the not-configured states
> > too much, as the main point of adding these is to inform the user that they
> > might be missing something, after all. Maybe I should also add a tooltip
> > showing some explanation for the warning/info when hovered?
>
> IMO it should then really focus on being informative, not warning, and
> tooltips are not very accessible.
> I mean, what we actually want here is to avoid that a PBS gets full by
> accident because the admin forgot, or did not understand, that they had to
> set up a prune and gc job for anything to be deleted.
> That might be better handled by generating a low-storage space notification
> that is actively sent (with status quo over the mail forwarder, in the
> future with Lukas' new notification stack) to the admin.
Yes, that sounds like an even better approach.
However, how should the storage space be checked and the notification be
generated in that case?
One option would be to have a periodic job, which, however, could once again
be misconfigured and/or fail. Or do you have something else in mind here?
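As a rough illustration, such a periodic check could boil down to comparing
the reported usable space of a datastore against a configurable threshold.
The following is a hypothetical sketch; the function name, the status shape,
and the default threshold are illustrative, not the actual PBS API:

```javascript
// Hypothetical sketch: decide whether a low-storage notification should
// be sent, based on a datastore status object reporting byte counts.
function shouldNotifyLowStorage(status, thresholdPercent = 10) {
    // status: { total, avail } in bytes, as a status endpoint might
    // report them; guard against an unknown/empty total
    if (!status.total) {
        return false;
    }
    const usablePercent = (status.avail / status.total) * 100;
    return usablePercent < thresholdPercent;
}
```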
> Such a notification could contain additional hints if there's no GC or no
> prune jobs, or if we want to be fancy, that prune jobs do not cover all
> namespaces (but IMO overkill for starters).
;) I already worked on this last week; getting the namespaces and jobs and
checking whether they are covered was not that complex after all. It only
required fetching the namespaces per datastore via the API and comparing the
namespace depth with the job's namespace level and configured max depth.
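The comparison described above could look roughly like this. This is a
hypothetical sketch, modeling namespaces as '/'-separated paths with '' as
the root; the names `jobCoversNamespace`, `job.ns`, and `job.maxDepth` are
illustrative, not the actual PBS data model:

```javascript
// Depth of a namespace path, with '' meaning the root namespace.
function nsDepth(ns) {
    return ns === '' ? 0 : ns.split('/').length;
}

// A job rooted at job.ns with job.maxDepth covers a namespace if the
// namespace is the job root itself or nested underneath it, and lies
// within maxDepth levels of the root.
function jobCoversNamespace(job, ns) {
    const isBelow = job.ns === '' || ns === job.ns || ns.startsWith(job.ns + '/');
    if (!isBelow) {
        return false;
    }
    return nsDepth(ns) - nsDepth(job.ns) <= job.maxDepth;
}

// Collect all namespaces that no configured job covers.
function uncoveredNamespaces(jobs, namespaces) {
    return namespaces.filter(ns => !jobs.some(job => jobCoversNamespace(job, ns)));
}
```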
>
> When to send might be configurable, but probably something like < 10%
> usable space left by default.
> This would help users that simply forgot setting such things up, which,
> while rare, has a big impact, while not annoying/scare-mongering users that
> want that behavior explicitly, for whatever use case that is.
>
> >
> > Hmm, good point. Maybe I should not show the state information (and fetch
> > via the API) for that job at all if the user has no privileges? Will have
> > a look if I can exclude these.
>
> This could also be fixed by polling the permissions on login and ticket
> renewal and using those to hide some panels or disable some buttons if
> they cannot work anyway.
>
> Sorta like we do in PVE, but I'd avoid the odd heuristic there that merges
> the available privs into some top-level object; rather, just load the whole
> ACL-tree privilege info into a store and check it there, e.g., through new
> `Proxmox.ACL` module `check` (and maybe `checkAny` for any-of) methods that
> we can use at callsites like a check(`/datastore/${name}`, 'Datastore.Audit')
> call. If we have more complex cases in the future, we can also add a method
> that takes an object to denote such things like:
> { any: ['Priv.Foo', 'Priv.Bar'], all: ['Priv.Baz'] }
> But I'd start small; one can also just use local variables to query each
> priv and combine them with a boolean expression into whatever the callsites
> need for now, so `check` and maybe `checkAny` should cover most cases.
I also had a look at this last week, especially at how this is handled in PVE,
as I saw that we do not currently store this information for the PBS UI.
However, I noticed that without the 'Datastore.Audit' permission, the user
cannot access the datastore summary panel at all, and if the user has that
permission, read access to the jobs is granted as well, so it would not have
been necessary to check the permissions individually for this particular case.
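For reference, the `check`/`checkAny` helper discussed above could be
sketched like this. This is purely hypothetical and not the actual Proxmox
widget-toolkit API; it assumes the full ACL-tree privilege info was loaded
into a path-to-privileges map on login / ticket renewal:

```javascript
// Hypothetical privilege store; privMap maps an ACL path to the list of
// privilege names the current user holds on it.
const PrivStore = {
    privMap: {}, // e.g. { '/datastore/store1': ['Datastore.Audit'] }

    // Load the privilege info fetched on login or ticket renewal.
    load(privMap) {
        this.privMap = privMap;
    },

    // True if the user holds ALL of the given privileges on `path`.
    check(path, ...privs) {
        const have = this.privMap[path] ?? [];
        return privs.every(priv => have.includes(priv));
    },

    // True if the user holds ANY of the given privileges on `path`.
    checkAny(path, ...privs) {
        const have = this.privMap[path] ?? [];
        return privs.some(priv => have.includes(priv));
    },
};
```

A callsite would then read like `PrivStore.check(`/datastore/${name}`,
'Datastore.Audit')` to decide whether to render a panel.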