[pbs-devel] [PATCH v3 proxmox-backup] ui: warn of missing gc-schedule, prune/verify jobs

Thomas Lamprecht t.lamprecht at proxmox.com
Mon Dec 11 11:43:30 CET 2023


On 11/12/2023 at 10:44, Christian Ebner wrote:
>> On 11.12.2023 10:09 CET Thomas Lamprecht <t.lamprecht at proxmox.com> wrote:
>> On 06/12/2023 at 15:43, Christian Ebner wrote:
>>> True, and I am aware of that. That is why I opted for a warning rather than
>>> showing these as errors. But of course an informational note might be even more
>>> fitting here. However, I would prefer not to hide the not-configured states too
>>> much, as the main point of adding them is to inform users that they might be
>>> missing something, after all. Maybe I should also add a tooltip showing some
>>> explanation for the warning/info on hover?
>>
>> IMO it should then really focus on being informative, not warning, and
>> tooltips are not very accessible.
>> I mean, what we actually want here is to avoid that a PBS gets full by
>> accident because the admin forgot, or did not understand, that they had to
>> set up a prune and gc job for anything to be deleted.
>> That might be better handled by generating a low-storage space notification
>> that is actively sent (with status quo over the mail forwarder, in the
>> future with Lukas' new notification stack) to the admin.
> 
> Yes, that sounds like an even better approach.
> 
> However, how should the storage space be checked and the notification be
> generated in that case?
> One option would be a periodic job, which, however, could once again be
> misconfigured and/or fail. Or do you have something else in mind here?

I'd just use the daily timer which we use for sending out notifications about
available apt updates.

I.e., just some fixed-schedule daily health check that can, e.g., be
configured in the node's config, but only for some parameters like the
low-water mark for when to send the alert.
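As a rough illustration of such a check (the names `shouldAlert` and
`lowWaterMarkPercent` are hypothetical here, not existing PBS code or config
keys), the daily task could compare datastore usage against a configurable
threshold and only trigger the notification above it:

```javascript
// Hypothetical sketch: decide whether a low-storage alert should be sent.
// `usedBytes`/`totalBytes` would come from the datastore status,
// `lowWaterMarkPercent` from the (proposed) node config; none of these
// names exist in PBS as-is.
function shouldAlert(usedBytes, totalBytes, lowWaterMarkPercent = 85) {
    if (totalBytes <= 0) {
        return false; // no usable status info, nothing to report
    }
    const usedPercent = (usedBytes / totalBytes) * 100;
    return usedPercent >= lowWaterMarkPercent;
}
```

The actual implementation would of course live server-side next to the other
daily tasks; the sketch only shows the threshold logic.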

>> Sorta like we do in PVE, but I'd avoid that odd heuristic there that merges
>> the available privs into some top-level object; rather, just load the whole
>> ACL-tree privilege info in a store and check it there, e.g., through a new
>> `Proxmox.ACL` module `check` (and maybe `checkAny` for any-of) method that
>> we can use at call sites like check(`/datastore/${name}`, 'Datastore.Audit').
>> If we have more complex cases in the future, we can also add a method
>> that takes an object to denote such things like:
>>  { any: ['Priv.Foo', 'Priv.Bar'], all: ['Priv.Baz'] }
>> But I'd start small; one can also just use local variables to query each
>> priv and then combine them with a boolean expression to whatever the
>> call sites need for now anyway, so `check` and maybe `checkAny` should cover
>> most cases.
> 
> I also had a look at this last week, especially at how it is handled in PVE,
> as I saw that we currently do not store the same information for the PBS UI.
> 
> I did notice, however, that without the 'Datastore.Audit' permission, the user
> cannot access the datastore summary panel at all, and if the user has that
> permission, read access to the jobs is granted as well, so it would not have
> been necessary to check the permissions individually for this particular case.

We rely on that in PVE as well, i.e., we do not check for specific VMIDs, as
access there is covered by what the API returns anyway. The overall datastore
list would be the same, but what one can do within a datastore would not, so
yeah, Datastore.Audit might not have been the best example here.
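For what it's worth, a minimal sketch of what such `check`/`checkAny` helpers
could look like (the store layout and the `Acl` object here are assumptions
for illustration, not the actual Proxmox widget-toolkit API):

```javascript
// Hypothetical sketch of a privilege store with check/checkAny helpers.
// The path -> privilege mapping would be loaded from the permissions API;
// here it is just a plain object for illustration.
const Acl = {
    // e.g. { '/datastore/store1': { 'Datastore.Audit': 1, ... }, ... }
    privs: {},

    load(privs) {
        this.privs = privs;
    },

    // true if the user has ALL of the given privileges on `path`
    check(path, ...required) {
        const have = this.privs[path] || {};
        return required.every((priv) => !!have[priv]);
    },

    // true if the user has ANY of the given privileges on `path`
    checkAny(path, ...required) {
        const have = this.privs[path] || {};
        return required.some((priv) => !!have[priv]);
    },
};
```

A call site would then read roughly like
Acl.check('/datastore/' + name, 'Datastore.Audit'), matching the example
quoted above.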



