[pdm-devel] [PATCH proxmox-datacenter-manager 00/15] change task cache mechanism from time-based to max-size FIFO
Thomas Lamprecht
t.lamprecht at proxmox.com
Wed Feb 5 16:34:30 CET 2025
On 31.01.25 at 10:35, Lukas Wagner wrote:
> Some additional context to explain the 'why', since @Thomas and @Wolfgang requested it:
>
> The status quo for the task cache is to fetch a certain time-based range of tasks
> (iirc the last seven days) from every remote and cache this for a certain period
> of time (max-age). If the cached data is too old, we discard the task data
> and fetch the same time range again.
> My initial reasoning behind designing it like this was to keep the
> 'ground truth' completely on the remote side, so *if* somebody were to mess with
Yeah, I agree with Wolfgang here; nothing in our API allows messing with that,
nor does any first-class PVE CLI command or the like. If we want to hedge
against that, we basically cannot cache anything at all, rendering caches
rather useless, as even RRD metrics could be altered.
Using the start-time of the newest available task from the cache as (inclusive)
boundary to request the tasks that happened since the last query would be
enough, no?
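In rough Rust-ish pseudocode, just as a sketch to illustrate what I mean; the
type and the `fetch_tasks_since` helper are made up here, not the actual
PDM/PVE API:

    struct TaskEntry {
        starttime: i64, // task start as unix epoch
        upid: String,   // unique task identifier
    }

    // stand-in for the remote task-list API request with a start-time filter
    fn fetch_tasks_since(_since: i64) -> Vec<TaskEntry> {
        unimplemented!()
    }

    fn refresh(cache: &mut Vec<TaskEntry>) {
        // newest-first ordering assumed; use the newest known start-time as
        // inclusive lower bound so a task starting in that same second is
        // not missed
        let since = cache.first().map(|t| t.starttime).unwrap_or(0);
        for task in fetch_tasks_since(since) {
            // the inclusive boundary re-transmits the newest known task(s),
            // so filter out duplicates via their UPID
            if !cache.iter().any(|t| t.upid == task.upid) {
                cache.insert(0, task);
            }
        }
    }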
Pruning older tasks from the cache once some max-age or max-size limit is
exceeded would then still make sense though.
IMO, for purging older task log entries to avoid a huge cache, we should mainly
focus on the age limit for entries; e.g., most of our dashboards will focus on
showing the task results of the last X hours, where a sensible minimum for X is
probably at least 24 hours, better more like 3 to 5 days, to be able to
(quickly) see what happened over a weekend, maybe with some holiday attached
to it.
A generously high size limit as an additional upper bound to avoid running out
of space might still be nice, to hedge against some node, user, or API tooling
going crazy and producing a huge amount of tasks.
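As a sketch of that combined policy, reusing the hypothetical TaskEntry from
above (the concrete limits are made up for illustration, not a suggestion for
the actual values):

    const MAX_AGE_SECS: i64 = 5 * 24 * 3600; // primary limit: ~5 days of history
    const MAX_ENTRIES: usize = 100_000;      // generous safety cap

    fn prune(cache: &mut Vec<TaskEntry>, now: i64) {
        // age-based pruning first, as that is what the dashboards care about
        cache.retain(|t| now - t.starttime <= MAX_AGE_SECS);
        // the size cap only hedges against a remote flooding us with tasks;
        // newest-first ordering assumed, so this drops the oldest entries
        cache.truncate(MAX_ENTRIES);
    }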
> the task archive, we would be consistent after a refresh. This also allowed
> keeping the caching logic on the PDM side much simpler, since we are not
> doing much more than caching the API response from the remote - the same way
> we already do it for resources, subscription status, etc.
>
> The downside is that we have some unnecessary traffic, since we keep on
> fetching old tasks that we already received.
>
> I originally posted this as an RFC right before the holidays to get an
> initial version out and gather some feedback (I wasn't too sure myself
> whether the original approach was a good idea), and also to somewhat unblock
> UI development for the remote task view.
>
> The RFC patches were applied relatively quickly; in a short conversation with Dietmar
> he mentioned that it might be better to just fetch the most recent missing tasks so
> that we don't have to retransmit the old data over and over again.
Yeah, I'm still not convinced that rushing such things through, especially
at RFC stage, will make overall development go quicker. Keeping communication
(solely) locked in isolated off-list discussion isn't helping for sure though.
Here, both of these IMO caused rather more overhead.
> Some time later Dominik also approached me and suggested the approach that
> is implemented in this patch series, which is:
> - instead of simply caching the remote's API response, we try to replicate
> the task archive locally
> - memorize the time when we last got the latest tasks and only request
> what is missing since then
> - limit the replicated task archive's size, dropping the oldest tasks
> when the size is exceeded
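If I read the series' intention right, that boils down to roughly the
following (again just a sketch with invented names to check my understanding,
reusing the hypothetical TaskEntry and fetch_tasks_since from above):

    use std::collections::VecDeque;

    struct ReplicatedArchive {
        tasks: VecDeque<TaskEntry>, // newest at the front, oldest at the back
        cutoff: i64,                // start-time of the newest task we got
        max_size: usize,            // FIFO size limit
    }

    impl ReplicatedArchive {
        fn sync(&mut self) {
            // only request what is missing since the last sync
            for task in fetch_tasks_since(self.cutoff) {
                if self.tasks.iter().any(|t| t.upid == task.upid) {
                    continue; // inclusive boundary re-sends known tasks
                }
                self.cutoff = self.cutoff.max(task.starttime);
                self.tasks.push_front(task);
                // FIFO: drop the oldest task once the size limit is exceeded
                if self.tasks.len() > self.max_size {
                    self.tasks.pop_back();
                }
            }
        }
    }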
>
> I didn't have any objections so I went ahead and implemented it.
>
> The main benefit is that we have to transmit much less data.
Yes, but the original idea would act the same in this regard if it did not
hedge against things that cannot really happen without manually messing around
(at which point all bets are off); or what am I missing here? To be fair: I
only skimmed the patches and asked Wolfgang a bit about it as he looked through
them more closely, so I might indeed be missing something or just be slightly
confused about the presentation here.