[pdm-devel] [PATCH proxmox-datacenter-manager 00/15] change task cache mechanism from time-based to max-size FIFO

Lukas Wagner l.wagner at proxmox.com
Thu Feb 6 11:13:45 CET 2025


On 2025-02-05 16:34, Thomas Lamprecht wrote:
> Am 31.01.25 um 10:35 schrieb Lukas Wagner:
>> Some additional context to explain the 'why', since @Thomas and @Wolfgang requested it:
>>
>> The status quo for the task cache is to fetch a certain time-based range of tasks
>> (iirc the last seven days) from every remote and cache this for a certain period
>> of time (max-age). If the cached data is too old, we discard the task data
>> and fetch the same time range again.
>> My initial reasoning behind designing it like this was to keep the
>> 'ground truth' completely on the remote side, so *if* somebody were to mess with
> 
> Yeah, I agree with Wolfgang here, nothing in our API allows messing with that,
> neither does a first-class PVE CLI command or the like. If we want to hedge
> against that, we basically cannot cache anything at all, rendering the cache rather
> useless, as even RRD metrics could be altered.
> 
> Using the start-time of the newest available task from the cache as (inclusive)
> boundary to request newer tasks that happened since the last query would be
> enough, no?
> 
> Pruning older tasks from the cache once some max-time or max-size limit
> is exceeded would then make sense, though.

I mean yes, this is pretty much what I implemented in this patch series:
we keep track of the most recent start time of a task that is already in the cache
and only request what is missing in future refreshes. At the moment, we limit
the cache by its size, discarding the oldest entries when the max-size is exceeded.
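To illustrate the mechanism, here is a minimal, hypothetical sketch in Rust. All type and field names here are made up for illustration; the actual cache in the series is more involved (per-remote state, on-disk persistence, etc.):

```rust
use std::collections::VecDeque;

// Hypothetical, simplified task entry; the real archive stores full UPIDs,
// status, endtime, and so on.
#[derive(Clone, Debug, PartialEq)]
struct TaskEntry {
    starttime: i64, // UNIX epoch of the task's start
    upid: String,
}

// Max-size FIFO cache: newest tasks at the front, oldest at the back.
struct TaskCache {
    tasks: VecDeque<TaskEntry>,
    max_size: usize,
}

impl TaskCache {
    fn new(max_size: usize) -> Self {
        Self { tasks: VecDeque::new(), max_size }
    }

    // Start time of the newest cached task; the next refresh only needs to
    // request tasks with `starttime >= cutoff()`.
    fn cutoff(&self) -> Option<i64> {
        self.tasks.front().map(|t| t.starttime)
    }

    // Insert newly fetched tasks (assumed sorted newest-first) and drop
    // the oldest entries once the size limit is exceeded.
    fn add_tasks(&mut self, new_tasks: Vec<TaskEntry>) {
        for task in new_tasks.into_iter().rev() {
            // Skip anything we already have (dedup by UPID), since the
            // cutoff boundary is inclusive.
            if !self.tasks.iter().any(|t| t.upid == task.upid) {
                self.tasks.push_front(task);
            }
        }
        while self.tasks.len() > self.max_size {
            self.tasks.pop_back(); // FIFO: discard the oldest entry
        }
    }
}
```

The inclusive cutoff plus deduplication avoids both gaps (tasks starting in the same second as the newest cached one) and duplicates on refresh.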

> IMO, for purging older task log entries to avoid a huge cache we should mainly
> focus on an age-of-entries limit, e.g. most of our dashboards will focus on showing
> the task results since X hours, where a sensible minimum for X is probably at
> least 24 hours, probably better more like 3 to 5 days to be able to (quickly)
> see what happened over a weekend, maybe with some holiday attached to it.

Sure, if you want, I can change it to use a time-based approach with a generous maximum size.
That should be a pretty simple change to incorporate.
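Sketched out, the age-primary pruning with a generous size cap could look something like this (names and the flat `Vec<i64>` of start times are purely illustrative, not the actual implementation):

```rust
// Hypothetical pruning limits: age is the primary criterion, the size
// limit is only a generous upper bound against runaway task creation.
struct PruneLimits {
    max_age_secs: i64,  // e.g. 5 days, so a weekend plus a holiday stays visible
    max_entries: usize, // generous cap as an additional safety bound
}

// `starttimes` is sorted newest-first; each element is a task's start time.
fn prune(starttimes: &mut Vec<i64>, now: i64, limits: &PruneLimits) {
    // Age-based limit: drop everything older than `max_age_secs`.
    starttimes.retain(|&t| now - t <= limits.max_age_secs);
    // Size-based upper bound, applied after the age limit; since the list
    // is sorted newest-first, truncating drops the oldest entries.
    starttimes.truncate(limits.max_entries);
}
```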

> 
> A generously high size limit as additional upper bound to avoid running out of
> space might still be nice to hedge against some node, user or api-tooling going
> crazy and producing a huge amount of tasks.
> 
>> the task archive, we would be consistent after a refresh. Also, this allowed us to
>> keep the caching logic on the PDM side much simpler, since we are not doing much more
>> than caching the API response from the remote - the same way we already do it
>> for resources, subscription status, etc.
>>
>> The downside is some unnecessary traffic, since we keep fetching old tasks
>> that we have already received.
>>
>> I originally posted this as an RFC right before the holidays to get some feedback
>> on the approach (I wasn't too sure myself whether the original approach was a good
>> idea) and also to somewhat unblock UI development for the remote task view.
>>
>> The RFC patches were applied relatively quickly; in a short conversation with Dietmar
>> he mentioned that it might be better to just fetch the most recent missing tasks so
>> that we don't have to retransmit the old data over and over again.
> 
> Yeah, I'm still not convinced that rushing such things through, especially
> at the RFC stage, will make overall development go quicker. Keeping communication
> (solely) locked in isolated off-list discussions isn't helping for sure though.
> Here, both of these IMO caused rather more overhead.
> 
>> Some time later, Dominik also approached me and suggested the approach that is
>> implemented in this patch series, which is:
>>   - instead of simply caching the remotes API response, we try to replicate
>>     the task archive locally
>>   - memorize the time when we last got the latest tasks and only request
>>     what is missing since then
>>   - limit the replicated task archive's size, dropping the oldest tasks
>>     when the size is exceeded
>>
>> I didn't have any objections so I went ahead and implemented it.
>>
>> The main benefit is that we have to transmit much less data.
> 
> Yes, but the original idea would act the same in this regard if it did not
> hedge against things that cannot really happen without manually messing around
> (at which point all bets are off); or what am I missing here? To be fair: I
> only skimmed the patches and asked Wolfgang a bit about it as he looked through
> them more closely, so I might indeed miss something or just be slightly confused
> about the presentation here.

The initial approach, which was merged too soon, had some flaws; these were kindly
pointed out to me off-list, along with some ideas for improvement.
Essentially, the key point of *this* series is to transform the original approach
into one that does not 'hedge against things that cannot really happen'.

I'm sorry about any confusion here. I'll try to provide more context from the start if something similar
comes up again. I'll also try to communicate my questions and doubts more openly when posting an RFC
(as I said, I wasn't sure whether the initial approach from the RFC was good, hence the RFC in the first
place), e.g. in the cover letter, to make sure that a proper on-list discussion takes place before
anything is merged.

Thanks for your input on this.

-- 
- Lukas




