[pdm-devel] [PATCH proxmox-datacenter-manager 00/15] change task cache mechanism from time-based to max-size FIFO

Wolfgang Bumiller w.bumiller at proxmox.com
Fri Jan 31 14:36:06 CET 2025


On Fri, Jan 31, 2025 at 10:35:03AM +0100, Lukas Wagner wrote:
> 
> 
> On 2025-01-28 13:25, Lukas Wagner wrote:
> > This patch series changes the remote task caching behavior from a purely
> > time-based cache to a FIFO cache-replacement policy with a maximum number of
> > cached tasks per remote. If the maximum number is exceeded, the oldest tasks
> > are dropped from the cache.
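> >
> > Roughly, the replacement policy boils down to something like the
> > following sketch (hypothetical names and types, not the actual code):
> >
> >     use std::collections::VecDeque;
> >
> >     /// Minimal stand-in for a cached task entry.
> >     struct TaskEntry {
> >         upid: String,
> >         starttime: i64,
> >     }
> >
> >     /// Per-remote task cache with a FIFO replacement policy.
> >     struct RemoteTaskCache {
> >         /// Maximum number of tasks kept for this remote.
> >         max_tasks: usize,
> >         /// Oldest tasks at the front, newest at the back.
> >         tasks: VecDeque<TaskEntry>,
> >     }
> >
> >     impl RemoteTaskCache {
> >         /// Append newly fetched tasks and drop the oldest entries
> >         /// once the per-remote maximum is exceeded.
> >         fn add_tasks(&mut self, new_tasks: impl IntoIterator<Item = TaskEntry>) {
> >             self.tasks.extend(new_tasks);
> >             while self.tasks.len() > self.max_tasks {
> >                 self.tasks.pop_front();
> >             }
> >         }
> >     }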
> > 
> > When the remote-tasks API is called, any recent task data that is not yet in
> > the cache is requested from the remotes. At the moment this is limited to
> > once every 5 minutes, with the option of a force-refresh (to be triggered by
> > a refresh button in the UI). As before, we augment the cached task data with
> > the currently running tracked tasks which were started by PDM.
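> >
> > The rate limiting is essentially just the following check (again a
> > hypothetical sketch, not the actual code):
> >
> >     use std::time::{Duration, Instant};
> >
> >     /// Minimum interval between two task fetches from the same remote.
> >     const REFRESH_INTERVAL: Duration = Duration::from_secs(5 * 60);
> >
> >     /// Decide whether we should ask the remote for new tasks again.
> >     fn needs_refresh(last_fetch: Option<Instant>, force: bool) -> bool {
> >         match last_fetch {
> >             // a forced refresh (refresh button in the UI) always goes through
> >             _ if force => true,
> >             // nothing fetched yet - fetch now
> >             None => true,
> >             // otherwise only if the interval has elapsed
> >             Some(at) => at.elapsed() >= REFRESH_INTERVAL,
> >         }
> >     }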
> > 
> > Some words about the cache storage implementation:
> > The storage backend for this cache probably needs some more love in the
> > future. Right now it's just a single JSON file for everything, mainly because
> > this was the quickest approach to implement and unblocked UI development work.
> > The problem with this approach is that it does not perform well for setups
> > with a large number of remotes, since every update rewrites the entire cache
> > file when the cache is persisted, causing additional CPU and IO load.
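> >
> > For illustration, persisting currently amounts to rewriting the whole
> > file on every update (a rough sketch assuming serde_json and an atomic
> > tmp-file rename; the names are made up):
> >
> >     use std::{fs, io, path::Path};
> >
> >     /// Serialize the complete cache and atomically replace the file.
> >     /// The cost grows with the total cache size, not with the size
> >     /// of the update - hence the extra CPU and IO load.
> >     fn persist_cache(path: &Path, cache: &serde_json::Value) -> io::Result<()> {
> >         let data = serde_json::to_vec(cache).map_err(io::Error::other)?;
> >         let tmp = path.with_extension("tmp");
> >         fs::write(&tmp, &data)?; // rewrite everything ...
> >         fs::rename(&tmp, path)?; // ... then swap it into place
> >         Ok(())
> >     }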
> > 
> > In the future, we should use a similar mechanism to the task archive in PBS.
> > I'm not sure if the exact same mechanism can be used due to some differing
> > requirements, but the general direction probably fits quite well.
> > If we want to reuse it 1:1, we have to break it out of (iirc) the WorkerTask
> > struct to make it reusable.
> > It will probably require some experimentation and benchmarking to find an
> > ideal approach.
> > We probably don't want a separate archive per remote, since we do not want
> > to read hundreds or thousands of files when we request an aggregated remote
> > task history via the API. But having the complete archive in a single file
> > also seems quite challenging - we need to keep the data sorted, while also
> > handling task data that arrives out of order from different remotes. Keeping
> > the data sorted when new data arrives leads us to the same problem as with
> > the JSON file: we have to rewrite the file over and over again, causing load
> > and writes.
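> >
> > To illustrate: a naive single-file variant would end up doing
> > something like this on every merge (hypothetical sketch, with
> > TaskEntry as in the sketch above):
> >
> >     /// Merge newly arrived tasks into the sorted archive. Tasks from
> >     /// different remotes arrive out of order, so the archive has to
> >     /// be re-sorted - and then written out in full again.
> >     fn merge_into_archive(archive: &mut Vec<TaskEntry>, new_tasks: Vec<TaskEntry>) {
> >         archive.extend(new_tasks);
> >         // newest tasks first
> >         archive.sort_by_key(|task| std::cmp::Reverse(task.starttime));
> >     }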
> > 
> > The good news is that since this is just a cache, we are pretty free to change
> > the storage implementation without too much trouble; we don't even have to
> > migrate the old data, since it should not be an issue to simply request the
> > data from the remotes again. This is the main reason why I've opted
> > to keep the JSON file for now; I or somebody else can revisit this at a later
> > time.
> > 
> 
> Some additional context to explain the 'why', since @Thomas and @Wolfgang requested it:
> 
> The status quo for the task cache is to fetch a certain time-based range of tasks
> (iirc the last seven days) from every remote and cache this for a certain period
> of time (max-age). If the cached data is too old, we discard the task data
> and fetch the same time range again.
> My initial reasoning behind designing it like this was to keep the
> 'ground truth' completely on the remote side, so *if* somebody were to mess with

I don't think "messing with the task archive" is something we want to
worry about unless it's "easy enough".

> the task archive, we would be consistent after a refresh. Also, this allowed
> keeping the caching logic on the PDM side much simpler, since we are not doing
> much more than caching the API response from the remote - the same way we
> already do it for resources, subscription status, etc.
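>
> In other words, the old policy was little more than a staleness check
> (hypothetical sketch):
>
>     use std::time::{Duration, SystemTime};
>
>     /// The old, purely time-based policy: if the cached response is
>     /// older than max-age, drop it and fetch the same time range
>     /// (e.g. the last seven days) from the remote again.
>     fn cache_is_stale(fetched_at: SystemTime, max_age: Duration) -> bool {
>         fetched_at.elapsed().map(|age| age > max_age).unwrap_or(true)
>     }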
> 
> The downside is some unnecessary traffic, since we kept on fetching old tasks
> that we had already received.
> 
> I originally posted this as an RFC right before the holidays to get an initial
> version out and gather some feedback (I wasn't too sure myself whether the
> original approach was a good idea), and also to somewhat unblock UI development
> for the remote task view.

There are definitely some things which need to be improved regardless of
which version we use.
This version otherwise does look okay to me, I suppose.

> 
> The RFC patches were applied relatively quickly; in a short conversation,
> Dietmar mentioned that it might be better to just fetch the most recent
> missing tasks so that we don't have to retransmit the old data over and over
> again.
> 
> Some time later, Dominik also approached me and suggested the approach that is
> implemented in this patch series, which is:
>   - instead of simply caching the remote's API response, we try to replicate
>     the task archive locally
>   - memorize the time when we last got the latest tasks and only request
>     what is missing since then (see the sketch after this list)
>   - limit the replicated task archive's size, dropping the oldest tasks
>     when the size is exceeded
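>
> The second point is the key difference to before: instead of
> re-fetching a fixed time range, we remember a cutoff and only ask for
> newer tasks (hypothetical sketch, not the actual remote API):
>
>     /// Hypothetical filter passed to a remote's task list endpoint.
>     #[derive(Default)]
>     struct TaskFilter {
>         /// Only return tasks started at or after this epoch timestamp.
>         since: Option<i64>,
>     }
>
>     /// Request only what is missing since the newest cached task;
>     /// `None` means we know nothing about this remote yet and do a
>     /// full (initial) fetch.
>     fn missing_tasks_filter(newest_cached_starttime: Option<i64>) -> TaskFilter {
>         TaskFilter { since: newest_cached_starttime }
>     }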
> 
> I didn't have any objections so I went ahead and implemented it.
> 
> The main benefit is that we have to transmit much less data.
> The drawback is that the caching logic became more complex, since we have to
> make sure, for example, that we don't end up with duplicated entries in the
> task logs. Also, at least in *theory*, the two could diverge if something were
> to change the task archive on the remote side - I'm not sure how much of a
> concern this is, though.
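>
> The dedup part is at least conceptually simple, since tasks are
> identified by their UPID (hypothetical sketch, with TaskEntry as in
> the cover letter sketch):
>
>     use std::collections::HashSet;
>
>     /// Merge a fetched batch into the cached tasks without creating
>     /// duplicate entries; the UPID uniquely identifies a task.
>     fn merge_deduplicated(cached: &mut Vec<TaskEntry>, fetched: Vec<TaskEntry>) {
>         let known: HashSet<String> =
>             cached.iter().map(|task| task.upid.clone()).collect();
>         cached.extend(
>             fetched
>                 .into_iter()
>                 .filter(|task| !known.contains(&task.upid)),
>         );
>     }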



