[pdm-devel] [PATCH proxmox-datacenter-manager 00/15] change task cache mechanism from time-based to max-size FIFO
Wolfgang Bumiller
w.bumiller at proxmox.com
Fri Jan 31 14:51:09 CET 2025
On Fri, Jan 31, 2025 at 02:36:07PM +0100, Wolfgang Bumiller wrote:
> On Fri, Jan 31, 2025 at 10:35:03AM +0100, Lukas Wagner wrote:
> >
> >
> > On 2025-01-28 13:25, Lukas Wagner wrote:
> > > This patch series changes the remote task caching behavior from a purely
> > > time-based cache to a FIFO cache-replacement policy with a maximum number
> > > of cached tasks per remote. If the maximum number is exceeded, the oldest
> > > tasks are dropped from the cache.
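> > >
> > > A rough sketch of the eviction idea (type and constant names here are
> > > illustrative, not the actual ones from the series):
> > >
> > >     use std::collections::VecDeque;
> > >
> > >     const MAX_TASKS_PER_REMOTE: usize = 1000; // illustrative limit
> > >
> > >     struct TaskCacheEntry {
> > >         starttime: i64,
> > >         // ... upid, status, etc.
> > >     }
> > >
> > >     // Append the newest task and drop the oldest entries once the
> > >     // per-remote limit is exceeded (FIFO replacement).
> > >     fn insert_task(cache: &mut VecDeque<TaskCacheEntry>, task: TaskCacheEntry) {
> > >         cache.push_back(task);
> > >         while cache.len() > MAX_TASKS_PER_REMOTE {
> > >             cache.pop_front();
> > >         }
> > >     }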
> > >
> > > When calling the remote-tasks API, any task data which is not yet in the
> > > cache is requested from the remotes. At the moment we limit this to once
> > > every 5 minutes, with the option of a force-refresh (to be triggered by a
> > > refresh button in the UI). As before, we augment the cached task data
> > > with the currently running tracked tasks which were started by PDM.
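> > >
> > > Roughly, the refresh gate could look like this (function and constant
> > > names are made up for illustration):
> > >
> > >     use std::time::{Duration, Instant};
> > >
> > >     const MIN_REFRESH_INTERVAL: Duration = Duration::from_secs(5 * 60);
> > >
> > >     // Fetch from the remote only if forced, or if the last fetch is
> > >     // older than five minutes (or never happened).
> > >     fn needs_refresh(last_fetch: Option<Instant>, force: bool) -> bool {
> > >         force || last_fetch.map_or(true, |t| t.elapsed() >= MIN_REFRESH_INTERVAL)
> > >     }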
> > >
> > > Some words about the cache storage implementation:
> > > Note that the storage backend for this cache probably needs some more
> > > love in the future. Right now it's just a single JSON file for
> > > everything, mainly because this was the quickest approach to implement in
> > > order to unblock UI development work. The problem with this approach is
> > > that it does not perform well for setups with a large number of remotes,
> > > since every update rewrites the entire cache file when the cache is
> > > persisted, causing additional CPU and I/O load.
> > >
> > > In the future, we should use a mechanism similar to the task archive in
> > > PBS. I'm not sure if the exact same mechanism can be used due to some
> > > differing requirements, but the general direction probably fits quite
> > > well. If we can reuse it 1:1, we have to break it out of (iirc) the
> > > WorkerTask struct to make it reusable.
> > > It will probably require some experimentation and benchmarking to find an
> > > ideal approach.
> > > We probably don't want a separate archive per remote, since we do not
> > > want to read hundreds/thousands of files when we request an aggregated
> > > remote task history via the API. But keeping the complete archive in a
> > > single file also seems quite challenging - we need to keep the data
> > > sorted, while also handling task data arriving out of order from
> > > different remotes. Keeping the data sorted when new data arrives leads to
> > > the same problem as with the JSON file: we have to rewrite the file over
> > > and over again, causing load and writes.
> > >
> > > The good news is that since this is just a cache, we are pretty free to change
> > > the storage implementation without too much trouble; we don't even have to
> > > migrate the old data, since it should not be an issue to simply request the
> > > data from the remotes again. This is the main reason why I've opted
> > > to keep the JSON file for now; I or somebody else can revisit this at a later
> > > time.
> > >
> >
> > Some additional context to explain the 'why', since @Thomas and @Wolfgang
> > requested it:
> >
> > The status quo for the task cache is to fetch a certain time-based range of tasks
> > (iirc the last seven days) from every remote and cache this for a certain period
> > of time (max-age). If the cached data is too old, we discard the task data
> > and fetch the same time range again.
> > My initial reasoning behind designing it like this was to keep the
> > 'ground truth' completely on the remote side, so *if* somebody were to mess with
>
> I don't think "messing with the task archive" is something we want to
> worry about unless it's "easy enough".
>
> > the task archive, we would be consistent again after a refresh. Also,
> > this allowed keeping the caching logic on the PDM side much simpler,
> > since we are not doing much more than caching the API response from the
> > remote - the same way we already do it for resources, subscription
> > status, etc.
> >
> > The downside is that this caused some unnecessary traffic, since we kept
> > fetching old tasks that we had already received.
> >
> > I originally posted this as an RFC right before the holidays to get some
> > early feedback (I wasn't too sure myself if the original approach was a
> > good idea) and also to somewhat unblock UI development for the remote
> > task view.
>
> There are definitely some things which need to be improved regardless of
> which version we use.
Since this does not fit into the current patches, as it is already an
issue in the original code (I also mentioned this off-list), here is
something general to remember:
Whenever we `spawn()` a longer-running task, we also need to take into
account the possibility that the daemons might be reloaded, in which case
these tasks would prevent the original daemon from shutting down (and
potentially race against file locks in the reloaded one), so they should
`select()` on a `proxmox_daemon::shutdown_future()`.
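
A minimal sketch of the pattern (assuming tokio; `poll_remote_tasks` is a
hypothetical placeholder for the actual polling future):

    tokio::spawn(async move {
        tokio::select! {
            _ = poll_remote_tasks() => {
                // polling finished normally
            }
            _ = proxmox_daemon::shutdown_future() => {
                // daemon reload/shutdown: stop early so the old daemon
                // can exit instead of racing the reloaded one for file
                // locks
            }
        }
    });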
For this case, this means that the list of tasks which are currently
being polled needs to be persisted somewhere, so the reloaded daemon can
pick it up and continue polling while the polling task in the old daemon
simply gets cancelled. AFAICT this should be fairly unproblematic here.