[pbs-devel] [PATCH widget-toolkit] fix #4961: disks: increase timeout to 1min
Gabriel Goller
g.goller at proxmox.com
Thu Sep 28 14:57:10 CEST 2023
On 9/26/23 15:42, Thomas Lamprecht wrote:
> [..]
> I'd rather avoid that; 30s is already long and might make some
> users just close the dialogue or whatever.
>
> A general "lightweight-task" mechanism like Fabian proposed could be
> nice for some things, but IMO it is the wrong approach here too.
>
> Here, with the end result being to show a simple disk list, I just
> refuse to believe that this has to take that long; the API call has
> some serious architectural problems, which can mean a mix of:
>
> 1. too much stuff read that is not relevant most of the time; we could
> avoid querying the data that has the worst expensive/useful ratio
> and make that only available in a per-disk info dialogue.
>
> 2. getting some info in a convoluted way (e.g., we already have the
> lsblk info, which knows if a disk is a "zfs_member", but we still do a
> zpool list, and all of those do separate stats on the device nodes to
> get the rdevs, multiple times...)
>
> 3. missing a cache for the more expensive stuff, e.g., one that the rrd
> stat querying code could probably fill every Xth run or so, as it
> checks for IO metrics anyway
>
> I'd start with point 2 above, as I saw quite a bit of optimization
> potential with only a quick skim; some of that should hopefully
> produce some actual improvements.
>
> Then I'd check where most of the time is spent, either by asking the
> reporter nicely for timings of all relevant commands and potentially
> also by just checking ourselves; we have slow HW here too, and while
> not 90 disks, even a tenth of that should show slow enough behavior
> to find the major culprits with some tracing or even just
> printf-timestamp-debugging.
>
>
> The _last_ thing I'd do here for this specific endpoint is extending
> timeouts or inventing some new fancy light-weight task mechanisms.
> As optimizations, omitting expensive stuff or moving it out of the
> "for-all-disks" view into a "per-single-disk" view, and caching the
> "expensive but just really interesting to always see" stuff, go a
> long way...
>
> btw. PBS still misses the Wipe-Disk feature from PVE..
I looked into optimizing some stuff and noticed that (as you
wrote in the issue) the slowest part is the smartctl call. Getting
the smartctl data takes about 60-70% of the time spent on the
request.
I only have 4 disks with 6 partitions, but the API call still takes
about 232.22ms on average. Disabling the smartctl check brings
that down to 157.22ms. I did some other optimizations and got down
to 121.27ms, which is still not great, but an improvement. I will
try some other things and check if they help...
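To illustrate the kind of change your point 2 suggests (just a rough
sketch, not necessarily what I ended up doing): instead of doing a
separate "zpool list", we could reuse the fstype that lsblk already
reports. The LsblkInfo struct and field names below are made up for
illustration and are not the actual proxmox-backup types; it also
assumes serde_json is available for parsing:

use std::process::Command;

// Rough sketch only: reuse the fstype that lsblk already reports
// instead of doing a separate zpool list per device.
#[derive(Debug)]
struct LsblkInfo {
    path: String,
    fstype: Option<String>,
}

fn lsblk_info() -> Result<Vec<LsblkInfo>, Box<dyn std::error::Error>> {
    // one lsblk call, parsed once, reused for all disks
    let output = Command::new("lsblk")
        .args(["--json", "-o", "PATH,FSTYPE"])
        .output()?;
    let value: serde_json::Value = serde_json::from_slice(&output.stdout)?;
    let devices = value["blockdevices"].as_array().cloned().unwrap_or_default();
    Ok(devices
        .iter()
        .map(|dev| LsblkInfo {
            path: dev["path"].as_str().unwrap_or_default().to_string(),
            fstype: dev["fstype"].as_str().map(str::to_string),
        })
        .collect())
}

fn is_zfs_member(info: &LsblkInfo) -> bool {
    // lsblk already knows this, so no extra zpool list is needed
    info.fstype.as_deref() == Some("zfs_member")
}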
But in the meantime I thought about a way of skipping the
smartctl checks when there are a lot of disks: how about we
add a "get smartctl data" button on the frontend, which
manually updates the smartctl data in the table? We could then
check server-side if there are more than x disks, or if "lsblk" takes
more than x seconds, and in that case skip the smartctl check and
return an unknown status. The user can then manually click the
"get smartctl data" button, which performs the smartctl check in a
separate API request.
So when a user doesn't have a lot of disks, we still get them all,
including the smartctl data, in the same request. But if it would take
too long, we skip the smartctl check and let the user decide whether
they need the data or not.
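Roughly, the server-side part could look something like this (just a
sketch; the threshold, the SmartStatus/DiskInfo types and the
get_smart_status() helper are made up for illustration and not the
actual proxmox-backup API):

// Hypothetical cutoff; could also be based on how long "lsblk" took.
const SMART_DISK_LIMIT: usize = 16;

#[derive(Debug, Clone, Copy)]
enum SmartStatus {
    Passed,
    Failed,
    Unknown,
}

struct DiskInfo {
    name: String,
    smart: SmartStatus,
}

fn get_smart_status(_disk: &str) -> SmartStatus {
    // would shell out to smartctl here; elided in this sketch
    SmartStatus::Passed
}

fn list_disks(disk_names: &[String]) -> Vec<DiskInfo> {
    // With many disks, skip the expensive smartctl calls and report
    // Unknown; the GUI can then fetch the real status on demand via
    // the separate "get smartctl data" request.
    let skip_smart = disk_names.len() > SMART_DISK_LIMIT;

    disk_names
        .iter()
        .map(|name| DiskInfo {
            name: name.clone(),
            smart: if skip_smart {
                SmartStatus::Unknown
            } else {
                get_smart_status(name)
            },
        })
        .collect()
}

The single-disk endpoint behind the button would then just call the
smartctl check for the one requested disk.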