[pve-devel] [PATCH ha-manager 09/11] manager: use static resource scheduler when configured
Fiona Ebner
f.ebner at proxmox.com
Wed Nov 16 10:37:18 CET 2022
On 16.11.22 at 08:14, Thomas Lamprecht wrote:
> On 11/11/2022 at 10:28, Fiona Ebner wrote:
>> On 10.11.22 at 15:37, Fiona Ebner wrote:
>>> @@ -206,11 +207,30 @@ my $valid_service_states = {
>>> sub recompute_online_node_usage {
>> So I was a bit worried that recompute_online_node_usage() would become
>> too inefficient with the new add_service_usage_to_node() overhead from
>> needing to read the guest configs. I now tested it with ~300 HA services
>> (minimal containers) running on my virtual test cluster.
>>
>> Timings with 'basic' mode were between 0.0004 - 0.001 seconds
>> Timings with 'static' mode were between 0.007 - 0.012 seconds
>>
>> While that's about a 10-fold increase, it's not too dramatic at least. I guess
>> that's what the caching of cfs files is for :)
>>
>> Still, the function is currently not only called in the main loop in
>> manage(), but also in next_state_recovery() and change_service_state().
>>
>> With, say, 400 HA services each on 5 nodes, if a node fails there are
>> 400 calls from changing to freeze
>
> huh, freeze should only happen on graceful shutdown of a node, not
> if it fails?
Sorry, I meant fence not freeze.
>
>> 400 calls from changing to recovery
>> 400 calls in next_state_recovery
>> 400 calls from changing to started
>> If we take a generous estimate that each call takes 0.1 seconds (there are
>> 2000 services in total), that's 40+80+40 seconds in 3 bursts during the
>> fencing and recovery period.
>
> doesn't that lead to overly long run windows between watchdog updates?
>
>>
>> Is that acceptable? Should I try to optimize how often the function is
>> called?
>>
>
> hmm, a quick look wouldn't hurt, but not required for now IMO - if it can
> interfere with watchdog updates I'd sneak in updating it once in between
> though.
>
Yes, from a quick look that might become a problem, exactly because the
delays happen in bursts (all services change state within a single
manage() run).

I'm not sure how you would trigger the watchdog update in between though,
because that would need to happen in the CRM AFAIU?

There is a fixme comment in CRM.pm's work() to set an alert timer and
enforce that work takes at most $max_time seconds. That would of course
help here.
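For illustration, I imagine something along these lines (just a sketch,
not actual CRM.pm code; the concrete timeout and the surrounding names
are made up):

    # Hypothetical sketch: bound the time spent in one work() run so the
    # watchdog still gets updated in time. $max_time and the manage()
    # call site are placeholders, not the real CRM.pm code.
    my $max_time = 5;

    eval {
        local $SIG{ALRM} = sub { die "timeout after ${max_time}s\n" };
        alarm($max_time);

        $self->{manager}->manage(); # the potentially long-running part

        alarm(0);
    };
    alarm(0); # always clear a pending alarm, even if manage() died
    if (my $err = $@) {
        warn "work() interrupted: $err";
        # the next CRM loop iteration continues and the watchdog gets
        # updated in between
    }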
Getting rid of superfluous recompute_online_node_usage() calls should
also be doable. We'd need to ensure that we add service usage (that is
already done in recovery and next_state_started) and remove service usage
(removing is not implemented right now) whenever a service changes node
or state. Then it'd be enough to call recompute_online_node_usage() once
per cycle, which would already be a huge improvement compared to now.
Additionally, we could call it after iterating a certain number of
services, just to be safe; see the rough sketch below.
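Roughly what I have in mind (only a sketch; remove_service_usage_from_node()
does not exist yet and the add_service_usage_to_node() call is approximated):

    # Hypothetical helper: move a service's usage between nodes instead
    # of recomputing everything. remove_service_usage_from_node() would
    # need to be implemented first.
    my $move_service_usage = sub {
        my ($usage, $sid, $sd, $old_node, $new_node) = @_;

        $usage->remove_service_usage_from_node($old_node, $sid)
            if defined($old_node) && $usage->contains_node($old_node);

        $usage->add_service_usage_to_node($new_node, $sid, $sd->{node}, $new_node)
            if defined($new_node) && $usage->contains_node($new_node);
    };

    # Then one full recompute per manage() cycle would suffice, plus an
    # occasional refresh after a fixed number of iterated services:
    $self->recompute_online_node_usage();
    my $count = 0;
    for my $sid (sort keys %$ss) {
        # ... state machine, calling $move_service_usage->(...) on changes ...
        $self->recompute_online_node_usage() if !(++$count % 100);
    }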
>
> ps. maybe you can have some of that info/stats here in the commit message
> of this patch.
Sure.