[pve-devel] [RFC PATCH manager] WIP: api: implement node-independent bulk actions

Stefan Hanreich s.hanreich at proxmox.com
Thu Mar 20 09:44:36 CET 2025


On 3/19/25 10:04, Dominik Csapak wrote:
> On 3/18/25 12:30, Stefan Hanreich wrote:
>>> There are alternative methods to achieve similar results:
>>> * use some kind of queuing system on the cluster (e.g. via pmxcfs)
>>> * using the 'startall'/'stopall' calls from pve in PDM
>>> * surely some other thing I didn't think about
>>>
>>> We can of course start with this, and change the underlying mechanism
>>> later too.
>>>
>>> If we go this route, I could also rewrite the code in Rust if desired,
>>> since there is nothing particularly dependent on Perl here
>>> (besides getting the vmlist, but that could stay in Perl).
>>> The bulk of the logic is how to start tasks + handle them finishing +
>>> handle filters + concurrency.
>>
>> I'm actually reading the VM list in the firewall via this:
>> https://git.proxmox.com/?p=proxmox-ve-rs.git;a=blob;f=proxmox-ve-config/src/guest/mod.rs;h=74fd8abc000aec0fa61898840d44ab8a4cd9018b;hb=HEAD#l69
>>
>> So we could build upon that if we want to implement it in Rust?
>>
>> I have something similar, *very* basic, implemented for running multiple
>> tasks across clusters in my SDN patch series - so maybe we could
>> repurpose that for a possible implementation, even generalize it?
> 
> Yeah, sounds good if we want to do it this way. For my use case here we
> need to parse the config of all guests though, and I'm not sure if we
> can do that in Rust. Maybe with just a minimal config like 'boot' and
> such? Or we could try to pull out the PVE API types from PDM, since
> parts of the config are already exposed there, I think...

Makes sense to leave it in Perl then, I just thought I'd point it out in
case the guest list alone was the dealbreaker.
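
For reference, reading the guest list plus the per-guest configs from Perl
could look roughly like this (just a sketch, not from the patch; it assumes
the usual PVE::Cluster / guest config helpers and reads e.g. the 'startup'
property used for start/shutdown ordering):

    use strict;
    use warnings;

    use PVE::Cluster;
    use PVE::QemuConfig;
    use PVE::LXC::Config;

    # iterate over all guests in the cluster and read one property from
    # each config (e.g. 'startup')
    my $vmlist = PVE::Cluster::get_vmlist();

    for my $vmid (sort { $a <=> $b } keys %{ $vmlist->{ids} }) {
        my $entry = $vmlist->{ids}->{$vmid};
        my $conf = $entry->{type} eq 'lxc'
            ? PVE::LXC::Config->load_config($vmid, $entry->{node})
            : PVE::QemuConfig->load_config($vmid, $entry->{node});
        print "$vmid ($entry->{type}): startup=" . ($conf->{startup} // '-') . "\n";
    }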


>>> diff --git a/PVE/API2/Cluster/Bulk.pm b/PVE/API2/Cluster/Bulk.pm
>>> new file mode 100644
>>> index 00000000..05a79155
>>> --- /dev/null
>>> +++ b/PVE/API2/Cluster/Bulk.pm
>>> @@ -0,0 +1,475 @@
>>> +package PVE::API2::Cluster::Bulk;
>>
>> We might want to think about using sub-paths already, since I can see
>> this growing quite fast (at least a sub-path for SDN would be
>> interesting). I don't know how many other potential use cases there are
>> aside from that.
>>
> 
> Sure, I would suggest it like this:
> 
> /cluster/bulk-actions/guest/{start,shutdown,...} ->
> PVE::API2::Cluster::Bulk(Actions?)::Guest;
> /cluster/bulk-actions/sdn/{...} -> PVE::API2::Cluster::Bulk::SDN;
> 
> maybe in the future we can have:
> /cluster/bulk-actions/node/{...} -> PVE::API2::Cluster::Bulk::Node;
> 

fine with me!
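
To make the sub-path layout concrete, mounting the handlers could look
roughly like this (module names are just the proposal above, nothing of
this exists yet):

    package PVE::API2::Cluster::BulkActions;

    use strict;
    use warnings;

    use PVE::RESTHandler;

    use base qw(PVE::RESTHandler);

    # mount the per-type handlers as sub-paths of /cluster/bulk-actions
    # (hypothetical module names following the proposal in this thread)
    __PACKAGE__->register_method({
        subclass => "PVE::API2::Cluster::BulkActions::Guest",
        path => 'guest',
    });

    __PACKAGE__->register_method({
        subclass => "PVE::API2::Cluster::BulkActions::SDN",
        path => 'sdn',
    });

    1;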


>> Maybe extract that into a function, since it seems to be the same code
>> as above?
>>
>> Or maybe even a do/while would simplify things here? I haven't thought
>> it through 100%, just an idea:
>>
>>    do {
>>        # check for terminated workers and reap them
>>        # fill empty worker slots with new workers
>>    } while (workers_exist());
>>
>> Would maybe simplify things and not require the waiting part at the end?
> 
> It's not so easy sadly, since the two blocks are not the same.
> 
> We have two different mechanisms here:
> 
> We have worker slots (max_workers) that we want to fill. While we are
> going through an order (e.g. for start/shutdown), we don't want to start
> the next order while there are still workers running.
> 
> So while we can still add workers, we loop over the existing ones until
> one is finished and queue the next. At the end of the 'order', we have
> to wait for all remaining workers before continuing to the next order.
> 

Yeah, I thought it'd be possible to do that loop for each order - but it
was just a quick thought I scribbled down to possibly avoid duplicating
code. I figured I was probably missing something.
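
Just to check my understanding, the described flow would then be roughly
the following (start_worker() and reap_finished() are made-up helpers and
@orders / $max_workers are assumed to exist - not the actual patch code):

    # per start/shutdown order: fill up to $max_workers slots, and only
    # move on to the next order once all workers of this order are done
    for my $order (@orders) {
        my %running;
        for my $vmid (@{$order}) {
            # wait until a worker slot is free
            while (scalar(keys %running) >= $max_workers) {
                reap_finished(\%running); # removes finished workers from %running
                sleep(1);
            }
            $running{$vmid} = start_worker($vmid);
        }
        # end of the order: wait for all remaining workers before continuing
        while (scalar(keys %running)) {
            reap_finished(\%running);
            sleep(1);
        }
    }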




