[pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions
Fiona Ebner
f.ebner at proxmox.com
Fri Mar 22 17:19:23 CET 2024
On 20.03.24 at 13:51, Dominik Csapak wrote:
> so that we can show a proper warning in the migrate dialog and check it
> in the bulk migrate precondition check
>
> the unavailable_storages and allowed_nodes should be the same as before
>
> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
> ---
> not super happy with this partial approach, we probably should just
> always return the 'allowed_nodes' and 'not_allowed_nodes' and change
> the gui to handle the running vs not running state?
So not_allowed_nodes can already be returned in both states after this
patch. But allowed_nodes is still only returned if the VM is not
running. I mean, there could be API users that break if we'd always
return allowed_nodes, but it doesn't sound unreasonable to me to do so.
Might even be an opportunity to structure the code in a more
straightforward manner.
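Just to sketch what I mean (untested, reusing the variables from the
patch context):

```perl
# Always compute both lists; for a running VM, $checked_nodes simply
# stays empty apart from nodes that get missing mappings attached below.
my $checked_nodes = {};
if (!$res->{running}) {
    $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
    delete $checked_nodes->{$localnode};
}

my $allowed_nodes = [];
# ... per-node checks as in the patch ...

# Unconditionally return both keys; existing API consumers that treat a
# present allowed_nodes as "VM is offline" would need to be checked first.
$res->{not_allowed_nodes} = $checked_nodes;
$res->{allowed_nodes} = $allowed_nodes;
```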
>
> PVE/API2/Qemu.pm | 27 +++++++++++++++------------
> 1 file changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8581a529..b0f155f7 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4439,7 +4439,7 @@ __PACKAGE__->register_method({
> not_allowed_nodes => {
> type => 'object',
> optional => 1,
> - description => "List not allowed nodes with additional informations, only passed if VM is offline"
> + description => "List not allowed nodes with additional informations",
> },
> local_disks => {
> type => 'array',
> @@ -4496,25 +4496,28 @@ __PACKAGE__->register_method({
>
> # if vm is not running, return target nodes where local storage/mapped devices are available
> # for offline migration
> + my $checked_nodes = {};
> + my $allowed_nodes = [];
> if (!$res->{running}) {
> - $res->{allowed_nodes} = [];
> - my $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
> + $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
> delete $checked_nodes->{$localnode};
> + }
>
> - foreach my $node (keys %$checked_nodes) {
> - my $missing_mappings = $missing_mappings_by_node->{$node};
> - if (scalar($missing_mappings->@*)) {
> - $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
> - next;
> - }
> + foreach my $node ((keys $checked_nodes->%*, keys $missing_mappings_by_node->%*)) {
Style nit: please use 'for' instead of 'foreach'.

Also, like this you might iterate over certain nodes twice (when a node
appears in both hashes) and then push them onto the allowed_nodes array
twice.
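One way to avoid the duplicates would be to build the union of the keys
first, e.g. (self-contained sketch with made-up example data standing in
for the two hashes from the patch):

```perl
use strict;
use warnings;

# Hypothetical stand-ins for $checked_nodes and $missing_mappings_by_node.
my $checked_nodes = { node1 => {}, node2 => {} };
my $missing_mappings_by_node = { node2 => ['pci1'], node3 => ['usb0'] };

# Union of the node names, so each node is handled exactly once even if
# it appears in both hashes.
my %node_union = map { $_ => 1 }
    keys $checked_nodes->%*, keys $missing_mappings_by_node->%*;

for my $node (sort keys %node_union) {
    # ... existing per-node checks from the patch ...
    print "$node\n";
}
```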
> + my $missing_mappings = $missing_mappings_by_node->{$node};
> + if (scalar($missing_mappings->@*)) {
> + $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
> + next;
> + }
>
> + if (!$res->{running}) {
> if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
> - push @{$res->{allowed_nodes}}, $node;
> + push $allowed_nodes->@*, $node;
> }
> -
> }
> - $res->{not_allowed_nodes} = $checked_nodes;
> }
> + $res->{not_allowed_nodes} = $checked_nodes if scalar(keys($checked_nodes->%*)) || !$res->{running};
Why not return the empty hash if running? The whole post-if is just
covering that single special case.
> + $res->{allowed_nodes} = $allowed_nodes if scalar($allowed_nodes->@*) || !$res->{running};
Nit: Right now, $allowed_nodes can only be non-empty if
!$res->{running}, so the first part of the check is redundant.
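I.e., with the empty hash returned as well, the two assignments could
arguably collapse to (sketch only):

```perl
# An empty not_allowed_nodes hash for a running VM without missing
# mappings seems fine to return, so no post-if needed here.
$res->{not_allowed_nodes} = $checked_nodes;
# $allowed_nodes stays empty while running anyway, so checking the array
# size is redundant.
$res->{allowed_nodes} = $allowed_nodes if !$res->{running};
```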
>
> my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
> $res->{local_disks} = [ values %$local_disks ];;