[pve-devel] [PATCH qemu-server 2/2] fix #1384: fix cpu socket id calculation for hotplug
Tobias Böhm
tb at robhost.de
Wed May 17 12:50:08 CEST 2017
On 17.05.2017 at 11:44, Thomas Lamprecht wrote:
> On 05/17/2017 09:47 AM, Fabian Grünbichler wrote:
> > the case of multiple single-core vcpus was not handled
> > properly
> >
> > Signed-off-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
> > ---
> > PVE/QemuServer.pm | 9 +++++++--
> > 1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index ed6c598..d480d55 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -1723,9 +1723,14 @@ sub print_cpu_device {
> > my $sockets = $conf->{sockets} // 1;
> > my $cores = $conf->{cores} // 1;
> > - my $current_core = ($id - 1) % $cores;
> > - my $current_socket = int(($id - $current_core)/$cores);
> > + # single core default
> > + my $current_core = 0;
> > + my $current_socket = $id - 1;
> > + if ($cores > 1) {
> > + $current_core = ($id - 1) % $cores;
> > + $current_socket = int(($id - $current_core)/$cores);
> > + }
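
For reference, with the old formulas every single-core vcpu landed one
socket too high: 1-based id 1 got socket-id=1 instead of 0. A quick
standalone check, with the formulas copied from the hunk above (plain
perl, no PVE modules needed):

use strict;
use warnings;

my $cores = 1;
for my $id (1 .. 3) {
    my $core_old   = ($id - 1) % $cores;                # unchanged code path
    my $socket_old = int(($id - $core_old) / $cores);   # old calculation
    my $socket_new = $id - 1;                           # single-core fix
    print "id=$id old socket=$socket_old fixed socket=$socket_new\n";
}
# id=1 old socket=1 fixed socket=0, id=2 old socket=2 fixed socket=1, ...
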
>
> I still find the logic strange and hard to follow for what it does.
> Also the cpu IDs are not added in order, so I'm not sure hotunplug always
> removed the "correct" CPU.
>
> I've looked at Tobias' patch and wasn't that happy with the existing code;
> my current approach is to move the id wrap logic out of print_cpu_device.
> I'm happier with print_cpu_device as it is below, but not with the code
> using it. I still need to get the bigger picture, maybe we can do this
> nicer altogether. Or Wolfgang posts his superior QemuServer machine
> patches and solves this all :-)
>
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 2fb419d..2f2de2a 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -1720,12 +1720,10 @@ sub print_cpu_device {
> $cpu = $cpuconf->{cputype};
> }
>
> - my $sockets = 1;
> - $sockets = $conf->{sockets} if $conf->{sockets};
> my $cores = $conf->{cores} || 1;
>
> - my $current_core = ($id - 1) % $cores;
> - my $current_socket = int(($id - $current_core)/$cores);
> + my $current_core = $id % $cores;
> + my $current_socket = int($id/$cores);
>
> return "$cpu-x86_64-cpu,id=cpu$id,socket-id=$current_socket,core-id=$current_core,thread-id=0";
> }
> @@ -3008,7 +3006,7 @@ sub config_to_command {
>
> push @$cmd, '-smp',
> "1,sockets=$sockets,cores=$cores,maxcpus=$maxcpus";
> for (my $i = 2; $i <= $vcpus; $i++) {
> - my $cpustr = print_cpu_device($conf,$i);
> + my $cpustr = print_cpu_device($conf, $i-1);
> push @$cmd, '-device', $cpustr;
> }
>
> @@ -3788,7 +3786,7 @@ sub qemu_cpu_hotplug {
>
> if (qemu_machine_feature_enabled ($machine_type, undef, 2, 7)) {
>
> - for (my $i = $currentvcpus; $i > $vcpus; $i--) {
> + for (my $i = $currentvcpus-1; $i >= $vcpus; $i--) {
> qemu_devicedel($vmid, "cpu$i");
> my $retry = 0;
> my $currentrunningvcpus = undef;
> @@ -3817,7 +3815,7 @@ sub qemu_cpu_hotplug {
> if (qemu_machine_feature_enabled ($machine_type, undef, 2, 7)) {
>
> for (my $i = $currentvcpus+1; $i <= $vcpus; $i++) {
> - my $cpustr = print_cpu_device($conf, $i);
> + my $cpustr = print_cpu_device($conf, $i-1);
> qemu_deviceadd($vmid, $cpustr);
>
> my $retry = 0;
>
>
> --
>
> FYI: I used the following script to quickly test the results:
>
> # set perl's -I accordingly
> use strict;
> use warnings;
> use PVE::QemuServer;
>
> my $conf = {
>     kvm => 1,
>     cpu => 'host',
> };
>
> for my $i (1 .. 3) {
>     $conf->{sockets} = $i;
>     for my $j (1 .. 6) {
>         $conf->{cores} = $j;
>         print "test cores: $j sockets: $i\n";
>         for my $id (0 .. $i*$j-1) {
>             print "$id:\t" . PVE::QemuServer::print_cpu_device($conf, $id) . "\n";
>         }
>     }
> }
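
With the patched formulas the sockets=2/cores=2 block should come out as
(cpu type elided):

0:  ...,id=cpu0,socket-id=0,core-id=0,thread-id=0
1:  ...,id=cpu1,socket-id=0,core-id=1,thread-id=0
2:  ...,id=cpu2,socket-id=1,core-id=0,thread-id=0
3:  ...,id=cpu3,socket-id=1,core-id=1,thread-id=0

i.e. every (socket-id, core-id) pair shows up exactly once and in order.
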
Hi,

if you decrement the id before passing it to print_cpu_device, be aware
that you then need to increment the variable again in the last line of the
subroutine, where it is used for the cpu device id: that id seems to be
1-based, so the increment keeps the behaviour there unchanged.
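
Roughly, keeping the device id 1-based while the socket/core math stays
0-based (untested, just to illustrate the idea):

    return "$cpu-x86_64-cpu,id=cpu" . ($id + 1) .
        ",socket-id=$current_socket,core-id=$current_core,thread-id=0";
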
Regards,
Tobias