[pve-devel] [PATCH v2 qemu-server 8/9] memory: add virtio-mem support
DERUMIER, Alexandre
Alexandre.DERUMIER at groupe-cyllene.com
Wed Jan 25 10:00:58 CET 2023
On Tuesday, 24 January 2023 at 14:06 +0100, Fiona Ebner wrote:
> On 04.01.23 at 07:43, Alexandre Derumier wrote:
> > 4GB of static memory is needed for DMA+boot memory, as this memory
> > is almost always not unpluggable.
> >
> > One virtio-mem PCI device is set up for each NUMA node on the pci.4 bridge.
> >
> > virtio-mem uses a fixed number of 32000 blocks.
> > The block size is computed as (maxmemory - 4096) / 32000, with a
> > minimum of 2MB to map THP.
> > (A lower block size means more chance to unplug memory.)
> >
> > Note: Currently, Linux only supports a 4MB virtio block size; 2MB
> > support is in progress.
> >
>
> For the above paragraphs:
> s/GB/GiB/
> s/MB/MiB/
> ?
yes, I'll fix it in all patches
(...)
> >
> > +sub get_virtiomem_block_size {
> > +    my ($conf) = @_;
> > +
> > +    my $MAX_MEM = get_max_mem($conf);
> > +    my $static_memory = get_static_mem($conf);
> > +    my $memory = get_current_memory($conf->{memory});
> > +
> > +    #virtiomem can map 32000 block size.
> > +    #try to use lowest blocksize, lower = more chance to unplug memory.
> > +    my $blocksize = ($MAX_MEM - $static_memory) / 32000;
> > +    #2MB is the minimum to be aligned with THP
> > +    $blocksize = 2**(ceil(log($blocksize)/log(2)));
> > +    $blocksize = 4 if $blocksize < 4;
>
> Why suddenly 4?
I have added a note in the commit:
> Note: Currently, Linux only supports a 4MB virtio block size; 2MB
> support is in progress.
>
So 2MB is valid on the QEMU side, but the Linux guest kernel doesn't
support it currently. At the least, you need to use a multiple of 4MB.
You can remove/add 2 blocks of 2MB at the same time, but it doesn't seem
to be atomic, so I think it's better to use the minimum currently
supported block size.
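
To make the rounding concrete, here is a small standalone sketch of that
computation (just an illustration, not the patch code; the 64GiB max
memory and the resulting values are example figures):

    use strict;
    use warnings;
    use POSIX qw(ceil);

    my $max_mem = 65536;       # example maxmemory in MiB (64 GiB)
    my $static_memory = 4096;  # static memory in MiB (4 GiB for DMA + boot)

    # virtio-mem maps the hotpluggable range with 32000 blocks
    my $blocksize = ($max_mem - $static_memory) / 32000;    # ~1.92

    # round up to the next power of two (2 MiB aligns with THP)
    $blocksize = 2 ** ceil(log($blocksize) / log(2));        # 2

    # the Linux guest currently only supports 4 MiB, so clamp
    $blocksize = 4 if $blocksize < 4;                        # 4

    print "block size: ${blocksize} MiB\n";
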
Maybe later, we could extend the virtio=X option to tell the supported
virtio version (virtio=1.1, virtio=1.2) and enable the supported
features?
> > +my sub balance_virtiomem {
>
> This function is rather difficult to read. The "record errors and
> filter" logic really should be its own patch after the initial support.
> FWIW, I tried my best and it does seem fine :)
>
> But it's not clear to me that we even want that logic? Is it really that
> common for qom-set to take so long to be worth introducing all this
> additional handling/complexity? Or should it just be a hard error if
> qom-set still didn't have an effect on a device after 5 seconds?
>
From my tests, it can take 2-3 seconds per unplug on a bigger setup. I'm
doing it in parallel to be faster, to avoid waiting nbdimm * 2-3 seconds.
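
As a rough sketch of what "in parallel" means here (not the actual
balance_virtiomem code; the mon_cmd helper from PVE::QemuServer::Monitor,
the virtiomem0/virtiomem1 device ids and the ~5 second poll loop are
assumptions for illustration): set requested-size on every device first,
then poll them all until they reached their target.

    use strict;
    use warnings;
    use Time::HiRes qw(usleep);
    use PVE::QemuServer::Monitor qw(mon_cmd);

    # $targets maps device ids to requested sizes in bytes,
    # e.g. { virtiomem0 => 2 * 1024**3, virtiomem1 => 2 * 1024**3 }
    sub set_virtiomem_requested_sizes {
        my ($vmid, $targets) = @_;

        # fire off all requests first, so the guest (un)plugs on every
        # device at the same time instead of one after the other
        for my $id (sort keys %$targets) {
            mon_cmd($vmid, 'qom-set',
                path => "/machine/peripheral/$id",
                property => "requested-size",
                value => $targets->{$id},
            );
        }

        # then poll until every device reached its target, or give up
        for (my $i = 0; $i < 50; $i++) {  # ~5 seconds with 100ms polls
            my $pending = 0;
            for my $id (keys %$targets) {
                my $size = mon_cmd($vmid, 'qom-get',
                    path => "/machine/peripheral/$id",
                    property => "size",
                );
                $pending++ if $size != $targets->{$id};
            }
            return if !$pending;
            usleep(100_000);
        }
        die "timeout waiting for virtio-mem devices to reach requested size\n";
    }
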
> Would it actually be better to just fill up the first, then the second
> etc. as needed, rather than balancing? My gut feeling is that having
> fewer "active" devices is better. But this would have to be tested with
> some benchmarks of course.
Well, from a NUMA perspective, you really want to balance as much as
possible. (That's why, with classic hotplug, we add/remove a DIMM on each
socket alternately.)
That's the whole point of NUMA: reading the nearest memory attached to
the processor where the process is running.
That's a main advantage of virtio-mem over ballooning (which doesn't
handle NUMA and removes pages randomly from any socket).
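
As a toy sketch of the balancing idea (not the patch code), the per-node
targets are just an even split of the hotpluggable blocks, with the
remainder spread one block at a time:

    use strict;
    use warnings;

    # even split of hotpluggable blocks over NUMA nodes, remainder spread
    # one block at a time, so every socket keeps memory local to its CPUs
    sub balance_across_nodes {
        my ($total_blocks, $numa_nodes) = @_;

        my $per_node = int($total_blocks / $numa_nodes);
        my $remainder = $total_blocks % $numa_nodes;

        return map { $per_node + ($_ < $remainder ? 1 : 0) } 0 .. $numa_nodes - 1;
    }

    # e.g. 13 blocks over 4 nodes -> 4, 3, 3, 3
    print join(', ', balance_across_nodes(13, 4)), "\n";
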