[pve-devel] [PATCH common/qemu-server/manager] improve vGPU (mdev) usage for NVIDIA
Dominik Csapak
d.csapak at proxmox.com
Tue Aug 23 09:50:51 CEST 2022
On 8/22/22 16:07, Dominik Csapak wrote:
> On 8/22/22 15:39, DERUMIER, Alexandre wrote:
>> On 22/08/22 at 12:16, Dominik Csapak wrote:
>>> On 8/17/22 01:15, DERUMIER, Alexandre wrote:
>>>> On 9/08/22 at 10:39, Dominik Csapak wrote:
>>>>> On 8/9/22 09:59, DERUMIER, Alexandre wrote:
>>>>>> On 26/07/22 at 08:55, Dominik Csapak wrote:
>>>>>>> so maybe someone can look at that and give some feedback?
>>>>>>> my idea there would be to allow multiple device mappings per node
>>>>>>> (instead of only one) and have the qemu code select one automatically
>>>>>> Hi Dominik,
>>>>>>
>>>>>> do you want to create some kind of pool of pci devices in your "add
>>>>>> cluster-wide hardware device mapping" patch series?
>>>>>>
>>>>>> Maybe in hardwaremap, allow defining multiple pci addresses on the same
>>>>>> node?
>>>>>>
>>>>>> Then, for mdev, look whether an mdev already exists on one of the devices.
>>>>>> If not, try to create the mdev on one device; if that fails (max
>>>>>> number of mdevs reached), try to create the mdev on the next device, ...
>>>>>>
>>>>>> If it's not an mdev, choose a pci device from the pool that is not yet
>>>>>> detached from the host.
>>>>>>
>>>>>
>>>>> yes, I plan to do this in my next iteration of the mapping series
>>>>> (basically what you describe)
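
Roughly, that fallback could boil down to the following at the sysfs level
(only a sketch with placeholder PCI addresses and mdev type, not the actual
qemu-server code):

  # mapping configured as a list of PCI addresses, e.g. "01:00.0;01:00.4"
  UUID=$(cat /proc/sys/kernel/random/uuid)  # fresh UUID for the mdev instance
  TYPE=nvidia-63                            # configured mdev type (placeholder)
  for dev in 0000:01:00.0 0000:01:00.4; do
      # try to create the mdev on this device; the write fails once the
      # card has reached its maximum number of instances for that type
      if echo "$UUID" > "/sys/bus/pci/devices/$dev/mdev_supported_types/$TYPE/create" 2>/dev/null; then
          break
      fi
  done
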
>>>> Hi, sorry for the late reply.
>>>>
>>>>
>>>>> my (rough) idea:
>>>>>
>>>>> have a list of pci paths in the mapping (e.g. 01:00.0;01:00.4;...)
>>>>> (that should be enough, I don't think grouping unrelated devices (different
>>>>> vendor/product) makes much sense?)
>>>> yes, that's enough for me. We don't want to mix unrelated devices.
>>>>
>>>> BTW, I'm finally able to do live migration with nvidia mdev vgpu. (I needed
>>>> to compile the nvidia vfio driver with an option to enable it + add
>>>> "-device vfio-pci,x-enable-migration=on,...")
>>>
>>> nice (what flag do you need on the driver install? I did not find it)
>>> I'll see if I can test that on a single card (I only have one here)
>>>
>>
>>
>> I have used the 460.73.01 driver. (The latest 510 driver doesn't have the flag and
>> the code, I don't know why.)
>> https://github.com/mbilker/vgpu_unlock-rs/issues/15
>>
>>
>> the flag is NV_KVM_MIGRATION_UAP=1.
>> As I didn't know how to pass the flag,
>>
>> I simply decompressed the driver with
>> "NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5.run -x",
>> edited "kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild" to add
>> NV_KVM_MIGRATION_UAP=1,
>>
>> and then ran ./nvidia-installer
>>
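
Collected in one place, the steps quoted above are roughly (same driver
version, untested here, the name of the extracted directory may differ):

  ./NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5.run -x
  cd NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5
  # set NV_KVM_MIGRATION_UAP=1 in kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild
  ./nvidia-installer
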
>
> thanks, I am using the 510.73.06 driver here (the official grid driver) and the
> dkms source has that flag, so I changed the .Kbuild in my /usr/src folder
> and rebuilt it. I'll test it tomorrow
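
For the dkms-based install, the equivalent would be roughly the following;
the module name, version, and source path are assumptions based on the
driver mentioned above:

  # set NV_KVM_MIGRATION_UAP=1 in the nvidia-vgpu-vfio .Kbuild under the dkms
  # source tree, e.g. /usr/src/nvidia-510.73.06/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild
  dkms build -m nvidia -v 510.73.06 --force
  dkms install -m nvidia -v 510.73.06 --force
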
>
>
>
So I tested it here on a single machine with a single card, and it worked.
I started a second (manually launched) qemu vm with '-incoming' and used the qemu monitor
to initiate the migration. A vnc session to that vm with a running benchmark continued
without any noticeable interruption :)
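
For anyone who wants to reproduce that, the test was roughly of this shape
(address and port are just placeholders):

  # target side: same QEMU command line as the source VM, plus an -incoming option
  qemu-system-x86_64 ... -incoming tcp:0:4444
  # source side, in the QEMU monitor: start the migration towards the target
  (qemu) migrate -d tcp:<target-ip>:4444
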
I guess, though, that since nvidia does not really advertise that feature for their
'linux kvm' drivers, it is rather experimental and unsupported there.
(In their documentation, only citrix/vmware is supported for live migration of vgpus.)
So after my current cluster mapping patches, I'll see about adding a
'migration' flag to hostpci devices, but only for the cli, since there does not
seem to be a supported way for any hw right now (or is there any other vendor with
that feature?)
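
Purely as an illustration, on the CLI that could end up looking something
like the following; the migration flag is hypothetical at this point and may
change with the mapping series:

  qm set <vmid> -hostpci0 <mapping-or-pci-address>,mdev=<type>,migration=on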