[pve-devel] [PATCH common/qemu-server/manager] improve vGPU (mdev) usage for NVIDIA

DERUMIER, Alexandre Alexandre.DERUMIER at groupe-cyllene.com
Mon Aug 22 15:39:01 CEST 2022


On 22/08/22 at 12:16, Dominik Csapak wrote:
> On 8/17/22 01:15, DERUMIER, Alexandre wrote:
>> On 9/08/22 at 10:39, Dominik Csapak wrote:
>>> On 8/9/22 09:59, DERUMIER, Alexandre wrote:
>>>> On 26/07/22 at 08:55, Dominik Csapak wrote:
>>>>> so maybe someone can look at that and give some feedback?
>>>>> my idea there would be to allow multiple device mappings per node
>>>>> (instead of one only) and the qemu code would select one automatically
>>>> Hi Dominik,
>>>>
>>>> do you want to create some kind of pool of PCI devices in your "add
>>>> cluster-wide hardware device mapping" patch series?
>>>>
>>>> Maybe in the hardware map, allow defining multiple PCI addresses on
>>>> the same node?
>>>>
>>>> Then, for mdev, check whether an mdev already exists on one of the
>>>> devices. If not, try to create the mdev on the first device; if that
>>>> fails (max number of mdevs reached), try to create the mdev on the
>>>> next device, ...
>>>>
>>>> If it is not an mdev, choose a PCI device from the pool that is not
>>>> yet detached from the host.
>>>>
>>>
>>> yes i plan to do this in my next iteration of the mapping series
>>> (basically what you describe)
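Just to illustrate the fallback we are talking about: against the mdev
sysfs interface it could be sketched roughly like this. The helper name,
the vGPU type name, the UUID and the PCI addresses in the example are
made-up placeholders for illustration, not what qemu-server actually
does (SYSFS_ROOT is only overridable to make the sketch testable):

```shell
# pick_mdev_device TYPE UUID DEV...: print the first mapped PCI device
# on which an mdev of TYPE exists or can be created, trying each in turn.
pick_mdev_device() {
    mdev_type=$1; uuid=$2; shift 2
    root=${SYSFS_ROOT:-/sys/bus/pci/devices}
    for dev in "$@"; do
        t="$root/$dev/mdev_supported_types/$mdev_type"
        # an already-created instance shows up under $t/devices/$uuid
        if [ -e "$t/devices/$uuid" ]; then
            echo "$dev"; return 0
        fi
        # otherwise try to create one if the device still has capacity
        avail=$(cat "$t/available_instances" 2>/dev/null || echo 0)
        if [ "$avail" -gt 0 ] && echo "$uuid" > "$t/create" 2>/dev/null; then
            echo "$dev"; return 0
        fi
    done
    return 1    # all mapped devices are full
}
```

e.g. `pick_mdev_device nvidia-63 "$(uuidgen)" 0000:01:00.0 0000:01:00.4`
would fall through to 01:00.4 once 01:00.0 has no free instances left.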
>> Hi, sorry for the late reply.
>>
>>
>>> my (rough) idea:
>>>
>>> have a list of pci paths in mapping (e.g. 01:00.0;01:00.4;...)
>>> (should be enough, i don't think grouping unrelated devices (different
>>> vendor/product) makes much sense?)
>> yes, that's enough for me. we don't want to mix unrelated devices.
>>
>> BTW, I'm finally able to do live migration with nvidia mdev vgpu (you
>> need to compile the nvidia vfio driver with an option to enable it,
>> and add "-device vfio-pci,x-enable-migration=on,..." to the qemu
>> command line).
> 
> nice (what flag do you need on the driver install? i did not find it)
> i'll see if i can test that on a single card (only have one here)
> 


I have used the 460.73.01 driver. (The latest 510 driver doesn't have
the flag and the code, I don't know why.)
https://github.com/mbilker/vgpu_unlock-rs/issues/15


The flag is NV_KVM_MIGRATION_UAP=1.

As I didn't know how to pass the flag to the installer, I simply
extracted the driver with
"NVIDIA-Linux-x86_64-460.73.01-grid-vgpu-kvm-v5.run -x", edited
"kernel/nvidia-vgpu-vfio/nvidia-vgpu-vfio.Kbuild" to add
NV_KVM_MIGRATION_UAP=1, and then ran ./nvidia-installer.
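For reference, once an mdev instance exists, enabling migration on the
qemu side only needs the extra device option mentioned above. This is a
command-line fragment, not a complete invocation: the UUID in the
sysfsdev path is a placeholder and the rest of the VM command line is
omitted:

```shell
# sketch only: attach an existing mdev instance with the experimental
# migration support turned on (UUID below is a placeholder)
qemu-system-x86_64 \
    ... \
    -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/11111111-2222-3333-4444-555555555555,x-enable-migration=on
```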

>>
>> So, maybe adding a "livemigrate" flag to the hardware map could be
>> great :)
> 
> it's probably better suited for the hostpci setting in the qemu config,
> since that's the place we need it
> 
>>
>> It could be useful for stateless USB devices, like a USB dongle, where
>> we could unplug the USB device, live-migrate, and replug it.
>>
>>
>>
> also probably better suited for the usbX setting
> 
> but those can be done after (some version of) this series
> is applied
> 
