[pve-devel] [PATCH guest-common/qemu-server/manager/docs v3 0/4] implement experimental vgpu live migration

Dominik Csapak d.csapak at proxmox.com
Fri May 31 10:41:56 CEST 2024


On 5/31/24 10:11, Eneko Lacunza wrote:
> 
> Hi Dominik,
> 
> 
Hi,

> On 28/5/24 at 9:25, Dominik Csapak wrote:
>> ping?
>>
>> I know we cannot really test this at the moment, but there are a few users/customers
>> that would like to use/test this.
>>
>> Since I marked it experimental, I think it should be OK to include it,
>> provided the code is fine.
>>
> 
> Would this patch-series help in this scenario:
> 
> - A running VM has an Nvidia mdev assigned in its hardware config.
> - We clone that VM. The clone can't be started until we change the hardware ID to another Nvidia mdev.
> 

No, that should already work with the cluster device mappings: there you can configure
a set of cards that will be used for mdev creation instead of having one fixed card.

See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#resource_mapping
for how to configure that.
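
For example, roughly (an untested sketch; the mapping name, node names, PCI paths,
device id and mdev type below are placeholders, see the admin guide above for the
exact options):

  # create a cluster-wide PCI mapping with the cards of each node
  # (can also be done in the GUI under Datacenter -> Resource Mappings)
  pvesh create /cluster/mapping/pci --id nvidia-mdev \
    --map node=node1,path=0000:01:00.0,id=10de:2231 \
    --map node=node2,path=0000:81:00.0,id=10de:2231 \
    --mdev 1

  # reference the mapping instead of a fixed card; the mdev instance is then
  # created on whichever mapped card still has a free slot when the VM starts
  qm set <vmid> -hostpci0 mapping=nvidia-mdev,mdev=nvidia-63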

This series would enable experimental live migration between hosts for cards/drivers that support it.
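
With this series applied (and a card/driver combination that supports it), a test
would then look like a normal online migration, e.g. something along the lines of:

  qm migrate <vmid> <target-node> --online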

> If so, I have a potential customer that is looking to migrate a VDI deployment from Nutanix to 
> Proxmox. They're currently testing Proxmox with one server with an Nvidia card, and can test this if 
> packages are prepared (in testing?).
> 
> We deployed this Proxmox server yesterday with the latest NVIDIA host driver. The latest 6.8 kernel 
> didn't work, but the latest 6.5 kernel did (I will report exact versions when I get remote access).

Yes, we know; see the known issues section of the release notes:
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2
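
Until that is fixed, the workaround from the release notes is to boot a 6.5 kernel,
roughly like this (the exact package and kernel version are just examples):

  apt install proxmox-kernel-6.5
  proxmox-boot-tool kernel pin 6.5.13-5-pve   # pin an installed 6.5 kernel version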

> 
> Thanks
> 
> Eneko Lacunza
> Technical Director
> Binovo IT Human Project
> 
> Tel. +34 943 569 206 | https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
> 
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/
> 
> 
