[pve-devel] last training week student feedback/request
Dominik Csapak
d.csapak at proxmox.com
Thu Jun 23 10:37:46 CEST 2022
On 6/23/22 10:25, DERUMIER, Alexandre wrote:
> Hi,
>
> I just finished my Proxmox training week,
>
> here are some student requests/feedback:
Hi,
I'll just answer the points where I'm currently involved, so someone
else might answer the other ones ;)
[snip]
> 2)
> Another student has a need for PCI passthrough: a cluster with
> multiple nodes and multiple PCI cards.
> He's using HA and has 1 or 2 backup nodes with a lot of cards,
> to be able to fail over 10 other servers.
>
> The problem is that on the backup nodes, the PCI addresses of the
> cards are not always the same as on the production nodes,
> so HA can't work.
>
> I think it could be great to add some kind of "shared local device
> pool" at the datacenter level, where we could define:
>
> pci: poolname
> node1:pciaddress
> node2:pciaddress
>
> usb: poolname
> node1:usbport
> node2:usbport
>
>
> so we could dynamically choose the correct PCI address when
> restarting the VM.
>
> Permissions could be added too, and maybe a migratable option once
> mdev live migration support is ready, ...
I was working on that last year, but got held up with other stuff;
I'm planning to pick this up again this/next week.
My solution looked very similar to yours, with additional fields
to uniquely identify the card (to prevent accidental passthrough
when the address changes, for example).
Permissions are also planned there...
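[As a rough sketch of what such an identity check could look like,
using the standard sysfs attributes /sys/bus/pci/devices/<addr>/vendor
and .../device; the expected ID would come from the pool entry, and the
helper names here are made up:

    from pathlib import Path

    def pci_id(addr):
        """Read the vendor:device ID of a PCI device from sysfs."""
        dev = Path('/sys/bus/pci/devices') / addr
        vendor = (dev / 'vendor').read_text().strip()  # e.g. '0x10de'
        device = (dev / 'device').read_text().strip()  # e.g. '0x1db4'
        return f'{vendor[2:]}:{device[2:]}'

    def check_device(addr, expected_id):
        """Refuse passthrough if the card at this address is not the
        one recorded in the pool entry (e.g. after re-slotting)."""
        actual = pci_id(addr)
        if actual != expected_id:
            raise RuntimeError(
                f'device at {addr} is {actual}, expected {expected_id}')

    # example call with placeholder address/ID:
    # check_device('0000:01:00.0', '10de:1db4')
]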
>
>
> 3)
> Related to 2), another student has a need for live migration with an
> NVIDIA card with mdev.
> I'm currently trying to test whether it's possible, as there are some
> experimental vfio options to enable it, but it doesn't seem to be
> ready.
>
Would be cool; I'd like to have some vGPU-capable cards to test here,
but so far no luck (also, access to and support for the vGPU driver
from NVIDIA is probably the bigger problem AFAICS).
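[For reference, whether a card's driver offers mdev types at all can be
checked through the standard mdev sysfs interface; a small sketch, with
the PCI address as a placeholder:

    from pathlib import Path

    def list_mdev_types(addr):
        """List mediated device types a card offers via the mdev
        sysfs interface (only present with a vGPU-capable driver)."""
        base = (Path('/sys/bus/pci/devices') / addr
                / 'mdev_supported_types')
        if not base.is_dir():
            return []
        types = []
        for t in base.iterdir():
            name_file = t / 'name'
            name = (name_file.read_text().strip()
                    if name_file.exists() else t.name)
            avail = (t / 'available_instances').read_text().strip()
            types.append((t.name, name, avail))
        return types

    for type_id, name, avail in list_mdev_types('0000:01:00.0'):
        print(f'{type_id}: {name} (available: {avail})')
]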
kind regards
Dominik