[pve-devel] Qemu / virtio-rng-pci

Alexandre DERUMIER aderumier at odiso.com
Mon Jun 1 18:52:19 CEST 2015


I had a look for some more information about virtio-rng:

https://lists.fedoraproject.org/pipermail/devel/2013-February/177909.html


"BTW, virtio-rng really only works well if you have a hardware RNG in the
host.  Otherwise, the host kernel will take too much time (a few
minutes) before producing enough entropy to feed the FIPS tests in the
guest, and during this time the host will be entropy-starved."

So I don't know if it's a good idea to enable it by default (performance?).
It needs to be tested.
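To get an idea of how entropy-starved a host really is, something like this could be checked on the host first (just a quick sketch; the hw_random sysfs entries only show up if a hardware RNG driver is loaded):

    # available entropy in the kernel pool (a few hundred bits or less means starved)
    cat /proc/sys/kernel/random/entropy_avail

    # check whether a hardware RNG is known to the kernel
    cat /sys/class/misc/hw_random/rng_available 2>/dev/null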




With an Ivy Bridge processor,
it's possible to pass RDRAND through to the guest (I haven't checked whether qemu filters it out or not), without virtio-rng. A small sketch is below.
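Something like this should expose RDRAND to the guest (just a sketch assuming a hand-built qemu command line, other VM options omitted; with -cpu host the flag comes through automatically, otherwise it can be added to a model with +rdrand):

    # pass the host CPU, including rdrand
    qemu-system-x86_64 ... -cpu host

    # or use a named model (IvyBridge already includes rdrand, +rdrand is shown for clarity)
    qemu-system-x86_64 ... -cpu IvyBridge,+rdrand

    # inside the guest, verify the flag is visible
    grep rdrand /proc/cpuinfo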

Or it's possible to map it on the host to /dev/random, but that requires an additional daemon (rngd); see the sketch after the quote below.

"RDRAND only hands out random numbers.  We plan to add QEMU support for
using RDRAND directly (with whitening, similar to rngd), but it is not
in yet.  Right now what you do is use rngd in the host to feed
/dev/random with random numbers from RDRAND, connect /dev/random to
virtio-rng."
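A rough sketch of that setup (assuming rng-tools is installed on the host; the object/device ids are just examples, and the max-bytes/period rate limit is optional):

    # on the host: rngd feeds /dev/random from the available hardware sources (RDRAND, TPM, hwrng)
    rngd

    # qemu: back a virtio-rng device with the host's /dev/random
    qemu-system-x86_64 ... \
        -object rng-random,id=rng0,filename=/dev/random \
        -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000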


With the new Broadwell processors, it directly feeds /dev/random, so in that case we could use virtio-rng by default.




But the article here:
http://rhelblog.redhat.com/2015/03/09/red-hat-enterprise-linux-virtual-machines-access-to-random-numbers-made-easy/

says that changes have been made in RHEL 7.1 (no patch reference) so that the daemon is no longer needed for Ivy Bridge.


I'll try to dig a little bit more tomorrow.

----- Original Message -----
From: "dietmar" <dietmar at proxmox.com>
To: "Stefan Priebe" <s.priebe at profihost.ag>, "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, June 1, 2015 17:39:25
Subject: Re: [pve-devel] Qemu / virtio-rng-pci

> >>Sure. I'm just thinking about the check regarding Qemu 2.3. I would also 
> >>like to use it for older qemu versions / installations. 
> > 
> >>Is there no other way to support it and not to break live migration? 
> 
> I don't see how to do it, adding a new pci device by default will break live 
> migration. 
> Or we need to add a new option in vmid.conf 
> 
> @Dietmar : any opinion ? 

I don't really want a new option for that... 


