[PVE-User] OpenVZ, network bridge and performances

Charles Bijon charles at bijon.fr
Tue Nov 22 11:57:37 CET 2011


Hi,

Today I tried to virtualize our production environment, and we were
surprised to run into some problems with your solution:

 - In the normal case, without virtualisation, we have a response time
of less than 50 ms per request.

- With your solution, fully virtualised on 1 node (12 engines on it
plus a load balancer), we can get faster response times, around 20 ms
per request. That's great!

- Now, if we use more than one node (2 nodes with 24 engines, 12 + 12),
mixed with the old non-virtualised engines, response times randomly
degrade badly: some requests take over 800 ms on the local network.
That makes it impossible to put into production, since we need "real
time" calculation.
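
To quantify the degradation, I am measuring the round-trip latency
between an engine on node 1 and an engine on node 2 with a simple ping
run (10.0.0.12 is just an example address for one of the engines, not
our real addressing):

    # 1000 probes, 200 ms apart; the max/mdev figures in the summary
    # show the random spikes much better than the average does
    ping -c 1000 -i 0.2 10.0.0.12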

For information, we are obliged to use veth network interfaces in the
OpenVZ containers because the engines use multicast to communicate
with each other.
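
Since the engines rely on multicast, I also wonder whether the
bridge's IGMP snooping is interfering. If the kernel exposes the knob
(I have not verified it is present in pve-kernel-2.6.32), I would try
disabling it on the default Proxmox bridge:

    # flood multicast to all bridge ports instead of filtering it;
    # vmbr0 is the default Proxmox bridge name
    echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping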

I have some questions about this problem.

- Do you think the problem could be OpenVZ context switching?
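
To test this hypothesis myself, I am watching the context-switch rate
on the node while the load runs:

    # node-wide: watch the "cs" column (switches per second)
    vmstat 1
    # per process, if the sysstat package is installed
    pidstat -w 1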

- Can you tell me whether the bridge's network switching can be tuned
to reduce the latency? Could an Open vSwitch implementation be a
solution?
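
Before moving to Open vSwitch, here is what I was planning to try on
the existing Linux bridge; I am assuming bridge-utils is installed and
the bridge-netfilter sysctls are available on this kernel:

    # keep bridged traffic out of iptables/conntrack entirely
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0
    # remove the STP forwarding delay on the bridge
    brctl setfd vmbr0 0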

- I understand that CPU binding can reduce the overhead, but I am
stuck on pve-kernel-2.6.32-4 (2.6.32-33): when I upgrade, I get a
kernel panic. I think I will try disabling IPv6 on the newer kernel,
and will attempt a new build in the next few minutes :)
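
For the binding itself, if I read the vzctl man page correctly, a
container can be pinned to a set of cores (container ID 101 and the
mask are just an example, pinning one engine to the six cores of the
first socket):

    # restrict container 101 to physical cores 0-5
    vzctl set 101 --cpumask 0-5 --save

And rather than recompiling to get rid of IPv6, I may simply boot the
newer kernel with "ipv6.disable=1" on the kernel command line to see
if that avoids the panic.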

I am using 2 x Dell R710, 96 GB RAM, dual X5660 (2 x 6 cores with HT),
with 15k rpm SAS HDDs.

Regards,

Charles