[pve-devel] [PATCH] openvswitch hybrid network model implementation
    Alexandre DERUMIER 
    aderumier at odiso.com
       
    Sat Apr 26 16:06:19 CEST 2014
    
    
  
OK, for now I'll test with vlans managed on the physical interface:
bond0-->bond0.94---->vmbr0v94<---veth110i0------>veth110i0p--->fwbr110i0<----tap110i0
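(For reference, a minimal sketch of the /etc/network/interfaces stanzas I'd use for this layout; the vmbr0v94 naming follows the diagram, and the exact options are an assumption, not a tested config:

auto bond0.94
iface bond0.94 inet manual
    vlan-raw-device bond0

auto vmbr0v94
iface vmbr0v94 inet manual
    bridge_ports bond0.94
    bridge_stp off
    bridge_fd 0
# the veth110i0/veth110i0p pair, fwbr110i0 and tap110i0 from the diagram are created at VM start
)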
iperf results, kernel 2.6.32
----------------------------
vm -> host : 1.2 Gbit/s
host -> vm : 700 Mbit/s
vm -> vm   : 700 Mbit/s
iperf results, kernel 3.10
----------------------------
vm -> host : 12 Gbit/s
host -> vm : 12 Gbit/s
vm -> vm   : 10 Gbit/s
conclusion:
veth sucks on the 2.6.32 kernel ;)
but works really well with 3.10!
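(The numbers are from plain iperf runs, roughly like this; the address and duration are placeholders:
iperf -s                       # on the receiving side (host or VM)
iperf -c <receiver-ip> -t 30   # on the sending side, 30-second run
)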
----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Saturday, April 26, 2014 14:48:59 
Subject: Re: [pve-devel] [PATCH] openvswitch hybrid network model implementation 
>>I'll do more tests 
OK, it seems it's a multicast problem after all. 
on my first node, 
pm0.94-->pm0-->pm0peer->vmbr0--->bond0 
on pm0peer I see 
14:45:27.150279 1e:e0:3b:16:8d:71 > 01:00:5e:40:03:eb, ethertype 802.1Q (0x8100), length 137: vlan 94, p 0, ethertype IPv4, 10.3.94.31 > 239.192.3.235: ip-proto-17 
But I don't see it on vmbr0. 
So something is filtering. 
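(For reference, captures like the above can be reproduced with something like this; the exact tcpdump options are just a sketch:
tcpdump -e -n -i pm0peer ether dst 01:00:5e:40:03:eb   # corosync multicast group, shows up here
tcpdump -e -n -i vmbr0 ether dst 01:00:5e:40:03:eb     # same filter on vmbr0, nothing shows up
)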
Now if I use omping 
on pm0peer 
14:45:26.540535 1e:e0:3b:16:8d:71 > 01:00:5e:40:03:ec, ethertype 802.1Q (0x8100), length 115: vlan 94, p 0, ethertype IPv4, 10.3.94.31.4321 > 239.192.3.236.4321: UDP, length 69 
on vmbr0 
14:47:49.600076 1e:e0:3b:16:8d:71 > 01:00:5e:40:03:ec, ethertype 802.1Q (0x8100), length 115: vlan 94, p 0, ethertype IPv4, 10.3.94.31.4321 > 239.192.3.236.4321: UDP, length 69 
So it seems that omping is not a good enough test ;) 
The only difference seems to be ip-proto-17 vs UDP. 
I'll investigate further. 
----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Saturday, April 26, 2014 14:26:15 
Subject: Re: [pve-devel] [PATCH] openvswitch hybrid network model implementation 
>>bench results, kernel 2.6.32 
>>------------------------------- 
>>I can't test it, because I have a multicast problem with this setup on the host side. 
>>I don't know why, but cluster-wide multicast didn't work anymore once the host booted with this config 
>>(I can ping the host and connect to it, but multicast/pve-cluster was broken, on all nodes). 
>>Snooping was disabled on all hosts and switches. 
I just reproduced it with another test cluster. 
It's not a multicast problem; I checked with omping, using the same multicast address as corosync, and it works fine. 
I also checked on the physical switches; multicast works fine there. 
So it's just corosync: 
Apr 26 14:07:17 corosync [TOTEM ] Retransmit List: 20 
Apr 26 14:07:17 corosync [TOTEM ] Retransmit List: 20 
Apr 26 14:07:17 corosync [TOTEM ] Retransmit List: 20 
Apr 26 14:07:17 corosync [TOTEM ] Retransmit List: 20 
Apr 26 14:07:17 corosync [TOTEM ] Retransmit List: 20 
on each node (when the first node rebooted with the new network model and the 2.6.32 kernel). 
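(The omping check I mention above is roughly this; node names and counts are just placeholders, and the multicast address has to be the one corosync uses:
omping -c 600 -i 1 -m 239.192.X.X node1 node2 node3
)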
I'll do more tests 
----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Friday, April 25, 2014 08:03:14 
Subject: Re: [pve-devel] [PATCH] openvswitch hybrid network model implementation 
Here are the results 
---------------- 
network model 
------------- 
bridge 
------ 
pm0.94----pm0-----pm0.peer----->vmbr0<-----veth100i0--------veth100i0p.94 (tagging vlan94)--------->fwbr100i0<-----------tap100i0 
bond0--------------------------> <-----veth110i0--------veth110i0p.94 (tagging vlan94)--------->fwbr110i0<-----------tap110i0 
<-----veth200i0--------veth200i0p (no vlan)---------------->fwbr200i0<-----------tap200i0 
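(Roughly, the per-VM plumbing in this diagram corresponds to something like the commands below; this is only a hand-made sketch for one VM (vmid 100, vlan 94), not the actual Network.pm code:
ip link add name veth100i0 type veth peer name veth100i0p        # veth pair, one end on vmbr0
brctl addif vmbr0 veth100i0
ip link add link veth100i0p name veth100i0p.94 type vlan id 94   # tag vlan 94 on the peer end
brctl addbr fwbr100i0                                            # per-VM firewall bridge
brctl addif fwbr100i0 veth100i0p.94
brctl addif fwbr100i0 tap100i0                                   # the VM tap plugs into the firewall bridge
ip link set veth100i0 up
ip link set veth100i0p up
ip link set veth100i0p.94 up
ip link set fwbr100i0 up
)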
/etc/network/interfaces 
------------------------ 
auto bond0 
iface bond0 inet manual 
    slaves eth0 eth1 
    bond_miimon 100 
    bond_mode active-backup 
    pre-up ifup eth0 eth1 
    post-down ifdown eth0 eth1 

auto vmbr0 
iface vmbr0 inet manual 
    bridge_ports bond0 
    bridge_stp off 
    bridge_fd 0 
    post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping 

auto pm0 
iface pm0 inet manual 
    VETH_BRIDGETO vmbr0 

auto pm0.94 
iface pm0.94 inet static 
    address X.X.X.X 
    netmask 255.255.255.0 
    gateway X.X.X.X 
    vlan-raw-device pm0 
bench results, kernel 3.10 
--------------------------- 
vm -> host : 12 Gbit/s 
host -> vm : 12 Gbit/s 
vm -> vm   : 10 Gbit/s 
The bottleneck is not the veth, but the vhost-net process, which by default uses only 1 core. 
I get the same result with taps directly on a bridge or on openvswitch. 
This could improve with virtio-net multiqueue: 
http://www.linux-kvm.org/page/Multiqueue 
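(Nothing in our tools yet; just a sketch of what multiqueue would look like with plain qemu and inside the guest, the queue count here is arbitrary:
# qemu side: 4 queues on the tap, multiqueue virtio-net (vectors = 2*queues + 2)
-netdev tap,id=net0,ifname=tap100i0,queues=4,vhost=on
-device virtio-net-pci,netdev=net0,mq=on,vectors=10
# guest side: enable the extra queues
ethtool -L eth0 combined 4
)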
But it works really well. 
No need for tricks with bridge vlans; simply tag on the veth. 
I think a setup with QinQ should work too. 
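(For QinQ, the idea would simply be to stack a second vlan device on the already-tagged peer; the ids here are just examples:
ip link add link veth100i0p name veth100i0p.94 type vlan id 94          # outer tag, as above
ip link add link veth100i0p.94 name veth100i0p.94.100 type vlan id 100  # inner tag stacked on top
)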
bench results, kernel 2.6.32 
------------------------------- 
I can't test it, because I have a multicast problem with this setup on the host side. 
I don't know why, but cluster-wide multicast didn't work anymore once the host booted with this config 
(I can ping the host and connect to it, but multicast/pve-cluster was broken, on all nodes). 
Snooping was disabled on all hosts and switches. 
It would be great if somebody could test with 2.6.32 (to see if it's buggy, and to check performance too). 
(I'll send the patch.) 
----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Thursday, April 24, 2014 12:27:17 
Subject: Re: [pve-devel] [PATCH] openvswitch hybrid network model implementation 
>>Sorry, I am busy, doing kernel debugging right now ... 
OK, no problem, I'll send a report tomorrow. 
----- Original Message ----- 
From: "Dietmar Maurer" <dietmar at proxmox.com> 
To: "Alexandre DERUMIER" <aderumier at odiso.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Thursday, April 24, 2014 10:40:36 
Subject: RE: [pve-devel] [PATCH] openvswitch hybrid network model implementation 
> (I have patches for Network.pm to manage the veth-fwbridge; do you want to 
> test them?) 
Sorry, I am busy, doing kernel debugging right now ... 
_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 