[PVE-User] venet and routing... again ;)

Patryk Benderz Patryk.Benderz at esp.pl
Wed Dec 9 15:45:22 CET 2009


Hi all,
Again I have a question about routing, but this time regarding routing
on an OpenVZ guest. My hardware setup is listed below.
PVE Server:
4 NICs:
vmbr0 10.1.1.219/24
vmbr1 192.168.3.219/24
vmbr2 10.251.224.219/24
vmbr3 192.168.48.219/24
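
For completeness, a minimal sketch of how such a bridge is typically
defined in /etc/network/interfaces on the PVE host (vmbr0 only; the
bridged physical port eth0 is an assumption, and the other bridges
follow the same pattern):

auto vmbr0
iface vmbr0 inet static
        address 10.1.1.219
        netmask 255.255.255.0
        bridge_ports eth0    # physical NIC name assumed
        bridge_stp off
        bridge_fd 0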

Guest (Debian):
glpi:~# ifconfig    # relevant part of the output
venet0    inet addr:127.0.0.1  P-t-P:127.0.0.1
venet0:0  inet addr:10.251.224.220  P-t-P:10.251.224.220
venet0:1  inet addr:192.168.3.220  P-t-P:192.168.3.220
venet0:2  inet addr:192.168.48.220  P-t-P:192.168.48.220
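
(For reference, venet aliases like these are created from the host
side with vzctl, along the lines of:

pve:~# vzctl set 101 --ipadd 192.168.48.220 --save

where 101 is the CTID of this container, matching the path I mention
further below.)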

glpi:~# route -n
Kernel IP routing table
Destination Gateway   Genmask         Flags Metric Ref    Use Iface
192.0.2.1   0.0.0.0   255.255.255.255 UH    0      0        0 venet0
0.0.0.0     192.0.2.1 0.0.0.0         UG    0      0        0 venet0

Now, if I issue this command:
glpi:~# ping 10.251.224.190
PING 10.251.224.190 (10.251.224.190) 56(84) bytes of data.
64 bytes from 10.251.224.190: icmp_seq=1 ttl=63 time=779 ms
64 bytes from 10.251.224.190: icmp_seq=2 ttl=63 time=0.172 ms
64 bytes from 10.251.224.190: icmp_seq=3 ttl=63 time=0.316 ms

That one is OK, but the next one fails:

glpi:~# ping 192.168.48.190
PING 192.168.48.190 (192.168.48.190) 56(84) bytes of data.
--- 192.168.48.190 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8010ms

After modifying the routing table:
glpi:~# route add -net 192.168.48.0 netmask 255.255.255.0 dev venet0:2

(Note: even though I added the route via venet0:2, the table shows venet0.)
glpi:~# route -n
Kernel IP routing table
Destination  Gateway   Genmask         Flags Metric Ref    Use Iface
192.0.2.1    0.0.0.0   255.255.255.255 UH    0      0        0 venet0
192.168.48.0 0.0.0.0   255.255.255.0   U     0      0        0 venet0
0.0.0.0      192.0.2.1 0.0.0.0         UG    0      0        0 venet0
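
(Side note: the iproute2 equivalent of the route command above is

glpi:~# ip route add 192.168.48.0/24 dev venet0

which results in the same table entry.)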

Anyway, now I can reach this network:
glpi:~# ping 192.168.48.190
PING 192.168.48.190 (192.168.48.190) 56(84) bytes of data.
64 bytes from 192.168.48.190: icmp_seq=1 ttl=63 time=0.103 ms
64 bytes from 192.168.48.190: icmp_seq=2 ttl=63 time=0.094 ms
64 bytes from 192.168.48.190: icmp_seq=3 ttl=63 time=0.089 ms

The only problem is that after a reboot of the guest, I lose this
route. I tried modifying the /var/lib/vz/private (or root)/101/etc/network/interfaces
file to add post-up routing tasks, but that change is also lost after
a guest reboot.
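
The only workaround I can think of is re-adding the route at boot from
inside the guest, e.g. via /etc/rc.local (a sketch, assuming rc.local
is still executed at boot on this Debian guest and is not rewritten by
OpenVZ the way /etc/network/interfaces is):

#!/bin/sh -e
# /etc/rc.local inside the container: re-add the route that the
# venet setup does not persist across container restarts.
route add -net 192.168.48.0 netmask 255.255.255.0 dev venet0
exit 0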

So, my questions are:
1) Why is routing on the guest side set up in such a manner that all
traffic goes through venet0 and not through venet0:1, venet0:2, etc.?

2) How can I modify the routing on the guest's venet interfaces so
that it stays permanent across reboots?

3) After reading
http://www.mokonamodoki.com/proxmox-openvz-server-2-nics-2-gateways ,
especially this fragment: "...however veth does give your guest
container OS direct access to the network, in a similar fashion to the
way VMware server can give a guest OS direct access to a physical
network using Bridged Ethernet.", I am inclined to move my guests from
venet to veth. Is there an easy way to do it?
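
From what I understand, the conversion would look roughly like this (a
sketch under my assumptions, using CTID 101 and vmbr3 as examples,
untested):

# on the PVE host: drop the venet addresses, add a veth pair
pve:~# vzctl set 101 --ipdel all --save
pve:~# vzctl set 101 --netif_add eth0 --save
# attach the host side of the pair to the bridge; this step has to be
# repeated (or scripted) every time the container starts
pve:~# brctl addif vmbr3 veth101.0

# inside the guest: configure eth0 in /etc/network/interfaces like a
# normal bridged interface
auto eth0
iface eth0 inet static
        address 192.168.48.220
        netmask 255.255.255.0
        gateway 192.168.48.219    # gateway assumed; use the LAN's real one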

I hope I was clear enough in this post. Thanks.

-- 
Patryk "LeadMan" Benderz
Linux Registered User #377521
()  ascii ribbon campaign - against html e-mail 
/\  www.asciiribbon.org   - against proprietary attachments

