[PVE-User] ARP issue between lxc containers on PX 4.2
Guillaume
proxmox at shadowprojects.org
Sat Jul 9 21:52:17 CEST 2016
And here I go again, sorry for the flood, guys.
Now that I have set everything up correctly (I was previously using the
range's network address on my first LXC container; the range is
51.254.231.80/28, so using 51.254.231.80 for LXC 1 was a bad idea), the
only thing that doesn't work (and worked before) is the ping between
containers on the private eth1 interface.
# On LXC 2, I'm trying to ping LXC 1
~# ping 192.168.30.101
~# tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:44:51.119883 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
19:44:52.131154 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
19:44:53.127880 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
# On Proxmox
root@srv3:~# tcpdump -i vmbr2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vmbr2, link-type EN10MB (Ethernet), capture size 262144 bytes
21:45:22.711855 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
21:45:22.711905 ARP, Reply 192.168.30.101 is-at 62:31:32:34:65:61 (oui Unknown), length 28
62:31:32:34:65:61 is the MAC address of 192.168.30.101.
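
A few read-only checks that could show where the reply gets lost between
the bridge and the container (a diagnostic sketch; the bridge-nf sysctls
exist only when the br_netfilter module is loaded):

~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-arptables
# if set to 1, bridged IP/ARP frames pass through iptables/arptables
# and can be silently dropped there
~# brctl show vmbr2
# both containers' veth ports should be listed as members of vmbr2
~# brctl showmacs vmbr2
# the bridge forwarding table should contain 62:31:32:34:65:61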
And here are my current network settings:
# Proxmox
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
auto vmbr1
iface vmbr1 inet manual
bridge_ports dummy0
bridge_stp off
bridge_fd 0
post-up /etc/pve/kvm-networking.sh
auto vmbr0
iface vmbr0 inet static
address 164.132.161.137
netmask 255.255.255.0
gateway 164.132.161.254
broadcast 164.132.161.255
bridge_ports eth0
bridge_stp off
bridge_fd 0
network 164.132.161.0
post-up /sbin/ip route add to 51.254.231.80/28 dev vmbr0
post-up /sbin/ip route add to default via 51.254.231.94 dev vmbr0 table 5
post-up /sbin/ip rule add from 51.254.231.80/28 table 5
pre-down /sbin/ip rule del from 51.254.231.80/28 table 5
pre-down /sbin/ip route del to default via 51.254.231.94 dev vmbr0 table 5
pre-down /sbin/ip route del to 51.254.231.80/28 dev vmbr0
auto vmbr2
iface vmbr2 inet static
address 192.168.30.3
netmask 255.255.255.0
broadcast 192.168.30.255
bridge_ports eth1
bridge_stp off
bridge_fd 0
network 192.168.30.0
post-up /sbin/ip route add to 224.0.0.0/4 dev vmbr0  # force multicast
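
(For reference, the state the post-up lines above are expected to
produce once vmbr0 is up; read-only checks, table 5 as configured:)

~# ip rule show
# expected to contain: from 51.254.231.80/28 lookup 5
~# ip route show table 5
# expected to contain: default via 51.254.231.94 dev vmbr0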
# LXC 1
auto eth0  # bridged to vmbr0
iface eth0 inet static
address 51.254.231.81
netmask 255.255.255.240
gateway 51.254.231.94
network 51.254.231.80
auto eth1  # bridged to vmbr2
iface eth1 inet static
address 192.168.30.101
netmask 255.255.255.0
~# route
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref Use Iface
default          51.254.231.94    0.0.0.0          UG    0      0   0   eth0
51.254.231.80    *                255.255.255.240  U     0      0   0   eth0
192.168.30.0     *                255.255.255.0    U     0      0   0   eth1
# LXC 2
auto eth0  # bridged to vmbr0
iface eth0 inet static
address 51.254.231.82
netmask 255.255.255.240
gateway 51.254.231.94
network 51.254.231.80
auto eth1  # bridged to vmbr2
iface eth1 inet static
address 192.168.30.102
netmask 255.255.255.0
~# route
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref Use Iface
default          51.254.231.94    0.0.0.0          UG    0      0   0   eth0
51.254.231.80    *                255.255.255.240  U     0      0   0   eth0
192.168.30.0     *                255.255.255.0    U     0      0   0   eth1
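
Since the ARP reply is visible on vmbr2 but never reaches LXC 2, one way
to split the problem is to install the MAC reported above as a static
neighbour entry on LXC 2 and retry (a sketch; if ping then works, only
ARP delivery across the bridge is broken, not IP forwarding):

# on LXC 2
~# ip neigh replace 192.168.30.101 lladdr 62:31:32:34:65:61 dev eth1
~# ping -c 3 192.168.30.101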
On 09/07/2016 at 20:31, Guillaume wrote:
> But I found out that the lines you made me remove are actually added
> by Proxmox.
>
> I updated the IP CIDR on a host, and Proxmox added them back to the
> node's interfaces file:
>
> # --- BEGIN PVE ---
> post-up ip route add 51.254.231.94 dev eth0
> post-up ip route add default via 51.254.231.94 dev eth0
> pre-down ip route del default via 51.254.231.94 dev eth0
> pre-down ip route del 51.254.231.94 dev eth0
> # --- END PVE ---
>
>
> On 09/07/2016 at 20:23, Guillaume wrote:
>> Everything works fine now; it looks like I used a public IP from my
>> range that I shouldn't have.
>>
>> Thanks for the help :)
>>
>>
>> On 09/07/2016 at 15:05, Guillaume wrote:
>>> I am going to be away for a few hours; thanks for the help, Alwin.
>>>
>>>
>>> On 09/07/2016 at 14:59, Guillaume wrote:
>>>> I only restarted the network services each time I tried something.
>>>>
>>>> Now that I have restarted the host, it is better.
>>>>
>>>> The containers can ping each other on their private interface (eth1),
>>>> but still nothing on the public one (eth0). The firewall is down
>>>> (pve-firewall stopped), but I have rules to allow ping between
>>>> containers on the public interface anyway.
>>>>
>>>> The host can ping everyone on both interfaces.
>>>>
>>>> New routes in the containers:
>>>>
>>>> ~# route
>>>> Kernel IP routing table
>>>> Destination      Gateway          Genmask          Flags Metric Ref Use Iface
>>>> default          51.254.231.94    0.0.0.0          UG    0      0   0   eth0
>>>> 51.254.231.80    *                255.255.255.240  U     0      0   0   eth0
>>>> 192.168.30.0     *                255.255.255.0    U     0      0   0   eth1
>>>>
>>>>
>>>> On 09/07/2016 at 14:22, Alwin Antreich wrote:
>>>>> Guillaume,
>>>>>
>>>>> On 07/09/2016 01:13 PM, Guillaume wrote:
>>>>>> I tried enabling proxy_arp on the host, thinking it would help,
>>>>>> but it does not.
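>>>>>>
>>>>>> (proxy_arp here means the usual kernel sysctls; something along
>>>>>> these lines, with vmbr2 being the private bridge:
>>>>>>
>>>>>> sysctl -w net.ipv4.conf.all.proxy_arp=1
>>>>>> sysctl -w net.ipv4.conf.vmbr2.proxy_arp=1
>>>>>> )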
>>>>>>
>>>>>>
>>>>>> On 09/07/2016 at 13:03, Guillaume wrote:
>>>>>>> The LXC containers' public interface (eth0) is bound to vmbr0, and
>>>>>>> the private interface (eth1) is bound to vmbr2.
>>>>>>>
>>>>>>> I removed the post-up/pre-down lines from the containers; they were
>>>>>>> left over from when I tried to fix the issue.
>>>>>>> It doesn't change anything: the public and private networks work
>>>>>>> well, except between the containers. So I can talk to anything
>>>>>>> outside the host, but not inside it.
Did you restart the Proxmox host after the network changes, or just the
network services? If you didn't, please restart the Proxmox host, as
the settings are not always picked up after a network service restart.
>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 09/07/2016 at 12:33, Alwin Antreich wrote:
>>>>>>>> Guillaume,
>>>>>>>>
>>>>>>>> On 07/09/2016 12:10 PM, Guillaume wrote:
>>>>>>>>> Of course, here they are:
>>>>>>>>>
>>>>>>>>> * Proxmox :
>>>>>>>>>
>>>>>>>>> ~# cat /etc/network/interfaces
>>>>>>>>>
>>>>>>>>> auto lo
>>>>>>>>> iface lo inet loopback
>>>>>>>>>
>>>>>>>>> iface eth0 inet manual
>>>>>>>>>
>>>>>>>>> iface eth1 inet manual
>>>>>>>>>
>>>>>>>>> auto vmbr1
>>>>>>>>> iface vmbr1 inet manual
>>>>>>>>> bridge_ports dummy0
>>>>>>>>> bridge_stp off
>>>>>>>>> bridge_fd 0
>>>>>>>>> post-up /etc/pve/kvm-networking.sh
>>>>>>>>>
>>>>>>>>> auto vmbr0
>>>>>>>>> iface vmbr0 inet static
>>>>>>>>> address 164.132.161.137
>>>>>>>>> netmask 255.255.255.0
>>>>>>>>> gateway 164.132.161.254
>>>>>>>>> broadcast 164.132.161.255
>>>>>>>>> bridge_ports eth0
>>>>>>>>> bridge_stp off
>>>>>>>>> bridge_fd 0
>>>>>>>>> network 164.132.161.0
>>>>>>>>> post-up /sbin/ip route add to 51.254.231.80/28 dev vmbr0
>>>>>>>>> post-up /sbin/ip route add to default via 51.254.231.94 dev vmbr0 table 5
>>>>>>>>> post-up /sbin/ip rule add from 51.254.231.80/28 table 5
>>>>>>>>> pre-down /sbin/ip rule del from 51.254.231.80/28 table 5
>>>>>>>>> pre-down /sbin/ip route del to default via 51.254.231.94 dev vmbr0 table 5
>>>>>>>>> pre-down /sbin/ip route del to 51.254.231.80/28 dev vmbr0
>>>>>>>>>
>>>>>>>>> iface vmbr0 inet6 static
>>>>>>>>> address 2001:41d0:1008:1c89::1
>>>>>>>>> netmask 64
>>>>>>>>> gateway 2001:41d0:1008:1cff:ff:ff:ff:ff
>>>>>>>>> post-up /sbin/ip -f inet6 route add 2001:41d0:1008:1cff:ff:ff:ff:ff dev vmbr0
>>>>>>>>> post-up /sbin/ip -f inet6 route add default via 2001:41d0:1008:1cff:ff:ff:ff:ff
>>>>>>>>> pre-down /sbin/ip -f inet6 route del default via 2001:41d0:1008:1cff:ff:ff:ff:ff
>>>>>>>>> pre-down /sbin/ip -f inet6 route del 2001:41d0:1008:1cff:ff:ff:ff:ff dev vmbr0
>>>>>>>>>
>>>>>>>>> auto vmbr2
>>>>>>>>> iface vmbr2 inet static
>>>>>>>>> address 192.168.30.3
>>>>>>>>> netmask 255.255.255.0
>>>>>>>>> broadcast 192.168.30.255
>>>>>>>>> bridge_ports eth1
>>>>>>>>> bridge_stp off
>>>>>>>>> bridge_fd 0
>>>>>>>>> network 192.168.30.0
>>>>>>>> What is your intention with this post-up? The line resides under
>>>>>>>> vmbr2, but you bind the route to vmbr0; is it supposed to be like
>>>>>>>> this?
>>>>>>>>
>>>>>>>>> post-up /sbin/ip route add to 224.0.0.0/4 dev vmbr0  # to force multicast
>>>>>>>>>
>>>>>>>>> ~# route
>>>>>>>>> Kernel IP routing table
>>>>>>>>> Destination      Gateway          Genmask          Flags Metric Ref Use Iface
>>>>>>>>> default          164.132.161.254  0.0.0.0          UG    0      0   0   vmbr0
>>>>>>>>> 51.254.231.80    *                255.255.255.240  U     0      0   0   vmbr0
>>>>>>>>> 164.132.161.0    *                255.255.255.0    U     0      0   0   vmbr0
>>>>>>>>> 192.168.30.0     *                255.255.255.0    U     0      0   0   vmbr2
>>>>>>>>> 224.0.0.0        *                240.0.0.0        U     0      0   0   vmbr0
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> * LXC 1 :
>>>>>>>>>
>>>>>>>>> ~# cat /etc/network/interfaces
>>>>>>>>> # interfaces(5) file used by ifup(8) and ifdown(8)
>>>>>>>>> # Include files from /etc/network/interfaces.d:
>>>>>>>>> source-directory /etc/network/interfaces.d
>>>>>>>>>
>>>>>>>>> auto eth0
>>>>>>>>> iface eth0 inet static
>>>>>>>>> address 51.254.231.80
>>>>>>>>> netmask 255.255.255.240
>>>>>>>>> gateway 51.254.231.94
>>>>>>>>> network 51.254.231.80
>>>>>>>>> post-up /sbin/ip route add 164.132.161.137 dev eth0
>>>>>>>>> post-up /sbin/ip route add to default via 164.132.161.137
>>>>>>>>> pre-down /sbin/ip route del to default via 164.132.161.137
>>>>>>>>> pre-down /sbin/ip route del 164.132.161.137 dev eth0
>>>>>>>>>
>>>>>>>>> auto eth1
>>>>>>>>> iface eth1 inet static
>>>>>>>>> address 192.168.30.101
>>>>>>>>> netmask 255.255.255.0
>>>>>>>>>
>>>>>>>>> ~# route
>>>>>>>>> Kernel IP routing table
>>>>>>>>> Destination      Gateway          Genmask          Flags Metric Ref Use Iface
>>>>>>>>> default          51.254.231.94    0.0.0.0          UG    0      0   0   eth0
>>>>>>>>> 51.254.231.80    *                255.255.255.240  U     0      0   0   eth0
>>>>>>>>> 164.132.161.137  *                255.255.255.255  UH    0      0   0   eth0
>>>>>>>>> 192.168.30.0     *                255.255.255.0    U     0      0   0   eth1
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> * LXC 2 :
>>>>>>>>>
>>>>>>>>> ~# cat /etc/network/interfaces
>>>>>>>>> # interfaces(5) file used by ifup(8) and ifdown(8)
>>>>>>>>> # Include files from /etc/network/interfaces.d:
>>>>>>>>> source-directory /etc/network/interfaces.d
>>>>>>>>>
>>>>>>>>> auto eth0
>>>>>>>>> iface eth0 inet static
>>>>>>>>> address 51.254.231.81
>>>>>>>>> netmask 255.255.255.240
>>>>>>>>> gateway 51.254.231.94
>>>>>>>>> network 51.254.231.80
>>>>>>>>> post-up /sbin/ip route add 164.132.161.137 dev eth0
>>>>>>>>> post-up /sbin/ip route add to default via 164.132.161.137
>>>>>>>>> pre-down /sbin/ip route del to default via 164.132.161.137
>>>>>>>>> pre-down /sbin/ip route del 164.132.161.137 dev eth0
>>>>>>>>>
>>>>>>>>> auto eth1
>>>>>>>>> iface eth1 inet static
>>>>>>>>> address 192.168.30.102
>>>>>>>>> netmask 255.255.255.0
>>>>>>>>>
>>>>>>>>> ~# route
>>>>>>>>> Kernel IP routing table
>>>>>>>>> Destination      Gateway          Genmask          Flags Metric Ref Use Iface
>>>>>>>>> default          51.254.231.94    0.0.0.0          UG    0      0   0   eth0
>>>>>>>>> 51.254.231.80    *                255.255.255.240  U     0      0   0   eth0
>>>>>>>>> 164.132.161.137  *                255.255.255.255  UH    0      0   0   eth0
>>>>>>>>> 192.168.30.0     *                255.255.255.0    U     0      0   0   eth1
>>>>>>>> And the LXC containers are bound to vmbr2?
>>>>>>>>
>>>>>>>>> On 09/07/2016 at 11:36, Alwin Antreich wrote:
>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>
>>>>>>>>>> Could you please add the network config of your host & LXC
>>>>>>>>>> guests (incl. routes)? For my part, I don't quite get the
>>>>>>>>>> picture yet.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 07/08/2016 05:17 PM, Guillaume wrote:
>>>>>>>>>>> I may have found a lead, but only on the host side.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> From Proxmox, I can't ping the LXC container's private address:
>>>>>>>>>>>
>>>>>>>>>>> root@srv3:~# ping 192.168.30.101
>>>>>>>>>>> PING 192.168.30.101 (192.168.30.101) 56(84) bytes of data.
>>>>>>>>>>> ^C
>>>>>>>>>>> --- 192.168.30.101 ping statistics ---
>>>>>>>>>>> 2 packets transmitted, 0 received, 100% packet loss, time 999ms
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> But I can ping another server's private address (same vRack):
>>>>>>>>>>> root@srv3:~# ping 192.168.30.250
>>>>>>>>>>> PING 192.168.30.250 (192.168.30.250) 56(84) bytes of data.
>>>>>>>>>>> 64 bytes from 192.168.30.250: icmp_seq=1 ttl=64 time=0.630 ms
>>>>>>>>>>> ^C
>>>>>>>>>>> --- 192.168.30.250 ping statistics ---
>>>>>>>>>>> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
>>>>>>>>>>> rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> But if I force ping to use vmbr2 (the host's private network
>>>>>>>>>>> interface):
>>>>>>>>>>>
>>>>>>>>>>> root@srv3:~# ping -I vmbr2 192.168.30.101
>>>>>>>>>>> PING 192.168.30.101 (192.168.30.101) from 192.168.30.3 vmbr2: 56(84) bytes of data.
>>>>>>>>>>> 64 bytes from 192.168.30.101: icmp_seq=1 ttl=64 time=0.084 ms
>>>>>>>>>>> 64 bytes from 192.168.30.101: icmp_seq=2 ttl=64 time=0.024 ms
>>>>>>>>>>> 64 bytes from 192.168.30.101: icmp_seq=3 ttl=64 time=0.035 ms
>>>>>>>>>>> ^C
>>>>>>>>>>> --- 192.168.30.101 ping statistics ---
>>>>>>>>>>> 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
>>>>>>>>>>> rtt min/avg/max/mdev = 0.024/0.047/0.084/0.027 ms
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It is strange, since I have a route for 192.168.30.0 on vmbr2:
>>>>>>>>>>>
>>>>>>>>>>> root@srv3:~# route
>>>>>>>>>>> Kernel IP routing table
>>>>>>>>>>> Destination      Gateway          Genmask          Flags Metric Ref Use Iface
>>>>>>>>>>> default          164.132.168.254  0.0.0.0          UG    0      0   0   vmbr0
>>>>>>>>>>> 51.254.233.80    *                255.255.255.240  U     0      0   0   vmbr0
>>>>>>>>>>> 164.132.168.0    *                255.255.255.0    U     0      0   0   vmbr0
>>>>>>>>>>> 192.168.30.0     *                255.255.255.0    U     0      0   0   vmbr2
>>>>>>>>>>> 224.0.0.0        *                240.0.0.0        U     0      0   0   vmbr0
>>>>>>>>>>>
>>>>>>>>>>> This doesn't change anything for the containers: if I try to
>>>>>>>>>>> ping one container (public or private interface) from another
>>>>>>>>>>> while forcing the interface, it doesn't help.
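>>>>>>>>>>>
>>>>>>>>>>> (A read-only way to see which interface the kernel actually
>>>>>>>>>>> selects for that destination, independent of the route listing:
>>>>>>>>>>>
>>>>>>>>>>> root@srv3:~# ip route get 192.168.30.101
>>>>>>>>>>>
>>>>>>>>>>> it should report "dev vmbr2" if the connected route wins.)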
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 08/07/2016 at 11:11, Guillaume wrote:
>>>>>>>>>>>> Hello,
>>>>>>>>>>>>
>>>>>>>>>>>> I'm running Proxmox 4.2-15, with a fresh install:
>>>>>>>>>>>>
>>>>>>>>>>>> # pveversion -v
>>>>>>>>>>>> proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
>>>>>>>>>>>> pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
>>>>>>>>>>>> pve-kernel-4.4.13-1-pve: 4.4.13-56
>>>>>>>>>>>> pve-kernel-4.2.8-1-pve: 4.2.8-41
>>>>>>>>>>>> lvm2: 2.02.116-pve2
>>>>>>>>>>>> corosync-pve: 2.3.5-2
>>>>>>>>>>>> libqb0: 1.0-1
>>>>>>>>>>>> pve-cluster: 4.0-42
>>>>>>>>>>>> qemu-server: 4.0-83
>>>>>>>>>>>> pve-firmware: 1.1-8
>>>>>>>>>>>> libpve-common-perl: 4.0-70
>>>>>>>>>>>> libpve-access-control: 4.0-16
>>>>>>>>>>>> libpve-storage-perl: 4.0-55
>>>>>>>>>>>> pve-libspice-server1: 0.12.5-2
>>>>>>>>>>>> vncterm: 1.2-1
>>>>>>>>>>>> pve-qemu-kvm: 2.5-19
>>>>>>>>>>>> pve-container: 1.0-70
>>>>>>>>>>>> pve-firewall: 2.0-29
>>>>>>>>>>>> pve-ha-manager: 1.0-32
>>>>>>>>>>>> ksm-control-daemon: 1.2-1
>>>>>>>>>>>> glusterfs-client: 3.5.2-2+deb8u2
>>>>>>>>>>>> lxc-pve: 1.1.5-7
>>>>>>>>>>>> lxcfs: 2.0.0-pve2
>>>>>>>>>>>> cgmanager: 0.39-pve1
>>>>>>>>>>>> criu: 1.6.0-1
>>>>>>>>>>>> zfsutils: 0.6.5.7-pve10~bpo80
>>>>>>>>>>>>
>>>>>>>>>>>> # sysctl -p
>>>>>>>>>>>> net.ipv6.conf.all.autoconf = 0
>>>>>>>>>>>> net.ipv6.conf.default.autoconf = 0
>>>>>>>>>>>> net.ipv6.conf.vmbr0.autoconf = 0
>>>>>>>>>>>> net.ipv6.conf.all.accept_ra = 0
>>>>>>>>>>>> net.ipv6.conf.default.accept_ra = 0
>>>>>>>>>>>> net.ipv6.conf.vmbr0.accept_ra = 0
>>>>>>>>>>>> net.ipv6.conf.vmbr0.accept_ra = 0
>>>>>>>>>>>> net.ipv6.conf.vmbr0.autoconf = 0
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I'm only using LXC containers.
>>>>>>>>>>>>
>>>>>>>>>>>> The host has 2 network interfaces: vmbr0 with the public IP
>>>>>>>>>>>> 164.132.161.131/32 (gw 164.132.161.254) and vmbr2 with a
>>>>>>>>>>>> private IP (OVH vRack 2), 192.168.30.3/24.
>>>>>>>>>>>> The containers have a public interface eth0 with a public IP
>>>>>>>>>>>> address (bridged to vmbr0) and eth1 with a private IP address
>>>>>>>>>>>> (bridged to vmbr2):
>>>>>>>>>>>>
>>>>>>>>>>>> * LXC1
>>>>>>>>>>>> eth0 : 51.254.231.80/28
>>>>>>>>>>>> eth1 : 192.168.30.101/24
>>>>>>>>>>>>
>>>>>>>>>>>> * LXC2
>>>>>>>>>>>> eth0 : 51.254.231.81/28
>>>>>>>>>>>> eth1 : 192.168.30.102/24
>>>>>>>>>>>>
>>>>>>>>>>>> They both have access to the net, but they can't talk to each
>>>>>>>>>>>> other, whichever network interface (public or private) I use.
>>>>>>>>>>>> Same issue with the firewall down on the node (at all 3 levels).
>>>>>>>>>>>>
>>>>>>>>>>>> # Ping from LXC1 51.254.231.80 to LXC2 51.254.231.81: tcpdump from LXC1
>>>>>>>>>>>> 15:54:00.810638 ARP, Request who-has 164.132.161.250 tell 164.132.161.252, length 46
>>>>>>>>>>>>
>>>>>>>>>>>> # Ping from LXC1 192.168.30.101 to LXC2 192.168.30.102 (vRack): tcpdump from LXC1
>>>>>>>>>>>> 15:54:52.260934 ARP, Request who-has 192.168.30.102 tell 192.168.30.3, length 28
>>>>>>>>>>>> 15:54:52.260988 ARP, Reply 192.168.30.102 is-at 62:31:32:34:65:61 (oui Unknown), length 28
>>>>>>>>>>>> 15:54:52.575082 IP 192.168.30.102 > 192.168.30.101: ICMP echo request, id 1043, seq 3, length 64
>>>>>>>>>>>> 15:54:53.583057 IP 192.168.30.102 > 192.168.30.101: ICMP echo request, id 1043, seq 4, length 64
>>>>>>>>>>>>
>>>>>>>>>>>> # Ping from LXC1 192.168.30.101 to LXC2 192.168.30.102 (vRack): tcpdump from Proxmox
>>>>>>>>>>>> 17:56:05.861665 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
>>>>>>>>>>>> 17:56:05.861688 ARP, Reply 192.168.30.101 is-at 62:31:32:34:65:61 (oui Unknown), length 28
>>>>>>>>>>>> 17:56:06.860925 ARP, Request who-has 192.168.30.101 tell 192.168.30.102, length 28
>>>>>>>>>>>> 17:56:06.860998 ARP, Reply 192.168.30.101 is-at 62:31:32:34:65:61 (oui Unknown), length 28
>>>>>>>>>>>>
>>>>>>>>>>>> Any ideas?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>
>>>>>>>>>>>> Guillaume
>>>>>>>>>> Cheers,
>>>>>>>>>> Alwin
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Alwin
>>>>>>>
>>>>>>
>>>>
>>>>
>>>
>>>
>>
>>