[pve-devel] NFS storage communication problems when a second NIC is used

Cesar Peschiera brain at click.com.py
Wed Nov 19 03:31:34 CET 2014


Hi to all

I suspect that PVE has a problem communicating with NFS storage when the
PVE host has a second IP address on another NIC dedicated to the NFS
traffic.

All my PVE servers have nfs-kernel-server installed and configured on a
second, physically separate LAN.

For example:
Zone of virtualization servers
------------------------------
PVE-1 on NIC1 = 10.1.1.1 -> LAN of PVE and VMs.
PVE-1 on NIC2 = 10.2.2.1 -> LAN of the Backups.
(Separate physical and logical networks for this host)

PVE-2 on NIC1 = 10.1.1.2 -> LAN of PVE and VMs.
PVE-2 on NIC2 = 10.2.2.2 -> LAN of the Backups.
(Separate physical and logical networks for this host)

Zone of NFS servers
(not part of the PVE cluster)
-----------------------------
PVE-NFS on NIC1 = 10.1.1.200 -> LAN of PVE.
PVE-NFS on NIC2 = 10.2.2.200 -> LAN of the Backups.
These are also separate physical and logical networks on this host.
(10.2.2.200 is the NIC where PVE-NFS must receive the backup data.)
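
For illustration, the relevant export on PVE-NFS would look roughly like
this (the path /srv/backups and the /24 mask are examples, not my exact
configuration):

shell# cat /etc/exports
# export the backup directory to the backup LAN
/srv/backups  10.2.2.0/24(rw,sync,no_subtree_check)
shell# exportfs -ra    # reload the export table after editing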

When I try to configure the NFS storage in the PVE GUI, I wait a long time
and nothing happens. When I try from the CLI (on a virtualization server)
with "showmount -e 10.2.2.200", I only see the exports after 30 seconds to
a minute.
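
If it helps, the delay can be narrowed down by timing the call and querying
the portmapper on the NFS server directly:

shell# time showmount -e 10.2.2.200    # measure how long the MOUNT query takes
shell# rpcinfo -p 10.2.2.200           # list the RPC services registered on the server
shell# rpcinfo -t 10.2.2.200 mountd    # call mountd over TCP to test reachability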

I have also tried the showmount command between the different PVE hosts;
it works well on the PVE LAN but not on the backup LAN.

Commands such as "ping" and "iperf" against the PVE-NFS server on the
backup LAN give good results.
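
Since every host has two NICs on different subnets, it is also worth
verifying that the traffic really leaves through the backup NIC (eth1 below
is only a placeholder for the actual backup interface name):

shell# ip route get 10.2.2.200                         # confirm the route uses the backup NIC
shell# tcpdump -ni eth1 host 10.2.2.200 and port 111   # watch the rpcbind traffic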

No matter whether I use a bond, a bridge, or anything else on the backup
LAN, "showmount" always takes a while to show results.

shell# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Can anybody tell me how to fix this?




