[PVE-User] creating 2 node cluster via crossover cable

Francesco Ongaro francesco.ongaro at isgroup.it
Thu May 30 20:40:53 CEST 2019

On 30/05/19 17:15, Adam Weremczuk wrote:
> Anyway, I've tried creating a cluster using these "crossover" IPs (Ring
> 0 address) but it seems to be persisting of using primary IPs.

Hi Adam,

The part you are referring to can IMHO be resolved by specifying
hostnames when doing the corosync setup (e.g. node1-corosync, etc.)
and then managing IP resolution in /etc/hosts. This gives maximum
flexibility.

Proxmox nodes talk to each other on the IP that the hostname (e.g.
node1) resolves to in /etc/hosts, so you will want to use the IP of the
direct-attached/dedicated-bandwidth network as the main IP.
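
As a sketch (the hostnames and addresses below are made up, not from an
actual setup), each node's /etc/hosts could look like this, with the
main names on the direct-attached link and the -corosync names on the
Ring 0 network:

```
# /etc/hosts (example addresses; adjust to your networks)
10.10.10.11   node1            # main IP, on the direct-attached link
10.10.10.12   node2
10.20.20.11   node1-corosync   # Ring 0 addresses for corosync
10.20.20.12   node2-corosync
```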

The usage of the right interface can be verified by migrating a virtual
machine between the nodes and watching which link carries the traffic.
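
One quick way to see which link carried the migration is to compare
per-interface byte counters before and after it; this is a sketch using
the standard Linux sysfs statistics files (interface names will differ
on your nodes):

```shell
# Dump RX/TX byte counters for all interfaces; run once before and once
# after the migration, then diff the two outputs.
for dev in /sys/class/net/*; do
    printf '%-12s rx=%s tx=%s\n' "$(basename "$dev")" \
        "$(cat "$dev/statistics/rx_bytes")" \
        "$(cat "$dev/statistics/tx_bytes")"
done
```

The interface whose counters jump by roughly the size of the VM's
memory (and disk, for offline storage) is the one doing the work.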

BTW I did something similar with inexpensive 40Gb Mellanox cards:

03:00.0 Network controller: Mellanox Technologies MT27500 Family

Since such cards are dual port you can connect up to 3 nodes without a
switch.

Alternatively you can bond two direct-attached copper cables (you can
find new, cheap ones online) between two machines if you are only
interested in ZFS replication (it would be nice to have LVM replication
too).

In the latter, more complex case, my setup is the following:


# Infiniband bonding
iface eno1 inet manual
iface eno1d1 inet manual
auto bond0
iface bond0 inet static
        address 192.X.X.X
        network 192.X.X.0
        slaves eno1 eno1d1
#       bond_mode active-backup
        bond_mode balance-rr
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200

You can then verify that the bond negotiated at 40Gb/s:

# ethtool bond0
Settings for bond0:
	Supported ports: [ ]
	Supported link modes:   Not reported
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 40000Mb/s
	Duplex: Full
	Port: Other
	Transceiver: internal
	Auto-negotiation: off
	Link detected: yes

This specific interface is a little choosy after recent kernel updates
and tends to revert to Fibre/IB mode as opposed to Ethernet mode (the
one you want for corosync et alia).

If you see something like:

# ethtool eno1d1
Settings for eno1d1:
	Supported ports: [ FIBRE ]

You can try the following:

# cat /etc/modules

# cat /etc/modprobe.d/mlx4.conf
blacklist mlx4_ib
options mlx4_core port_type_array="2,2"

# update-initramfs -u

# reboot
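
To double-check which mode the ports came up in after the reboot, the
mlx4_core driver exposes a per-port type attribute under the card's PCI
device in sysfs; a sketch (prints nothing on machines without an mlx4
card, and the wildcard stands in for a PCI address like the 03:00.0
shown by lspci above):

```shell
# Print the mlx4 port-type attributes, if present:
# 'eth' means Ethernet mode, 'ib' means InfiniBand.
for p in /sys/bus/pci/devices/*/mlx4_port*; do
    [ -e "$p" ] && printf '%s: %s\n' "$p" "$(cat "$p")"
done
```

Writing "eth" into these attributes is also reported to switch a port
at runtime, though I have only relied on the modprobe option above.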

If anybody has other suggestions on how to force Mellanox Ethernet mode
I'm all ears!

Have a great day,

Francesco Ongaro, Senior Security Researcher
ISGroup: Information Security Group (www.isgroup.it)
Tel       (+39) 045 4853232
Fax       (+39) 045 5111719
Voicemail (+39) 02 320624653


