[PVE-User] High Availability / Fencing in Proxmox VE 4.x

Uwe Sauter uwe.sauter.de at gmail.com
Mon Feb 6 13:08:18 CET 2017


Hi Kevin,

thanks for explaining your setup. Comments below.

On 06.02.2017 at 12:57, Kevin Lemonnier wrote:
>> * How does fencing work in Proxmox (technically)?
>>   Since fencing is based on watchdogs, I assume that some piece of software
>>   regularly resets the watchdog timer so that the node isn't automatically rebooted
>>   by the watchdog. Is this correct? Which software is responsible?
> 
> I had a hard time figuring out how to configure Proxmox correctly for something
> similar (GlusterFS over vmbr1). Basically:
> 
> # On the first node
> pvecm create My_Cluster -bindnet0_addr 172.16.0.1 -ring0_addr 172.16.0.1
> 
> # On the second node
> pvecm add -ring0_addr 172.16.0.2 172.16.0.1
> 
> # On the third node
> pvecm add -ring0_addr 172.16.0.3 172.16.0.1
> 
> And so on.
> That way Proxmox (corosync) will use the same interface as Ceph and will self-fence
> when that interface isn't available. Just don't configure any additional rings, as that
> would allow Proxmox to stay up even when Ceph is down, which is bad (unless you have a
> way to tell Ceph how to use the other interfaces, but I guess you wouldn't be asking
> this if you did).
> 
> It's been working fine for us for a year now, on a very, very unstable network, using
> the default self-fencing. We're quite happy with it.
> 
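
For the archive: if I read those commands correctly, they should result in an
/etc/pve/corosync.conf along these lines (just a sketch; node names, nodeid values
and config_version will differ, and I've left out the logging and quorum sections):

totem {
  version: 2
  cluster_name: My_Cluster
  config_version: 3
  interface {
    ringnumber: 0
    bindnetaddr: 172.16.0.1
  }
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.0.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.0.2
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 172.16.0.3
  }
}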

But that contradicts what is written in [4], quote: "Storage communication should never be
on the same network as corosync!" I understand your motivation for doing it the way you
did, but there has to be another way (if what the wiki says is true).
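
To illustrate what I mean: following [4], I would have expected the cluster to be created
with corosync on its own dedicated subnet, something like this (hypothetical addresses,
with 10.10.10.0/24 reserved for corosync and storage staying on 172.16.0.0/24):

# On the first node
pvecm create My_Cluster -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1

# On the second node
pvecm add -ring0_addr 10.10.10.2 10.10.10.1

# And so on for the other nodes.

But then corosync would no longer notice a failure of the storage network, so the
self-fencing behaviour you rely on would be lost. That trade-off is exactly what I'm
trying to understand.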


Regards,

	Uwe

[4] https://pve.proxmox.com/wiki/Separate_Cluster_Network



