<div class="moz-cite-prefix">Le 28/01/2014 17:52, Angel Docampo a
écrit :<br>
</div>
<blockquote cite="mid:52E7E03C.6070701@dltec.net" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<div class="moz-cite-prefix">Hi there,<br>
<br>
Mmmm, if you use /etc/fstab or glusterfs client, you will be
accessing the gluster via FUSE. <br>
<br>
And that s*cks.<br>
<br>
I do a trick on my proxmox cluster, as well gluster cluster. I
do have one 10Gb interface dedicated to gluster on each node,
and another 1Gb interface dedicated to proxmox cluster. So, in
hosts, my config is more or less this one.<br>
<br>
#PROXMOX NODES<br>
10.0.0.1 pve01 <br>
10.0.0.2 pve02 <br>
#GLUSTER NODE<br>
192.168.100.10 g01<br>
192.168.100.20 g02<br>
<br>
Until this point, completely normal, now, on the first node
(pve01/g01) y put on hosts<br>
192.168.100.10 gluster <br>
<br>
And this other hosts line on node pve02/g02<br>
192.168.100.20 gluster <br>
<br>
So, on proxmox GUI I mount gluster:VOLUMENAME. Each node mounts
its own mountpoint and on the redhat cluster, the resource is
called the same: gluster.<br>
<br>
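>
> In other words, each node ends up doing the equivalent of the mount
> below, where VOLUMENAME and STORAGEID are just placeholders for the
> volume name and the Proxmox storage name:
>
> mount -t glusterfs gluster:/VOLUMENAME /mnt/pve/STORAGEID
>
> and thanks to the hosts entry, "gluster" resolves to the local node
> on each machine.
>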
> It's a trick that works if your virtualization nodes are also your
> storage nodes, which is my case.
> Hope it helps.

Thanks for the input Angel, it's a good lightweight tip but
unfortunately it doesn't fit my needs. You could also play with CARP
(uCARP) to float a virtual IP between the nodes for the same purpose.
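For example, a rough ucarp sketch (the interface name, password and
the 192.168.100.100 VIP are made-up values, and each node would run
it with its own --srcip):

ucarp -i eth0 -s 192.168.100.10 -v 1 -p secret -a 192.168.100.100 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

The upscript on the elected master brings up the VIP, so clients can
always point at 192.168.100.100 no matter which node holds it.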

By default GlusterFS does automatic failover between nodes once the
volume is mounted, but if your primary node crashes completely, at
the network or OS level, and your client then reboots, the mount will
be completely stuck: at mount time the client can only fetch the
volume information from the one server it knows. Explicitly
specifying a backup address points the client at the second node and
prevents that.
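
With the FUSE client that is the backupvolfile-server mount option
(the exact name varies between GlusterFS versions; newer releases
spell it backup-volfile-servers), for example:

mount -t glusterfs -o backupvolfile-server=g02 g01:/VOLUMENAME /mnt/gluster

or the same option on the /etc/fstab line. If g01 is unreachable at
mount time, the client fetches the volume file from g02 instead.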

Regards