[PVE-User] Proxmox and Gluster Server with more than 2 server....

lemonnierk at ulrar.net
Wed Aug 16 00:13:05 CEST 2017


By the way, in case you don't know: do check that you have these
settings configured on the volume:

performance.readdir-ahead: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
network.remote-dio: enable
cluster.eager-lock: enable
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off

Without them, I've heard you can end up with corrupted VM disks.
That's on Debian, at least; I believe Red Hat-based systems ship a
"virt" group that sets all of this, but I have no experience with
those distros.
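For reference, a minimal shell sketch of applying those options; the volume name "gv0" is an assumption, substitute your own:

```shell
# Apply the recommended VM-safe options to the volume.
# "gv0" is a placeholder volume name.
VOL=gv0
gluster volume set "$VOL" performance.readdir-ahead on
gluster volume set "$VOL" cluster.quorum-type auto
gluster volume set "$VOL" cluster.server-quorum-type server
gluster volume set "$VOL" network.remote-dio enable
gluster volume set "$VOL" cluster.eager-lock enable
gluster volume set "$VOL" performance.quick-read off
gluster volume set "$VOL" performance.read-ahead off
gluster volume set "$VOL" performance.io-cache off
gluster volume set "$VOL" performance.stat-prefetch off
```

On Red Hat-based installs, "gluster volume set gv0 group virt" reportedly applies a similar profile in one step, matching the "virt group" mentioned above (untested here).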

On Tue, Aug 15, 2017 at 06:59:39PM -0300, Gilberto Nunes wrote:
> yep
> 
> thanks for advise
> 
> 
> 
> 
> 2017-08-15 18:58 GMT-03:00 <lemonnierk at ulrar.net>:
> 
> > The delay you mention isn't Proxmox, it's Gluster's timeout. You can
> > adjust it with "gluster volume set <name> network.ping-timeout 30",
> > but I'd advise against going under 30 seconds (I believe the default
> > is 45 seconds).
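The timeout mentioned above can be inspected before changing it; again, "gv0" is a placeholder volume name:

```shell
# Check the current ping-timeout. The default is 42 seconds in many
# Gluster releases; the 45 above is the poster's recollection.
gluster volume get gv0 network.ping-timeout
# Raise or lower it; staying at or above 30 seconds is advised.
gluster volume set gv0 network.ping-timeout 30
```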
> >
> > On Tue, Aug 15, 2017 at 06:42:16PM -0300, Gilberto Nunes wrote:
> > > wow!!! It took some time... but it works!
> > >
> > > I have 4 servers:
> > > - storage100
> > > - storage200
> > > - storage300
> > > - storage400
> > >
> > > I set storage100 as the main server and storage400 as the backup.
> > > Then I turned off these servers:
> > > - storage100
> > > - storage300
> > > - storage400
> > >
> > > Proxmox took some time, but it works....
> > >
> > > Great job Proxmox Team!!!
> > >
> > > Thanks a lot
> > >
> > >
> > >
> > >
> > > 2017-08-15 18:33 GMT-03:00 <lemonnierk at ulrar.net>:
> > >
> > > > Yep !
> > > > Just set storage100 as the server and storage200 as the backup;
> > > > that way, if storage100 dies and you reboot a Proxmox node, it'll
> > > > connect to storage200 and still work.
> > > > Basically, that's just the address where it fetches the volume
> > > > config; after the initial connection it'll talk to all the bricks
> > > > on its own.
> > > >
> > > > Works exactly like the fuse mount by hand.
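The manual FUSE mount this is compared to can be sketched like so; the server names come from the thread, while the volume name "gv0" and mount point are assumptions:

```shell
# Mount the volume by hand; storage100 only serves the initial volume
# config, and backup-volfile-servers lists fallbacks if it is down.
mount -t glusterfs \
  -o backup-volfile-servers=storage200:storage300:storage400 \
  storage100:/gv0 /mnt/gluster
```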
> > > >
> > > > On Tue, Aug 15, 2017 at 06:26:59PM -0300, Gilberto Nunes wrote:
> > > > > I see...
> > > > >
> > > > > So if I point it at just one of the bricks, let's say storage100,
> > > > > but I have 4 bricks, and then turn off storage100, storage200 and
> > > > > storage300, will I still have a valid setup???
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > 2017-08-15 18:24 GMT-03:00 <lemonnierk at ulrar.net>:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > You just tell it about one brick, that's enough.
> > > > > > The second input box is there as a backup, in case the first one
> > > > > > is down for some reason.
> > > > > >
> > > > > > When proxmox connects to gluster, it'll get your whole setup and
> > > > > > it will know about the bricks then, no worries.
> > > > > >
> > > > > >
> > > > > > If your proxmox nodes are the same as your gluster nodes (same
> > > > > > physical machines) I'd just use "localhost" as the server when
> > > > > > adding to proxmox.
> > > > > > That's what I've been doing and it works perfectly :)
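The setup described above ends up as a glusterfs entry in /etc/pve/storage.cfg; this is a sketch where the storage ID, volume name "gv0", and content types are assumptions:

```
glusterfs: glusterstore
        server storage100
        server2 storage400
        volume gv0
        content images
```

With co-located nodes as suggested, "server localhost" works the same way.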
> > > > > >
> > > > > > On Tue, Aug 15, 2017 at 06:13:04PM -0300, Gilberto Nunes wrote:
> > > > > > > Hi...
> > > > > > >
> > > > > > > I have a GlusterFS cluster here, with 4 servers, working as
> > > > > > > distributed replica...
> > > > > > > But how can I add Gluster to Proxmox, since it accepts just
> > > > > > > two Gluster servers?
> > > > > > > I want to use all my Gluster servers with Proxmox.
> > > > > > >
> > > > > > > How can I achieve this setup???
> > > > > > >
> > > > > > > Thanks a lot
> > > > > > > _______________________________________________
> > > > > > > pve-user mailing list
> > > > > > > pve-user at pve.proxmox.com
> > > > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
