[PVE-User] Proxmox and Gluster Server with more than 2 servers....
Gilberto Nunes
gilberto.nunes32 at gmail.com
Wed Aug 16 14:26:04 CEST 2017
Well... It seems that it doesn't work anymore.
I was able to install a KVM Ubuntu guest, then turned off 3 servers and left just one.
The storage becomes unavailable after a few seconds.
Perhaps a minimum quorum is needed to sustain a reliable solution.
But I have an idea: I will mount the glusterfs share into a folder, and
add it to Proxmox as a Directory Storage... Maybe it will work
properly that way.
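A sketch of that workaround, using the hostnames and volume name from this thread (the mount point, storage ID, and content types are made up for illustration; check your PVE version supports the is_mountpoint option):

```shell
# Mount the gluster volume via FUSE. backup-volfile-servers gives the
# mount fallback servers for fetching the volfile if storage100 is down.
mkdir -p /mnt/glusterfs
mount -t glusterfs \
    -o backup-volfile-servers=storage200:storage300:storage400 \
    storage100:/vol-glusterfs /mnt/glusterfs

# Add the mount point to Proxmox as Directory storage.
# is_mountpoint tells PVE to treat the path as usable only while mounted.
pvesm add dir gluster-dir --path /mnt/glusterfs \
    --is_mountpoint yes --content images,iso,backup
```

Note that a plain Directory storage on top of a FUSE mount loses the GlusterFS-specific integration in PVE (qemu's native gluster driver), so this is a fallback, not an equivalent.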
Thanks
Best regards
Gilberto Ferreira
Consultor TI Linux | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
Zimbra Mail Server
(47) 3025-5907
(47) 99676-7530
Skype: gilberto.nunes36
konnectati.com.br <http://www.konnectati.com.br/>
https://www.youtube.com/watch?v=dsiTPeNWcSE
2017-08-15 19:21 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
> Nice! I will!
>
> Tks
>
>
>
>
>
> 2017-08-15 19:19 GMT-03:00 <lemonnierk at ulrar.net>:
>
>> You should really set the options I sent you then :)
>>
>> On Tue, Aug 15, 2017 at 07:18:30PM -0300, Gilberto Nunes wrote:
>> > gluster vol info
>> >
>> > Volume Name: vol-glusterfs
>> > Type: Distributed-Replicate
>> > Volume ID: 3e501d59-46c2-4db0-8ef7-7fe929ba4816
>> > Status: Started
>> > Number of Bricks: 2 x 2 = 4
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: storage100:/storage/data
>> > Brick2: storage200:/storage/data
>> > Brick3: storage300:/storage/data
>> > Brick4: storage400:/storage/data
>> > Options Reconfigured:
>> > network.ping-timeout: 45
>> > performance.readdir-ahead: on
>> >
>> >
>> >
>> >
>> >
>> > 2017-08-15 19:13 GMT-03:00 <lemonnierk at ulrar.net>:
>> >
>> > > By the way, I don't know if you know, but do check that you have
>> > > these settings configured on the volume:
>> > >
>> > > performance.readdir-ahead: on
>> > > cluster.quorum-type: auto
>> > > cluster.server-quorum-type: server
>> > > network.remote-dio: enable
>> > > cluster.eager-lock: enable
>> > > performance.quick-read: off
>> > > performance.read-ahead: off
>> > > performance.io-cache: off
>> > > performance.stat-prefetch: off
>> > >
>> > > Failing that, I've heard you could get corrupted VM disks.
>> > > That's on debian at least; I think on redhat-based systems there is
>> > > a "virt" group that sets all of that, but I have no experience with
>> > > those distros.
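The settings listed above can be applied with `gluster volume set`, one option at a time; a sketch using the volume name from this thread:

```shell
VOL=vol-glusterfs

# Quorum settings: refuse writes when too few bricks/servers are up,
# which protects against split-brain instead of silently corrupting disks.
gluster volume set $VOL cluster.quorum-type auto
gluster volume set $VOL cluster.server-quorum-type server

# Settings commonly recommended for VM image workloads on gluster.
gluster volume set $VOL network.remote-dio enable
gluster volume set $VOL cluster.eager-lock enable
gluster volume set $VOL performance.quick-read off
gluster volume set $VOL performance.read-ahead off
gluster volume set $VOL performance.io-cache off
gluster volume set $VOL performance.stat-prefetch off
```

If your gluster build ships the group file mentioned above, `gluster volume set $VOL group virt` applies a similar profile in a single command.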
>> > >
>> > > On Tue, Aug 15, 2017 at 06:59:39PM -0300, Gilberto Nunes wrote:
>> > > > yep
>> > > >
>> > > > thanks for advise
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > 2017-08-15 18:58 GMT-03:00 <lemonnierk at ulrar.net>:
>> > > >
>> > > > > The delay you mention isn't proxmox, it's gluster's timeout. You
>> > > > > can adjust it with "gluster volume set <name> network.ping-timeout
>> > > > > 30", but I'd advise against going under 30 seconds (I believe the
>> > > > > default is 45 seconds).
>> > > > >
>> > > > > On Tue, Aug 15, 2017 at 06:42:16PM -0300, Gilberto Nunes wrote:
>> > > > > > Wow!!! It took some time... but it works!
>> > > > > >
>> > > > > > I have 4 servers:
>> > > > > > - storage100
>> > > > > > - storage200
>> > > > > > - storage300
>> > > > > > - storage400
>> > > > > >
>> > > > > > I put storage100 as the main server, and storage400 as the backup.
>> > > > > > Then I turned off these servers:
>> > > > > > - storage100
>> > > > > > - storage300
>> > > > > > - storage400
>> > > > > >
>> > > > > > Proxmox took some time, but it works....
>> > > > > >
>> > > > > > Great job Proxmox Team!!!
>> > > > > >
>> > > > > > Thanks a lot
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > 2017-08-15 18:33 GMT-03:00 <lemonnierk at ulrar.net>:
>> > > > > >
>> > > > > > > Yep !
>> > > > > > > Just put storage100, with storage200 as a backup; that way if
>> > > > > > > storage100 dies and you reboot a proxmox node, it'll connect
>> > > > > > > to storage200 and it'll still work.
>> > > > > > > Basically that's just the address where it'll fetch the volume
>> > > > > > > config, but after the initial connection it'll talk to all the
>> > > > > > > bricks on its own.
>> > > > > > >
>> > > > > > > Works exactly like the fuse mount by hand.
>> > > > > > >
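In Proxmox terms, that primary/backup pair ends up in /etc/pve/storage.cfg roughly like this (the storage ID and path here are made up; server2 is the second, backup server field):

```
glusterfs: gluster-vm
        path /mnt/pve/gluster-vm
        server storage100
        server2 storage200
        volume vol-glusterfs
        content images
```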
>> > > > > > > On Tue, Aug 15, 2017 at 06:26:59PM -0300, Gilberto Nunes wrote:
>> > > > > > > > I see...
>> > > > > > > >
>> > > > > > > > So if I put just one of the bricks, let's say storage100,
>> > > > > > > > but I have 4 bricks, and I turn off storage100, storage200
>> > > > > > > > and storage300, will I still have a valid setup???
>> > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > 2017-08-15 18:24 GMT-03:00 <lemonnierk at ulrar.net>:
>> > > > > > > >
>> > > > > > > > > Hi,
>> > > > > > > > >
>> > > > > > > > > You just tell it about one brick, that's enough.
>> > > > > > > > > The second input box is there as a backup, in case the
>> > > > > > > > > first one is down for some reason.
>> > > > > > > > >
>> > > > > > > > > When proxmox connects to gluster, it'll get your whole
>> > > > > > > > > setup and it will know about the bricks then, no worries.
>> > > > > > > > >
>> > > > > > > > >
>> > > > > > > > > If your proxmox nodes are the same as your gluster nodes
>> > > > > > > > > (same physical machines) I'd just use "localhost" as the
>> > > > > > > > > server when adding to proxmox.
>> > > > > > > > > That's what I've been doing and it works perfectly :)
>> > > > > > > > >
>> > > > > > > > > On Tue, Aug 15, 2017 at 06:13:04PM -0300, Gilberto Nunes wrote:
>> > > > > > > > > > Hi...
>> > > > > > > > > >
>> > > > > > > > > > I have here a GlusterFS cluster with 4 servers, working
>> > > > > > > > > > as a distributed replica...
>> > > > > > > > > > But how can I add the Gluster storage into Proxmox,
>> > > > > > > > > > since it accepts just two gluster servers?
>> > > > > > > > > > I want to use all my gluster servers in Proxmox.
>> > > > > > > > > >
>> > > > > > > > > > How can I achieve this setup???
>> > > > > > > > > >
>> > > > > > > > > > Thanks a lot
>> > > > > > > > > > _______________________________________________
>> > > > > > > > > > pve-user mailing list
>> > > > > > > > > > pve-user at pve.proxmox.com
>> > > > > > > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> > > > > > > > >
>> > > > > > >
>> > > > >
>> > >
>>
>