[PVE-User] pvecm e 1 not working anymore
Thomas Lamprecht
t.lamprecht@proxmox.com
Fri Mar 18 13:20:35 CET 2016
On 03/18/2016 12:36 PM, Jean-Laurent Ivars wrote:
> You can let it go, it's OK: I reverted the conf file from my other
> node, restarted the corosync service from the web interface, and the
> folder is back.
>
>
> As you said, *pvecm expected 1* is working despite what pvecm status
> says, because both hosts were online. I just ran a test (temporarily
> deactivating the cluster interface on the other node).
If you did something like "ifdown eth1", forget it; that won't work
with corosync and may actually cause problems.
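A safer way to simulate a node failure is to drop the corosync traffic
instead of downing the interface. A minimal sketch, assuming corosync
still uses its default UDP ports 5404/5405 (check the totem section of
your corosync.conf before copying this):

# drop corosync traffic on this node (default ports assumed,
# adjust to your totem configuration)
iptables -A INPUT  -p udp --dport 5404:5405 -j DROP
iptables -A OUTPUT -p udp --dport 5404:5405 -j DROP
# revert after the test
iptables -D INPUT  -p udp --dport 5404:5405 -j DROP
iptables -D OUTPUT -p udp --dport 5404:5405 -j DROP

This lets corosync see the link as lost without pulling the interface
out from under it.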
And yes, it will *not* set the expected votes lower if the cluster is
fully up and quorate!
A real test from my side shows that it works:
root@due:~# pvecm s
Quorum information
------------------
Date:             Fri Mar 18 13:15:42 2016
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000002
Ring ID:          704
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.10.10.1
0x00000002          1 10.10.10.2 (local)
0x00000003          1 10.10.10.3
*-> pull network plug here*
root@due:~# pvecm s
Quorum information
------------------
Date:             Fri Mar 18 13:15:46 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          708
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 10.10.10.2 (local)
root@due:~# pvecm e 1
root@due:~# pvecm s
Quorum information
------------------
Date:             Fri Mar 18 13:15:51 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          708
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 10.10.10.2 (local)
root@due:~#
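For reference, pvecm expected is only a thin wrapper around corosync's
votequorum interface; if I am not mistaken you can get the same effect
with corosync's own tooling:

corosync-quorumtool -e 1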
Btw., using that to achieve some false sense of HA in a two-node
cluster should *never* be done on a production system; if something
fails or does not work as expected, do not blame us, we warned you. If
you want HA, use at least three nodes.
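If you have to live with two nodes anyway, a cleaner route than
lowering the expected votes by hand every time is corosync's two_node
option (see votequorum(5)). A sketch of the quorum section, assuming an
otherwise standard votequorum setup:

quorum {
  provider: corosync_votequorum
  two_node: 1
  # two_node automatically enables wait_for_all, so after a full
  # shutdown both nodes must be up once before quorum is regained
}

But that only papers over the quorum problem, it is no substitute for a
third node.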
>
> and voilà:
>
> root@roubaix /etc/pve # pvecm status
> Quorum information
> ------------------
> Date:             Fri Mar 18 12:32:44 2016
> Quorum provider:  corosync_votequorum
> Nodes:            1
> Node ID:          0x00000001
> Ring ID:          1504
> Quorate:          No
>
> Votequorum information
> ----------------------
> Expected votes:   2
> Highest expected: 2
> Total votes:      1
> Quorum:           2 Activity blocked
> Flags:
>
> Membership information
> ----------------------
>     Nodeid      Votes Name
> 0x00000001          1 10.10.10.2 (local)
> root@roubaix /etc/pve # pvecm expected 1
> root@roubaix /etc/pve # pvecm status
> Quorum information
> ------------------
> Date:             Fri Mar 18 12:33:00 2016
> Quorum provider:  corosync_votequorum
> Nodes:            1
> Node ID:          0x00000001
> Ring ID:          1504
> Quorate:          Yes
>
> Votequorum information
> ----------------------
> Expected votes:   1
> Highest expected: 1
> Total votes:      1
> Quorum:           1
> Flags:            Quorate
>
> Membership information
> ----------------------
>     Nodeid      Votes Name
> 0x00000001          1 10.10.10.2 (local)
> root@roubaix /etc/pve #
>
> Thank you