[pve-devel] got stuck while setting up new dev cluster
Cesar Peschiera
brain at click.com.py
Tue Mar 24 00:25:06 CET 2015
Hi Stefan
I have tested on two brands of switches that if I set the jumbo frame
configuration to the maximum, the switches also accept smaller MTUs from
the servers, so my habit is to always configure the switches at the
maximum value (one less worry for me).
Also, with that configuration on the switch, even with a mixed combination
of MTUs across the servers, all my servers keep operating perfectly.
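
For what it's worth, the server side is just the MTU setting in
/etc/network/interfaces; the names (eth0, vmbr0), the address and the
9000-byte value below are only placeholders, not taken from a real config:

auto eth0
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.255.0.10
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        mtu 9000
        # some ifupdown versions only honour an explicit post-up instead:
        # post-up ip link set dev eth0 mtu 9000
        # post-up ip link set dev vmbr0 mtu 9000

Afterwards "ip link show" should report mtu 9000 on both interfaces.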
----- Original Message -----
From: "Stefan Priebe" <s.priebe at profihost.ag>
To: <pve-devel at pve.proxmox.com>
Sent: Monday, March 23, 2015 6:05 PM
Subject: Re: [pve-devel] got stuck while setting up new dev cluster
> Solved. The ugly switch had a special parameter for jumbo frames *gr*
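>
> A quick way to confirm jumbo frames really pass end to end is a
> non-fragmentable ping sized for the MTU (8972 = 9000 minus 20 bytes IP
> and 8 bytes ICMP header); the target here is just the first node's
> address from below:
>
> ping -M do -s 8972 -c 3 10.255.0.10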
>
> Stefan
>
> On 23.03.2015 at 22:25, Stefan Priebe wrote:
>> Also tried:
>> transport="udpu"
>>
>> But it doesn't change anything ;-( same problem: the 2nd node does not
>> join the first node, which is already running VMs.
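>>
>> (For reference, on the cman-based stack that option normally goes into
>> /etc/pve/cluster.conf as an attribute on the cman tag, roughly:
>>
>> <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
>>
>> The keyfile path is the usual PVE default and the exact placement is
>> from memory, so treat it as a sketch; config_version still has to be
>> bumped for the change to be activated.)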
>>
>> Stefan
>>
>> On 23.03.2015 at 20:01, Stefan Priebe wrote:
>>> Hi,
>>>
>>> I wanted to set up a new Proxmox dev cluster of 3 nodes. I already had
>>> a single PVE machine I wanted to extend.
>>>
>>> So I used that one as a base.
>>>
>>> # pvecm create pve-dev
>>>
>>> Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new
>>> cluster config '/etc/cluster/cluster.conf'
>>> .
>>> Starting cluster:
>>> Checking if cluster has been disabled at boot... [ OK ]
>>> Checking Network Manager... [ OK ]
>>> Global setup... [ OK ]
>>> Loading kernel modules... [ OK ]
>>> Mounting configfs... [ OK ]
>>> Starting cman... [ OK ]
>>> Waiting for quorum... [ OK ]
>>> Starting fenced... [ OK ]
>>> Starting dlm_controld... [ OK ]
>>> Tuning DLM kernel config... [ OK ]
>>> Unfencing self... [ OK ]
>>>
>>> # pvecm status; pvecm nodes
>>> Version: 6.2.0
>>> Config Version: 1
>>> Cluster Name: pve-dev
>>> Cluster Id: 51583
>>> Cluster Member: Yes
>>> Cluster Generation: 236
>>> Membership state: Cluster-Member
>>> Nodes: 1
>>> Expected votes: 1
>>> Total votes: 1
>>> Node votes: 1
>>> Quorum: 1
>>> Active subsystems: 5
>>> Flags:
>>> Ports Bound: 0
>>> Node name: node1
>>> Node ID: 1
>>> Multicast addresses: 239.192.201.73
>>> Node addresses: 10.255.0.10
>>> Node Sts Inc Joined Name
>>> 1 M 236 2015-03-23 19:48:20 node1
>>>
>>> I then tried to add the 2nd node, which just hangs:
>>>
>>> # pvecm add 10.255.0.10
>>> copy corosync auth key
>>> stopping pve-cluster service
>>> Stopping pve cluster filesystem: pve-cluster.
>>> backup old database
>>> Starting pve cluster filesystem : pve-cluster.
>>> Starting cluster:
>>> Checking if cluster has been disabled at boot... [ OK ]
>>> Checking Network Manager... [ OK ]
>>> Global setup... [ OK ]
>>> Loading kernel modules... [ OK ]
>>> Mounting configfs... [ OK ]
>>> Starting cman... [ OK ]
>>> Waiting for quorum... [ OK ]
>>> Starting fenced... [ OK ]
>>> Starting dlm_controld... [ OK ]
>>> Tuning DLM kernel config... [ OK ]
>>> Unfencing self... [ OK ]
>>> waiting for quorum...
>>>
>>> That one hangs at quorum.
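>>>
>>> (While it sits there, the quorum view of both sides can be compared
>>> with the standard cman tools, nothing cluster-specific assumed:
>>>
>>> cman_tool status   # expected/total votes as each node sees them
>>> cman_tool nodes    # which members each node actually sees
>>>
>>> If the joining node only ever lists itself, that usually points at
>>> multicast not getting through between the nodes.)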
>>>
>>> And the first one shows in log:
>>> Mar 23 19:56:41 node1 pmxcfs[7740]: [status] notice: cpg_send_message
>>> retried 100 times
>>> Mar 23 19:56:41 node1 pmxcfs[7740]: [status] crit: cpg_send_message
>>> failed: 6
>>> Mar 23 19:56:42 node1 pmxcfs[7740]: [status] notice: cpg_send_message
>>> retry 10
>>> Mar 23 19:56:43 node1 pmxcfs[7740]: [status] notice: cpg_send_message
>>> retry 20
>>> ...
>>>
>>> I already checked with omping, which is fine.
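>>>
>>> (For anyone reproducing this, the usual test is running the same
>>> command on all nodes in parallel, e.g.
>>>
>>> omping node1 node2
>>>
>>> where node2 only stands in for the second node's hostname; both the
>>> unicast and the multicast loss should stay at 0%.)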
>>>
>>> What's wrong ;-(
>>>
>>> Greets,
>>> Stefan
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel