<div dir="ltr"><div>root@kvm01:/var/log# pveversion -V</div><div>proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)</div><div>pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)</div><div>pve-kernel-2.6.32-26-pve: 2.6.32-114</div>
<div>lvm2: 2.02.98-pve4</div><div>clvm: 2.02.98-pve4</div><div>corosync-pve: 1.4.5-1</div><div>openais-pve: 1.1.4-3</div><div>libqb0: 0.11.1-2</div><div>redhat-cluster-pve: 3.2.0-2</div><div>resource-agents-pve: 3.9.2-4</div>
<div>fence-agents-pve: 4.0.0-2</div><div>pve-cluster: 3.0-8</div><div>qemu-server: 3.1-8</div><div>pve-firmware: 1.0-23</div><div>libpve-common-perl: 3.0-9</div><div>libpve-access-control: 3.0-8</div><div>libpve-storage-perl: 3.0-18</div>
<div>pve-libspice-server1: 0.12.4-2</div><div>vncterm: 1.1-6</div><div>vzctl: 4.0-1pve4</div><div>vzprocps: 2.0.11-2</div><div>vzquota: 3.1-2</div><div>pve-qemu-kvm: 1.4-17</div><div>ksm-control-daemon: 1.1-1</div><div>glusterfs-client: 3.4.1-1</div>
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014/1/6 <span dir="ltr"><<a href="mailto:pve-user-request@pve.proxmox.com" target="_blank">pve-user-request@pve.proxmox.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send pve-user mailing list submissions to<br>
        <a href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
        <a href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user" target="_blank">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user</a><br>
or, via email, send a message with subject or body 'help' to<br>
        <a href="mailto:pve-user-request@pve.proxmox.com">pve-user-request@pve.proxmox.com</a><br>
<br>
You can reach the person managing the list at<br>
        <a href="mailto:pve-user-owner@pve.proxmox.com">pve-user-owner@pve.proxmox.com</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of pve-user digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
   1. got empty cluster VM list (Irek Fasikhov)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Mon, 6 Jan 2014 16:21:18 +0400<br>
From: Irek Fasikhov <<a href="mailto:malmyzh@gmail.com">malmyzh@gmail.com</a>><br>
To: "<a href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a>" <<a href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a>><br>
Subject: [PVE-User] got empty cluster VM list<br>
Message-ID:<br>
        <<a href="mailto:CAF-rypzTzKgeqh5e86eC7aVfzTX9DqNSer1XZDLs2ZOJD9fyQQ@mail.gmail.com">CAF-rypzTzKgeqh5e86eC7aVfzTX9DqNSer1XZDLs2ZOJD9fyQQ@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="koi8-r"<br>
<br>
Hi, all,<br>
<br>
The cluster consists of four nodes.<br>
<br>
*cat /etc/pve/cluster.conf*<br>
<?xml version="1.0"?><br>
<cluster config_version="112" name="KVM"><br>
  <logging debug="on" logfile_priority="debug" to_syslog="no"/><br>
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/><br>
  <clusternodes><br>
    <clusternode name="kvm01" nodeid="1" votes="1"><br>
      <fence><br>
        <method name="1"><br>
          <device action="reboot" name="fenceKVM01"/><br>
        </method><br>
      </fence><br>
    </clusternode><br>
    <clusternode name="kvm02" nodeid="2" votes="1"><br>
      <fence><br>
        <method name="1"><br>
          <device action="reboot" name="fenceKVM02"/><br>
        </method><br>
      </fence><br>
    </clusternode><br>
    <clusternode name="kvm03" nodeid="3" votes="1"><br>
      <fence><br>
        <method name="1"><br>
          <device action="reboot" name="fenceKVM03"/><br>
        </method><br>
      </fence><br>
    </clusternode><br>
    <clusternode name="kvm04" nodeid="4" votes="1"><br>
      <fence><br>
        <method name="1"><br>
          <device action="reboot" name="fenceKVM04"/><br>
        </method><br>
      </fence><br>
    </clusternode><br>
  </clusternodes><br>
  <fencedevices><br>
    <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM01" passwd="-"/><br>
    <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM02" passwd="-"/><br>
    <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM03" passwd="-"/><br>
    <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM04" passwd="-"/><br>
  </fencedevices><br>
  <rm><br>
    <pvevm autostart="1" vmid="109"/><br>
    <pvevm autostart="1" vmid="121"/><br>
    <pvevm autostart="1" vmid="123"/><br>
    <pvevm autostart="1" vmid="124"/><br>
    <pvevm autostart="1" vmid="120"/><br>
    <pvevm autostart="1" vmid="125"/><br>
    <pvevm autostart="1" vmid="131"/><br>
    <pvevm autostart="1" vmid="130"/><br>
    <pvevm autostart="1" vmid="105"/><br>
    <pvevm autostart="1" vmid="143"/><br>
    <pvevm autostart="1" vmid="129"/><br>
    <pvevm autostart="1" vmid="100"/><br>
    <pvevm autostart="1" vmid="104"/><br>
    <pvevm autostart="1" vmid="115"/><br>
    <pvevm autostart="1" vmid="116"/><br>
    <pvevm autostart="1" vmid="117"/><br>
    <pvevm autostart="1" vmid="118"/><br>
    <pvevm autostart="1" vmid="119"/><br>
  </rm><br>
</cluster><br>
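For reference, the set of HA-managed VMIDs defined above lives in the `<rm>` section as `pvevm` resources. A minimal Python sketch of how one could pull that list out of the XML (shown here on a trimmed copy of the config, not the live `/etc/pve/cluster.conf`):<br>

```python
# Sketch: list the vmid of every HA-managed pvevm resource in a
# cluster.conf <rm> section. The embedded config is a trimmed example.
import xml.etree.ElementTree as ET

CLUSTER_CONF = """\
<cluster config_version="112" name="KVM">
  <rm>
    <pvevm autostart="1" vmid="109"/>
    <pvevm autostart="1" vmid="121"/>
    <pvevm autostart="1" vmid="100"/>
  </rm>
</cluster>"""

def ha_vmids(conf_xml):
    """Return the vmid attribute of each pvevm element under <rm>."""
    root = ET.fromstring(conf_xml)
    return [p.get("vmid") for p in root.findall("./rm/pvevm")]

print(ha_vmids(CLUSTER_CONF))  # ['109', '121', '100']
```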
<br>
<br>
On kvm01, virtual machines spontaneously restart for no apparent reason. The virtual<br>
machines are managed by HA.<br>
*cat /var/log/cluster/rgmanager.log*<br>
<br>
Jan 05 22:49:11 rgmanager [pvevm] VM 117 is running<br>
Jan 05 22:49:31 rgmanager [pvevm] VM 104 is running<br>
Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list<br>
Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list<br>
Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list<br>
Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list<br>
Jan 05 22:49:32 rgmanager status on pvevm "120" returned 2 (invalid argument(s))<br>
Jan 05 22:49:33 rgmanager status on pvevm "131" returned 2 (invalid argument(s))<br>
Jan 05 22:49:33 rgmanager status on pvevm "129" returned 2 (invalid argument(s))<br>
Jan 05 22:49:33 rgmanager status on pvevm "130" returned 2 (invalid argument(s))<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 124 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 119 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 115 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 122 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 116 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 118 is running<br>
Jan 05 22:49:33 rgmanager [pvevm] VM 117 is running<br>
Jan 05 22:49:35 rgmanager Stopping service pvevm:120<br>
Jan 05 22:49:35 rgmanager Stopping service pvevm:131<br>
Jan 05 22:49:35 rgmanager Stopping service pvevm:129<br>
Jan 05 22:49:35 rgmanager Stopping service pvevm:130<br>
Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting<br>
........<br>
Jan 05 22:49:42 rgmanager [pvevm] VM 118 is running<br>
Jan 05 22:49:42 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:49:42 rgmanager [pvevm] VM 122 is running<br>
Jan 05 22:49:42 rgmanager [pvevm] VM 119 is running<br>
Jan 05 22:49:42 rgmanager [pvevm] VM 124 is running<br>
Jan 05 22:49:42 rgmanager [pvevm] Task still active, waiting<br>
......<br>
Jan 05 22:50:15 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:15 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:15 rgmanager Service pvevm:131 is recovering<br>
Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:17 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:17 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:18 rgmanager Recovering failed service pvevm:131<br>
Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting<br>
....<br>
Jan 05 22:50:21 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:21 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:22 rgmanager [pvevm] VM 119 is running<br>
Jan 05 22:50:22 rgmanager [pvevm] VM 118 is running<br>
Jan 05 22:50:22 rgmanager [pvevm] VM 122 is running<br>
Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:22 rgmanager [pvevm] VM 124 is running<br>
Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:24 rgmanager Service pvevm:130 is recovering<br>
Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:26 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:26 rgmanager Recovering failed service pvevm:130<br>
Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:28 rgmanager Service pvevm:131 started<br>
Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:29 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:29 rgmanager [pvevm] Task still active, waiting<br>
......<br>
Jan 05 22:50:35 rgmanager Service pvevm:130 started<br>
Jan 05 22:50:35 rgmanager [pvevm] Task still active, waiting<br>
......<br>
Jan 05 22:50:41 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:41 rgmanager [pvevm] VM 104 is running<br>
Jan 05 22:50:41 rgmanager [pvevm] VM 115 is running<br>
Jan 05 22:50:41 rgmanager [pvevm] VM 117 is running<br>
Jan 05 22:50:42 rgmanager [pvevm] VM 118 is running<br>
Jan 05 22:50:42 rgmanager [pvevm] VM 119 is running<br>
Jan 05 22:50:42 rgmanager [pvevm] VM 124 is running<br>
Jan 05 22:50:42 rgmanager [pvevm] Task still active, waiting<br>
Jan 05 22:50:42 rgmanager [pvevm] VM 122 is running<br>
Jan 05 22:50:42 rgmanager [pvevm] Task still active, waiting<br>
......<br>
Jan 05 22:50:45 rgmanager Service pvevm:129 is recovering<br>
Jan 05 22:50:45 rgmanager [pvevm] Task still active, waiting<br>
<br>
*The other hosts do not have this problem.*<br>
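The log pattern above is consistent with how rgmanager reacts to a failed status check: after "got empty cluster VM list" the pvevm agent returns exit code 2, rgmanager marks the resource failed, stops it, and recovers it, restarting VMs that were actually healthy. A hypothetical sketch of that reaction (simplified, not the real rgmanager code):<br>

```python
# Hypothetical sketch of rgmanager's recovery cycle for a pvevm resource:
# a non-zero status result triggers stop -> recover -> start, even if the
# underlying VM was running fine.
def handle_status(vmid, status_rc, log):
    if status_rc == 0:
        log.append("VM %d is running" % vmid)
        return
    # Non-zero status: rgmanager treats the resource as failed.
    log.append('status on pvevm "%d" returned %d' % (vmid, status_rc))
    log.append("Stopping service pvevm:%d" % vmid)
    log.append("Recovering failed service pvevm:%d" % vmid)
    log.append("Service pvevm:%d started" % vmid)

events = []
handle_status(117, 0, events)  # healthy check: no action
handle_status(120, 2, events)  # empty-VM-list failure: restart cycle
print(events)
```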
<br>
<br>
root@kvm01:/var/log# clustat<br>
Cluster Status for KVM @ Mon Jan  6 16:19:30 2014<br>
Member Status: Quorate<br>
<br>
 Member Name          ID   Status<br>
 ------ ----          ---- ------<br>
 kvm01                1    Online, Local, rgmanager<br>
 kvm02                2    Online, rgmanager<br>
 kvm03                3    Online, rgmanager<br>
 kvm04                4    Online, rgmanager<br>
<br>
 Service Name         Owner (Last)   State<br>
 ------- ----         ----- ------   -----<br>
 pvevm:100            kvm01          started<br>
 pvevm:104            kvm01          started<br>
 pvevm:105            kvm03          started<br>
 pvevm:109            kvm03          started<br>
 pvevm:115            kvm01          started<br>
 pvevm:116            kvm01          started<br>
 pvevm:117            kvm01          started<br>
 pvevm:118            kvm01          started<br>
 pvevm:119            kvm01          started<br>
 pvevm:120            kvm03          started<br>
 pvevm:121            kvm03          started<br>
 pvevm:123            kvm03          started<br>
 pvevm:124            kvm03          started<br>
 pvevm:125            kvm03          started<br>
 pvevm:129            kvm03          started<br>
 pvevm:130            kvm03          started<br>
 pvevm:131            kvm03          started<br>
 pvevm:143            kvm02          started<br>
<br>
<br>
root@kvm01:/var/log# pvecm status<br>
Version: 6.2.0<br>
Config Version: 112<br>
Cluster Name: KVM<br>
Cluster Id: 549<br>
Cluster Member: Yes<br>
Cluster Generation: 3876<br>
Membership state: Cluster-Member<br>
Nodes: 4<br>
Expected votes: 4<br>
Total votes: 4<br>
Node votes: 1<br>
Quorum: 3<br>
Active subsystems: 6<br>
Flags:<br>
Ports Bound: 0 177<br>
Node name: kvm01<br>
Node ID: 1<br>
Multicast addresses: 239.192.2.39<br>
Node addresses: 192.168.100.1<br>
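The quorum figures in the pvecm output follow the usual majority rule: with 4 votes total, quorum is floor(votes/2) + 1 = 3, so the cluster stays quorate with one node down. A one-function sketch of that arithmetic:<br>

```python
# Sketch of the majority-quorum arithmetic reported by pvecm status:
# quorum = floor(total_votes / 2) + 1.
def quorum(total_votes):
    return total_votes // 2 + 1

print(quorum(4))  # 3, matching "Quorum: 3" for 4 votes
```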
<br>
<br>
What could be the problem? Thank you.<br>
<br>
--<br>
Best regards, Fasikhov Irek Nurgayazovich<br>
Mob.: +79229045757<br>
<br>
------------------------------<br>
<br>
<br>
End of pve-user Digest, Vol 70, Issue 7<br>
***************************************<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr">Best regards, Fasikhov Irek Nurgayazovich<div>Mob.: +79229045757</div></div>
</div>