[PVE-User] pve-user Digest, Vol 70, Issue 7
Ирек Фасихов
malmyzh at gmail.com
Mon Jan 6 13:29:40 CET 2014
root@kvm01:/var/log# pveversion -V
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
2014/1/6 <pve-user-request at pve.proxmox.com>
> Today's Topics:
>
> 1. got empty cluster VM list (Ирек Фасихов)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 6 Jan 2014 16:21:18 +0400
> From: Ирек Фасихов <malmyzh at gmail.com>
> To: "pve-user at pve.proxmox.com" <pve-user at pve.proxmox.com>
> Subject: [PVE-User] got empty cluster VM list
> Message-ID: <CAF-rypzTzKgeqh5e86eC7aVfzTX9DqNSer1XZDLs2ZOJD9fyQQ at mail.gmail.com>
> Content-Type: text/plain; charset="koi8-r"
>
> Hi, all,
>
> The cluster consists of four nodes.
>
> *cat /etc/pve/cluster.conf*
> <?xml version="1.0"?>
> <cluster config_version="112" name="KVM">
>   <logging debug="on" logfile_priority="debug" to_syslog="no"/>
>   <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
>   <clusternodes>
>     <clusternode name="kvm01" nodeid="1" votes="1">
>       <fence>
>         <method name="1">
>           <device action="reboot" name="fenceKVM01"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="kvm02" nodeid="2" votes="1">
>       <fence>
>         <method name="1">
>           <device action="reboot" name="fenceKVM02"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="kvm03" nodeid="3" votes="1">
>       <fence>
>         <method name="1">
>           <device action="reboot" name="fenceKVM03"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="kvm04" nodeid="4" votes="1">
>       <fence>
>         <method name="1">
>           <device action="reboot" name="fenceKVM04"/>
>         </method>
>       </fence>
>     </clusternode>
>   </clusternodes>
>   <fencedevices>
>     <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM01" passwd="-"/>
>     <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM02" passwd="-"/>
>     <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM03" passwd="-"/>
>     <fencedevice agent="fence_ipmilan" ipaddr="X.X.X.X" login="-" name="fenceKVM04" passwd="-"/>
>   </fencedevices>
>   <rm>
>     <pvevm autostart="1" vmid="109"/>
>     <pvevm autostart="1" vmid="121"/>
>     <pvevm autostart="1" vmid="123"/>
>     <pvevm autostart="1" vmid="124"/>
>     <pvevm autostart="1" vmid="120"/>
>     <pvevm autostart="1" vmid="125"/>
>     <pvevm autostart="1" vmid="131"/>
>     <pvevm autostart="1" vmid="130"/>
>     <pvevm autostart="1" vmid="105"/>
>     <pvevm autostart="1" vmid="143"/>
>     <pvevm autostart="1" vmid="129"/>
>     <pvevm autostart="1" vmid="100"/>
>     <pvevm autostart="1" vmid="104"/>
>     <pvevm autostart="1" vmid="115"/>
>     <pvevm autostart="1" vmid="116"/>
>     <pvevm autostart="1" vmid="117"/>
>     <pvevm autostart="1" vmid="118"/>
>     <pvevm autostart="1" vmid="119"/>
>   </rm>
> </cluster>
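>
> As a quick cross-check against the clustat output further below, the HA-managed
> VMIDs can be pulled out of this file with plain shell (a minimal sketch using
> only standard GNU tools; no PVE-specific tooling is assumed):
>
> # List the VMIDs declared as HA resources in cluster.conf, sorted numerically
> grep -o 'vmid="[0-9]*"' /etc/pve/cluster.conf | tr -dc '0-9\n' | sort -n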
>
>
> On kvm01, virtual machines restart spontaneously for no apparent reason. The
> virtual machines are managed by HA.
> *cat /var/log/cluster/rgmanager.log*
>
> Jan 05 22:49:11 rgmanager [pvevm] VM 117 is running
> Jan 05 22:49:31 rgmanager [pvevm] VM 104 is running
> Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list
> Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list
> Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list
> Jan 05 22:49:32 rgmanager [pvevm] got empty cluster VM list
> Jan 05 22:49:32 rgmanager status on pvevm "120" returned 2 (invalid argument(s))
> Jan 05 22:49:33 rgmanager status on pvevm "131" returned 2 (invalid argument(s))
> Jan 05 22:49:33 rgmanager status on pvevm "129" returned 2 (invalid argument(s))
> Jan 05 22:49:33 rgmanager status on pvevm "130" returned 2 (invalid argument(s))
> Jan 05 22:49:33 rgmanager [pvevm] VM 124 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 119 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 115 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 122 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 116 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 118 is running
> Jan 05 22:49:33 rgmanager [pvevm] VM 117 is running
> Jan 05 22:49:35 rgmanager Stopping service pvevm:120
> Jan 05 22:49:35 rgmanager Stopping service pvevm:131
> Jan 05 22:49:35 rgmanager Stopping service pvevm:129
> Jan 05 22:49:35 rgmanager Stopping service pvevm:130
> Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:49:37 rgmanager [pvevm] Task still active, waiting
> ........
> Jan 05 22:49:42 rgmanager [pvevm] VM 118 is running
> Jan 05 22:49:42 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:49:42 rgmanager [pvevm] VM 122 is running
> Jan 05 22:49:42 rgmanager [pvevm] VM 119 is running
> Jan 05 22:49:42 rgmanager [pvevm] VM 124 is running
> Jan 05 22:49:42 rgmanager [pvevm] Task still active, waiting
> ......
> Jan 05 22:50:15 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:15 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:15 rgmanager Service pvevm:131 is recovering
> Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:16 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:17 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:17 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:18 rgmanager Recovering failed service pvevm:131
> Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:18 rgmanager [pvevm] Task still active, waiting
> ....
> Jan 05 22:50:21 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:21 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:22 rgmanager [pvevm] VM 119 is running
> Jan 05 22:50:22 rgmanager [pvevm] VM 118 is running
> Jan 05 22:50:22 rgmanager [pvevm] VM 122 is running
> Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:22 rgmanager [pvevm] VM 124 is running
> Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:22 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:23 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:24 rgmanager Service pvevm:130 is recovering
> Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:24 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:25 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:26 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:26 rgmanager Recovering failed service pvevm:130
> Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:27 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:28 rgmanager Service pvevm:131 started
> Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:28 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:29 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:29 rgmanager [pvevm] Task still active, waiting
> ......
> Jan 05 22:50:35 rgmanager Service pvevm:130 started
> Jan 05 22:50:35 rgmanager [pvevm] Task still active, waiting
> ......
> Jan 05 22:50:41 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:41 rgmanager [pvevm] VM 104 is running
> Jan 05 22:50:41 rgmanager [pvevm] VM 115 is running
> Jan 05 22:50:41 rgmanager [pvevm] VM 117 is running
> Jan 05 22:50:42 rgmanager [pvevm] VM 118 is running
> Jan 05 22:50:42 rgmanager [pvevm] VM 119 is running
> Jan 05 22:50:42 rgmanager [pvevm] VM 124 is running
> Jan 05 22:50:42 rgmanager [pvevm] Task still active, waiting
> Jan 05 22:50:42 rgmanager [pvevm] VM 122 is running
> Jan 05 22:50:42 rgmanager [pvevm] Task still active, waiting
> ......
> Jan 05 22:50:45 rgmanager Service pvevm:129 is recovering
> Jan 05 22:50:45 rgmanager [pvevm] Task still active, waiting
>
> *The other hosts do not have this problem.*
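>
> A minimal way to check whether kvm01 momentarily sees an empty cluster VM list
> (a sketch, assuming that pmxcfs on PVE 3.x exposes the list through the special
> file /etc/pve/.vmlist and that root SSH between the cluster nodes is set up):
>
> # Compare the cluster VM list as seen by each node; an empty or missing list on
> # kvm01 while kvm02..kvm04 show entries would match the "got empty cluster VM
> # list" messages in rgmanager.log above.
> for n in kvm01 kvm02 kvm03 kvm04; do
>     echo "== $n =="
>     ssh root@$n cat /etc/pve/.vmlist
> done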
>
>
> root@kvm01:/var/log# clustat
> Cluster Status for KVM @ Mon Jan 6 16:19:30 2014
> Member Status: Quorate
>
>  Member Name                             ID   Status
>  ------ ----                             ----  ------
>  kvm01                                    1   Online, Local, rgmanager
>  kvm02                                    2   Online, rgmanager
>  kvm03                                    3   Online, rgmanager
>  kvm04                                    4   Online, rgmanager
>
>  Service Name                       Owner (Last)                State
>  ------- ----                       ----- ------                -----
>  pvevm:100                          kvm01                       started
>  pvevm:104                          kvm01                       started
>  pvevm:105                          kvm03                       started
>  pvevm:109                          kvm03                       started
>  pvevm:115                          kvm01                       started
>  pvevm:116                          kvm01                       started
>  pvevm:117                          kvm01                       started
>  pvevm:118                          kvm01                       started
>  pvevm:119                          kvm01                       started
>  pvevm:120                          kvm03                       started
>  pvevm:121                          kvm03                       started
>  pvevm:123                          kvm03                       started
>  pvevm:124                          kvm03                       started
>  pvevm:125                          kvm03                       started
>  pvevm:129                          kvm03                       started
>  pvevm:130                          kvm03                       started
>  pvevm:131                          kvm03                       started
>  pvevm:143                          kvm02                       started
>
>
> root@kvm01:/var/log# pvecm status
> Version: 6.2.0
> Config Version: 112
> Cluster Name: KVM
> Cluster Id: 549
> Cluster Member: Yes
> Cluster Generation: 3876
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 4
> Total votes: 4
> Node votes: 1
> Quorum: 3
> Active subsystems: 6
> Flags:
> Ports Bound: 0 177
> Node name: kvm01
> Node ID: 1
> Multicast addresses: 239.192.2.39
> Node addresses: 192.168.100.1
>
>
> What could be the problem? Thank you.
>
> -
> Best regards, Фасихов Ирек Нургаязович
> Mobile: +79229045757
--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757