On 07/27/2012 08:55 AM, Gilberto Nunes wrote:
> Hi...
>
> Previously I always used heartbeat and DRBD to provide HA for my
> virtual machines. I configured heartbeat to monitor the physical
> nodes, and if one of them went down, heartbeat would run qm to start
> the VMs that had been running on the crashed node.
> Now I see that PVE brings us corosync-pve.
> My question is: is it simple to do the same job that heartbeat did
> before? Is there any risk in changing the default corosync-pve
> configuration?
>
> Thanks
>
> Cheers

We also used DRBD + heartbeat on 1.9, and on Debian etch before that.

Using a Primary/Secondary DRBD setup with something like heartbeat to
control which system is the Primary just worked.

I think that would be a very good feature to have in a two-node DRBD
cluster. Or is there something already built into the PVE cluster
stack that deals with this?
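For anyone who never ran it, the heartbeat v1 setup was only a couple
of small files. Roughly, from memory (a sketch only: the node names,
the DRBD resource name r0, the device and the mount point below are
all made up, not our real values):

# /etc/ha.d/ha.cf  (same on both nodes)
keepalive 2
deadtime 30
bcast eth0
auto_failback off
node nodeA nodeB

# /etc/ha.d/haresources  (identical on both nodes)
# nodeA is the preferred owner; on failover heartbeat promotes the
# DRBD resource on the survivor and mounts the VM storage there
nodeA drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/vz::ext3

The qm start that Gilberto mentions would sit behind an extra resource
script; the lines above only handle DRBD promotion and the mount.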
I understand that the full heartbeat package is not compatible with
PVE:

# aptitude install heartbeat
The following NEW packages will be installed:
  cluster-agents{a} cluster-glue{a} heartbeat libcluster-glue{a}
  libcorosync4{a} libesmtp5{a} libheartbeat2{a}
  libnet1{a} libopenhpi2{a} openhpid{a} pacemaker{a}
0 packages upgraded, 11 newly installed, 0 to remove and 2 not upgraded.
Need to get 2,968 kB of archives. After unpacking 10.4 MB will be used.
The following packages have unmet dependencies:
  libcorosync4-pve: Conflicts: libcorosync4 but 1.2.1-4 is to be installed.
The following actions will resolve these dependencies:

     Remove the following packages:
1)      clvm
2)      corosync-pve
3)      fence-agents-pve
4)      libcorosync4-pve
5)      libopenais3-pve
6)      libpve-access-control
7)      libpve-storage-perl
8)      openais-pve
9)      proxmox-ve-2.6.32
10)     pve-cluster
11)     pve-manager
12)     qemu-server
13)     redhat-cluster-pve
14)     resource-agents-pve
15)     vzctl

Accept this solution? [Y/n/q/?] q

For now we are using DRBD Primary/Primary.
We have the KVMs running on one node.
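For anyone wanting to copy that part, the dual-primary piece of a DRBD
8.3 resource looks roughly like this (a sketch only: the resource name
r0, the hostnames, the backing disk and the IPs are placeholders, not
our actual values):

# /etc/drbd.d/r0.res  (identical on both nodes)
resource r0 {
        protocol C;
        startup {
                become-primary-on both;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zeroes-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on nodeA {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.1:7788;
                meta-disk internal;
        }
        on nodeB {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7788;
                meta-disk internal;
        }
}

The after-sb-* policies only decide what DRBD does automatically after
a split brain; with two primaries you still need to watch out for it.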
If the node we are using breaks, we'll do a manual switchover. Dietmar
suggested this procedure on the forum:

"
1 - First make sure the other node is really down!

2 - Then set expected votes to gain quorum [this may already be set in
    our two-node cluster.conf?]:
# pvecm expected 1

3 - Then move the config file to the correct position:
# mv /etc/pve/nodes/<oldnode>/qemu-server/<vmid>.conf /etc/pve/nodes/<newnode>/qemu-server/
"

The KVM disks are of course on both nodes thanks to DRBD, so only the
.conf files need to be moved.
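Since it is only two commands plus a sanity check, the whole takeover
could be wrapped in a small script so nothing gets forgotten in a
hurry. Something like this (an untested sketch; the script name and
arguments are made up, and it assumes you have already confirmed the
old node is really down):

#!/bin/sh
# usage: takeover.sh <vmid> <oldnode> <newnode>  -- run on the surviving node
set -e
VMID=$1
OLD=$2
NEW=$3

# 1. regain quorum on the surviving node of the two-node cluster
pvecm expected 1

# 2. move the VM config to the surviving node
#    (the disk is already local thanks to DRBD Primary/Primary)
mv /etc/pve/nodes/$OLD/qemu-server/$VMID.conf \
   /etc/pve/nodes/$NEW/qemu-server/

# 3. start the guest here
qm start $VMID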
I was never comfortable with heartbeat for automatic failover, even
after 5+ years of using it, so I do not mind doing the failover
manually in PVE version 2.

PS: In the future, Sheepdog seems like it will be a better way to
ensure the survivability of KVM disks.