[pve-devel] proxmox 3->4 cluster upgrade : corosync2 wheezy transition package ?

Alexandre DERUMIER aderumier at odiso.com
Sun Oct 4 08:00:01 CEST 2015


>>Cross-cluster migration could actually be really handy to have as a full feature. Then you could, for example, have a development/testing cluster (possibly a single node) in your dev lab, and a production cluster in your hosting center(s), and when a given VM moves from dev/test into production, you simply migrate it to the production cluster. It would also be handy for situations like this one, obviously, where you're migrating between otherwise-incompatible environments.
>>
>>It would, by its nature, only support offline migration, so there would be no consideration for HA or any of the numerous checks involved in live migrations. The most complexity I can see is in specifying when both clusters (source and destination) have access to the storage(s) used to hold the VM/CT images/directories. You'd have to be able to pass something like `--map-storage=<source-name>:<destination-name>` for every storage that is accessible from both clusters. You'd also have to specify a target storage for anything that isn't mapped on both clusters, so the migration could rsync (or whatever) anything that isn't on a shared storage. But the rest should be fairly straightforward, as far as I can tell. So long as the source can ssh to the destination, it should be a fairly smooth process.

Yes, if the storage is shared on both clusters and the VM doesn't already exist on the target cluster, it should be fine.

With a new, empty target cluster it should be safe, roughly like this (sketched below):

1) copy the VM config to the target cluster
2) start the live migration
3) source cluster: instead of moving the VM config file to the target host, delete it
4) stop the source VM
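
A minimal sketch of how steps 1-4 could be scripted, assuming shared storage, passwordless root ssh between the two clusters, and a VMID that is still free on the target cluster. The live-migration step itself is exactly the "special command line" that does not exist yet, so it is only a placeholder here; the node names and VMID are illustrative, not from any real setup.

```python
#!/usr/bin/env python3
"""Hedged sketch only -- this is not an existing PVE feature or command."""
import subprocess

VMID = "101"                    # hypothetical VM to move
SOURCE_NODE = "pve3-node1"      # hypothetical node in the old (wheezy) cluster
TARGET_NODE = "pve4-node1"      # hypothetical node in the new (jessie) cluster
CONF = f"/etc/pve/qemu-server/{VMID}.conf"
PIDFILE = f"/var/run/qemu-server/{VMID}.pid"

def ssh(node, *cmd, check=True):
    """Run a command on a cluster node as root over ssh."""
    return subprocess.run(["ssh", f"root@{node}", *cmd], check=check)

# 1) copy the VM config onto the target cluster (pmxcfs uses the same path there)
subprocess.run(["scp", f"root@{SOURCE_NODE}:{CONF}",
                f"root@{TARGET_NODE}:{CONF}"], check=True)

# 2) start the live migration itself -- placeholder: this is the part that
#    would need the not-yet-existing cross-cluster migration command.

# 3) source cluster: delete the config instead of moving it to the target host,
#    so the VMID disappears from the old cluster
ssh(SOURCE_NODE, "rm", CONF)

# 4) stop the source qemu process; 'qm stop' no longer works once the config
#    has been deleted, so fall back to the pidfile the source qemu was started with
ssh(SOURCE_NODE, f"kill $(cat {PIDFILE})")
```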




----- Original Message -----
From: "Daniel Hunsaker" <danhunsaker at gmail.com>
To: "pve-devel" <pve-devel at pve.proxmox.com>, "dietmar" <dietmar at proxmox.com>
Sent: Saturday, October 3, 2015 20:22:00
Subject: Re: [pve-devel] proxmox 3->4 cluster upgrade : corosync2 wheezy transition package ?



Cross-cluster migration could actually be really handy to have as a full feature. Then you could, for example, have a development/testing cluster (possibly a single node) in your dev lab, and a production cluster in your hosting center(s), and when a given VM moves from dev/test into production, you simply migrate it to the production cluster. It would also be handy for situations like this one, obviously, where you're migrating between otherwise-incompatible environments. 

It would, by its nature, only support offline migration, so there would be no consideration for HA or any of the numerous checks involved in live migrations. The most complexity I can see is in specifying when both clusters (source and destination) have access to the storage(s) used to hold the VM/CT images/directories. You'd have to be able to pass something like `--map-storage=<source-name>:<destination-name>` for every storage that is accessible from both clusters. You'd also have to specify a target storage for anything that isn't mapped on both clusters, so the migration could rsync (or whatever) anything that isn't on a shared storage. But the rest should be fairly straightforward, as far as I can tell. So long as the source can ssh to the destination, it should be a fairly smooth process. 
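
To make the storage-mapping idea a bit more concrete, here is a minimal sketch of how a hypothetical `--map-storage` / `--target-storage` pair of options could be resolved per volume on the source side. Neither option exists in PVE; the option names, storage names and fallback behaviour are all illustrative assumptions taken from the paragraph above.

```python
#!/usr/bin/env python3
"""Hedged sketch of resolving the proposed (hypothetical) storage mapping."""
import argparse

parser = argparse.ArgumentParser(prog="cross-cluster-migrate")
parser.add_argument("--map-storage", action="append", default=[],
                    metavar="SOURCE:DEST",
                    help="map a storage that is visible on both clusters")
parser.add_argument("--target-storage",
                    help="fallback storage for volumes with no mapping (data gets copied)")

# Illustrative invocation only
args = parser.parse_args(["--map-storage", "ceph-prod:ceph-prod",
                          "--map-storage", "local-lvm:local-lvm",
                          "--target-storage", "local"])

storage_map = dict(m.split(":", 1) for m in args.map_storage)

def resolve(source_storage):
    """Pick the destination storage for one volume of the VM/CT."""
    if source_storage in storage_map:
        return storage_map[source_storage], "shared"   # reuse as-is, no data copy
    if args.target_storage:
        return args.target_storage, "copy"             # rsync/stream the image over
    raise SystemExit(f"no mapping or fallback storage for '{source_storage}'")

print(resolve("ceph-prod"))   # ('ceph-prod', 'shared')
print(resolve("nfs-backup"))  # ('local', 'copy')
```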
On Sat, Oct 3, 2015, 07:34 Alexandre DERUMIER <aderumier at odiso.com> wrote:


>>You need to update at least 3 packages for that: 
>> 
>>- libqb 
>>- corosync 
>>- pve-cluster 
>> 
>>and that update will have bad side effects for all other cluster related 
>>packages 
>> 
>>- redhat-cluster-pve 
>>- openais 
>>- clvmd 
>>- anything else? 

HA needs to be disabled before the upgrade.

For clvmd, I really don't know what the impact is, as I don't use it.
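
On the HA point, a small sketch, assuming the classic Proxmox 3.x rgmanager-style /etc/pve/cluster.conf with <pvevm> entries: it only lists the VMs that are still under HA, so they can be taken out of HA before touching corosync. The path and element name come from the 3.x HA setup; adjust if your configuration differs.

```python
#!/usr/bin/env python3
"""Hedged sketch: find VMs still managed by HA in a Proxmox 3.x cluster.conf."""
import xml.etree.ElementTree as ET

tree = ET.parse("/etc/pve/cluster.conf")
ha_vms = [e.get("vmid") for e in tree.iter("pvevm")]

if ha_vms:
    print("Remove these VMs from HA before upgrading:", ", ".join(ha_vms))
else:
    print("No HA-managed VMs found; HA side looks clear for the corosync upgrade.")
```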

>> 
>>This looks really complex to me? 


Yes, I don't say it's easy, but the only other way currently, if we want no interruption (for qemu, of course) and want to be able to do live migration, is:

1) keep one node empty, without any VM
2) upgrade all hosts in the cluster to jessie + proxmox 4.0 (in place, with the VMs running during the upgrade)
3) reboot the empty node
4) migrate all VMs from one node to the empty node
5) reboot the newly emptied node
... and so on, node by node, until every node has been emptied and rebooted (a rough sketch of this loop is below)
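
A rough sketch of that rolling evacuate-and-reboot loop, assuming all nodes have already been dist-upgraded in place, one node (here node4) starts out empty, and every guest can be live-migrated with the standard `qm migrate --online`. The node names and the "wait for the node to come back" handling are illustrative only.

```python
#!/usr/bin/env python3
"""Hedged sketch of the rolling reboot described in steps 1-5 above."""
import subprocess

NODES = ["node1", "node2", "node3", "node4"]   # illustrative; node4 is kept empty

def ssh(node, *cmd, check=True):
    return subprocess.run(["ssh", f"root@{node}", *cmd],
                          check=check, capture_output=True, text=True)

def running_vmids(node):
    # 'qm list' prints: VMID NAME STATUS ...; keep the running guests only
    vmids = []
    for line in ssh(node, "qm", "list").stdout.splitlines()[1:]:
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "running":
            vmids.append(parts[0])
    return vmids

empty = NODES[-1]
ssh(empty, "reboot", check=False)      # 3) reboot the empty node first
# ...wait here until it has rejoined the cluster (e.g. check pvecm status), not shown

for node in NODES[:-1]:
    for vmid in running_vmids(node):   # 4) move every running VM to the empty node
        ssh(node, "qm", "migrate", vmid, empty, "--online")
    ssh(node, "reboot", check=False)   # 5) reboot the node that is now empty...
    empty = node                       # ...and use it as the target next round
    # again, wait for the reboot to finish before the next iteration (not shown)
```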



Another way could be to build a new cluster and allow live migration between the clusters.
(It needs a little work, but technically it's possible.)
It would only be done with a special command line, not exposed in the GUI.





----- Original Message -----
From: "dietmar" <dietmar at proxmox.com>
To: "aderumier" <aderumier at odiso.com>, "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Saturday, October 3, 2015 10:05:49
Subject: Re: [pve-devel] proxmox 3->4 cluster upgrade : corosync2 wheezy transition package ?

> On October 3, 2015 at 9:42 AM Dietmar Maurer < dietmar at proxmox.com > wrote: 
> 
> 
> > > I wonder if it could be great (and possible?) to make a corosync2
> > > transition package for wheezy.
> > > 
> > > Like this we could mix (proxmox3-wheezy-corosync2 and
> > > proxmox4-jessie-corosync2), and do live migration as usual.
> > > 
> > > What do you think about it?
> > 
> > How should that work? corosync2 is not compatible with corosync 1.4, so 
> > what is the idea? 
> 
> Oh, you want to backport corosync2 to wheezy? Need to think about that. 

You need to update at least 3 packages for that: 

- libqb 
- corosync 
- pve-cluster 

and that update will have bad side effects for all other cluster related 
packages 

- redhat-cluster-pve 
- openais 
- clvmd 
- anything else? 

This looks really complex to me? 





_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



