[PVE-User] Data Migration Question / Back end storage
david at digitaltransitions.ca
Wed Oct 23 04:10:24 CEST 2013
I’m not really sure what the topic should be called.
I just finished setting up a new set of servers with the following configuration:
Nodes 1 and 2 Public Network: 172.16.10.12 & 172.16.10.13
Nodes 1 and 2 Private Network: 10.1.1.12 & 10.1.1.13
Back end NFS / iSCSI Storage: 10.1.1.3
The back end runs on 10 Gb Ethernet, which is why I want to make use of it.
I can mount the NFS share on the 10.1.1.0/24 range on both servers without any issue, and I can restore a VM (Win2012) and migrate it from one node to another without any problem.
I then set up each server with a 300 GB iSCSI partition, replaced the existing /var/lib/vz partition with it, and it mounts without issue every time.
I then restored an OpenVZ container to the iSCSI volume mounted at /var/lib/vz, and it works perfectly fine. The issue I'm having is that when I try to migrate the OpenVZ guest from one node to the other, it wants to copy it over the 172.16.10.0/24 network, which is link agg for 2 Gb connectivity. I'd rather have it copy the data over the 10.1.1.0/24 network, since it's a dedicated private 10 Gb network and, in theory, much faster.
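For reference, this is roughly how the iSCSI volume is mounted over /var/lib/vz (the device name /dev/sdb1 is just a placeholder for whatever the iSCSI LUN shows up as after login, and ext4 is my filesystem choice):

```
# /etc/fstab (sketch) -- mount the iSCSI LUN over /var/lib/vz
# _netdev makes the mount wait until the network / iSCSI session is up
/dev/sdb1  /var/lib/vz  ext4  defaults,_netdev  0  2
```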
When I set up the servers, I gave vmbr0 the 172.16.10.12 and 172.16.10.13 network IPs. I then created a bond and swapped them around, so the setup for both servers is now as follows:
vmbr0 - 10.1.1.0/24 network range - bridge for eth3, which is a 10 Gb port
vmbr1 - 172.16.10.0/24 network range - bond of eth0 / eth1 on a 2 Gb 802.3ad link agg
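In case it helps, the relevant part of /etc/network/interfaces on each node looks roughly like this (addresses shown for node 1; the gateway address and bond options are abbreviated placeholders, not my exact config):

```
# /etc/network/interfaces (node 1, sketch)
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 10.1.1.12
    netmask 255.255.255.0
    bridge_ports eth3          # the 10 Gb NIC
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 172.16.10.12
    netmask 255.255.255.0
    gateway 172.16.10.1        # placeholder gateway
    bridge_ports bond0         # the 2 Gb 802.3ad link agg
    bridge_stp off
    bridge_fd 0
```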
When I built the cluster for the servers, I built it on the 10.1.1.0/24 range, so I believe that's done properly.
I guess my question is: what do I have to do to get the servers within the cluster to migrate / copy the data from one iSCSI volume to the other over the 10.1.1.0/24 network, and not over the slower public network?
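My guess is that the migration traffic follows whatever address the node hostnames resolve to, so I'm wondering whether something like this in /etc/hosts is part of the answer (pve1 and pve2 are placeholders for my actual node names):

```
# /etc/hosts (sketch) -- point node names at the private 10 Gb network
10.1.1.12  pve1
10.1.1.13  pve2
```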
Thanks for any help.