[PVE-User] Migration, venet and public IPs
Gerry Demaret
ml at x-net.be
Tue Oct 2 18:15:55 CEST 2012
I have recently run into a problem I didn't expect and can't find a good
solution for.
I have a cluster with two nodes, 2 NICs each. On each node, eth0 has no
IP assigned and is used in multiple vmbr's with a VLAN tag:
vmbr601 eth0.601
vmbr602 eth0.602
vmbr603 eth0.603
Only eth1 has a private IP on a separate VLAN, which is used for
off-site backups and connecting to the web interface.
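For reference, the relevant part of /etc/network/interfaces on each node
looks roughly like this (the VLAN IDs and bridges are the real ones, the
private address on eth1 is just a placeholder):

    # eth1: private VLAN for backups and the web interface
    auto eth1
    iface eth1 inet static
        address 192.168.10.11
        netmask 255.255.255.0

    # eth0 itself carries no IP, only tagged bridges
    auto vmbr601
    iface vmbr601 inet manual
        bridge_ports eth0.601
        bridge_stp off
        bridge_fd 0

    # vmbr602 and vmbr603 follow the same pattern with eth0.602 / eth0.603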
I would now like to create an OpenVZ container with a venet network
interface, since I don't trust the person who will be using it to be
careful with its network configuration. With venet, I can limit the
damage he can do to the network. And of course I would like to use a
public IP for the venet interface.
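On the command line that is just the standard venet assignment; the
container ID and address below are made-up examples:

    # add a public IP to the venet interface of container 101
    vzctl set 101 --ipadd 198.51.100.10 --save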
You guessed it, it doesn't work. Of course it doesn't work: venet is
routed rather than bridged, so the host has to route the container's
traffic and answer ARP for its public IP, and since no host interface
has an address (or even a route) in that public subnet, the host has no
idea what to do with traffic coming from the venet interface.
I could fix it by assigning an IP in the same public subnet on each of
the hosts (see the sketch after this list), but since:
a) I don't want to waste precious IP addresses by assigning one to each
node in my cluster.
b) My nodes would be exposed to the internet. I know I could use
iptables or a dedicated firewall, but I prefer not to modify anything
on the hosts, so let's assume there is no firewall.
c) It doesn't scale well, since if I add other venet interfaces in other
subnets, I have to assign IPs in those ranges as well.
d) It doesn't scale well, since if I add new nodes, they need IP
addresses in all subnets, because otherwise I can't migrate containers
to them.
... I would like to find another solution.
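For completeness, the fix I'd rather avoid boils down to giving every
node its own address on the public VLAN, roughly like this (addresses
and netmask are placeholders):

    # on each node: burn a public IP so the host can route and
    # proxy-ARP for the venet containers (this is what I want to avoid)
    auto vmbr601
    iface vmbr601 inet static
        address 198.51.100.2      # node 1; node 2 would need .3, and so on
        netmask 255.255.255.0
        bridge_ports eth0.601
        bridge_stp off
        bridge_fd 0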
I have some ideas that come to mind, but they all seem less than ideal.
I would like to know if someone else has solved this, and if so, how. Or
perhaps one of you has a brilliant solution? :)
Thanks!
Gerry.