[PVE-User] Planning an HA infrastructure

Philippe Schwarz phil at schwarz-fr.net
Fri Jan 24 08:30:59 CET 2014



Hi, I'm going to get a few bucks to build a brand new HA cluster.

I'm afraid the price of the whole solution will be more than we can
afford in a single year, so it will probably be a two-step build.

I'm facing many questions.

The core switch (an L3 model from HP) will maybe have 1Gb ports only
(I can neither be sure of that nor change the decision...).

A full HA cluster would ideally be made of 2 HA PVE servers and 2
redundant SAN servers. No way; that would be too expensive for us.

I can expect only 3 servers, or 2 huge servers.

So I'm planning to build one of these two solutions:

- 2 PVE servers with HA and DRBD for disk mirroring
- 2 PVE servers with HA plus a third one for storing the VMs (mainly
VMs, but CTs too). The SAN would be AMD64/Napp-IT/OmniOS, exporting
the VM images over NFS (probably); see the sketch below.
I know I'll have to find a third (or fourth) machine to avoid running a
2-node cluster (because of the quorum problem).
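
If I go with the SAN option, here is roughly what I have in mind for
the export and for the PVE side (just a sketch: pool/dataset names,
addresses and options are placeholders I'd still have to validate on
OmniOS/Napp-IT):

    # On the OmniOS box: create a dataset and share it over NFS,
    # restricted to the storage network (10.10.10.0/24 is only an example)
    zfs create tank/vmstore
    zfs set sharenfs="rw=@10.10.10.0/24,root=@10.10.10.0/24" tank/vmstore

    # On the PVE nodes, /etc/pve/storage.cfg would then get something like:
    nfs: san-vmstore
        path /mnt/pve/san-vmstore
        server 10.10.10.10
        export /tank/vmstore
        content images,rootdir
        options vers=3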

Both solutions would use either 10GbE with crossover cables or bonded 1Gb cards.

Among my questions, here are the first ones:

1. If the core switch has no 10Gb ports, is it possible to avoid using
it? I plan to do it like this (a sketch of the network config follows
the list):
- SAN with 2x 10GbE, one port for each PVE server, connected with crossover
cables (so a single crossover link between the SAN and each PVE)
- PVE with bonded (LACP most likely) 1Gb cards to the switch for VM traffic
- 2 crossover cables for fencing (see the fencing sketch below)
- 1 crossover cable for the cluster heartbeat
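
To make that more concrete, here is roughly what /etc/network/interfaces
could look like on each PVE node (NIC names and addresses are only
examples):

    # 10GbE crossover link to the SAN, no switch involved
    auto eth2
    iface eth2 inet static
        address 10.10.10.1
        netmask 255.255.255.0

    # LACP bond of the two 1Gb NICs towards the core switch (VM traffic)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

    # Bridge for the guests, on top of the bond
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

For the fencing I'm thinking of out-of-band IPMI reachable over those
dedicated cables; in PVE 3.x /etc/pve/cluster.conf that would look more
or less like this (fence_ipmilan is only one possible agent, and the
addresses/credentials are placeholders):

    <fencedevices>
      <fencedevice agent="fence_ipmilan" name="ipmi-pve1"
                   ipaddr="10.10.20.1" login="admin" passwd="secret" lanplus="1"/>
      <fencedevice agent="fence_ipmilan" name="ipmi-pve2"
                   ipaddr="10.10.20.2" login="admin" passwd="secret" lanplus="1"/>
    </fencedevices>

    <!-- inside each <clusternode> entry -->
    <fence>
      <method name="1">
        <device name="ipmi-pve1"/>
      </method>
    </fence>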


2. Unix philosophy: KISS (Keep It Simple, Stupid).
Let's assume I won't go further than a 2-node cluster (in fact two real
nodes; it will be three nodes because of the quorum problem).
So, 1 SAN (a SPOF) with 2 PVE nodes is more complicated than 2 PVE
nodes with DRBD, isn't it?
But I'm far more comfortable with ZFS than with DRBD, LVM, or
DRBD+LVM... So I'm not sure a 2-node cluster with LVM+DRBD would really
be simpler for me (my rough understanding of that setup is sketched below).
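
For reference, the DRBD+LVM option in its simplest form as I understand
it (hostnames, devices and addresses are only examples, and this skips
the dual-primary and split-brain tuning I'd still have to learn):

    # /etc/drbd.d/r0.res, identical on both nodes (DRBD 8.x style)
    resource r0 {
        protocol C;
        on pve1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.10.30.1:7788;
            meta-disk internal;
        }
        on pve2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.10.30.2:7788;
            meta-disk internal;
        }
    }

    # Once r0 is up, put LVM on top of the DRBD device and add it to
    # PVE as (shared) LVM storage:
    pvcreate /dev/drbd0
    vgcreate drbdvg /dev/drbd0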

3. Are there any NIC brands I should stay away from (except Realtek, of
course)?
I'll probably use a 10GbE Intel dual-port copper card.

4. Because of the two-step path of my cluster building, I'll have to
live a long time (maybe a year) with an incomplete solution. Are there
any issues I could face, or is it completely impossible to run this
three-wheeled engine?
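
One thing I already noted for that degraded period: if the cluster
temporarily has only two voting nodes, it seems possible to keep it
quorate either with the cman two_node option in /etc/pve/cluster.conf
or, as a one-off workaround, by lowering the expected votes (I'd still
prefer a real third vote):

    <!-- /etc/pve/cluster.conf: let a 2-node cluster stay quorate -->
    <cman two_node="1" expected_votes="1"/>

    # or, manually on one node:
    pvecm expected 1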


5. Blades
I may be able to afford a blade enclosure (either from Dell or HP).
In that case I'd get rid of the complicated links between the SAN and
the PVE nodes by using the built-in high-bandwidth backplane, wouldn't I?
I've never seen a blade up close. Are the NICs part of the blade
enclosure, or does each blade have its own NICs?


Many questions, but I'm pretty sure they won't be the last ones ;-)

Thanks