[PVE-User] efficient backup of VM on external storage.

Muhammad Yousuf Khan sirtcp at gmail.com
Sat Oct 5 14:52:09 CEST 2013


On Sat, Oct 5, 2013 at 3:57 PM, Fabrizio Cuseo <f.cuseo at panservice.it> wrote:

> All my answers below:
>
> > 1. would you please share your hardware specs of Ceph storage cluster
> > and PVE Box and the connectivity of all boxes.
>
> You can use commodity Linux servers for Ceph storage; cheap servers
> (e.g. an HP MicroServer with 4 SATA disks) are fine if you don't care
> much about performance. If you want performance, use 3 Xeon-based
> servers, each with not less than 8 GB of RAM (16 GB is better) and
> several SAS disks. For the network connections, use 2-4 gigabit
> Ethernet ports in a single bond.
> You can also use any kind of commodity Linux server, or a virtual
> machine on each Proxmox node using 2 or more local disks (not used for
> other Proxmox local storage), like VMware does with its VSA (virtual
> storage appliance); whether the performance is enough depends on your
> needs.
>
>
Yes, I have a few old Dell 490 Xeon workstations with the same specs you
specified, so I will give them a try.
But which bond type? There are 7 bond modes:

mode=0 (Balance Round Robin)
mode=1 (Active backup)
mode=2 (Balance XOR)
mode=3 (Broadcast)
mode=4 (802.3ad)
mode=5 (Balance TLB)
mode=6 (Balance ALB)


Actually, I am asking from a stability perspective, because I don't have
enough time left to experiment and find the right bond type for this
configuration. If you have already used one with a Proxmox setup, I would
highly appreciate it if you shared your experience; it would save me a
lot of time. To me, balance-rr (mode 0) seems a bit better for my
configuration.
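
For reference, a minimal sketch of a two-port bond in a Proxmox node's
/etc/network/interfaces, in the style of the PVE wiki examples; the
interface names and addresses are placeholders, not from this thread,
and 802.3ad also requires LACP to be configured on the switch:

    auto bond0
    iface bond0 inet manual
            slaves eth0 eth1
            bond_miimon 100
            bond_mode 802.3ad    # needs LACP on the switch; use active-backup (mode 1) if unsure

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.10.2
            netmask 255.255.255.0
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0

As a rule of thumb: active-backup (mode 1) is the most conservative and
needs no switch support; 802.3ad (mode 4) aggregates cleanly but requires
LACP on the switch; balance-rr (mode 0) can give a single stream more
than one link's bandwidth but may reorder packets.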

> > 2. Do you rely only upon replication? Don't you take backups of the
> > VMs? If yes, then again, would you please shed some light on your
> > strategy with reference to my question in the first message?
>
> Replication is never enough; you always need a backup with a retention
> strategy. Use another storage (a cheap SOHO NAS with 2-4 SATA disks in
> RAID 1/10).
>

Agreed, I can do that; that's easy for me :). Thanks for the tip.
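
As an illustration of such a strategy (a sketch, not from the thread; the
storage name 'nas-backup' and the VMID are placeholders), a vzdump job
against that NAS with simple retention could look like:

    # back up VM 101 to the 'nas-backup' storage, keeping the 7 most recent backups
    vzdump 101 --storage nas-backup --mode snapshot --compress lzo --maxfiles 7

The same job can be scheduled from the Proxmox GUI (Datacenter -> Backup),
which writes an equivalent entry to /etc/pve/vzdump.cron.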


>
> > 3. Any recommended howto for Ceph and PVE?
>
> You can find a howto on the Proxmox VE wiki (
> http://pve.proxmox.com/wiki/Storage:_Ceph ), and of course you can read
> the Ceph home page.
>
>
Thanks for sharing; at least I will get some basic understanding.


> > 4. Is your Ceph cluster set up like DRBD active/passive with
> > heartbeat? If one node goes down, will the second come up
> > automatically, with a second or two of downtime?
>
> Replication on a clustered storage is something different. Every "file"
> is split into several chunks (blocks), and every chunk is written to 2
> or more servers (depending on the replication level you need). So if
> one of the nodes of the Ceph storage cluster dies (or if you need to
> reboot it, change hardware, or move location), the cluster can keep
> working in degraded mode (like a RAID volume). But unlike RAID, if you
> have 3 or more servers and your replica goal is 2, the cluster begins
> rewriting the chunks left with only 1 copy onto the other servers; when
> your dead server rejoins the cluster, it will be resynchronized with
> every change... so it always works like a charm.
>
>
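To make that concrete, a few standard Ceph commands (the pool name 'rbd'
is just the default, used here as a placeholder) that show the behaviour
described above:

    ceph -s                      # cluster health; reports degraded PGs while a node is down
    ceph osd tree                # which OSDs/hosts are currently up or down
    ceph osd pool get rbd size   # the current replica goal of the pool
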
> > Sorry for bringing up DRBD all the time; I have only worked on DRBD
> > clustering, so my whole concept of clustering starts and ends with
> > DRBD+heartbeat :) and, as you also know, DRBD works a little
> > differently and has some limitations too. So please don't mind.
>
>
> It is really different, and (for me) better. A storage cluster is not
> only redundant; you can also expand it linearly with more servers
> and/or disks, gaining more space (theoretically infinite space), more
> redundancy (if you change your replica goal from 2 to 3, for example),
> and more performance, because you have more CPUs, more disks, more IOPS.
>
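For example, raising the replica goal of a pool from 2 to 3 is a single
command (the pool name is a placeholder); Ceph re-replicates the existing
data in the background:

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2   # writes still allowed while only 2 copies are reachable
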
> Regards, Fabrizio
>
> >
> > Thanks,
> >
> > Regards, Fabrizio
> >
> > ----- Original message -----
> > From: "Muhammad Yousuf Khan" < sirtcp at gmail.com >
> > To: pve-user at pve.proxmox.com
> > Sent: Saturday, 5 October 2013 10:58:41
> > Subject: [PVE-User] efficient backup of VM on external storage.
> >
> > I have never worked with external storage; I have only worked with
> > DRBD on local storage/system drives.
> >
> > The scenario is:
> > For example, I have two external storage boxes and 1 Proxmox machine.
> > I am using 1G NICs to connect all nodes.
> >
> > The storage boxes are connected to each other over a separate link,
> > but no replication is done between them, meaning they do not fail
> > over to each other. Both storage boxes are connected to Proxmox. I am
> > using the primary storage to run 3 machines over NFS or iSCSI. Note
> > that the machines run on the primary storage over a single 1G
> > Ethernet link, a single point of failure.
> >
> > Now let's say I want VM1 backed up to the secondary storage; however,
> > I don't want my backup traffic to affect the primary link where the 3
> > machines are active.
> >
> > Any suggestions on how to achieve that?
> >
> > Thanks.
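
One common way to achieve this (a sketch, not from the thread; the
interface names, subnets, and storage name are placeholders) is to give
the Proxmox node a second NIC on a dedicated backup subnet and register
the secondary storage via that subnet, so backup traffic never crosses
the primary storage link:

    # /etc/network/interfaces on the Proxmox node
    auto eth2
    iface eth2 inet static
            address 10.0.50.2        # dedicated backup subnet
            netmask 255.255.255.0

    # register the secondary NAS through its backup-subnet address,
    # then point vzdump jobs at the 'nas-backup' storage
    pvesm add nfs nas-backup --server 10.0.50.10 --export /backups --content backup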
> >
>
> --
> ---
> Fabrizio Cuseo - mailto:f.cuseo at panservice.it
> General Management - Panservice InterNetWorking
> Professional Services for Internet and Networking
> Panservice is a member of AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it  mailto:info at panservice.it
> National toll-free number: 800 901492
>

