[PVE-User] Proxmox Backup Server (beta)

Iztok Gregori iztok.gregori at elettra.eu
Fri Jul 10 18:29:22 CEST 2020


On 10/07/20 17:31, Dietmar Maurer wrote:
>> On 10/07/20 15:41, Dietmar Maurer wrote:
>>>> Are you planning to also support Ceph (or other distributed file
>>>> systems) as a destination storage backend?
>>>
>>> It is already possible to put the datastore on a mounted CephFS, or
>>> on anything else you can mount on the host.
>>
>> Is this "mount" managed by PBS or you have to "manually" mount it
>> outside PBS?
> 
> Not sure what kind of management you need for that? Usually people
> mount filesystems using /etc/fstab or by creating systemd mount units.
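
For reference, such a "manual" CephFS mount in /etc/fstab could look
roughly like the line below (monitor address, credentials and mount
point are just placeholders):

   192.168.1.10:6789:/ /mnt/pbs-datastore ceph name=backup,secretfile=/etc/ceph/backup.secret,noatime,_netdev 0 2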

In PVE you can add a storage (NFS, for example) via the GUI (or directly
via the config file) and, if I'm not mistaken, PVE will then "manage"
the storage (mount it under /mnt/pve, skip the backup if the storage is
not ready, and so on).
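
For example, an NFS storage entry in /etc/pve/storage.cfg looks roughly
like this (server, export and retention values are just placeholders):

   nfs: backup-nfs
           server 192.168.1.20
           export /export/backups
           path /mnt/pve/backup-nfs
           content backup
           maxfiles 8

and PVE takes care of mounting it under /mnt/pve/backup-nfs. I was
wondering whether PBS will offer something similar for its datastores.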

> 
>>> But this means that you copy data over the network multiple times,
>>> so this is not the best option performance wise...
>>
>> True, PBS will act as a gateway to the backing storage cluster, but the
>> data will only be re-routed to the final destination (in this case an
>> OSD), not copied over (putting aside the Ceph replication policy).
> 
> That is probably a very simplistic view of things. It involves copying data
> multiple times, so it will affect performance for sure.

You mean the replication? Yes, it "copies"/distributes the same data to 
multiple targets/disks (more or less the same as RAID or ZFS does). But 
I'm not aware of the internals of PBS, so maybe my reasoning really is 
too simplistic.

> 
> Note: We are talking about huge amounts of data.

We back up 2 TB of data daily with vzdump over NFS. Because all of the 
backups are full backups, we clearly need a lot of space to keep a 
reasonable retention (8 daily backups + 3 weekly). I resorted to cycling 
through 5 relatively large NFS servers, but that involved a complex 
backup schedule. And because the amount of data keeps growing, we are 
looking for a backup solution which can be integrated into PVE and can 
be easily expanded.
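
As a rough estimate, ignoring growth and compression: 8 daily + 3 weekly
= 11 retained full backups, and at ~2 TB per run that is on the order of
22 TB of backup space, which is why the jobs have to be spread over
several NFS servers.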


> 
>> So
>> performance-wise you are limited by the bandwidth of the PBS network
>> interfaces (as you would be with a local network storage server) and by
>> the speed of the backing Ceph cluster. Maybe you will lose something in
>> raw performance (but depending on the Ceph cluster you could also gain
>> something), but you will gain "easily" expandable storage space and no
>> single point of failure.
> 
> Sure, that's true. Would be interesting to get some performance stats for
> such a setup...

Do you mean performance stats for Ceph itself or for PBS backed by 
CephFS? For the latter we could try something in autumn, when some 
servers become available.
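
As a first rough number we could probably just run fio directly on the
CephFS mount that will hold the PBS datastore, something like this
(path, sizes and job count are only illustrative):

   fio --name=seqwrite --directory=/mnt/pbs-datastore --rw=write \
       --bs=4M --size=8G --numjobs=4 --ioengine=libaio --direct=1 \
       --group_reporting

and then compare it with the throughput reported by an actual PBS
backup job.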

Cheers

Iztok Gregori





