[PVE-User] Proxmox Backup Server (beta)

Tom Weber pve at junkyard.4t2.com
Sat Jul 18 16:59:20 CEST 2020


On Friday, 17.07.2020 at 19:43 +0200, Thomas Lamprecht wrote:
> On 17.07.20 15:23, Tom Weber wrote:
> > thanks for the very detailed answer :)
> > 
> > I was already thinking that this wouldn't work like my current
> > setup.
> > 
> > Once the bitmap on the source side of the backup gets corrupted
> > for whatever reason, incremental backups would break.
> > Is there some way the system would report such a "corrupted"
> > bitmap?
> > I'm thinking of a manual / test / accidental backup run to a
> > different backup server, which could silently ruin all further
> > regular incremental backups.
> 
> If a backup fails, or the last backup index we get doesn't match
> the checksum we cache in the VM QEMU process, we drop the bitmap
> and read everything (it's still sent incrementally against the
> index we got), and set up a new bitmap from that point.

Ah, I think I'm starting to understand (I've read a bit about the
QEMU side now, too) :)

So you keep a checksum/signature of a successful backup run together
with the (non-persistent) dirty bitmap in QEMU.
The next backup run can check this and only makes use of the bitmap
if it matches; otherwise it falls back to reading all QEMU blocks and
comparing them against the ones in the backup, saving only the
changed ones?

If that's the case, it's the answer I was looking for :)
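For my own understanding, here is a minimal sketch of that decision
logic in Python; the names and calls are made up for illustration and
are not the actual QEMU/PBS code:

    # Minimal sketch of the bitmap reuse decision as I understand it;
    # all names here are invented, not actual QEMU/PBS APIs.
    def blocks_to_read(cached_csum, server_csum, dirty_bitmap, total_blocks):
        """Return the set of block indexes this backup run has to read.

        cached_csum:  checksum of the last index, cached in the VM process
        server_csum:  checksum of the last index the target server reports
        dirty_bitmap: set of block indexes written since the last backup
        """
        if cached_csum is not None and cached_csum == server_csum:
            # Bitmap is trusted: read only the blocks marked dirty.
            return set(dirty_bitmap)
        # Backup failed or went to another server: drop the bitmap and
        # read all blocks; unchanged ones are still "sent incrementally"
        # because the server deduplicates against the previous index.
        return set(range(total_blocks))

    # The bitmap is only reused when the checksums line up:
    print(blocks_to_read("abc", "abc", {3, 7}, 10))   # {3, 7}
    print(blocks_to_read("abc", "xyz", {3, 7}, 10))   # all 10 blocks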


> > About my setup scenario - a bit off topic - backing up to 2
> > different locations every other day basically doubles my backup
> > space and reduces the risk of one failing backup server - of
> > course at the cost of a 50:50 chance of needing to go back 2 days
> > in a worst case scenario.
> > Syncing the backup servers would require twice the space capacity
> > (and additional bandwidth).
> 
> I do not think it would require twice as much space. You already
> have two copies of what would normally be used for a single backup
> target. So even if deduplication between backups is way off, you'd
> still only need that much if you sync remotes. And normally you
> should need less, as deduplication should reduce the per-server
> storage space, so the doubled space usage from syncing is actually
> smaller than the doubled space usage from the odd/even backups -
> or?

First of all, the backup scenario I described was not designed for
block-level incremental backups the way PBS is. I don't know yet if
I'd do it like this with PBS, but it probably helps to understand why
it raised the above question.

Say the same "area" of data changes every day, about 1 GB, I do
incremental backups, and I have roughly 10 GB of space for them on 2
independent servers.
Doing those incremental backups odd/even to the 2 backup servers, I
end up with 20 days of history, whereas with 2 synchronized backup
servers only 10 days of history are possible (one could also
translate this into doubled backup space ;) ).
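Spelled out as a rough sketch (the 1 GB/day and 10 GB figures are
just the made-up numbers from above):

    # Back-of-the-envelope comparison using the made-up numbers above:
    # the same ~1 GB region changes every day, 10 GB of space per server.
    gb_per_backup = 1             # same region changes, so ~1 GB per run
    space_per_server_gb = 10
    backups_per_server = space_per_server_gb // gb_per_backup     # 10

    # Odd/even: server A keeps the odd days, server B the even days,
    # so together they cover twice as many calendar days.
    history_odd_even_days = 2 * backups_per_server                # 20

    # Synchronized: both servers store the same daily backups.
    history_synced_days = backups_per_server                      # 10

    print(history_odd_even_days, history_synced_days)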

And then there are bandwidth considerations between these 3 locations.

> Note that remotes sync only the delta since the last sync, so
> bandwidth correlates with that delta churn. And as long as that
> churn stays below 50% of the size of a full backup, you still need
> less total bandwidth than the odd/even full-backup approach, at
> least averaged over time.

Ohh... I think that's the misunderstanding: I wasn't talking about
odd/even FULL backups!
Right now I'm doing odd/even incremental backups - incremental
against the last state of the backup server I'm backing up to
(backing up what changed in the last 2 days).
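To sketch the bandwidth side with equally made-up numbers (the 100 GB
full backup size is purely an assumed figure for illustration):

    # Rough per-day bandwidth sketch with assumed numbers: 1 GB of
    # daily churn and a 100 GB full backup (both purely illustrative).
    daily_churn_gb = 1
    full_backup_gb = 100

    # Odd/even FULL backups: a full backup leaves the host every day.
    bw_odd_even_full = full_backup_gb

    # Synced remotes: the delta goes to the primary server, then the
    # same delta is synced on to the second server.
    bw_synced = 2 * daily_churn_gb

    # My odd/even INCREMENTAL scheme: one incremental per day, carrying
    # up to 2 days of changes (~1 GB here, since the same region keeps
    # changing).
    bw_odd_even_incremental = daily_churn_gb

    print(bw_odd_even_full, bw_synced, bw_odd_even_incremental)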

Best,
  Tom