[PVE-User] Proxmox Backup Server (beta)
Thomas Lamprecht
t.lamprecht at proxmox.com
Fri Jul 17 19:43:29 CEST 2020
On 17.07.20 15:23, Tom Weber wrote:
> Am Freitag, den 17.07.2020, 09:31 +0200 schrieb Fabian Grünbichler:
>> On July 16, 2020 3:03 pm, Tom Weber wrote:
>>> Am Dienstag, den 14.07.2020, 17:52 +0200 schrieb Thomas Lamprecht:
>>>>
>>>> Proxmox Backup Server effectively does that too, but independently of
>>>> the source storage. We always get the last backup index and only upload
>>>> the chunks which changed. For running VMs a dirty-bitmap is used to
>>>> improve this (it avoids reading unchanged blocks), but it's only an
>>>> optimization - the backup is incremental either way.
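
For illustration, a rough Python sketch of that chunk-level idea - simplified
assumptions, not the actual client code; the 4 MiB fixed chunk size matches
what we use for VM images, upload_chunk is just a placeholder:

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed-size chunks for VM/block-level backups

def upload_chunk(digest, data):
    # placeholder for the actual chunk upload
    print(f"upload {digest[:16]}... ({len(data)} bytes)")

def backup_image(path, previous_index):
    """Upload only chunks that the previous backup index does not already
    reference; the new index still references every chunk."""
    new_index = []
    with open(path, "rb") as img:
        while chunk := img.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in previous_index:
                upload_chunk(digest, chunk)
            new_index.append(digest)
    return new_index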
>>>
>>> So there is exactly one dirty-bitmap that gets nulled after a
>>> backup?
>>>
>>> I'm asking because I have backup setups with 2 backup servers at
>>> different locations, backing up (file-level, incremental) on odd days
>>> to server1 and on even days to server2.
>>>
>>> Such a setup wouldn't work with the block-level incremental backup
>>> and the dirty-bitmap for PVE VMs + PBS, right?
>>>
>>> Regards,
>>> Tom
>>
>> right now, this would not work: for each backup the bitmap would be
>> invalidated, because the last backup returned by the server does not
>> match the locally stored value. theoretically we could track multiple
>> backup storages, but bitmaps are not free and the handling would
>> quickly become unwieldy.
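
As a simplified sketch of that invalidation check (an assumption about the
logic, not the actual QEMU/PBS code):

def bitmap_is_valid(locally_cached_backup, servers_last_backup):
    # the bitmap only records changes since the backup we cached locally,
    # so it can only be trusted if the target's latest backup is that one
    return (locally_cached_backup is not None
            and locally_cached_backup == servers_last_backup)

# alternating targets: server B's last backup never matches what we cached
# after backing up to server A, so the bitmap gets dropped every time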
>>
>> probably you are better off backing up to one server and syncing that
>> to your second one - you can define both as storage on the PVE side
>> and switch over the backup job targets if the primary one fails.
>>
>> theoretically[1]
>>
>> 1.) backup to A
>> 2.) sync A->B
>> 3.) backup to B
>> 4.) sync B->A
>> 5.) repeat
>>
>> works as well and keeps the bitmap valid, but you need to carefully
>> lock-step the backup and sync jobs, so it's probably less robust than:
>>
>> 1.) backup to A
>> 2.) sync A->B
>>
>> where missing a sync is not ideal, but does not invalidate the
>> bitmap.
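
Purely to illustrate the ordering difference between those two schedules, a
toy sketch - backup_to and sync are placeholders here, not real tooling:

def backup_to(server):
    print(f"backup to {server}")

def sync(src, dst):
    print(f"sync {src} -> {dst}")

def lock_step_cycle():
    # variant 1: keeps the bitmap valid on both targets, but only if every
    # step runs in exactly this order - one missed sync breaks the chain
    backup_to("A"); sync("A", "B")
    backup_to("B"); sync("B", "A")

def primary_with_mirror():
    # variant 2: a missed sync only delays the copy on B,
    # it never invalidates the bitmap used for backups to A
    backup_to("A")
    sync("A", "B")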
>>
>> note that your backup will still be incremental in any case w.r.t.
>> client <-> server traffic; if the bitmap is not valid or does not exist,
>> the client just has to re-read all disks to decide which chunks it
>> still has to upload.
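
In other words, a dropped bitmap changes the read I/O, not the upload
traffic - a toy sketch under simplified assumptions:

def backup_cost(chunks, dirty, previous_index, bitmap_valid):
    # returns (chunks read, chunks uploaded); 'chunks' maps index -> digest
    read = uploaded = 0
    for idx, digest in chunks.items():
        if bitmap_valid and idx not in dirty:
            continue                       # valid bitmap: skip unchanged blocks
        read += 1                          # no bitmap: every block is read ...
        if digest not in previous_index:
            uploaded += 1                  # ... but only missing chunks are sent
    return read, uploaded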
>>
>> 1: theoretically, as you probably run into
>> https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your
>> backups as 'backup@pam', which is not recommended ;)
>>
>
> thanks for the very detailed answer :)
>
> I was already thinking that this wouldn't work like my current setup.
>
> Once the bitmap on the source side of the backup gets corrupted for
> whatever reason, the incremental mechanism would break.
> Is there some way the system would notify about such a "corrupted"
> bitmap?
> I'm thinking of a manual / test / accidental backup run to a different
> backup server, which could silently ruin all further regular incremental
> backups.
If a backup fails, or the last backup index we get does not match the
checksum we cache in the VM's QEMU process, we drop the bitmap and read
everything (it is still sent incrementally against the index we just got),
and set up a new bitmap from that point.
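
Roughly like this - a simplified sketch of that flow, not the actual
implementation:

import hashlib

def index_checksum(index):
    # compact identity for a backup index (a list of chunk digests)
    return hashlib.sha256("".join(index).encode()).hexdigest()

def prepare_backup(vm_state, last_index_from_server):
    """Decide whether the dirty bitmap can be used; on mismatch drop it,
    fall back to reading everything, and start a fresh bitmap."""
    if vm_state.get("last_index_csum") != index_checksum(last_index_from_server):
        vm_state["dirty_bitmap"] = set()   # fresh bitmap from this point on
        return "read-everything"           # upload stays incremental vs. the index
    return "read-dirty-only"

def finish_backup(vm_state, new_index):
    # remember which index the (now cleared) bitmap is relative to
    vm_state["last_index_csum"] = index_checksum(new_index)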
>
>
> about my setup scenario - a bit off topic - backing up to 2 different
> locations every other day basically doubles my backup space and reduces
> the risk of losing backups when one backup server fails - of course at
> the cost of a 50:50 chance of needing to go back 2 days in a worst-case
> scenario. Syncing the backup servers would require twice the space
> capacity (and additional bandwidth).
I do not think it would require twice as much space. You already keep two
copies of what would normally be stored for a single backup target. So even
if deduplication between backups performed poorly, syncing remotes would
still need no more than that. And normally you should need less:
deduplication reduces the storage used per backup server, so the doubled
space usage from syncing actually ends up smaller than the doubled space
usage from the odd/even backups - or am I missing something?
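
With made-up numbers (pure assumptions, just to make the comparison
concrete):

full = 100    # GiB of deduplicated data referenced by any single backup
churn = 5     # GiB of new chunks per day
keep = 14     # retention: keep the last 14 backups per datastore

# odd/even: each server only gets every 2nd backup, so its 14 retained
# backups span 28 days of churn
odd_even_total = 2 * (full + keep * 2 * churn)   # 2 * 240 = 480 GiB

# backup to A + sync to B: both hold the same 14 daily backups,
# spanning only 14 days of churn
synced_total = 2 * (full + keep * churn)         # 2 * 170 = 340 GiB

print(odd_even_total, synced_total)
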
Note that a remote sync only transfers the delta since the last sync, so
its bandwidth correlates with that delta churn. And as long as the churn
stays below 50% of the size of a full backup, you still need less total
bandwidth than with the odd/even full-backup approach, at least averaged
over time.
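
Again with made-up numbers (assumptions, not measurements):

full = 100      # GiB transferred per day if every backup were a full one
churn = 30      # GiB of changed chunks per day

odd_even_full_daily = full              # one full backup per day, alternating target
backup_plus_sync_daily = churn + churn  # incremental to A, plus delta sync A -> B

print(backup_plus_sync_daily < odd_even_full_daily)  # True whenever churn < full / 2
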
cheers,
Thomas