[pve-devel] [PATCH guest-common v2 1/2] ReplicationState: purge state from non local vms

Fabian Ebner f.ebner at proxmox.com
Tue Jun 7 11:12:14 CEST 2022


On 03.06.22 at 09:16, Dominik Csapak wrote:
> when running replication, we don't want to keep replication states for
> non-local vms. Normally this would not be a problem, since on migration,
> we transfer the states anyway, but when the ha-manager steals a vm, it
> cannot do that. In that case, having an old state lying around is
> harmful, since the code does not expect the state to be out-of-sync
> with the actual snapshots on disk.
> 
> One such problem is the following:
> 
> Replicate vm 100 from node A to node B and C, and activate HA. When node
> A dies, it will be relocated to e.g. node B and start replicating from
> there. If node B now had an old state lying around for its sync to node
> C, it might delete the common base snapshots of B and C and then be
> unable to sync again.

To be even more robust, we could ensure that the last_sync snapshot
mentioned in the job state is actually present before starting to remove
replication snapshots in prepare() on the source side, or change it to
only remove older snapshots. But prepare() is also used on the target
side to remove stale volumes, so we'd have to be careful not to break
the logic for that.
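
Roughly what I have in mind, as a Python-ish sketch only (the names and
the snapshot naming convention are made up for illustration, this is not
the actual prepare() code):

    # Hypothetical sketch: decide which replication snapshots are safe
    # to remove on the source side.
    def replication_snapshots_to_remove(local_snapshots, last_sync_snapshot,
                                        repl_prefix='__replicate_'):
        # If the snapshot recorded as last_sync in the job state is not
        # actually present, the state is out of sync with the disk:
        # remove nothing and let the caller fall back to a full resync.
        if last_sync_snapshot not in local_snapshots:
            return []
        # Otherwise only clean up replication snapshots other than the
        # recorded last_sync one.
        return [s for s in local_snapshots
                if s.startswith(repl_prefix) and s != last_sync_snapshot]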

I'm working on v2 of a series for improving removal of stale volumes
anyway, so I'll see if I can add something there.

> 
> Deleting the state for all non-local guests fixes that issue, since
> replication then always starts fresh, and the potentially existing old
> state cannot be valid anyway since we just relocated the vm here (from a
> dead node).
> 
> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
> Reviewed-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>

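For reference, what the purge amounts to, as a Python-ish sketch (the
names and the state layout are made up for illustration, this is not the
actual ReplicationState code):

    # Hypothetical sketch: keep replication state only for guests that
    # are currently local to this node.
    def purge_nonlocal_state(state, local_vmids):
        # A guest that was stolen/relocated here starts with a fresh
        # state; any stale entry for it cannot be trusted anyway.
        return {vmid: entry for vmid, entry in state.items()
                if vmid in local_vmids}
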
Both patches:
Reviewed-by: Fabian Ebner <f.ebner at proxmox.com>
