[pve-devel] [PATCH V2 guest-common 1/6] Cleanup for stateless jobs.
Thomas Lamprecht
t.lamprecht at proxmox.com
Thu Dec 21 07:45:04 CET 2017
On 12/19/17 3:53 PM, Wolfgang Link wrote:
> If a VM configuration has been manually moved or migrated by HA,
As an HA migrate takes the normal migration API paths, this isn't true?
s/migrated/recovered/

migration/relocation -> action triggered by the user (or by a preferred node
coming back online), going through the standard API
recovery -> "stealing" a VM from a fenced node to recover it.
But this can be fixed up when applying, and the rest looks good now:
Reviewed-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
> there is no job state on the new node.
> In this case, the replication snapshots still exist on the remote side.
> It must be possible to remove a job that has no local state,
> otherwise a new replication job to the same remote node will fail
> and the disks will have to be removed manually.
> By going through the sorted volumes generated from the VMID.conf,
> we can be sure that every disk will be removed on the remote side
> in the event of a complete job removal.
>
> In the end, remote_prepare_local_job calls prepare on the remote side.
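Just to make the volume handling above concrete (illustration only, not part
of the patch): PVE::Storage::parse_volume_id splits a "<storage>:<volname>"
volume ID into its storage ID and volume name, so the storage list can be
derived straight from the guest config's volumes even when no local
replication state exists. A minimal stand-alone sketch, the volume IDs below
are made up:

  use strict;
  use warnings;
  use PVE::Storage;

  # volids as they would come out of the sorted guest config (example values)
  my $sorted_volids = ['local-zfs:vm-100-disk-1', 'other-store:vm-100-disk-2'];

  # same map as in the hunk below: keep only the storage ID of each volume
  my $store_list = [ map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids ];

  # $store_list is now ['local-zfs', 'other-store'] - one entry per disk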
> ---
> PVE/Replication.pm | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/PVE/Replication.pm b/PVE/Replication.pm
> index 9bc4e61..6a20ba2 100644
> --- a/PVE/Replication.pm
> +++ b/PVE/Replication.pm
> @@ -200,8 +200,10 @@ sub replicate {
>
> if ($remove_job eq 'full' && $jobcfg->{target} ne $local_node) {
> # remove all remote volumes
> + my $store_list = [ map { (PVE::Storage::parse_volume_id($_))[0] } @$sorted_volids ];
> +
> my $ssh_info = PVE::Cluster::get_ssh_info($jobcfg->{target});
> - remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $state->{storeid_list}, 0, undef, 1, $logfunc);
> + remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $store_list, 0, undef, 1, $logfunc);
>
> }
> # remove all local replication snapshots (lastsync => 0)
>
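As a side note on the call in the hunk above (my reading of the unchanged
arguments; the parameter meanings are from memory and may not match the
actual signature exactly):

  # full job removal on the remote side, as called in replicate():
  #   []           - no individual volumes are passed,
  #   $store_list  - instead the storages derived from the guest config are
  #                  handed over, so the remote prepare can find every disk
  #   0            - presumably the last_sync value; 0 signals a full cleanup
  #   undef, 1     - presumably no parent snapshot, and force enabled
  remote_prepare_local_job($ssh_info, $jobid, $vmid, [], $store_list, 0, undef, 1, $logfunc);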