[pve-devel] [RFC container] alloc disk: fix #3970 avoid ambiguous rbd image path
Fabian Grünbichler
f.gruenbichler at proxmox.com
Mon Apr 4 17:44:57 CEST 2022
On April 1, 2022 5:24 pm, Aaron Lauterer wrote:
> If two RBD storages use the same pool but connect to different
> clusters, we cannot tell to which cluster a mapped RBD image belongs.
> To avoid potential data loss, we need to verify that no other storage
> is configured that could have a volume mapped under the same path
> before we format anything.
>
> The ambiguous mapping is in
> /dev/rbd/<pool>/<ns>/<image> where the namespace <ns> is optional.
>
> Once we can tell the clusters apart in the mapping, we can remove these
> checks again.
>
> See bug #3969 for more information on the root cause.
>
> Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
> ---
> RFC because I would like someone else to take a look at the logic and I
> wasn't sure how to format the grouping of the conditions in a way that
> is easy to read
>
> src/PVE/LXC.pm | 45 +++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 45 insertions(+)
>
> diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
> index fe63087..b048ce0 100644
> --- a/src/PVE/LXC.pm
> +++ b/src/PVE/LXC.pm
> @@ -1970,6 +1970,51 @@ sub alloc_disk {
> my $scfg = PVE::Storage::storage_config($storecfg, $storage);
> # fixme: use better naming ct-$vmid-disk-X.raw?
>
> + # Check if another RBD storage exists that uses the same pool name but
> + # a different cluster. If so, allocating a new volume is potentially
> + # dangerous because the RBD mapping exposes it ambiguously under
> + # /dev/rbd/<pool>/<ns>/<image>. Without any information about which
> + # cluster it belongs to, we cannot clearly determine which image we
> + # access and could potentially format an already used image. See
> + # https://bugzilla.proxmox.com/show_bug.cgi?id=3969 and
> + # https://bugzilla.proxmox.com/show_bug.cgi?id=3970
> + # TODO: remove these checks once #3969 is fixed and we can clearly
> + # tell to which cluster an image belongs
> + if ($scfg->{type} eq 'rbd') {
> + my $pool = $storecfg->{ids}->{$storage}->{pool};
not used anywhere?
> + foreach my $stor (keys %{$storecfg->{ids}}) {
> + next if $stor eq $storage;
> +
> + my $ccfg = PVE::Storage::storage_config($storecfg, $stor);
that's actually not needed if you are iterating over the keys ;) just
use
my $checked_scfg = $storecfg->{ids}->{$stor};
> + next if $ccfg->{type} ne 'rbd';
> +
> + if ($scfg->{pool} eq $ccfg->{pool}) {
why not simply
# different pools -> no clash possible
next if $scfg->{pool} ne $checked_scfg->{pool};
> + if (
> + (
> + defined($scfg->{monhost})
> + && defined($ccfg->{monhost})
> + && $scfg->{monhost} eq $ccfg->{monhost}
> + )
> + || (
> + !defined($scfg->{monhost})
> + && !defined($ccfg->{monhost})
> + )
> + ) {
> + # both external ones same or both hyperconverged
> + next;
untested, but something like the following is probably more readable ;)
could also be adapted to check for monhost overlap instead if desired
(to catch storage A and B not having identical
strings/formatting/elements - if any single mon is listed for both, they
ARE the same cluster for all intents and purposes?)
my $safe_compare = sub {
my $key = shift;
my $cmp = sub { $_[0] eq $_[1] };
return PVE::Tools::safe_compare($scfg->{$key}, $checked_scfg->{$key}, $cmp);
};
# internal and internal or external and external with identical monitors
# => same cluster
next if $safe_compare->("monhost");
# different namespaces => no clash possible
next if !$safe_compare->("namespace");
die ..
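
if the monhost-overlap variant is preferred, a small helper along these
lines could work (untested sketch - the separator set for the monitor
list is an assumption about the config format, and the helper name is
made up):

my $monhost_overlap = sub {
    my ($left, $right) = @_;
    # only meaningful if both storages are external
    return 0 if !defined($left) || !defined($right);
    # split on comma/semicolon/whitespace (assumed separators) and check
    # whether any single monitor address appears in both lists
    my %mons = map { $_ => 1 } split(/[,;\s]+/, $left);
    return scalar(grep { $mons{$_} } split(/[,;\s]+/, $right));
};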
> + }
> + # different clusters from here on
> + # we are ok if the namespaces differ or only one storage has one
> + if (defined($scfg->{namespace}) && defined($ccfg->{namespace})) {
> + next if $scfg->{namespace} ne $ccfg->{namespace};
> + } elsif (defined($scfg->{namespace}) || defined($ccfg->{namespace})) {
> + next;
> + }
> + die "Cannot determine which Ceph cluster the volume mapping belongs to. Abort!\n";
> + }
> + }
> + }
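
for reference, folding the suggestions above into the hunk, the whole
check could look roughly like this (untested sketch, assuming
PVE::Tools::safe_compare returns true if both values are undef, false
if exactly one is, and the comparator result otherwise):

if ($scfg->{type} eq 'rbd') {
    foreach my $stor (keys %{$storecfg->{ids}}) {
        next if $stor eq $storage;

        my $checked_scfg = $storecfg->{ids}->{$stor};
        next if $checked_scfg->{type} ne 'rbd';
        # different pools -> no clash possible
        next if $scfg->{pool} ne $checked_scfg->{pool};

        my $safe_compare = sub {
            my $key = shift;
            my $cmp = sub { $_[0] eq $_[1] };
            return PVE::Tools::safe_compare(
                $scfg->{$key}, $checked_scfg->{$key}, $cmp);
        };

        # both hyperconverged, or both external with identical monitors
        # => same cluster, mapping is unambiguous
        next if $safe_compare->("monhost");
        # different namespaces => no clash possible
        next if !$safe_compare->("namespace");

        die "Cannot determine which Ceph cluster the volume mapping belongs to. Abort!\n";
    }
}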
> +
> eval {
> my $do_format = 0;
> if ($scfg->{content}->{rootdir} && $scfg->{path}) {
> --
> 2.30.2