[pve-devel] [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script
Friedrich Weber
f.weber at proxmox.com
Wed Jul 9 16:10:00 CEST 2025
Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.
Hence, add a new script under /usr/share/pve-manager/migrations that
finds guest volume LVs with autoactivation enabled on enabled and
active LVM and LVM-thin storages, and disables autoactivation for each
of them. Before doing so, ask the user for confirmation for each
storage. For non-interactive use cases, the user can pass a flag to
assume confirmation. Disabling autoactivation via lvchange updates the
LVM metadata; hence, the storage lock is acquired before proceeding.
To avoid holding the lock for too long and blocking other consumers,
the lvchange calls are batched, and the lock is released between
batches. Afterwards, check for remaining guest volume LVs with
autoactivation enabled (these may have been created while the lock was
not held). If there are still such LVs left, or if other errors
occurred, suggest that the user run the command again.
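For reference, the two LVM commands the script builds on, as used in
the patch below (VG and LV names are placeholders):

    # list LVs and their autoactivation state
    lvs --separator ':' --noheadings --options lv_name,autoactivation <vgname>
    # disable autoactivation for a single guest volume LV
    lvchange --setautoactivation n <vgname>/<lvname>
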
Also, check for guest volume LVs with autoactivation enabled in
pve8to9, and suggest running the migration script if necessary. The
check is done without a lock, as taking a lock has no advantage here.
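The intended workflow on a node before the upgrade would then roughly
be the following (a sketch; the pve8to9 invocation is the usual one
and not introduced by this patch):

    # run the upgrade checklist; it now warns about autoactivated guest LVs
    pve8to9 --full
    # if the new check warns, disable autoactivation as suggested
    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation
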
While some users may have disabled autoactivation on the VG level to
work around #4997, deliberately do not take this into account:
disabling autoactivation on the LV level as well does not hurt, and
avoids issues in case the user decides to re-enable autoactivation on
the VG level in the future.
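For comparison, the VG-level workaround mentioned above amounts to
something like the following (an illustration only; this series does
not issue this command):

    # disable autoactivation for the whole volume group
    vgchange --setautoactivation n <vgname>
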
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
Signed-off-by: Friedrich Weber <f.weber at proxmox.com>
---
Notes:
If users create new LVs on PVE 8 nodes after running the script but
before actually upgrading to PVE 9, these will still be created with
autoactivation enabled. As discussed in v3, we don't consider this
case, as the alternative would be to suggest running updatelvm
*post-upgrade*, and this can easily be forgotten by users.
The migration script uses a subroutine from PVE::CLI::pve8to9. Thomas
suggested we could move such helpers (also e.g. the log_* helpers) to
a dedicated module such as PVE/UpgradeHelpers.pm, but I haven't done
so here to keep the scope of this patch series small. Can be done in a
follow-up.
Open questions:
- The script will disable autoactivation on all guest volumes,
regardless of the guest owner. In other words, it will disable
autoactivation on LVs owned by a different node. Is this a problem?
My tests suggest it's unproblematic, but it may be worth a second
look. We could change the script to only update LVs owned by the local
node, but then we might miss LVs that are migrated between script runs
on different nodes.
changes since v4:
- move update code from pve8to9 to a dedicated migration script
- add pve8to9 suggestion to run the migration script
- ask for confirmation in the migration script, provide CLI option to
assume confirmation
- update LVs in batches (thx Fabian!), check for remaining LVs
afterwards
changes since v3:
- rebase and run perltidy
- change phrasing of warnings to provide more context (thx @Michael!)
new in v3
PVE/CLI/pve8to9.pm | 86 +++++++++++++-
bin/Makefile | 7 +-
bin/pve-lvm-disable-autoactivation | 174 +++++++++++++++++++++++++++++
3 files changed, 265 insertions(+), 2 deletions(-)
create mode 100755 bin/pve-lvm-disable-autoactivation
diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index 7a5725cb..148efcca 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -23,7 +23,7 @@ use PVE::NodeConfig;
use PVE::RPCEnvironment;
use PVE::Storage;
use PVE::Storage::Plugin;
-use PVE::Tools qw(run_command split_list file_get_contents);
+use PVE::Tools qw(run_command split_list file_get_contents trim);
use PVE::QemuConfig;
use PVE::QemuServer;
use PVE::VZDump::Common;
@@ -1485,6 +1485,89 @@ sub check_legacy_backup_job_options {
}
}
+sub query_autoactivated_lvm_guest_volumes {
+ my ($cfg, $storeid, $vgname) = @_;
+
+ my $cmd = [
+ '/sbin/lvs',
+ '--separator',
+ ':',
+ '--noheadings',
+ '--unbuffered',
+ '--options',
+ "lv_name,autoactivation",
+ $vgname,
+ ];
+
+ my $autoactivated_lvs;
+ eval {
+ run_command(
+ $cmd,
+ outfunc => sub {
+ my $line = shift;
+ $line = trim($line);
+
+ my ($name, $autoactivation_flag) = split(':', $line);
+ return if !$name;
+
+ $autoactivated_lvs->{$name} = $autoactivation_flag eq 'enabled';
+ },
+ );
+ };
+ die "could not list LVM logical volumes: $@\n" if $@;
+
+ my $vollist = PVE::Storage::volume_list($cfg, $storeid);
+
+ my $autoactivated_guest_lvs = [];
+ for my $volinfo (@$vollist) {
+ my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
+ push @$autoactivated_guest_lvs, $volname if $autoactivated_lvs->{$volname};
+ }
+
+ return $autoactivated_guest_lvs;
+}
+
+sub check_lvm_autoactivation {
+ my $cfg = PVE::Storage::config();
+ my $storage_info = PVE::Storage::storage_info($cfg);
+
+ log_info("Check for LVM autoactivation settings on LVM and LVM-thin storages...");
+
+ my $needs_fix = 0;
+
+ for my $storeid (sort keys %$storage_info) {
+ my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+ my $type = $scfg->{type};
+ next if $type ne 'lvm' && $type ne 'lvmthin';
+
+ my $vgname = $scfg->{vgname};
+ die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+ my $info = $storage_info->{$storeid};
+ if (!$info->{enabled} || !$info->{active}) {
+ log_skip("storage '$storeid' ($type) is disabled or inactive");
+ next;
+ }
+
+ my $autoactivated_guest_lvs =
+ query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+ if (scalar(@$autoactivated_guest_lvs) > 0) {
+ log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
+ $needs_fix = 1;
+ } else {
+ log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
+ }
+ }
+ log_warn(
+ "Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest volumes. "
+ . "Please run the following command to disable autoactivation for existing LVM/LVM-thin "
+ . "guest volumes:" . "\n\n"
+ . "\t/usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation" . "\n")
+ if $needs_fix;
+
+ return undef;
+}
+
sub check_misc {
print_header("MISCELLANEOUS CHECKS");
my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1596,6 +1679,7 @@ sub check_misc {
check_dkms_modules();
check_legacy_notification_sections();
check_legacy_backup_job_options();
+ check_lvm_autoactivation();
}
my sub colored_if {
diff --git a/bin/Makefile b/bin/Makefile
index 3931804b..51683596 100644
--- a/bin/Makefile
+++ b/bin/Makefile
@@ -30,6 +30,9 @@ HELPERS = \
pve-startall-delay \
pve-init-ceph-crash
+MIGRATIONS = \
+ pve-lvm-disable-autoactivation
+
SERVICE_MANS = $(addsuffix .8, $(SERVICES))
CLI_MANS = \
@@ -83,7 +86,7 @@ pvereport.1.pod: pvereport
.PHONY: tidy
tidy:
- echo $(SCRIPTS) $(HELPERS) | xargs -n4 -P0 proxmox-perltidy
+ echo $(SCRIPTS) $(HELPERS) $(MIGRATIONS) | xargs -n4 -P0 proxmox-perltidy
.PHONY: check
check: $(addsuffix .service-api-verified, $(SERVICES)) $(addsuffix .api-verified, $(CLITOOLS))
@@ -95,6 +98,8 @@ install: $(SCRIPTS) $(CLI_MANS) $(SERVICE_MANS) $(BASH_COMPLETIONS) $(ZSH_COMPLE
install -m 0755 $(SCRIPTS) $(BINDIR)
install -d $(USRSHARE)/helpers
install -m 0755 $(HELPERS) $(USRSHARE)/helpers
+ install -d $(USRSHARE)/migrations
+ install -m 0755 $(MIGRATIONS) $(USRSHARE)/migrations
install -d $(MAN1DIR)
install -m 0644 $(CLI_MANS) $(MAN1DIR)
install -d $(MAN8DIR)
diff --git a/bin/pve-lvm-disable-autoactivation b/bin/pve-lvm-disable-autoactivation
new file mode 100755
index 00000000..69a9f556
--- /dev/null
+++ b/bin/pve-lvm-disable-autoactivation
@@ -0,0 +1,174 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use Getopt::Long;
+use Term::ANSIColor;
+
+use PVE::CLI::pve8to9;
+use PVE::Tools qw(run_command);
+
+my $is_tty = (-t STDOUT);
+
+my $level2color = {
+ pass => 'green',
+ warn => 'yellow',
+ fail => 'bold red',
+};
+
+my $log_line = sub {
+ my ($level, $line) = @_;
+
+ my $color = $level2color->{$level} // '';
+ print color($color) if $is_tty && $color && $color ne '';
+
+ print uc($level), ': ' if defined($level);
+ print "$line\n";
+
+ print color('reset') if $is_tty;
+};
+
+sub log_pass { $log_line->('pass', @_); }
+sub log_info { $log_line->('info', @_); }
+sub log_warn { $log_line->('warn', @_); }
+sub log_fail { $log_line->('fail', @_); }
+
+sub main {
+ my $assume_yes = 0;
+
+ if (!GetOptions('assume-yes|y', \$assume_yes)) {
+ print "USAGE $0 [ --assume-yes | -y ]\n";
+ exit(-1);
+ }
+
+ my $cfg = PVE::Storage::config();
+ my $storage_info = PVE::Storage::storage_info($cfg);
+
+ my $got_error = 0;
+ my $skipped_storage = 0;
+ my $still_found_autoactivated_lvs = 0;
+ my $did_work = 0;
+
+ log_info("Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest "
+ . "volumes. This script disables autoactivation for existing LVM/LVM-thin guest volumes."
+ );
+
+ for my $storeid (sort keys %$storage_info) {
+ eval {
+ my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+ my $type = $scfg->{type};
+ return if $type ne 'lvm' && $type ne 'lvmthin';
+
+ my $info = $storage_info->{$storeid};
+ if (!$info->{enabled} || !$info->{active}) {
+ log_info("storage '$storeid' ($type) is disabled or inactive");
+ return;
+ }
+
+ my $vgname = $scfg->{vgname};
+ die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+ my $autoactivated_lvs =
+ PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+ if (scalar(@$autoactivated_lvs) == 0) {
+ log_pass("all guest volumes on storage '$storeid' have "
+ . "autoactivation disabled");
+ return;
+ }
+
+ print "Disable autoactivation on "
+ . scalar(@$autoactivated_lvs)
+ . " guest volumes on storage '$storeid'? (y/N)? ";
+
+ my $continue;
+ if ($assume_yes) {
+ print "Assuming 'yes' because '--assume-yes' was passed.\n";
+ $continue = 1;
+ } elsif ($is_tty) {
+ my $answer = <STDIN>;
+ $continue = defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i;
+ } else {
+ print "Assuming 'no'. Pass '--assume-yes' to always assume 'yes'.\n";
+ $continue = 0;
+ }
+
+ if (!$continue) {
+ $skipped_storage = 1;
+ log_warn("Skipping '$storeid' as requested ...\n");
+ return;
+ }
+
+ # in order to avoid holding the lock for too long at a time, update LVs in batches of 32
+ # and release the lock in between
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+ while (@$autoactivated_lvs) {
+ my @current_lvs = splice @$autoactivated_lvs, 0, 32;
+ $plugin->cluster_lock_storage(
+ $storeid,
+ $scfg->{shared},
+ undef,
+ sub {
+ for my $lvname (@current_lvs) {
+ log_info("disabling autoactivation for $vgname/$lvname on storage "
+ . "'$storeid'...");
+ my $cmd = [
+ '/sbin/lvchange', '--setautoactivation', 'n', "$vgname/$lvname",
+ ];
+
+ eval { run_command($cmd); };
+ my $err = $@;
+ die "could not disable autoactivation for $vgname/$lvname: $err\n"
+ if $err;
+
+ $did_work = 1;
+ }
+ },
+ );
+ }
+
+ # new LVs might have been created in the meantime while the lock was not held
+ my $still_autoactivated_lvs =
+ PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+ if (scalar(@$still_autoactivated_lvs) == 0) {
+ log_pass("all guest volumes on storage '$storeid' now have "
+ . "autoactivation disabled");
+ } else {
+ $still_found_autoactivated_lvs = 1;
+ log_warn("some guest volumes on storage '$storeid' still have "
+ . "autoactivation enabled");
+ }
+ };
+
+ my $err = $@;
+ if ($err) {
+ $got_error = 1;
+ log_fail("could not disable autoactivation on enabled storage '$storeid': $err");
+ }
+ }
+
+ my $exit_code;
+ if ($got_error) {
+ $exit_code = 1;
+ log_fail("at least one error was encountered");
+ } elsif ($skipped_storage) {
+ $exit_code = 1;
+ log_warn("at least one enabled and active LVM/LVM-thin storage was skipped. "
+ . "Please run this script again!");
+ } elsif ($still_found_autoactivated_lvs) {
+ $exit_code = 1;
+ log_warn("some guest volumes still have autoactivation enabled. "
+ . "Please run this script again!");
+ } elsif ($did_work) {
+ $exit_code = 0;
+ log_pass("successfully disabled autoactivation for existing guest volumes on all "
+ . "enabled and active LVM/LVM-thin storages.");
+ } else {
+ $exit_code = 0;
+ log_pass("all existing guest volumes on enabled and active LVM/LVM-thin storages "
+ . "already have autoactivation disabled.");
+ }
+ exit($exit_code);
+}
+
+main();
--
2.47.2