[pve-devel] [PATCH v2 manager] add wipe_disk option when destroying ceph disk
David Limbeck
d.limbeck at proxmox.com
Wed Oct 24 09:32:08 CEST 2018
This allows the disk to be reused as a ceph disk by zeroing the first 200M
of the destroyed disk.
Signed-off-by: David Limbeck <d.limbeck at proxmox.com>
---
since v1:
    the wipe is always done after remove_partition
    conv=fdatasync is used so the zeroes are flushed to the device before
    dd returns, which some disks need (as proposed by Alwin)
PVE/API2/Ceph.pm | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 69489a70..a0b5042d 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -434,6 +434,13 @@ __PACKAGE__->register_method ({
}
}
+ my $disks_to_wipe = {};
+ foreach my $part (@$partitions_to_remove) {
+ next if !$part || (! -b $part );
+ my $devpath = PVE::Diskmanage::get_blockdev($part);
+ $disks_to_wipe->{$devpath} = 1;
+ }
+
print "Unmount OSD $osdsection from $mountpoint\n";
eval { run_command(['/bin/umount', $mountpoint]); };
if (my $err = $@) {
@@ -443,6 +450,11 @@ __PACKAGE__->register_method ({
foreach my $part (@$partitions_to_remove) {
$remove_partition->($part);
}
+ foreach my $devpath (keys %$disks_to_wipe) {
+ print "wipe disk: $devpath\n";
+ eval { run_command(['/bin/dd', 'if=/dev/zero', "of=${devpath}", 'bs=1M', 'count=200', 'conv=fdatasync']); };
+ warn $@ if $@;
+ }
}
};
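
For context, a minimal standalone sketch of the wipe step this patch adds
(not part of the patch itself; the helper name wipe_parent_disk is
hypothetical, and PVE::Tools/PVE::Diskmanage are assumed to be available as
they are in PVE/API2/Ceph.pm):

    use PVE::Tools qw(run_command);
    use PVE::Diskmanage;

    # Resolve a to-be-removed partition to its parent block device and
    # zero the first 200M of that device so it can be reused as a ceph disk.
    sub wipe_parent_disk {
        my ($part) = @_;
        return if !$part || ! -b $part;   # skip missing or non-block paths
        # e.g. /dev/sdb1 -> /dev/sdb
        my $devpath = PVE::Diskmanage::get_blockdev($part);
        print "wipe disk: $devpath\n";
        # conv=fdatasync makes dd flush the written zeroes to the device
        # before it exits, so the wipe is not left sitting in the page cache
        eval {
            run_command(['/bin/dd', 'if=/dev/zero', "of=${devpath}",
                         'bs=1M', 'count=200', 'conv=fdatasync']);
        };
        warn $@ if $@;
    }

As the changelog notes, the patch collects the parent devices into a hash
first and only wipes them after remove_partition has run for every
partition, so each device is zeroed at most once and only after its
partitions are gone.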
--
2.11.0