[PVE-User] (unsupported) SoftRAID related kernel panic increasing softraid1 device number

Eneko Lacunza elacunza at binovo.es
Thu Nov 14 09:23:11 CET 2013


Hi all,

Yesterday I was at a client's site doing some final reconfiguration after 
having replaced a VMware cluster with Proxmox.

One of the steps involved changing the shape of a software RAID mirror. 
The mirror had a local SSD disk exposed as a hwraid0 volume, and a 
second, write-mostly hwraid1 volume composed of two local SATA disks.
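
For reference, an array like that can be created with something along 
these lines (device names here are just examples, not the real ones on 
that host):

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda1 --write-mostly /dev/sdb1

where /dev/sda1 would be the hwraid0 SSD volume and /dev/sdb1 the hwraid1 
SATA volume; --write-mostly makes md prefer the non-write-mostly device 
(the SSD) for reads.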

The plan was:
1. Grow softraid1 to include an iSCSI disk on an EMC AX4-5i, effectively 
getting a 3-way softraid mirror.
2. Remove the hwraid1 SATA volume from the softraid mirror, bringing it 
back down to 2 devices.

The iSCSI device worked perfectly; it could be partitioned, formatted and 
written to. But every time I tried to grow the softraid device (believe 
me, I tried several times!!), the kernel panicked and our pretty 
production virtual machines stopped working! ;)
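
(For the record, the LUN was attached with the usual open-iscsi login, 
something like this; the IQN and portal address here are made up:

iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.1992-04.com.emc:storage.example -p 192.168.10.50 --login

The new disk then shows up as a regular /dev/sdX block device.)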

The problem was the grow command, not the iSCSI device:
mdadm --grow /dev/md0 -n 3

That command was panicking the kernel. I tried adding the new device as a 
hot spare first, and also giving the new block device on the same 
command, without luck. I also tried both the .25 and .26 Proxmox kernels.
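
The variants I tried were roughly these (/dev/sdc1 is just a placeholder 
for the iSCSI partition):

# add the new device as a spare first, then grow
mdadm --add /dev/md0 /dev/sdc1
mdadm --grow /dev/md0 -n 3

# or add and grow in a single command
mdadm --grow /dev/md0 -n 3 --add /dev/sdc1

Both ended the same way, with a kernel panic.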

It seems to me that the Proxmox (RHEL-based) kernel doesn't support this 
operation, although Debian's mdadm does (I think the Debian kernel 
supports it, too).

In the end I failed and removed the SATA volume and added the iSCSI 
device as its replacement, without changing the number of devices in the 
softmirror, and that did the (a bit more dangerous) trick.
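
In mdadm terms that replacement was roughly this (placeholder device 
names again):

mdadm /dev/md0 --fail /dev/sdb1      # the hwraid1 SATA volume
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1       # the iSCSI disk

md then rebuilds onto the new disk; during that resync the mirror only 
has one up-to-date copy left, which is why I call it a bit more 
dangerous.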

I thought this could be useful to someone. I also tried to find out which 
Linux kernel version supports growing the number of devices in a 
softraid1 array, but had no luck...

Fortunately everything is working perfectly now, the RAID devices are 
optimal and performance is great!

Cheers
Eneko

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
       943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



