[PVE-User] Moving a disk from Ceph to thin-lvm, troubles...

Marco Gaiarin gaio@sv.lnf.it
Thu Nov 17 14:07:38 CET 2016


I'm still building my Ceph cluster, and I've found that migrating data
puts it under heavy stress.

So I've set up a thin LVM storage on a single node (and thus not
replicated) and tried to move the disk there.

My LVM setup:

root@thor:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               pve
  PV Size               1.37 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              358668
  Free PE               0
  Allocated PE          358668
  PV UUID               yxx5qG-NAJQ-IqpV-HdJW-7YJS-M2c5-HeQItn
   
root@thor:~# vgdisplay 
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  10
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.37 TiB
  PE Size               4.00 MiB
  Total PE              358668
  Alloc PE / Size       358668 / 1.37 TiB
  Free  PE / Size       0 / 0   
  VG UUID               VBaahR-ikYG-H2jK-TCdq-SPvE-VbLA-X4fpPd
   
root@thor:~# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/pve/lvol0
  LV Name                lvol0
  VG Name                pve
  LV UUID                LR4G8Z-zHoB-t12p-B127-dK8z-GZw1-tZmQHP
  LV Write Access        read/write
  LV Creation host, time thor, 2016-11-11 12:23:36 +0100
  LV Status              available
  # open                 0
  LV Size                88.00 MiB
  Current LE             22
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0
   
  --- Logical volume ---
  LV Name                scratch
  VG Name                pve
  LV UUID                fFVtrc-B9lJ-h3gj-ksU6-WICb-w0A6-BVlqlq
  LV Write Access        read/write
  LV Creation host, time thor, 2016-11-11 12:24:33 +0100
  LV Pool metadata       scratch_tmeta
  LV Pool data           scratch_tdata
  LV Status              available
  # open                 1
  LV Size                1.37 TiB
  Allocated pool data    48.36%
  Allocated metadata     99.95%
  Current LE             358602
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           251:3

(note the 'Allocated pool data    48.36%' and, worse, the
'Allocated metadata    99.95%').


The source disk is 1 TB on Ceph, mostly empty. The target space is 1.37 TiB. I followed
the Proxmox wiki to create the thin LVM storage (https://pve.proxmox.com/wiki/Storage:_LVM_Thin).
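For reference, what I ran boils down to something like this (a sketch, not a
transcript: 'scratch' is my pool name, the size matches my VG, and 'Scratch'
is the storage ID I picked):

```shell
# Create a thin pool 'scratch' in VG 'pve' (-T makes it a thin pool):
lvcreate -L 1.37T -T pve/scratch
# Register it in Proxmox as an lvmthin storage named 'Scratch':
pvesm add lvmthin Scratch --vgname pve --thinpool scratch --content images
```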


I first tried to move the disk 'online', and the log says:

 create full clone of drive virtio1 (DATA:vm-107-disk-1)
 Logical volume "vm-107-disk-1" created.
 drive mirror is starting (scanning bitmap) : this step can take some minutes/hours, depend of disk size and storage speed
 transferred: 0 bytes remaining: 1099511627776 bytes total: 1099511627776 bytes progression: 0.00 % busy: true ready: false
 transferred: 146800640 bytes remaining: 1099364827136 bytes total: 1099511627776 bytes progression: 0.01 % busy: true ready: false
 transferred: 557842432 bytes remaining: 1098953785344 bytes total: 1099511627776 bytes progression: 0.05 % busy: true ready: false 
 [...]
 transferred: 727548166144 bytes remaining: 371963461632 bytes total: 1099511627776 bytes progression: 66.17 % busy: true ready: false
 device-mapper: message ioctl on failed: Operation not supported
 Failed to resume scratch.
 lvremove 'pve/vm-107-disk-1' error: Failed to update pool pve/scratch.
 TASK ERROR: storage migration failed: mirroring error: mirroring job seem to have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm line 5890.

In the syslog I also caught:

 Nov 17 12:59:45 thor lvm[598]: Thin metadata pve-scratch-tpool is now 80% full.
 Nov 17 13:03:35 thor lvm[598]: Thin metadata pve-scratch-tpool is now 85% full.
 Nov 17 13:07:25 thor lvm[598]: Thin metadata pve-scratch-tpool is now 90% full.
 Nov 17 13:11:25 thor lvm[598]: Thin metadata pve-scratch-tpool is now 95% full.
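Putting numbers on those warnings: dm-thin needs on the order of 64 bytes of
metadata per mapped chunk (see lvmthin(7)), so a back-of-envelope for my pool
looks like this (the 64 KiB chunk size is my assumption, the pool size comes
from the lvdisplay output above):

```shell
# Rough estimate of the thin metadata a fully mapped pool needs.
# Assumptions: ~64 bytes of metadata per mapped chunk (lvmthin(7)),
# default 64 KiB chunk size.
POOL_BYTES=$((358602 * 4 * 1024 * 1024))  # 358602 LE x 4 MiB PE, from lvdisplay
CHUNK_BYTES=$((64 * 1024))                # assumed default chunk size
META_BYTES=$((POOL_BYTES / CHUNK_BYTES * 64))
echo "$((META_BYTES / 1024 / 1024)) MiB"  # ~1400 MiB
```

So mapping the whole 1.37 TiB pool would want a metadata LV around 1.4 GiB,
and mine evidently is far smaller, since it filled up with only ~48% of the
data allocated.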


Now, if I try again, I simply get this (offline or online makes no difference):

 create full clone of drive virtio1 (DATA:vm-107-disk-1)
 device-mapper: message ioctl on failed: Operation not supported
 TASK ERROR: storage migration failed: lvcreate 'pve/vm-107-disk-1' error: Failed to resume scratch.

Also, if I go to the Proxmox web interface, storage 'Scratch' (the name of the
thin LVM storage) shows:
	Usage 48.36% (677.42 GiB of 1.37 TiB)

but 'Content' is empty. And I'm sure I don't have 677.42 GiB of data in the source disk...
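To see what the aborted mirror left behind, this is the inspection I'd attempt
(a sketch; it assumes the half-created volume is still sitting in the pool):

```shell
# Show everything in the VG, including hidden tmeta/tdata volumes,
# with pool data and metadata fill levels:
lvs -a pve -o lv_name,lv_size,data_percent,metadata_percent
# If vm-107-disk-1 is still listed, removing it should free the
# allocated chunks -- though lvremove already failed once above,
# presumably because the pool metadata is full:
lvremove pve/vm-107-disk-1
```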


What am I missing?! Thanks.

-- 
dott. Marco Gaiarin				        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

		Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
    http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
	(tax code 00307430132, category ONLUS or RICERCA SANITARIA)

