[pve-devel] Qm move_disk bug (?)

Gilberto Nunes gilberto.nunes32 at gmail.com
Wed Sep 30 16:21:47 CEST 2020


Ok! Just to be sure, I did it again...

In the LVM-Thin storage I have a 100.00g VM disk. Note that only about 6% of
it is filled up:

lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--  18.87g             31.33  1.86
  root          pve -wi-ao----   9.75g
  swap          pve -wi-ao----   4.00g
  vm-100-disk-0 pve Vwi-aotz-- 100.00g data        5.91
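
(So roughly 100.00g x 5.91% ~= 5.9G is actually allocated in the thin pool,
which matches the ~6 GiB qcow2 that qemu-img convert produces at the end of
this mail.)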


Now I tried to use move_disk:

cmd: qm move_disk 100 scsi0 VMS --format qcow2
(VMS is the Directory Storage)

Using this command to watch the qcow2 file as it grows:

cmd: watch -n 1 qemu-img info vm-100-disk-0.qcow2

Every 1.0s: qemu-img info vm-100-disk-0.qcow2          proxmox01: Wed Sep 30 11:02:02 2020

image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 21.2 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
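
Note that at this point the qcow2 already holds 21.2 GiB on disk, more than
three times what the guest actually uses, so the copy seems to be writing
unallocated blocks out as real data.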


After a while, all the space in /DATA, which is the Directory Storage, is used up:
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  1.9G     0  1.9G   0% /dev
tmpfs                 394M  5.8M  388M   2% /run
/dev/mapper/pve-root  9.8G  2.5G  7.4G  25% /
tmpfs                 2.0G   52M  1.9G   3% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vdb1              40G   40G  316K 100% /DATA
/dev/fuse              30M   16K   30M   1% /etc/pve
tmpfs                 394M     0  394M   0% /run/user/0
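
(The target storage is only 40G, so once the copy starts inflating toward the
100 GiB virtual size it can never fit, no matter how little the guest uses.)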

and the image file has grown to almost 40G:

qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 39.9 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

And the qm move_disk command failed after a while:

qm move_disk 100 scsi0 VMS --format qcow2
create full clone of drive scsi0 (local-lvm:vm-100-disk-0)
Formatting '/DATA/images/100/vm-100-disk-0.qcow2', fmt=qcow2
cluster_size=65536 preallocation=metadata compression_type=zlib
size=107374182400 lazy_refcounts=off refcount_bits=16
drive mirror is starting for drive-scsi0
drive-scsi0: transferred: 384827392 bytes remaining: 106989355008 bytes
total: 107374182400 bytes progression: 0.36 % busy: 1 ready: 0
...
...
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes
total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes
total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: Cancelling block job
drive-scsi0: Done.
storage migration failed: mirroring error: drive-scsi0: mirroring has been
cancelled
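
If I understand qemu-server correctly (just a guess from skimming the code,
please correct me), an offline move goes through qemu-img convert instead of
the drive-mirror block job, so shutting the VM down first might avoid the
inflation:

cmd: qm shutdown 100
cmd: qm move_disk 100 scsi0 VMS --format qcow2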

Then I tried to use qemu-img convert and everything worked fine:

qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0
/DATA/images/100/vm-100-disk-0.qcow2
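
If I read the qemu-img man page right, convert scans the data it reads and
skips runs of zero bytes, which is why the target stays sparse. The
zero-detection granularity can be tuned with -S and progress shown with -p,
e.g.:

cmd: qemu-img convert -p -S 64k -O qcow2 /dev/pve/vm-100-disk-0 /DATA/images/100/vm-100-disk-0.qcow2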

qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 6.01 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
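
My guess as to the difference: the thin LVM volume is exposed to QEMU as a
plain block device, which cannot report which blocks are unallocated, so the
drive-mirror job reads the ~94 GiB of zeroes and writes them out as real
data, while qemu-img convert checks every buffer it reads for zeroes and
simply skips those writes. Can anyone confirm?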
---
Gilberto Nunes Ferreira

On Wed, Sep 30, 2020 at 10:59, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:

> UPDATE
> From CLI I have used
>
> qm move_disk 100 scsi0 VMS --format qcow2
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> On Wed, Sep 30, 2020 at 10:26, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>
>> >> How did you move the disk? GUI or CLI?
>> Both.
>> From CLI: qm move_disk 100 scsi0 VMS   (VMS is the Directory Storage)
>>
>> Proxmox all up to date...
>> pveversion -v
>> proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
>> pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
>> pve-kernel-5.4: 6.2-7
>> pve-kernel-helper: 6.2-7
>> pve-kernel-5.4.65-1-pve: 5.4.65-1
>> pve-kernel-5.4.34-1-pve: 5.4.34-2
>> ceph-fuse: 12.2.11+dfsg1-2.1+b1
>> corosync: 3.0.4-pve1
>> criu: 3.11-3
>> glusterfs-client: 5.5-3
>> ifupdown: 0.8.35+pve1
>> ksm-control-daemon: 1.3-1
>> libjs-extjs: 6.0.1-10
>> libknet1: 1.16-pve1
>> libproxmox-acme-perl: 1.0.5
>> libpve-access-control: 6.1-2
>> libpve-apiclient-perl: 3.0-3
>> libpve-common-perl: 6.2-2
>> libpve-guest-common-perl: 3.1-3
>> libpve-http-server-perl: 3.0-6
>> libpve-storage-perl: 6.2-6
>> libqb0: 1.0.5-1
>> libspice-server1: 0.14.2-4~pve6+1
>> lvm2: 2.03.02-pve4
>> lxc-pve: 4.0.3-1
>> lxcfs: 4.0.3-pve3
>> novnc-pve: 1.1.0-1
>> proxmox-backup-client: 0.8.21-1
>> proxmox-mini-journalreader: 1.1-1
>> proxmox-widget-toolkit: 2.2-12
>> pve-cluster: 6.1-8
>> pve-container: 3.2-2
>> pve-docs: 6.2-6
>> pve-edk2-firmware: 2.20200531-1
>> pve-firewall: 4.1-3
>> pve-firmware: 3.1-3
>> pve-ha-manager: 3.1-1
>> pve-i18n: 2.2-1
>> pve-qemu-kvm: 5.1.0-2
>> pve-xtermjs: 4.7.0-2
>> qemu-server: 6.2-14
>> smartmontools: 7.1-pve2
>> spiceterm: 3.1-1
>> vncterm: 1.6-2
>> zfsutils-linux: 0.8.4-pve1
>>
>>
>> >>> The VM disk (100G) or the physical disk of the storage?
>>
>> The VM disk is 100G in size, but the storage has only 40G... It's just a
>> lab...
>>
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>> On Wed, Sep 30, 2020 at 10:22, Aaron Lauterer <a.lauterer at proxmox.com> wrote:
>>
>>> Hey,
>>>
>>> How did you move the disk? GUI or CLI?
>>>
>>> If via CLI, could you post the command?
>>>
>>> Additionally, which versions are installed? (pveversion -v)
>>>
>>> One more question inline.
>>>
>>> On 9/30/20 3:16 PM, Gilberto Nunes wrote:
>>> > Hi all
>>> >
>>> > I tried to move a vm disk from LVM-thin to a Directory Storage but
>>> > when I did this, the qm move_disk just filled up the entire disk.
>>>
>>> The VM disk (100G) or the physical disk of the storage?
>>>
>>> > The disk inside LVM-thin has 100G in size but only about 5G is
>>> > occupied by the OS.
>>> > I have used the qcow2 format.
>>> > However, if I do it from CLI with the command:
>>> >
>>> > qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0
>>> > /DATA/images/100/vm-100-disk-0.qcow2
>>> >
>>> > It works nicely and copied only what the OS occupies inside the VM, but
>>> > still created a virtual disk of 100GB.
>>> >
>>> > Is this some kind of bug with qm move_disk???
>>> >
>>> > Thanks a lot
>>> >
>>> > ---
>>> > Gilberto Nunes Ferreira