<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hi again,<br>
<br>
Tried this using a different node and disk, with the same result:<br>
transferred: 53687091200 bytes remaining: 0 bytes total:
53687091200 bytes progression: 100.00 %<br>
TASK ERROR: storage migration failed: mirroring error: VM 100 qmp
command 'block-job-complete' failed - The active block job for
device 'drive-virtio0' cannot be completed<br>
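For what it's worth, the counters in that last progress line are self-consistent, so the copy itself finishes and only the final 'block-job-complete' step fails. A quick sanity-check sketch (the regex is mine, not anything from Proxmox):<br>

```python
import re

# Parse one Proxmox storage-migration progress line into its counters.
PROGRESS_RE = re.compile(
    r"transferred: (\d+) bytes remaining: (\d+) bytes "
    r"total: (\d+) bytes progression: ([\d.]+) %"
)

def parse_progress(line):
    m = PROGRESS_RE.search(line)
    transferred, remaining, total = (int(g) for g in m.group(1, 2, 3))
    percent = float(m.group(4))
    return transferred, remaining, total, percent

# The final line from the failed migration above:
line = ("transferred: 53687091200 bytes remaining: 0 bytes "
        "total: 53687091200 bytes progression: 100.00 %")
t, r, total, pct = parse_progress(line)
assert t + r == total                        # counters add up
assert abs(100.0 * t / total - pct) < 0.01   # reported % matches the bytes
```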
<br>
It is the same guest, on a different VM instance.<br>
<br>
I'm using Debian's nfs-kernel-server for the NFS service, with the
following /etc/exports:<br>
/srv/nfs2 192.168.4.91(rw,sync,no_subtree_check)
192.168.4.92(rw,sync,no_subtree_check)
192.168.4.93(rw,sync,no_subtree_check)<br>
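For reference, the same export with the option meanings spelled out (comments mine, semantics per exports(5); the man page allows continuing an entry across lines with a backslash):<br>

```
# /etc/exports — one export, three allowed clients, identical options
/srv/nfs2  192.168.4.91(rw,sync,no_subtree_check) \
           192.168.4.92(rw,sync,no_subtree_check) \
           192.168.4.93(rw,sync,no_subtree_check)
# rw                read-write access
# sync              acknowledge writes only after they reach stable storage
# no_subtree_check  disable subtree checking (the recommended default)
```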
<br>
root@pmx2:/etc# pveversion -v<br>
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)<br>
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)<br>
pve-kernel-2.6.32-32-pve: 2.6.32-136<br>
pve-kernel-2.6.32-33-pve: 2.6.32-138<br>
lvm2: 2.02.98-pve4<br>
clvm: 2.02.98-pve4<br>
corosync-pve: 1.4.7-1<br>
openais-pve: 1.1.4-3<br>
libqb0: 0.11.1-2<br>
redhat-cluster-pve: 3.2.0-2<br>
resource-agents-pve: 3.9.2-4<br>
fence-agents-pve: 4.0.10-1<br>
pve-cluster: 3.0-15<br>
qemu-server: 3.1-35<br>
pve-firmware: 1.1-3<br>
libpve-common-perl: 3.0-19<br>
libpve-access-control: 3.0-15<br>
libpve-storage-perl: 3.0-23<br>
pve-libspice-server1: 0.12.4-3<br>
vncterm: 1.1-8<br>
vzctl: 4.0-1pve6<br>
vzprocps: 2.0.11-2<br>
vzquota: 3.1-2<br>
pve-qemu-kvm: 2.1-9<br>
ksm-control-daemon: 1.1-1<br>
glusterfs-client: 3.5.2-1<br>
<br>
Any hint? :)<br>
<br>
On 08/10/14 16:50, Eneko Lacunza wrote:<br>
</div>
<blockquote cite="mid:54354F24.1000804@binovo.es" type="cite">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
Hi,<br>
<br>
I have tried to live-migrate storage from local storage on another
node in the same cluster to the same NFS export, and it doesn't
work either:<br>
<br>
create full clone of drive virtio0 (local:103/vm-103-disk-1.raw)<br>
trying to aquire cfs lock 'storage-nfs1' ... OK<br>
Formatting '/mnt/pve/nfs1/images/103/vm-103-disk-1.raw', fmt=raw
size=53687091200 <br>
transferred: 0 bytes remaining: 53687091200 bytes total:
53687091200 bytes progression: 0.00 %<br>
transferred: 41943040 bytes remaining: 53645148160 bytes total:
53687091200 bytes progression: 0.08 %<br>
transferred: 83886080 bytes remaining: 53603205120 bytes total:
53687091200 bytes progression: 0.16 %<br>
transferred: 136314880 bytes remaining: 53550776320 bytes total:
53687091200 bytes progression: 0.25 %<br>
transferred: 188743680 bytes remaining: 53498347520 bytes total:
53687091200 bytes progression: 0.35 %<br>
[...]<br>
transferred: 53606481920 bytes remaining: 80609280 bytes total:
53687091200 bytes progression: 99.85 %<br>
transferred: 53648424960 bytes remaining: 38666240 bytes total:
53687091200 bytes progression: 99.93 %<br>
transferred: 53680340992 bytes remaining: 6750208 bytes total:
53687091200 bytes progression: 99.99 %<br>
transferred: 53687091200 bytes remaining: 0 bytes total:
53687091200 bytes progression: 100.00 %<br>
TASK ERROR: storage migration failed: mirroring error: VM 103 qmp
command 'block-job-complete' failed - The active block job for
device 'drive-virtio0' cannot be completed<br>
---<br>
<div class="moz-forward-container"><br>
The VM has the latest virtio drivers for Windows (81)<br>
<br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" border="0"
cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">Subject:
</th>
<td>Storage migration</td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">Date:
</th>
<td>Wed, 08 Oct 2014 14:17:42 +0200</td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">From:
</th>
<td>Eneko Lacunza <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:elacunza@binovo.es"><elacunza@binovo.es></a></td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">To: </th>
<td><a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a></td>
</tr>
</tbody>
</table>
<br>
<br>
<pre>Hi all,
I've found some problems with storage migration in the recent 3.3
version, both vanilla as shipped on the ISO and after updating from
pve-no-subscription.
I have a WS2012R2 VM with a virtio block device on local storage. Local
storage (/var/lib/vz) and NFS storage (/srv/nfs) are on the same machine.
- Moving the disk to an NFS shared storage with the VM off works OK.
- Moving the disk to an NFS shared storage with the VM on doesn't work. The
first try reports:
---
create full clone of drive virtio0 (local:100/vm-100-disk-1.raw)
Formatting '/mnt/pve/nfs1/images/100/vm-100-disk-2.raw', fmt=raw size=0
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200
bytes progression: 0.00 %
TASK ERROR: storage migration failed: mirroring error: mirroring job
seem to have die. Maybe do you have bad sectors? at
/usr/share/perl5/PVE/QemuServer.pm line 5170.
---
After retrying:
---
create full clone of drive virtio0 (local:100/vm-100-disk-1.raw)
Formatting '/mnt/pve/nfs1/images/100/vm-100-disk-2.raw', fmt=raw
size=53687091200
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200
bytes progression: 0.00 %
transferred: 104857600 bytes remaining: 53582233600 bytes total:
53687091200 bytes progression: 0.20 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total:
53687091200 bytes progression: 0.23 %
[...]
---
The exact % varies between tries, but it is always very low (<1%).
There are no hard disk errors in dmesg/syslog.
I have also hit what appear to be slightly different issues migrating from
NFS to RBD, and from RBD to NFS.
Cheers
Eneko
--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="http://www.binovo.es">www.binovo.es</a>
</pre>
<br>
</div>
<br>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
<a class="moz-txt-link-abbreviated" href="http://www.binovo.es">www.binovo.es</a></pre>
</body>
</html>