[PVE-User] PVE 5.4: cannot move disk image to Ceph
Uwe Sauter
uwe.sauter.de at gmail.com
Fri Sep 6 12:22:28 CEST 2019
On 06.09.19 at 12:09, Mark Adams wrote:
> Is it potentially an issue with having the same pool name on two different Ceph clusters?
Good catch.
> Is there a vm-112-disk-0 on vdisks_cluster2?
No, but disabling the second Ceph storage in the storage settings allowed the move to succeed. I'll need to think about the naming then.
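For example, giving each cluster's pool a distinct name should avoid the collision. A rough sketch (untested; the new pool name is made up, and any running guests using that pool would need the storage entry updated at the same time):

########
# on the second cluster: rename the pool
ceph osd pool rename vdisks vdisks2

# then adjust the storage entry in /etc/pve/storage.cfg to match:
rbd: vdisks_cluster2
        content images
        krbd 0
        monhost px-golf-cluster, px-hotel-cluster, px-india-cluster
        pool vdisks2
        username admin
########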
But I'm still wondering why it failed only for this one VM while the other six I moved today caused no problems.
Thank you.
Regards,
Uwe
>
> On Fri, 6 Sep 2019, 12:45 Uwe Sauter, <uwe.sauter.de at gmail.com> wrote:
>
> Hello Alwin,
>
> On 06.09.19 at 11:32, Alwin Antreich wrote:
> > Hello Uwe,
> >
> > On Fri, Sep 06, 2019 at 10:41:18AM +0200, Uwe Sauter wrote:
> >> Hi,
> >>
> >> I'm having trouble moving a disk image to Ceph. Moving between local disks and the NFS share works fine.
> >>
> >> The error given is:
> >>
> >> ########
> >> create full clone of drive scsi0 (aurel-cluster1-VMs:112/vm-112-disk-0.qcow2)
> >> rbd: create error: (17) File exists
> >> TASK ERROR: storage migration failed: error with cfs lock 'storage-vdisks_vm': rbd create vm-112-disk-0' error: rbd: create error: (17) File exists
> >> ########
> > Can you see anything in the Ceph logs? And what version (pveversion -v) are you on?
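> > For reference: on Luminous the cluster log usually ends up in /var/log/ceph/ceph.log on the monitor nodes, so something like the following should surface recent errors (assuming the default log locations):
> >
> > ########
> > # recent cluster-log entries, run on a mon node
> > tail -n 50 /var/log/ceph/ceph.log
> > # or follow the cluster log live
> > ceph -w
> > ########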
>
> Nothing obvious in the logs. The cluster is healthy:
>
> root at px-bravo-cluster:~# ceph status
>   cluster:
>     id:     982484e6-69bf-490c-9b3a-942a179e759b
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum px-alpha-cluster,px-bravo-cluster,px-charlie-cluster
>     mgr: px-alpha-cluster(active), standbys: px-bravo-cluster, px-charlie-cluster
>     osd: 9 osds: 9 up, 9 in
>
>   data:
>     pools:   1 pools, 128 pgs
>     objects: 14.76k objects, 56.0GiB
>     usage:   163GiB used, 3.99TiB / 4.15TiB avail
>     pgs:     128 active+clean
>
>   io:
>     client:   2.31KiB/s wr, 0op/s rd, 0op/s wr
>
> I'm on a fully up-to-date PVE 5.4 (all three nodes).
>
> root at px-bravo-cluster:~# pveversion -v
> proxmox-ve: 5.4-2 (running kernel: 4.15.18-20-pve)
> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
> pve-kernel-4.15: 5.4-8
> pve-kernel-4.15.18-20-pve: 4.15.18-46
> pve-kernel-4.15.18-19-pve: 4.15.18-45
> ceph: 12.2.12-pve1
> corosync: 2.4.4-pve1
> criu: 2.11.1-1~bpo90
> glusterfs-client: 3.8.8-1
> ksm-control-daemon: 1.2-2
> libjs-extjs: 6.0.1-2
> libpve-access-control: 5.1-12
> libpve-apiclient-perl: 2.0-5
> libpve-common-perl: 5.0-54
> libpve-guest-common-perl: 2.0-20
> libpve-http-server-perl: 2.0-14
> libpve-storage-perl: 5.0-44
> libqb0: 1.0.3-1~bpo9
> lvm2: 2.02.168-pve6
> lxc-pve: 3.1.0-6
> lxcfs: 3.0.3-pve1
> novnc-pve: 1.0.0-3
> proxmox-widget-toolkit: 1.0-28
> pve-cluster: 5.0-38
> pve-container: 2.0-40
> pve-docs: 5.4-2
> pve-edk2-firmware: 1.20190312-1
> pve-firewall: 3.0-22
> pve-firmware: 2.0-7
> pve-ha-manager: 2.0-9
> pve-i18n: 1.1-4
> pve-libspice-server1: 0.14.1-2
> pve-qemu-kvm: 3.0.1-4
> pve-xtermjs: 3.12.0-1
> qemu-server: 5.0-54
> smartmontools: 6.5+svn4324-1
> spiceterm: 3.0-5
> vncterm: 1.5-3
> zfsutils-linux: 0.7.13-pve1~bpo2
>
>
>
> >>
> >> but this is not true:
> >>
> >> ########
> >> root at px-bravo-cluster:~# rbd -p vdisks ls
> >> vm-106-disk-0
> >> vm-113-disk-0
> >> vm-113-disk-1
> >> vm-113-disk-2
> >> vm-118-disk-0
> >> vm-119-disk-0
> >> vm-120-disk-0
> >> vm-125-disk-0
> >> vm-125-disk-1
> >> ########
> > Can you create the image by hand (rbd -p rbd create vm-112-disk-0 --size 1G)? And delete it again with (rbd -p rbd rm vm-112-disk-0), of course.
>
> root at px-bravo-cluster:~# rbd -p vdisks create vm-112-disk-0 --size 1G
> rbd: create error: (17) File exists
> 2019-09-06 11:35:20.943998 7faf704660c0 -1 librbd: rbd image vm-112-disk-0 already exists
>
> root at px-bravo-cluster:~# rbd -p vdisks create test --size 1G
>
> root at px-bravo-cluster:~# rbd -p vdisks ls
> test
> vm-106-disk-0
> vm-113-disk-0
> vm-113-disk-1
> vm-113-disk-2
> vm-118-disk-0
> vm-119-disk-0
> vm-120-disk-0
> vm-125-disk-0
> vm-125-disk-1
>
> root at px-bravo-cluster:~# rbd -p vdisks rm test
> Removing image: 100% complete...done.
>
> root at px-bravo-cluster:~# rbd -p vdisks rm vm-112-disk-0
> 2019-09-06 11:36:07.570749 7eff7cff9700 -1 librbd::image::OpenRequest: failed to retreive immutable metadata: (2) No such file or directory
> Removing image: 0% complete...failed.
> rbd: delete error: (2) No such file or directory
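> That combination (create reports "File exists" while delete reports "No such file or directory") looks like stale bookkeeping objects left over from a half-created image. If the image uses the usual v2 layout, the raw objects can be inspected directly; a sketch, not verified here:
>
> ########
> # any per-image bookkeeping objects left in the pool?
> rados -p vdisks ls | grep vm-112-disk-0
> # the rbd_directory object maps image names to ids
> rados -p vdisks listomapvals rbd_directory
> ########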
>
>
> >
> >>
> >> Here is the relevant part of my storage.cfg:
> >>
> >> ########
> >> nfs: aurel-cluster1-VMs
> >>        export /backup/proxmox-infra/VMs
> >>        path /mnt/pve/aurel-cluster1-VMs
> >>        server X.X.X.X
> >>        content images
> >>        options vers=4.2
> >>
> >>
> >> rbd: vdisks_vm
> >>        content images
> >>        krbd 0
> >>        pool vdisks
> >> ########
> > Is this the complete storage.cfg?
>
> No, only the parts that are relevant for this particular move. Here's the complete file:
>
> ########
> rbd: vdisks_vm
>        content images
>        krbd 0
>        pool vdisks
>
> dir: local-hdd
>        path /mnt/local
>        content images,iso
>        nodes px-alpha-cluster,px-bravo-cluster,px-charlie-cluster
>        shared 0
>
> nfs: aurel-cluster1-daily
>        export /backup/proxmox-infra/daily
>        path /mnt/pve/aurel-cluster1-daily
>        server X.X.X.X
>        content backup
>        maxfiles 30
>        options vers=4.2
>
> nfs: aurel-cluster1-weekly
>        export /backup/proxmox-infra/weekly
>        path /mnt/pve/aurel-cluster1-weekly
>        server X.X.X.X
>        content backup
>        maxfiles 30
>        options vers=4.2
>
> nfs: aurel-cluster1-VMs
>        export /backup/proxmox-infra/VMs
>        path /mnt/pve/aurel-cluster1-VMs
>        server X.X.X.X
>        content images
>        options vers=4.2
>
> nfs: aurel-cluster2-daily
>        export /backup/proxmox-infra2/daily
>        path /mnt/pve/aurel-cluster2-daily
>        server X.X.X.X
>        content backup
>        maxfiles 30
>        options vers=4.2
>
> nfs: aurel-cluster2-weekly
>        export /backup/proxmox-infra2/weekly
>        path /mnt/pve/aurel-cluster2-weekly
>        server X.X.X.X
>        content backup
>        maxfiles 30
>        options vers=4.2
>
> nfs: aurel-cluster2-VMs
>        export /backup/proxmox-infra2/VMs
>        path /mnt/pve/aurel-cluster2-VMs
>        server X.X.X.X
>        content images
>        options vers=4.2
>
> dir: local
>        path /var/lib/vz
>        content snippets,vztmpl,images,rootdir,iso
>        maxfiles 0
>
> rbd: vdisks_cluster2
>        content images
>        krbd 0
>        monhost px-golf-cluster, px-hotel-cluster, px-india-cluster
>        pool vdisks
>        username admin
> ########
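>
> Note that both rbd entries point at a pool named vdisks, just on different clusters. To query cluster2's pool directly, something like this should work (assuming PVE's usual per-storage keyring location; untested):
>
> ########
> rbd ls -p vdisks -m px-golf-cluster --id admin \
>     --keyring /etc/pve/priv/ceph/vdisks_cluster2.keyring
> ########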
>
> Thanks,
>
> Uwe
>
> > --
> > Cheers,
> > Alwin
> >
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>