[PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)
Alwin Antreich
a.antreich at proxmox.com
Thu Jan 30 13:42:34 CET 2020
Hello Fabrizio,
On Thu, Jan 30, 2020 at 12:46:16PM +0100, Fabrizio Cuseo wrote:
>
> I have installed a new cluster with the latest release, with local Ceph storage.
> I also have two older, smaller clusters, and I need to migrate all of their VMs to the new cluster.
> The best method I have used in the past is to add the old cluster's RBD storage on the NEW cluster; I can then stop the VM, move its .conf file, start the VM (all of those operations are really quick), and move the disk (online) from the old storage to the new one.
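>
> Roughly, the per-VM steps look like this (VMID 100, the node name, the disk slot scsi0, and the target storage "NewCeph" are only examples):
>
> # on the old cluster: stop the VM and move its config to the new cluster
> qm stop 100
> scp /etc/pve/qemu-server/100.conf root@new-node:/etc/pve/qemu-server/
> rm /etc/pve/qemu-server/100.conf
> # on the new cluster: start the VM (still running from the old RBD
> # storage), then move its disk online and delete the source copy
> qm start 100
> qm move_disk 100 scsi0 NewCeph --delete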
>
> But now, when I add the RBD storage, copying the old cluster's keyring file to the new cluster, naming it after the storage ID, and using the old cluster's monitor IPs, I can see the storage summary (total and used space), but when I go to "Content" I get this error: "rbd error: rbd: listing images failed: (95) Operation not supported (500)".
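>
> For reference, the storage definition looks roughly like this (the user name here is only an example; the keyring is copied to /etc/pve/priv/ceph/CephOLD.keyring to match the storage ID):
>
> # /etc/pve/storage.cfg on the new cluster
> rbd: CephOLD
>         content images
>         monhost 172.16.20.31
>         pool rbd2
>         username admin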
>
> If, from the new cluster's CLI, I run the command:
>
> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2
>
> I can see the list of disk images, but I also get the error: "librbd::api::Trash: list: error listing rbd trash entries: (95) Operation not supported"
>
>
> The new cluster's Ceph release is Nautilus, and the old one's is Firefly.
>
> Any ideas?
As others have said already, there is no direct way; the listing error
comes from the Nautilus client trying to list the RBD trash, a feature
that a Firefly cluster does not support. The best option, of course,
would be a backup + restore. In any case, you will need a shared
storage that can be reached by both clusters, e.g. NFS. And watch out:
one cluster can potentially destroy the other cluster's disks on that
shared storage.
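
A rough sketch of the backup + restore route, assuming an NFS storage
named "nfs-shared" that is configured on both clusters (the VMID, the
storage names, and the archive path are examples; the real archive name
contains a timestamp):

# on the old cluster: back up the VM to the shared NFS storage
vzdump 100 --storage nfs-shared --mode stop

# on the new cluster: restore the archive onto the new Ceph storage
qmrestore /mnt/pve/nfs-shared/dump/vzdump-qemu-100-<timestamp>.vma 100 --storage NewCeph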
--
Cheers,
Alwin