[PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)
Alexandre DERUMIER
aderumier at odiso.com
Thu Jan 30 13:05:42 CET 2020
Ceph clients and servers are generally compatible within 2 or 3 releases.
There is no way to make Nautilus or Luminous clients work with a Firefly server.
I think the minimum server for a Nautilus client is Jewel.
So the best way could be to upgrade your old Proxmox cluster first (from 4 to 6; this can be done easily without downtime).
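For reference, a sketch (not from the original thread) of how the external RBD storage Fabrizio describes would be defined on the new cluster in /etc/pve/storage.cfg, with the old cluster's keyring saved as /etc/pve/priv/ceph/CephOLD.keyring; the storage ID, monitor IP, and pool name are taken from his rbd command below:

```
rbd: CephOLD
	monhost 172.16.20.31
	pool rbd2
	content images
```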
----- Mail original -----
From: "Fabrizio Cuseo" <f.cuseo at panservice.it>
To: "uwe sauter de" <uwe.sauter.de at gmail.com>, "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Thursday, January 30, 2020 12:59:13
Subject: Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)
I can't afford the long downtime. With my method, the downtime is only to stop the VM on the old cluster and start it on the new one; the disk image copy is done online.
But my last migration was from 3.4 to 4.4.
----- On 30 Jan 2020, at 12:51, Uwe Sauter uwe.sauter.de at gmail.com wrote:
> If you can afford the downtime of the VMs you might be able to migrate the disk
> images using "rbd export | ncat" and "ncat | rbd import".
>
> I haven't tried this with such a great difference of versions but from Proxmox
> 5.4 to 6.1 this worked without a problem.
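In sketch form, that export/import pipeline could look like this; the pool names, image name, destination host, and port are placeholders, not values from the thread:

```shell
# On a node of the destination cluster: listen on a port (7777 is a
# placeholder) and pipe the incoming stream into "rbd import"
ncat -l 7777 | rbd import - rbd-new/vm-100-disk-0

# On a node of the source cluster: export the image to stdout
# and stream it over the network to the destination node
rbd export rbd-old/vm-100-disk-0 - | ncat dest-host 7777
```

Running the listener first and then the exporter copies the raw image without needing shared storage between the clusters.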
>
> Regards,
>
> Uwe
>
>
> On 30.01.20 at 12:46, Fabrizio Cuseo wrote:
>>
>> I have installed a new cluster with the latest release, with a local Ceph storage.
>> I also have 2 old and smaller clusters, and I need to migrate all the VMs to the
>> new cluster.
>> The best method I have used in the past is to add the RBD storage of the old
>> cluster on the NEW cluster, so I can stop the VM, move the .cfg file, start the
>> VM (all these operations are really quick), and move the disk (online) from the
>> old storage to the new storage.
>>
>> But now, if I add the RBD storage, copying the keyring file of the old cluster
>> to the new cluster, named after the storage ID, and using the old cluster's
>> monitor IPs, I can see the storage summary (total and used space), but when I
>> go to "Content", I get this error: "rbd error: rbd: listing images failed:
>> (95) Operation not supported (500)".
>>
>> If, from the new cluster's CLI, I run the command:
>>
>> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2
>>
>> I can see the list of disk images, but also the error: "librbd::api::Trash:
>> list: error listing rbd trash entries: (95) Operation not supported"
>>
>>
>> The new cluster ceph release is Nautilus, and the old one is firefly.
>>
>> Any ideas?
>>
>> Thanks in advance, Fabrizio
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
--
Fabrizio Cuseo - mailto:f.cuseo at panservice.it
General Management - Panservice InterNetWorking
Professional Internet and Networking Services
Panservice is an AIIP member - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:info at panservice.it
National toll-free number: 800 901492