[PVE-User] Cannot list disks from an external CEPH pool

Iztok Gregori iztok.gregori at elettra.eu
Wed Jun 1 11:51:37 CEST 2022


On 01/06/22 11:29, Aaron Lauterer wrote:
> Do you get additional errors if you run the following command? Assuming 
> that the storage is also called pool1.
> 
> pvesm list pool1

No additional errors, just the same one as in the GUI:

root@pmx-14:~# pvesm list pool1
rbd error: rbd: listing images failed: (2) No such file or directory
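
Just in case the difference is in the exact rbd invocation: as far as I
understand (this is an assumption on my part, I didn't check the storage
plugin code), the GUI/pvesm does a long/JSON listing rather than a plain
"rbd ls", so the manual equivalent of the above should be something like:

rbd -m 172.16.1.1 -n client.admin \
    --keyring /etc/pve/priv/ceph/pool1.keyring \
    ls -l --format json pool1

I can run that by hand and post the output if it helps.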



> Do you have VMs with disk images on that storage? If so, do they start 
> normally?

Yes, we have a lot of VMs with disks on that storage and yes, they seem 
to start normally (the last start was yesterday, when we first noticed 
the GUI behaviour).
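
To double-check on a couple of guests that the disks really are on that 
storage, I can grep the VM configs, e.g. (with <vmid> standing in for one 
of the affected VMs, just as an example):

qm config <vmid> | grep 'pool1:'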

> 
> Can you show the configuration of that storage and the one of the 
> working pool? (/etc/pve/storage.cfg)

Sure (edited the IP addresses and pool names):

[cit /etc/pve/storage.cfg]
...
rbd: pool1
	content images
	monhost 172.16.1.1;1172.16.1.2;172.16.1.3
	pool pool1
	username admin

rbd: pool2
	content images
	monhost 172.16.1.1;172.16.1.2;172.16.1.3
	pool pool2
	username admin
...
[/cit]
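
If it is useful to rule out a problem with a single monitor, I can also 
repeat the manual listing against each of the three monitor addresses 
from the pool1 entry above, one at a time, e.g.:

rbd -m 172.16.1.3 -n client.admin \
    --keyring /etc/pve/priv/ceph/pool1.keyring \
    ls pool1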

Thanks!

Iztok

> 
> On 6/1/22 11:13, Iztok Gregori wrote:
>> Hi to all!
>>
>> I have a Proxmox cluster (7.1) connected to an external CEPH cluster 
>> (octopus).  From the GUI I cannot list the content (disks) of one pool 
>> (but I'm able to list all the other pools):
>>
>> rbd error: rbd: listing images failed: (2) No such file or directory 
>> (500)
>>
>> The pveproxy/access.log shows the error for "pool1":
>>
>> "GET /api2/json/nodes/pmx-14/storage/pool1/content?content=images 
>> HTTP/1.1" 500 13
>>
>> but when I try another pool ("pool2") it works:
>>
>> "GET /api2/json/nodes/pmx-14/storage/pool2/content?content=images 
>> HTTP/1.1" 200 841
>>
>> From the command line "rbd ls pool1" works fine (since I don't have a
>> ceph.conf, I ran it as "rbd -m 172.16.1.1 --keyring
>> /etc/pve/priv/ceph/pool1.keyring ls pool1") and I can see the pool
>> contents.
>>
>> The cluster is running fine; the VMs access the disks on that pool
>> without a problem.
>>
>> What could it be?
>>
>> The cluster is a mix of freshly installed nodes and upgraded ones; all
>> 17 nodes (except one, which is still on 6.4 but has no running VMs) are
>> running:
>>
>> root@pmx-14:~# pveversion -v
>> proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
>> pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
>> pve-kernel-helper: 7.1-14
>> pve-kernel-5.13: 7.1-9
>> pve-kernel-5.13.19-6-pve: 5.13.19-15
>> pve-kernel-5.13.19-2-pve: 5.13.19-4
>> ceph-fuse: 15.2.15-pve1
>> corosync: 3.1.5-pve2
>> criu: 3.15-1+pve-1
>> glusterfs-client: 9.2-1
>> ifupdown2: 3.1.0-1+pmx3
>> ksm-control-daemon: 1.4-1
>> libjs-extjs: 7.0.0-1
>> libknet1: 1.22-pve2
>> libproxmox-acme-perl: 1.4.1
>> libproxmox-backup-qemu0: 1.2.0-1
>> libpve-access-control: 7.1-7
>> libpve-apiclient-perl: 3.2-1
>> libpve-common-perl: 7.1-5
>> libpve-guest-common-perl: 4.1-1
>> libpve-http-server-perl: 4.1-1
>> libpve-storage-perl: 7.1-1
>> libspice-server1: 0.14.3-2.1
>> lvm2: 2.03.11-2.1
>> lxc-pve: 4.0.11-1
>> lxcfs: 4.0.11-pve1
>> novnc-pve: 1.3.0-2
>> proxmox-backup-client: 2.1.5-1
>> proxmox-backup-file-restore: 2.1.5-1
>> proxmox-mini-journalreader: 1.3-1
>> proxmox-widget-toolkit: 3.4-7
>> pve-cluster: 7.1-3
>> pve-container: 4.1-4
>> pve-docs: 7.1-2
>> pve-edk2-firmware: 3.20210831-2
>> pve-firewall: 4.2-5
>> pve-firmware: 3.3-6
>> pve-ha-manager: 3.3-3
>> pve-i18n: 2.6-2
>> pve-qemu-kvm: 6.1.1-2
>> pve-xtermjs: 4.16.0-1
>> qemu-server: 7.1-4
>> smartmontools: 7.2-1
>> spiceterm: 3.2-2
>> swtpm: 0.7.1~bpo11+1
>> vncterm: 1.7-1
>> zfsutils-linux: 2.1.4-pve1
>>
>> I can provide other information if it's needed.
>>
>> Cheers
>> Iztok Gregori
>>
>>
> 
