[PVE-User] "nearfull" status in PVE Dashboard not consistent

Frank Thommen f.thommen at dkfz-heidelberg.de
Sun Sep 8 14:17:47 CEST 2024


Hi David,

I deleted the OSDs one by one (sometimes two by two) and then 
recreated them, this time with an SSD partition as the DB device 
(previously the DB lived on the OSD HDDs). So the SSDs should not 
have been added as OSDs themselves.
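For reference, the replace-one-OSD-at-a-time cycle I used looks roughly like 
the sketch below. The device paths and the OSD id are placeholders, not my 
actual values; leave RUN unset so the script only prints the commands (a real 
PVE/Ceph node is needed to execute them):

```shell
# Print each command; only execute it if RUN=true is set in the environment.
run() { echo "+ $*"; if ${RUN:-false}; then "$@"; fi; }

OSD_ID=0
DATA_DEV=/dev/sdb              # placeholder: the HDD backing the OSD
DB_DEV=/dev/nvme0n1p1          # placeholder: pre-created SSD partition for the DB

# Take the OSD out, let the cluster rebalance, then destroy it:
run ceph osd out "$OSD_ID"
run pveceph osd destroy "$OSD_ID" --cleanup

# Recreate the OSD with its RocksDB on the SSD partition:
run pveceph osd create "$DATA_DEV" --db_dev "$DB_DEV"
```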

`ceph osd tree` gives me
----------------------------------
$ ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME            STATUS  REWEIGHT  PRI-AFF
-1         55.66704  root default
-3         18.40823      host pve01
  0    hdd   3.68259          osd.0            up   1.00000  1.00000
  1    hdd   3.68259          osd.1            up   1.00000  1.00000
  2    hdd   3.68259          osd.2            up   1.00000  1.00000
  9    hdd   1.84380          osd.9            up   1.00000  1.00000
10    hdd   1.84380          osd.10           up   1.00000  1.00000
11    hdd   1.84380          osd.11           up   1.00000  1.00000
12    hdd   1.82909          osd.12           up   1.00000  1.00000
-5         19.06548      host pve02
  3    hdd   3.81450          osd.3            up   1.00000  1.00000
  4    hdd   3.81450          osd.4            up   1.00000  1.00000
  5    hdd   3.81450          osd.5            up   1.00000  1.00000
13    hdd   1.90720          osd.13           up   1.00000  1.00000
14    hdd   1.90720          osd.14           up   1.00000  1.00000
15    hdd   1.90720          osd.15           up   1.00000  1.00000
16    hdd   1.90039          osd.16           up   1.00000  1.00000
-7         18.19333      host pve03
  6    hdd   3.63869          osd.6            up   1.00000  1.00000
  7    hdd   3.63869          osd.7            up   1.00000  1.00000
  8    hdd   3.63869          osd.8            up   1.00000  1.00000
17    hdd   1.81929          osd.17           up   1.00000  1.00000
18    hdd   1.81929          osd.18           up   1.00000  1.00000
19    hdd   1.81940          osd.19           up   1.00000  1.00000
20    hdd   1.81929          osd.20           up   1.00000  1.00000
$
----------------------------------
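The tree only shows CRUSH weights, though; the nearfull warning is driven by 
per-OSD utilization against the monitor's nearfull ratio (0.85 by default), 
which `ceph osd df` reports in its %USE column. A minimal sketch of filtering 
such output for nearfull OSDs; the sample rows and utilization figures below 
are invented for illustration:

```shell
# Sample rows in 'ceph osd df' style (ID, CLASS, %USE); the values are made up.
cat <<'EOF' > /tmp/osd_df_sample.txt
0 hdd 62.10
1 hdd 87.40
2 hdd 61.95
EOF

# Flag any OSD above the default nearfull ratio of 85%:
awk '$3 > 85 { print "osd." $1 " nearfull at " $3 "%" }' /tmp/osd_df_sample.txt
```

With the sample data this prints `osd.1 nearfull at 87.40%`.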

Cheers, Frank



On 07.09.24 21:27, David der Nederlanden | ITTY via pve-user wrote:
> Hi Frank,
> 
> Can you share your OSD layout too?
> 
> My first thought is that you added the SSDs as OSDs, which caused those OSDs to fill up, with a nearfull pool as a result.
> 
> You can get some insights with:
> `ceph osd tree`
> 
> And if needed you can reweight the OSDs, but that would require a good OSD layout:
> `ceph osd reweight-by-utilization`
> 
> Sources:
> https://forum.proxmox.com/threads/ceph-pool-full.47810/  
> https://docs.ceph.com/en/reef/rados/operations/health-checks/#pool-near-full
> 
> Kind regards,
> David der Nederlanden



