[PVE-User] HEALTH_ERR , showing x 1 Full osd(s) , guidance requested

Aaron Lauterer a.lauterer at proxmox.com
Thu May 4 09:29:51 CEST 2023


As already mentioned in another reply, you will have to free up space, either by 
reweighting the full OSD or by adding more OSDs.
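
For example, to see how full each OSD is and then lower the weight of the full 
one ({osd-id} and the 0.9 factor are placeholders; start with a small reduction 
and let the cluster rebalance):

ceph osd df tree
ceph osd reweight {osd-id} 0.9

Ceph can also calculate reweights from utilization for you, with a test variant 
that acts as a dry run to preview the changes:

ceph osd test-reweight-by-utilization
ceph osd reweight-by-utilization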

Another option, though quite a bit more radical, would be to reduce the size of 
the pool.
Right now, you hopefully have a size/min_size of 3/2.

ceph osd pool get {pool} size
ceph osd pool get {pool} min_size

By reducing the size to 2, you will gain about 1/3 of the space, which can help 
you get out of the situation. But the pool will block I/O as soon as a single 
OSD is down.
So that should only be done as an emergency measure to get operational again. 
Then you need to address the actual issue ASAP (get more space) so that you can 
increase the size back to 3.

ceph osd pool set {pool} size 2
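
Once there is enough free space again, set it back (assuming the pool was at 
size 3 before):

ceph osd pool set {pool} size 3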

And please plan an upgrade of the cluster soon! Proxmox VE 6 has been EOL since 
last summer, and the intricacies of that version and the Ceph versions that 
shipped with it are fading from memory ;)

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Cheers,
Aaron

On 5/4/23 08:39, Joseph John wrote:
> Dear All,
> Good morning
> We have a Proxmox setup with 4 nodes:
> Node 1 and Node 2 are running 6.3-3,
> and Node 3 and Node 4 are running 6.4-15.
> 
> Today we noticed that we were not able to SSH to the virtual instances,
> nor to log in using the console option.
> When I checked the summary, I could see that HEALTH_ERR is reporting
> "1 full osd(s)".
> 
> Thanks
> Joseph John
> 00971-50-7451809
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> 



