[PVE-User] High I/O waits, not sure if it's a ceph issue.

jameslipski jameslipski at protonmail.com
Tue Jun 30 14:07:59 CEST 2020

Thanks for the reply

All nodes are connected to a 10Gbit switch. Ceph is currently running 14.2.2 but I will update to the latest. KRBD was not enabled on the pool.

Before I update Ceph, regarding KRBD: I've just enabled it. Do I have to re-create the pool, restart Ceph, restart the nodes, etc., or does it just take effect?
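For what it's worth, a sketch of how this looks from the CLI (the storage ID `ceph-rbd` and VM ID `100` are placeholders). KRBD is a flag on the Proxmox storage definition, not a property of the Ceph pool itself, so the pool should not need re-creating; as I understand it, running guests keep their existing librbd connection and only pick up the kernel client once their disks are re-activated, e.g. after a stop/start or a migration:

```shell
# Enable KRBD on an existing RBD storage (storage ID is a placeholder)
pvesm set ceph-rbd --krbd 1

# Verify the flag landed in the cluster-wide storage config
grep -A 5 'ceph-rbd' /etc/pve/storage.cfg

# Restart (or migrate) each guest so its disks are re-mapped
# through the kernel RBD client instead of librbd:
qm stop 100 && qm start 100
```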

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, June 29, 2020 9:28 PM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:

> On 30/06/2020 11:08 am, jameslipski via pve-user wrote:
> > Just to give a little bit of background: we currently have 6 nodes. We're running Ceph, and each node consists of
> > 2 OSDs (each node has 2x Intel SSDSC2KG019T8); the OSD type is bluestore. Global Ceph configuration (at least as shown in the Proxmox interface) is as follows:
> Network config? (i.e. speed etc.)
> Ceph is Nautilus 14.2.9? (latest on Proxmox)
> Do you have KRBD set for the Proxmox Ceph storage? That helps a lot.
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> Lindsay
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
