[PVE-User] Proxmox Ceph Server - SSD OSDs

Eneko Lacunza elacunza at binovo.es
Wed Dec 3 09:20:22 CET 2014

Hi all,

We have recently changed the storage backend of a Proxmox cluster. It 
was using software RAID with an SSD drive and an iSCSI drive, and we 
have switched to RBD on a 2-OSD Ceph cluster.

One of the VMs is quite I/O intensive, as it runs precalculations 
between two databases every hour.

Our experience has been as follows:

- 1st SSD OSD (inline journal) was an Intel S3500 300GB. This has been 
rocking fast from beginning to end, a non-dreamt delight suddenly 
realized. :) At peak use, atop reports ~4% utilization and 3 MB/s 
writes. Much better performance than the old storage.

- 2nd SSD OSD (inline journal) was a Samsung 840 Pro 256GB, reused from 
the old software RAID. This has been a pain from the beginning; while it 
was up, rebalancing/degradation recovery was painfully slow. Finally, at 
peak use it wasn't up to the task, so we had to take it out of the 
cluster to keep the system doing usable work. We have checked it with 
Samsung's Magician: its 24.5 TBW is well below the 72 TBW warranty 
limit, but reported performance is bad even after a firmware update.

- We replaced the 2nd SSD OSD with a Crucial M550 256GB. This hasn't 
been too bad; rebalancing was much quicker than with the 840 Pro, and at 
peak use atop reports ~50% utilization and 3 MB/s writes, so we see it 
is up to the task.
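For anyone following along, the operations above (taking the failing OSD out of service, and double-checking the wear figure Magician reports) can be sketched with standard CLI tools. This is only a sketch: the OSD ID `osd.1` and the device path `/dev/sdX` are hypothetical placeholders, and it assumes a live Ceph cluster plus smartmontools.

```shell
# Sketch: remove a misbehaving OSD from service so Ceph rebalances its
# data onto the remaining OSDs, then read drive wear from SMART.
# Skips gracefully on machines without the ceph CLI.
command -v ceph >/dev/null 2>&1 || { echo "ceph CLI not found; skipping"; exit 0; }

OSD_ID=1                      # hypothetical OSD ID; use your failing OSD's ID
ceph osd out "${OSD_ID}"      # mark the OSD out; its PGs are remapped elsewhere
ceph osd tree                 # confirm it shows as "out" before stopping the daemon

# Wear check without Magician: on Samsung SSDs, SMART attribute 241
# (Total_LBAs_Written) times the 512-byte sector size gives bytes written.
# /dev/sdX is a placeholder for the SSD under test.
smartctl -A /dev/sdX | grep -i Total_LBAs_Written || true
```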

Now my thoughts and questions:
- If I look at the spec sheets, the 840 Pro and M550 should give me much 
better performance than the S3500. What's going on? What should I check 
in the specs for "real" performance? :-)
- What real-use experience do you have with these or other SSDs for Ceph?
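One spec-sheet pitfall I can offer: consumer SSDs tend to post high numbers for cached/asynchronous writes, while a Ceph journal issues small synchronous writes, where some consumer drives collapse. A quick way to measure sustained sync-write behaviour is a short fio run; this is only a sketch, the target filename is arbitrary, and fio must be installed:

```shell
# Sketch: 4k sequential synchronous writes, roughly the pattern an inline
# Ceph journal generates. Writes to a scratch file, not a raw device.
command -v fio >/dev/null 2>&1 || { echo "fio not found; skipping"; exit 0; }

fio --name=journal-sync-test \
    --filename=./journal-sync-test.bin \
    --size=64m --runtime=15 --time_based \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1           # O_DIRECT + O_SYNC: bypass caches

rm -f ./journal-sync-test.bin     # clean up the scratch file
```

Run it against a file on each SSD's filesystem and compare the reported write bandwidth and latency; drives that look similar on the spec sheet can differ by an order of magnitude here.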

Thanks for reading

Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
