[PVE-User] Ceph cache tiering

Fabrizio Cuseo f.cuseo at panservice.it
Sat May 16 10:40:13 CEST 2015

Hello there.
I am testing Ceph performance with an SSD journal.

My setup is:

- 3 hosts, each with: 2 x quad-core Opteron, 64 GB RAM, 1 x Gigabit Ethernet (for Proxmox), 1 x 20 Gbit InfiniBand (for Ceph), 1 x PERC 6/i with 7 x WD 1 TB enterprise disks (128 MB cache; 1 for Proxmox, 6 for Ceph OSDs), and 1 x Samsung SSD 850 EVO (240 GB); they are configured as 8 separate virtual disks on the RAID controller.

I tested the first setup: each Ceph OSD with its journal on the single SSD.
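(For the record, I created the OSDs with the journal on the shared SSD roughly like this; device names are examples, not my actual layout:)

```shell
# One OSD per SATA virtual disk, journal placed on the shared SSD
pveceph createosd /dev/sdb -journal_dev /dev/sdh
pveceph createosd /dev/sdc -journal_dev /dev/sdh
# ... and so on for the remaining SATA disks on each node
```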

My first performance test (with only 1 VM) shows about 150 MB/s write.
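(For comparison, a cluster-level sequential write test can be run with rados bench against a scratch pool; the pool name here is just an example:)

```shell
# 60-second sequential write benchmark against a test pool
rados bench -p test 60 write --no-cleanup
# Read the same objects back, then remove them
rados bench -p test 60 seq
rados -p test cleanup
```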

Now, I would like to test a different ceph setup:

- 6 x OSDs (1 TB SATA disks), each with the journal on the same disk

- 1 x SSD per node, configured as an OSD in a separate CRUSH hierarchy, used for a cache tiering configuration.

My two questions are:
- has anyone tested this setup? Does it perform better than the first?
- is an option planned in the Proxmox GUI to create OSDs in a different pool, or will the only option (in the future too) be to manually modify the CRUSH map and manually create the SSD OSDs? It would be nice to have everything managed by the Proxmox GUI, both creation and monitoring.
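(In case it helps, the manual steps I expect I would need are roughly the following, using a separate CRUSH root for the SSD OSDs plus the standard cache tiering commands; bucket, rule, pool names, OSD ids, and sizes are placeholders, untested:)

```shell
# Separate CRUSH hierarchy for the SSD OSDs
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd-root
# Repeat for node2/node3, then place each SSD OSD under its SSD host bucket
ceph osd crush set osd.18 1.0 host=node1-ssd

# Rule that keeps replicas on the SSD root only, one per host
ceph osd crush rule create-simple ssd-rule ssd-root host

# Cache pool on the SSDs, layered over the existing HDD-backed pool
ceph osd pool create cache-pool 128 128 replicated ssd-rule
ceph osd tier add rbd-pool cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay rbd-pool cache-pool
ceph osd pool set cache-pool hit_set_type bloom
# Cap the cache well below raw SSD capacity to leave headroom
ceph osd pool set cache-pool target_max_bytes 200000000000
```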

Regards, Fabrizio

Fabrizio Cuseo - mailto:f.cuseo at panservice.it
Direzione Generale - Panservice InterNetWorking
Professional services for Internet and networking
Panservice is a member of AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:info at panservice.it
Numero verde nazionale: 800 901492
