[PVE-User] Ceph Journal Performance

Lindsay Mathieson lindsay.mathieson at gmail.com
Sun Nov 2 03:18:05 CET 2014


I have been doing a lot of testing with a three node / 2 OSD setup:
- 3TB WD Red drives (about 170 MB/s write)
- 2 * 1Gb Ethernet, bonded and dedicated to the network filesystem traffic
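For reference, raw figures like those can be sanity-checked with a quick dd on 
the data disk and an iperf run across the bond - the mount point and hostname 
below are only placeholders:

    # rough sequential write test on the OSD data disk
    dd if=/dev/zero of=/mnt/osd-test/ddfile bs=1M count=4096 oflag=direct

    # bonded link throughput between two nodes
    iperf -s                       # on one node
    iperf -c <other node> -P 2     # on the other, two parallel streams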

With GlusterFS, individual VMs were getting up to 70 MB/s write performance; 
tests directly on the gluster mount gave 170 MB/s, the drive maximum.

With Ceph (journal + OSD on the same disk) I was getting truly dreadful 
performance, ranging from 3 MB/s to 10 MB/s.
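Anyone wanting to compare can get a cluster-level write figure out of rados 
bench; a minimal sketch, assuming the VM images live in a pool named rbd:

    # 60 second write benchmark against the pool, keeping the objects for a read test
    rados bench -p rbd 60 write --no-cleanup
    # sequential read of the objects written above, then clean up
    rados bench -p rbd 60 seq
    rados -p rbd cleanup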

I also noted that while gluster would max out the network bond at around 170 
MB/s, Ceph would never get any better than 50 MB/s.

Obviously the first improvement that could be made is moving the journal to a 
separate SSD.
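As I understand it (untested here, so treat this as a sketch - osd.0 and the 
partition symlink target are placeholders), moving an existing journal looks 
roughly like this:

    # stop the OSD and flush its current journal
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal
    # point the journal at a partition on the SSD, then recreate it
    rm /var/lib/ceph/osd/ceph-0/journal
    ln -s /dev/disk/by-partuuid/<journal partition uuid> /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0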

Would it be OK to use spare space on the Proxmox boot/swap/system SSD, or is 
that a bad idea? :)
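If it is acceptable, I assume the way to do it would be a dedicated partition 
in the SSD's free space rather than a file-backed journal; something like this 
with sgdisk (device and partition number are placeholders):

    # carve a 10 GB journal partition out of the free space on the system SSD
    sgdisk --new=4:0:+10G --change-name=4:"ceph journal" /dev/sda
    partprobe /dev/sda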

thanks,

-- 
Lindsay