[PVE-User] Ceph Journal Performance
Lindsay Mathieson
lindsay.mathieson at gmail.com
Sun Nov 9 23:32:34 CET 2014
On Wed, 5 Nov 2014 05:34:04 PM Eneko Lacunza wrote:
> > Overall, I seemed to get similar I/O to what I was getting with
> > gluster once I implemented an SSD cache for it (EXT4 with an SSD
> > journal). However, ceph seemed to cope better with high loads: in one
> > of my stress tests - starting 7 VMs simultaneously - gluster seemed to
> > fail, with some of the VMs reporting I/O errors and crashing.
> >
> > Whereas with ceph, they were very slow but all started normally.
> >
>
> Thanks for sharing. I haven't used glusterfs, but knowing about those
> I/O errors is interesting.
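For anyone wondering about the SSD journal setup referred to above, an
external ext4 journal is roughly set up like this (a sketch only - the
device names are placeholders, not my actual disks):

    # Create a dedicated external journal on the SSD partition
    # (block size must match the data filesystem, hence -b 4096)
    mke2fs -b 4096 -O journal_dev /dev/ssd1
    # Format the data partition, pointing its journal at the SSD
    mkfs.ext4 -b 4096 -J device=/dev/ssd1 /dev/spinner1
    # Or retrofit an existing (unmounted) filesystem:
    tune2fs -O ^has_journal /dev/spinner1
    tune2fs -J device=/dev/ssd1 /dev/spinner1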
More feedback - after testing ceph a lot, and with feedback from the ceph
user list, I concluded that my use case was not a good fit for ceph. Too
small, basically - with only two OSDs, performance suffered and the
cluster was complicated to manage.
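Just for context, the cluster layout is easy to check with the standard
ceph commands ('rbd' below is only the default pool name, substitute
your own):

    # How many OSDs there are and how they're laid out
    ceph osd tree
    # Overall health and capacity
    ceph -s
    # Replica count - with size 2 and only two OSDs, every
    # write has to land on both disks
    ceph osd pool get rbd size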
I revisited gluster, this time formatting the filesystem per the
recommendations (which were difficult to find). This seems to have
resolved the I/O problems - I haven't been able to recreate them no
matter what the load.
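For anyone else hunting for those recommendations: the gluster docs
suggest XFS with 512-byte inodes for bricks, something along these
lines (device and mount point are examples only):

    # Larger inodes leave room for gluster's extended attributes
    mkfs.xfs -f -i size=512 /dev/sdb1
    # inode64 and noatime are the usual mount options for bricks
    mount -o inode64,noatime /dev/sdb1 /export/brick1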
--
Lindsay