[PVE-User] Ceph Cache Tiering
athompso at athompso.net
Wed Oct 12 05:28:33 CEST 2016
Not a bloody chance... writeback is the only cache mode that gives both acceptable performance characteristics and data guarantees. (The NFS file server running ZFS is running in sync=disabled mode, but it also has dual power supplies connected to dual UPSes, and I'm willing to take the chance of a complete system failure.)
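For reference, a minimal sketch of the two settings being discussed, assuming a VM with ID 100 and a ZFS dataset named tank/nfs (both hypothetical names, not from the thread):

```shell
# Set the QCOW2 disk's cache mode to writeback on the Proxmox host.
# "scsi0" and the volume spec are illustrative; match them to the
# actual disk entry shown by "qm config 100".
qm set 100 --scsi0 nfs-store:100/vm-100-disk-1.qcow2,cache=writeback

# On the ZFS-backed NFS server: disable synchronous writes for the
# exported dataset. This trades data safety on power loss for speed,
# which is why the poster pairs it with dual PSUs and dual UPSes.
zfs set sync=disabled tank/nfs

# Verify the current value.
zfs get sync tank/nfs
```

These are host-configuration commands, shown only to make the trade-off concrete; sync=disabled means ZFS acknowledges writes before they reach stable storage.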
> -----Original Message-----
> From: pve-user [mailto:pve-user-bounces at pve.proxmox.com] On
> Behalf Of Emmanuel Kasper
> Sent: October 11, 2016 05:26
> To: PVE User List <pve-user at pve.proxmox.com>
> Subject: Re: [PVE-User] Ceph Cache Tiering
> On 10/10/2016 04:29 PM, Adam Thompson wrote:
> > The default PVE setup puts an XFS filesystem onto each "full disk"
> > assigned to CEPH. CEPH does **not** write directly to raw devices, so
> > the choice of filesystem is largely irrelevant.
> > Granted, ZFS is a "heavier" filesystem than XFS, but it's no better or
> > worse than running CEPH on XFS on hardware RAID, which I've done.
> > CEPH gives you the ability to not need software or hardware RAID.
> > ZFS gives you the ability to not need hardware RAID.
> > Layering them - assuming you have enough memory and CPU cycles -
> > can be very beneficial.
> > Neither CEPH nor XFS does deduplication or compression, which ZFS
> > does. Depending on what kind of CPU you have, turning on compression
> > can dramatically *speed up* I/O. Depending on how much RAM you
> > have, turning on deduplication can dramatically decrease disk space usage.
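The ZFS tunables mentioned above are per-dataset properties. A hedged sketch, using a hypothetical dataset name tank/vmstore:

```shell
# Enable LZ4 compression (cheap on CPU, often a net I/O speedup
# because less data hits the disks).
zfs set compression=lz4 tank/vmstore

# Check how well it is paying off after some data has been written.
zfs get compressratio tank/vmstore

# Deduplication is RAM-hungry: the dedup table must largely fit in
# memory (ARC) or performance collapses. Enable only with plenty of RAM.
zfs set dedup=on tank/vmstore
```

These are administrative commands against a live pool, shown to illustrate the trade-off the poster describes, not a recommendation to enable dedup blindly.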
> > Although, TBH, at that point I'd just do what I have running in
> > production right now: a reasonably powerful SPARC64 NFS fileserver,
> > and run QCOW2 files over NFS. Performs better than CEPH did on 1Gbps.
> > -Adam
> Out of curiosity, I suppose you're using the default 'NoCache' as the
> cache mode of those QCOW2 images?