[PVE-User] Ceph Cache Tiering
Adam Thompson
athompso at athompso.net
Mon Oct 10 16:29:53 CEST 2016
The default PVE setup puts an XFS filesystem onto each "full disk" assigned to CEPH. CEPH does *not* write directly to raw devices, so the choice of filesystem is largely irrelevant.
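For the record, the default setup amounts to roughly the following on a 2016-era PVE 4.x node (the disk paths here are hypothetical, and exact option names may vary by version):

    # Create a FileStore OSD; pveceph formats the whole disk with XFS by default
    pveceph createosd /dev/sdb

    # Optionally place the journal on a faster (e.g. SSD) device
    pveceph createosd /dev/sdb --journal_dev /dev/sdc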
Granted, ZFS is a "heavier" filesystem than XFS, but running CEPH on ZFS is no better or worse than running CEPH on XFS on hardware RAID, which I've done elsewhere.
CEPH gives you the ability to not need software or hardware RAID.
ZFS gives you the ability to not need hardware RAID.
Layering them - assuming you have enough memory and CPU cycles - can be very beneficial.
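If you want to try that layering, here is a rough sketch of one way to do it (pool and dataset names are hypothetical, and this is not the supported PVE workflow; directory-backed FileStore OSDs went through ceph-disk rather than pveceph at the time). Note xattr=sa, since FileStore leans heavily on extended attributes:

    # One pool across three disks, one dataset per OSD
    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
    zfs create -o xattr=sa -o mountpoint=/var/lib/ceph/osd/ceph-0 tank/osd0

    # Prepare and activate a directory-backed OSD
    ceph-disk prepare /var/lib/ceph/osd/ceph-0
    ceph-disk activate /var/lib/ceph/osd/ceph-0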
Neither CEPH nor XFS does deduplication or compression; ZFS does both. Depending on what kind of CPU you have, turning on compression can dramatically *speed up* I/O, because fewer bytes have to reach the disks. Depending on how much RAM you have, turning on deduplication can dramatically decrease the disk space used.
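Both knobs are one-liners on the ZFS side. A quick sketch against a hypothetical tank/ceph dataset (the dedup table has to live in RAM/ARC, hence the memory caveat):

    # lz4 is cheap enough on CPU that it often *improves* throughput
    zfs set compression=lz4 tank/ceph

    # Dedup trades RAM for space; budget several GB of RAM per TB of unique data
    zfs set dedup=on tank/ceph

    # Verify what you're actually getting back
    zfs get compressratio tank/ceph
    zpool list -o name,size,allocated,dedupratio tank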
Although, TBH, at that point I'd just do what I have running in production right now: a reasonably powerful SPARC64 NFS fileserver, with QCOW2 files served over NFS. It performs better than CEPH did on the same 1Gbps infrastructure.
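Wiring a setup like that into PVE is a one-liner as well; the storage ID, server address and export path below are all made up:

    # Register the NFS export as a PVE storage for disk images
    pvesm add nfs bigsparc --server 10.0.0.5 --export /export/vmstore --content images

    # PVE mounts it under /mnt/pve/<storeid>; the QCOW2 images live there
    qemu-img info /mnt/pve/bigsparc/images/100/vm-100-disk-1.qcow2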
-Adam
> -----Original Message-----
> From: pve-user [mailto:pve-user-bounces at pve.proxmox.com] On
> Behalf Of Lindsay Mathieson
> Sent: October 10, 2016 09:21
> To: pve-user at pve.proxmox.com
> Subject: Re: [PVE-User] Ceph Cache Tiering
>
> On 10/10/2016 10:22 PM, Eneko Lacunza wrote:
> > But this is nonsense, ZFS-backed Ceph?! You're supposed to give full
> > disks to Ceph, so that performance increases as you add more disks.
>
> I've tried it both ways; the performance is much the same. ZFS also
> increases in performance the more disks you throw at it, which is
> passed on to Ceph.
>
>
> +Compression
>
> +Auto Bit rot detection and repair
>
> +A lot of flexibility
>
> --
> Lindsay Mathieson
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user