<div dir="ltr"><div><div><div><div>You definitely need multiple disks for ceph - I initially tried a 2 node/2 disk ceph setup; it worked, but write performance was dreadful. I'm now up to 1 SSD (ceph journal) & 3 disks per node and it's reached "acceptable" performance.<br><br></div>Some other considerations:<br></div><br>- Live replication/availability. A ceph node can be taken down for upgrades/maintenance or replaced, and the system keeps chugging along without data loss.<br><br></div>- Administration. I feel that ZFS is a lot easier in this regard - trivially easy to add disks, caching, etc. Ceph requires far more technical knowledge.<br><br></div>- Backup. ZFS snapshot copy is pretty easy, but requires a 2nd ZFS server.<br><div><div><br></div><div>- Overhead. If your PVE nodes are also your ceph/zfs servers, then that will impact performance. I worry that ceph has added considerable overhead to my PVE nodes, but it's hard to assess.<br><br></div><div>- Snapshots. For me, ceph rbd snapshot restore is unacceptably slow. Dunno what the ZFS ones are like.<br><br></div><div>For a small business setup I'd tend to recommend ZFS and/or a NAS. Ceph is a lot of work to set up and keep running. I'm doing it here with our 3 node/30 VM setup, but I sometimes wish I'd stuck with a NAS. OTOH we really wanted storage redundancy.<br></div><div><br></div><div><div><div><br><div><div class="gmail_extra">-- <br><div class="gmail_signature">Lindsay</div>
</div></div></div></div></div></div></div>