[pve-devel] SSD only Ceph cluster
VELARTIS Philipp Dürhammer
p.duerhammer at velartis.at
Mon Sep 1 16:47:59 CEST 2014
Yes, but only with hacks like turning off the CRUSH update on start, etc.
http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement
It looks like they will improve it.
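The workaround mentioned above is a ceph.conf option; a minimal sketch of how it is usually set (the option name is a real Ceph setting, the placement under [osd] is the usual convention):

```
# ceph.conf on every OSD host: stop OSDs from re-registering themselves
# under the default host bucket on restart, which would otherwise undo
# a hand-edited CRUSH map.
[osd]
osd crush update on start = false
```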
-----Original Message-----
From: Alexandre DERUMIER [mailto:aderumier at odiso.com]
Sent: Monday, 1 September 2014 16:43
To: VELARTIS Philipp Dürhammer
Cc: pve-devel at pve.proxmox.com; Dietmar Maurer
Subject: Re: AW: [pve-devel] SSD only Ceph cluster
>>Yes :-) and the next release will fully support having different roots
>>on hosts, for example one for SSDs and one for spinners (which is
>>possible right now, but not very usable). For me it is a lot better to
>>have a big pool with spinners and a separate pool with fast SSDs,
>>without the need to have at least 6 or more OSD servers.
I think it's already possible by editing the crushmap manually.
see:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
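For reference, a rough sketch of what such a hand-edited CRUSH map looks like, following the approach in the linked post (decompile with crushtool, edit, recompile); the bucket and host names here are placeholders, not from the post:

```
# Separate roots for spinners and SSDs; node1-platter and node1-ssd are
# hypothetical host buckets, one per device type on the same box.
root platter {
    id -1
    alg straw
    hash 0
    item node1-platter weight 2.000
}
root ssd {
    id -5
    alg straw
    hash 0
    item node1-ssd weight 1.000
}
# Rule that places replicas only under the ssd root.
rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
```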
----- Original Mail -----
From: "VELARTIS Philipp Dürhammer" <p.duerhammer at velartis.at>
To: "Alexandre DERUMIER" <aderumier at odiso.com>, "Dietmar Maurer" <dietmar at proxmox.com>
Cc: pve-devel at pve.proxmox.com
Sent: Monday, 1 September 2014 16:33:17
Subject: AW: [pve-devel] SSD only Ceph cluster
Yes :-) and the next release will fully support having different roots on hosts, for example one for SSDs and one for spinners (which is possible right now, but not very usable).
For me it is a lot better to have a big pool with spinners and a separate pool with fast SSDs... without the need to have at least 6 or more OSD servers.
-----Original Message-----
From: pve-devel [mailto:pve-devel-bounces at pve.proxmox.com] On behalf of Alexandre DERUMIER
Sent: Saturday, 30 August 2014 17:58
To: Dietmar Maurer
Cc: pve-devel at pve.proxmox.com
Subject: Re: [pve-devel] SSD only Ceph cluster
>>So this is a perfect fit, considering the current ceph limitations?
Yes, sure!
I also know that Firefly has a limitation in the read memory cache, around 25000 IOPS per node.
It seems that master Ceph git has resolved that too :)
Can't wait for Giant release :)
----- Original Mail -----
From: "Dietmar Maurer" <dietmar at proxmox.com>
To: "Alexandre DERUMIER" <aderumier at odiso.com>
Cc: "Michael Rasmussen" <mir at datanom.net>, pve-devel at pve.proxmox.com
Sent: Saturday, 30 August 2014 17:06:23
Subject: RE: [pve-devel] SSD only Ceph cluster
> >>The Crucial MX100 provides 90k/85k IOPS. Those numbers are from the
> >>specs, so I am not sure if you can get that in reality?
>
> No, I think you can reach 90K, maybe for a few seconds while they are
> empty ;)
>
> check graph here:
> http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3
>
> It's more like 7000 IOPS
So this is a perfect fit, considering the current ceph limitations?