[PVE-User] Create secondary pool on ceph servers..
Gilberto Nunes
gilberto.nunes32 at gmail.com
Tue Apr 14 20:57:05 CEST 2020
Oh! Sorry, Alwin.
This is somewhat urgent for me,
so here is what I did...
First, I added all the disks, both SAS and SSD, as OSDs.
Then I checked whether the system detected the SSDs as ssd and the SAS
disks as hdd, but there was no difference: everything showed up as hdd!
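In case it helps, this is roughly how the detected classes can be
inspected (just an illustration, the CLASS column is what matters):

ceph osd crush class ls   # list the device classes present in the CRUSH map
ceph osd tree             # the CLASS column shows hdd/ssd per OSD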
So I changed the class with these commands:
ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class ssd osd.7
ceph osd crush rm-device-class osd.8
ceph osd crush set-device-class ssd osd.8
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
ceph osd crush rm-device-class osd.13
ceph osd crush set-device-class ssd osd.13
ceph osd crush rm-device-class osd.14
ceph osd crush set-device-class ssd osd.14
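(The same change can be scripted in one shot; here is a minimal sketch
with my OSD IDs hard-coded, assuming a bash shell:)

for id in 7 8 12 13 14; do
  ceph osd crush rm-device-class osd.$id        # drop the auto-detected class
  ceph osd crush set-device-class ssd osd.$id   # pin the ssd class
done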
After that, ceph osd crush tree --show-shadow showed me the different
device classes:
ceph osd crush tree --show-shadow
ID CLASS WEIGHT TYPE NAME
-24 ssd 4.36394 root default~ssd
-20 ssd 0 host pve1~ssd
-21 ssd 0 host pve2~ssd
-17 ssd 0.87279 host pve3~ssd
7 ssd 0.87279 osd.7
-18 ssd 0.87279 host pve4~ssd
8 ssd 0.87279 osd.8
-19 ssd 0.87279 host pve5~ssd
12 ssd 0.87279 osd.12
-22 ssd 0.87279 host pve6~ssd
13 ssd 0.87279 osd.13
-23 ssd 0.87279 host pve7~ssd
14 ssd 0.87279 osd.14
-2 hdd 12.00282 root default~hdd
-10 hdd 1.09129 host pve1~hdd
0 hdd 1.09129 osd.0
.....
.....
Then I created the rule:
ceph osd crush rule create-replicated SSDPOOL default host ssd
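To double-check that the rule really targets the ssd class, it can be
dumped; the output should reference the shadow root for that class
(e.g. default~ssd):

ceph osd crush rule dump SSDPOOL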
Then I created a pool named SSDs
and assigned the new rule to it:
ceph osd pool set SSDs crush_rule SSDPOOL
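For completeness, the pool itself was created just before that step; a
minimal sketch (the PG count of 128 is only an example, size the PGs
for your own cluster):

ceph osd pool create SSDs 128 128   # pg_num/pgp_num are illustrative
ceph osd pool get SSDs crush_rule   # should now print: crush_rule: SSDPOOL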
It seems to work properly...
What do you think?
---
Gilberto Nunes Ferreira
On Tue, Apr 14, 2020 at 15:30, Alwin Antreich <a.antreich at proxmox.com>
wrote:
> On Tue, Apr 14, 2020 at 02:35:55PM -0300, Gilberto Nunes wrote:
> > Hi there
> >
> > I have 7 servers with PVE 6, all updated...
> > The servers are named pve1, pve2 and so on...
> > pve3, pve4 and pve5 have 960GB SSDs.
> > So we decided to create a second pool that will use only these SSDs.
> > I have read about Ceph CRUSH & device classes in order to do that!
> > So, just to do things right, I need to check this:
> > 1 - first, create OSDs with all disks, SAS and SSD
> > 2 - second, create a different pool with the commands below:
> > ruleset:
> >
> > ceph osd crush rule create-replicated <rule-name> <root>
> > <failure-domain> <class>
> >
> > create pool
> >
> > ceph osd pool set <pool-name> crush_rule <rule-name>
> >
> >
> > Well, my question is: can I create OSDs with all disks, both SAS and
> > SSD, and then after that create the ruleset and the pool?
> > Will these operations cause any impact??
> If your OSD types aren't mixed, then it's best to create the rule for
> the existing pool first. All data will move once the rule is applied.
> So, there won't be much movement if the data is already on the correct
> OSD type.
>
> --
> Cheers,
> Alwin