[ceph-users] Re: Remapping OSDs under a PG

2021-05-28 Thread Jeremy Hansen
So I did this:

ceph osd crush rule create-replicated hdd-rule default rack hdd

[ceph: root@cn01 ceph]# ceph osd crush rule ls
replicated_rule
hdd-rule
ssd-rule

[ceph: root@cn01 ceph]# ceph osd crush rule dump hdd-rule
{
"rule_id": 1,
"rule_name": "hdd-rule",
"ruleset": 1,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -2,
"item_name": "default~hdd"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "rack"
},
{
"op": "emit"
}
]
}
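
(For reference: the "default~hdd" item_name in the dump above is the per-device-class shadow root that CRUSH generates automatically. A quick way to inspect those shadow buckets, assuming a Luminous-or-newer release:)

ceph osd crush tree --show-shadow   # lists the class-specific shadow hierarchy, e.g. default~hdd, default~ssd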


Then this:

ceph osd pool set device_health_metrics crush_rule hdd-rule
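
(A minimal check that the pool actually picked up the new rule:)

ceph osd pool get device_health_metrics crush_rule   # should now report: crush_rule: hdd-rule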

How do I prove that my device_health_metrics pool is no longer using any SSDs?

ceph pg ls
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION  REPORTED  UP             ACTING         SCRUB_STAMP                  DEEP_SCRUB_STAMP
1.0   41       0         0          0        0      0            0           71   active+clean  22h    205'71   253:484   [28,33,10]p28  [28,33,10]p28  2021-05-27T14:44:37.466384+  2021-05-26T04:23:11.758060+
2.0   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [9,5,26]p9     [9,5,26]p9     2021-05-28T00:46:34.470208+  2021-05-28T00:46:15.122042+
2.1   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [34,0,13]p34   [34,0,13]p34   2021-05-28T00:46:41.578301+  2021-05-28T00:46:15.122042+
2.2   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [30,25,5]p30   [30,25,5]p30   2021-05-28T00:46:41.394685+  2021-05-28T00:46:15.122042+
2.3   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [14,35,32]p14  [14,35,32]p14  2021-05-28T00:46:40.545088+  2021-05-28T00:46:15.122042+
2.4   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [27,28,7]p27   [27,28,7]p27   2021-05-28T00:46:41.208159+  2021-05-28T00:46:15.122042+
2.5   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [8,4,35]p8     [8,4,35]p8     2021-05-28T00:46:39.845197+  2021-05-28T00:46:15.122042+
2.6   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [31,26,6]p31   [31,26,6]p31   2021-05-28T00:46:45.808430+  2021-05-28T00:46:15.122042+
2.7   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [12,7,19]p12   [12,7,19]p12   2021-05-28T00:46:39.313525+  2021-05-28T00:46:15.122042+
2.8   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [20,21,11]p20  [20,21,11]p20  2021-05-28T00:46:38.840636+  2021-05-28T00:46:15.122042+
2.9   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [31,14,10]p31  [31,14,10]p31  2021-05-28T00:46:46.791644+  2021-05-28T00:46:15.122042+
2.a   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [16,27,35]p16  [16,27,35]p16  2021-05-28T00:46:39.025320+  2021-05-28T00:46:15.122042+
2.b   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [20,15,11]p20  [20,15,11]p20  2021-05-28T00:46:42.841924+  2021-05-28T00:46:15.122042+
2.c   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [32,11,0]p32   [32,11,0]p32   2021-05-28T00:46:38.403701+  2021-05-28T00:46:15.122042+
2.d   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [5,19,3]p5     [5,19,3]p5     2021-05-28T00:46:39.808986+  2021-05-28T00:46:15.122042+
2.e   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [27,13,17]p27  [27,13,17]p27  2021-05-28T00:46:42.253293+  2021-05-28T00:46:15.122042+
2.f   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [11,22,18]p11  [11,22,18]p11  2021-05-28T00:46:38.721405+  2021-05-28T00:46:15.122042+
2.10  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [10,17,7]p10   [10,17,7]p10   2021-05-28T00:46:38.770867+  2021-05-28T00:46:15.122042+
2.11  0        0         0          0        … (remainder of the listing truncated in the original message)
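
(One way to answer the question above: compare the OSD ids in the UP/ACTING sets of the pool's PGs against the OSDs that carry the ssd device class. A sketch, assuming device_health_metrics is still pool 1 as the `ceph pg map 1.0` output further down the thread suggests:)

ceph osd crush class ls-osd ssd                # OSD ids that have the ssd device class
ceph pg ls-by-pool device_health_metrics       # the pool's PGs with their UP/ACTING sets
# or, per SSD OSD, confirm it holds no PG from pool 1:
for osd in $(ceph osd crush class ls-osd ssd); do
  ceph pg ls-by-osd osd.${osd} | grep -c '^1\.'   # expect 0 on every ssd OSD
done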

[ceph-users] Re: Remapping OSDs under a PG

2021-05-28 Thread Jeremy Hansen
I’m continuing to read, and it’s becoming clearer.

The CRUSH map seems pretty amazing!

-jeremy

> On May 28, 2021, at 1:10 AM, Jeremy Hansen  wrote:
> 
> Thank you both for your response.  So this leads me to the next question:
> 
> ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
> 
> What is <root> and <failure-domain> in this case?
> 
> It also looks like this is responsible for things like “rack awareness” type
> attributes, which is something I’d like to utilize:
> 
> # types
> type 0 osd
> type 1 host
> type 2 chassis
> type 3 rack
> type 4 row
> type 5 pdu
> type 6 pod
> type 7 room
> type 8 datacenter
> type 9 zone
> type 10 region
> type 11 root
> This is something I will eventually take advantage of as well.
> 
> Thank you!
> -jeremy
> 
> 
>> On May 28, 2021, at 12:03 AM, Janne Johansson  wrote:
>> 
>> Create a crush rule that only chooses non-ssd drives, then
>> ceph osd pool set <pool> crush_rule YourNewRuleName
>> and it will move over to the non-ssd OSDs.
>> 
>> On Fri, May 28, 2021 at 02:18, Jeremy Hansen wrote:
>>> 
>>> 
>>> I’m very new to Ceph so if this question makes no sense, I apologize.  
>>> Continuing to study but I thought an answer to this question would help me 
>>> understand Ceph a bit more.
>>> 
>>> Using cephadm, I set up a cluster.  Cephadm automatically creates a pool 
>>> for Ceph metrics.  It looks like one of my SSD OSDs was allocated for the 
>>> PG.  I’d like to understand how to remap this PG so it’s not using the SSD 
>>> OSDs.
>>> 
>>> ceph pg map 1.0
>>> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>>> 
>>> OSD 28 is the SSD.
>>> 
>>> Is this possible?  Does this make any sense?  I’d like to reserve the SSDs 
>>> for their own pool.
>>> 
>>> Thank you!
>>> -jeremy
>> 
>> 
>> 
>> -- 
>> May the most significant bit of your life be positive.
> 


[ceph-users] Re: Remapping OSDs under a PG

2021-05-28 Thread Jeremy Hansen
Thank you both for your response.  So this leads me to the next question:

ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>

What is <root> and <failure-domain> in this case?

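(A sketch of how the arguments line up, using the hdd-rule command from the top of this thread as the worked example; the bracketed placeholders are illustrative:)

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type> <device-class>
# the command shown earlier in the thread maps onto that as:
ceph osd crush rule create-replicated hdd-rule default rack hdd
#   root           = default  (the CRUSH hierarchy root to draw OSDs from)
#   failure domain = rack     (each replica goes under a different bucket of this type)
#   device class   = hdd      (only OSDs carrying this class are eligible)
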
It also looks like this is responsible for things like “rack awareness” type
attributes, which is something I’d like to utilize:

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root
This is something I will eventually take advantage of as well.

Thank you!
-jeremy


> On May 28, 2021, at 12:03 AM, Janne Johansson  wrote:
> 
> Create a crush rule that only chooses non-ssd drives, then
> ceph osd pool set <pool> crush_rule YourNewRuleName
> and it will move over to the non-ssd OSDs.
> 
> On Fri, May 28, 2021 at 02:18, Jeremy Hansen wrote:
>> 
>> 
>> I’m very new to Ceph so if this question makes no sense, I apologize.  
>> Continuing to study but I thought an answer to this question would help me 
>> understand Ceph a bit more.
>> 
>> Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for 
>> Ceph metrics.  It looks like one of my SSD OSDs was allocated for the PG.  
>> I’d like to understand how to remap this PG so it’s not using the SSD OSDs.
>> 
>> ceph pg map 1.0
>> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>> 
>> OSD 28 is the SSD.
>> 
>> Is this possible?  Does this make any sense?  I’d like to reserve the SSDs 
>> for their own pool.
>> 
>> Thank you!
>> -jeremy
> 
> 
> 
> --
> May the most significant bit of your life be positive.





[ceph-users] Re: Remapping OSDs under a PG

2021-05-28 Thread Janne Johansson
Create a crush rule that only chooses non-ssd drives, then
ceph osd pool set <pool> crush_rule YourNewRuleName
and it will move over to the non-ssd OSDs.
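
(Once the rule is switched, a sketch of how to watch the data move; the pool name is a placeholder, as above:)

ceph pg ls-by-pool <pool>   # the UP/ACTING sets should now list only non-ssd OSDs
ceph -s                     # shows the remapped/backfilling PGs until recovery finishes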

On Fri, May 28, 2021 at 02:18, Jeremy Hansen wrote:
>
>
> I’m very new to Ceph so if this question makes no sense, I apologize.  
> Continuing to study but I thought an answer to this question would help me 
> understand Ceph a bit more.
>
> Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for 
> Ceph metrics.  It looks like one of my SSD OSDs was allocated for the PG.  
> I’d like to understand how to remap this PG so it’s not using the SSD OSDs.
>
> ceph pg map 1.0
> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>
> OSD 28 is the SSD.
>
> Is this possible?  Does this make any sense?  I’d like to reserve the SSDs 
> for their own pool.
>
> Thank you!
> -jeremy



-- 
May the most significant bit of your life be positive.


[ceph-users] Re: Remapping OSDs under a PG

2021-05-27 Thread 胡 玮文

On May 28, 2021, at 08:18, Jeremy Hansen wrote:


I’m very new to Ceph so if this question makes no sense, I apologize.  
Continuing to study but I thought an answer to this question would help me 
understand Ceph a bit more.

Using cephadm, I set up a cluster.  Cephadm automatically creates a pool for 
Ceph metrics.  It looks like one of my SSD OSDs was allocated for the PG.  I’d 
like to understand how to remap this PG so it’s not using the SSD OSDs.

ceph pg map 1.0
osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]

OSD 28 is the SSD.
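
(A quick way to double-check which device class an OSD carries, as a sketch:)

ceph osd crush get-device-class osd.28   # prints the class of that OSD, e.g. ssd
ceph osd df tree                         # per-OSD class, utilisation and PG count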

Is this possible?  Does this make any sense?  I’d like to reserve the SSDs for 
their own pool.

Yes, you can refer to the docs [1]. You need to create a new CRUSH rule with the 
HDD device class, and assign this new rule to that pool.

[1]: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
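
(A minimal sketch of those two steps; the rule name is illustrative, and host is used here as the failure domain, while the rest of the thread went with rack:)

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set device_health_metrics crush_rule replicated_hdd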

Weiwen Hu

Thank you!
-jeremy
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io