Re: crush reweight

2013-02-20 Thread Sage Weil
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
> Hi,
> 
> I have a crush map (perhaps not practical, but good enough for a demo)
> applied to a two-host cluster (each host has two OSDs) to test "ceph osd
> crush reweight":
> 
> # begin crush map
> 
> # devices
> device 0 sdc-host0
> device 1 sdd-host0
> device 2 sdc-host1
> device 3 sdd-host1
> 
> # types
> type 0 device
> type 1 pool
> type 2 root
> 
> # buckets
> pool one {
> id -1
> alg straw
> hash 0  # rjenkins1
> item sdc-host0 weight 1.000
> item sdd-host0 weight 1.000
> item sdc-host1 weight 1.000
> item sdd-host1 weight 1.000
> }
> 
> pool two {
> id -2
> alg straw
> hash 0  # rjenkins1
> item sdc-host0 weight 1.000
> item sdd-host0 weight 1.000
> item sdc-host1 weight 1.000
> item sdd-host1 weight 1.000
> }
> 
> root root-for-one {
> id -3
> alg straw
> hash 0  # rjenkins1
> item one weight 4.000
> item two weight 4.000
> }
> 
> root root-for-two {
> id -4
> alg straw
> hash 0  # rjenkins1
> item one weight 4.000
> item two weight 4.000
> }
> 
> rule data {
> ruleset 0
> type replicated
> min_size 1
> max_size 4
> step take root-for-one
> step choose firstn 0 type pool
> step choose firstn 1 type device
> step emit
> }
> 
> rule metadata {
> ruleset 1
> type replicated
> min_size 1
> max_size 4
> step take root-for-one
> step choose firstn 0 type pool
> step choose firstn 1 type device
> step emit
> }
> 
> rule rbd {
> ruleset 2
> type replicated
> min_size 1
> max_size 4
> step take root-for-two
> step choose firstn 0 type pool
> step choose firstn 1 type device
> step emit
> }
> 
> 
> After the crush map is applied, the osd tree looks like this:
> 
> # id  weight  type name       up/down reweight
> -4  8   root root-for-two
> -1  4   pool one
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> -2  4   pool two
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> -3  8   root root-for-one
> -1  4   pool one
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up      1
> -2  4   pool two
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> 
> 
> Then I reweight osd.0 (device sdc-host0) in the crush map to 5 with:
> 
>  ceph osd crush reweight sdc-host0 5
> 
> The osd tree then shows the following weight changes:
> 
> # id  weight  type name       up/down reweight
> -4  8   root root-for-two
> -1  4   pool one
> 0   5   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> -2  4   pool two
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> -3  12  root root-for-one
> -1  8   pool one
> 0   5   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> -2  4   pool two
> 0   1   osd.0   up  1
> 1   1   osd.1   up  1
> 2   1   osd.2   up  1
> 3   1   osd.3   up  1
> 
> My question is: why did only pool one's weight change, but not pool two's?

Currently the reweight command (and most of the other commands) assumes 
there is only one instance of each item in the hierarchy, and operates 
only on the first instance it sees.
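
For what it's worth, one possible workaround (just a sketch, assuming the
usual crushtool round trip) is to edit the map by hand so that every
occurrence of the item gets the new weight, and then re-inject it:

 # export and decompile the current crush map
 ceph osd getcrushmap -o crush.bin
 crushtool -d crush.bin -o crush.txt
 # edit crush.txt so that "item sdc-host0 weight 5.000" appears in *both*
 # pool buckets, then recompile and inject the edited map
 crushtool -c crush.txt -o crush.new
 ceph osd setcrushmap -i crush.new

(The file names above are only placeholders.)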

What is your motivation for having the pools appear in two different 
trees?

sage

Difference between "ceph osd crush reweight" and "ceph osd reweight"

2013-04-10 Thread Drunkard Zhang
Now I'm building another ceph cluster with 0.60, but I'm still running
v0.55.1 in production. I used to set the 'ceph osd reweight' value to the
same number as 'ceph osd crush reweight', for example weight 3 for a 3 TB
hard disk, but that is impossible in v0.60: the value now ranges from 0 to 1.

So, what changed? My understanding is that in v0.60 'ceph osd crush
reweight' makes Ceph distribute data across OSDs according to their
weights, while 'ceph osd reweight' controls how quickly data migrates
within the cluster. Is that right?
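
For comparison, a minimal sketch of how I understand the two commands now
(osd.0 and the numbers are only examples):

 # CRUSH weight: an arbitrary relative weight, commonly the disk size in TB
 ceph osd crush reweight osd.0 3.0
 # OSD reweight: an override factor that must be between 0 and 1
 ceph osd reweight 0 0.8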

# id    weight  type name       up/down reweight
-1      92      root default
-3      92      rack unknownrack
-2      26      host c15
0       3       osd.0   up      1
1       3       osd.1   up      1
10      2       osd.10  up      1
2       3       osd.2   up      1
3       3       osd.3   up      1
4       2       osd.4   up      1
5       2       osd.5   up      1
6       2       osd.6   up      1
7       2       osd.7   up      1
8       2       osd.8   up      1
9       2       osd.9   up      1
-4      33      host c16
11      3       osd.11  up      1
12      3       osd.12  up      1
13      3       osd.13  up      1
14      3       osd.14  up      1
15      3       osd.15  up      1
16      3       osd.16  up      1
17      3       osd.17  up      1
18      3       osd.18  up      1
19      3       osd.19  up      1
20      3       osd.20  up      1
21      3       osd.21  up      1
-5      33      host c18
22      3       osd.22  up      1
23      3       osd.23  up      1
24      3       osd.24  up      1
25      3       osd.25  up      1
26      3       osd.26  up      1
27      3       osd.27  up      1
28      3       osd.28  up      1
29      3       osd.29  up      1
30      3       osd.30  up      1
31      3       osd.31  up      1
32      3       osd.32  up      1