Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
 

This is actually not too nice, because this remapping is now causing a 
nearfull warning.
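
A possible stop-gap, assuming a Luminous or newer cluster (the values 
below are only illustrative), is to throttle the backfill and, if really 
needed, temporarily raise the nearfull threshold:

ceph osd set norebalance                            # pause further rebalancing
ceph tell osd.* injectargs '--osd-max-backfills 1'  # slow the backfill down
ceph osd set-nearfull-ratio 0.90                    # warning default is 0.85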




-Original Message-
From: Dan van der Ster [mailto:d...@vanderster.com] 
Sent: Wednesday, 13 June 2018 14:02
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd 
update necessary?

See this thread:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html

(Wido -- should we kill the ceph-large list??)


On Wed, Jun 13, 2018 at 1:14 PM Marc Roos  wrote:
>
>
> I wonder if this is not a bug. Adding the class hdd to an all-hdd 
> cluster should not result in 60% of the objects being moved around.
>
>
> pool fs_data.ec21 id 53
>   3866523/6247464 objects misplaced (61.889%)
>   recovery io 93089 kB/s, 22 objects/s
>
>
>
>
>
> -Original Message-
> From: Marc Roos
Sent: Wednesday, 13 June 2018 7:14
> To: ceph-users; k0ste
> Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class 
> hdd update necessary?
>
> I just added here 'class hdd'
>
> rule fs_data.ec21 {
> id 4
> type erasure
> min_size 3
> max_size 3
> step set_chooseleaf_tries 5
> step set_choose_tries 100
> step take default class hdd
> step choose indep 0 type osd
> step emit
> }
>
>
> -Original Message-
> From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday, 13 June 2018 12:30
> To: Marc Roos; ceph-users
> Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's 
> to hdd cluster, crush map class hdd update necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything starts backfilling (10%). 
> > How is this possible? I only have hdd's.
>
> This is normal when you change your crush and placement rules.
> Post your output, I will take a look
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
>
>
>
> k
>
>




Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
 

Yes thanks, I know. I will change it when I get an extra node.



-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io] 
Sent: Wednesday, 13 June 2018 16:33
To: Marc Roos
Cc: ceph-users; k0ste
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd 
update necessary?


2018-06-13 7:13 GMT+02:00 Marc Roos :


I just added here 'class hdd'

rule fs_data.ec21 {
id 4
type erasure
min_size 3
max_size 3
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default class hdd
step choose indep 0 type osd
step emit
}



somewhat off-topic, but: 2/1 erasure coding is usually a bad idea for 
the same reasons that size = 2 replicated pools are a bad idea.



Paul

 



-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Wednesday, 13 June 2018 12:30
To: Marc Roos; ceph-users
Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add 
ssd's to 
hdd cluster, crush map class hdd update necessary?

On 06/13/2018 12:06 PM, Marc Roos wrote:
> Shit, I added this class and now everything starts backfilling (10%).
> How is this possible? I only have hdd's.

This is normal when you change your crush and placement rules.
Post your output, I will take a look

ceph osd crush tree
ceph osd crush dump
ceph osd pool ls detail






k







-- 

Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90





Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Paul Emmerich
2018-06-13 7:13 GMT+02:00 Marc Roos :

> I just added here 'class hdd'
>
> rule fs_data.ec21 {
> id 4
> type erasure
> min_size 3
> max_size 3
> step set_chooseleaf_tries 5
> step set_choose_tries 100
> step take default class hdd
> step choose indep 0 type osd
> step emit
> }
>

somewhat off-topic, but: 2/1 erasure coding is usually a bad idea for the
same reasons that size = 2 replicated pools are a bad idea.
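
If an extra node is available, a profile with a second coding chunk avoids
that single-failure window. A rough sketch (profile, pool name and pg counts
are only placeholders; k=2/m=2 needs at least four failure domains):

ceph osd erasure-code-profile set ec22_hdd k=2 m=2 crush-failure-domain=host crush-device-class=hdd
ceph osd pool create fs_data.ec22 64 64 erasure ec22_hdd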


Paul


>
>
> -Original Message-
> From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday, 13 June 2018 12:30
> To: Marc Roos; ceph-users
> Subject: *****SPAM* Re: *****SPAM***** Re: [ceph-users] Add ssd's to
> hdd cluster, crush map class hdd update necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything starts backfilling (10%).
> > How is this possible? I only have hdd's.
>
> This is normal when you change your crush and placement rules.
> Post your output, I will take a look
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
>
>
>
> k
>
>



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Dan van der Ster
See this thread:

http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html

(Wido -- should we kill the ceph-large list??)
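
In short: once the rule says "step take default class hdd", CRUSH walks the
per-class shadow hierarchy instead of the original buckets. Those shadow
buckets (named like "default~hdd") get their own ids, and the ids feed into
the placement hashing, so PGs get reshuffled even though every device is an
hdd. On recent releases you can inspect the shadow tree with:

ceph osd crush tree --show-shadow
ceph osd crush dump | grep '~hdd'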


On Wed, Jun 13, 2018 at 1:14 PM Marc Roos  wrote:
>
>
> I wonder if this is not a bug. Adding the class hdd to an all-hdd
> cluster should not result in 60% of the objects being moved around.
>
>
> pool fs_data.ec21 id 53
>   3866523/6247464 objects misplaced (61.889%)
>   recovery io 93089 kB/s, 22 objects/s
>
>
>
>
>
> -Original Message-
> From: Marc Roos
Sent: Wednesday, 13 June 2018 7:14
> To: ceph-users; k0ste
> Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
> update necessary?
>
> I just added here 'class hdd'
>
> rule fs_data.ec21 {
> id 4
> type erasure
> min_size 3
> max_size 3
> step set_chooseleaf_tries 5
> step set_choose_tries 100
> step take default class hdd
> step choose indep 0 type osd
> step emit
> }
>
>
> -Original Message-
> From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday, 13 June 2018 12:30
> To: Marc Roos; ceph-users
> Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's to
> hdd cluster, crush map class hdd update necessary?
>
> On 06/13/2018 12:06 PM, Marc Roos wrote:
> > Shit, I added this class and now everything starts backfilling (10%).
> > How is this possible? I only have hdd's.
>
> This is normal when you change your crush and placement rules.
> Post your output, I will take a look
>
> ceph osd crush tree
> ceph osd crush dump
> ceph osd pool ls detail
>
>
>
>
>
> k
>
>


Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
 
I wonder if this is not a bug. Adding the class hdd to an all-hdd 
cluster should not result in 60% of the objects being moved around.


pool fs_data.ec21 id 53
  3866523/6247464 objects misplaced (61.889%)
  recovery io 93089 kB/s, 22 objects/s





-Original Message-
From: Marc Roos 
Sent: Wednesday, 13 June 2018 7:14
To: ceph-users; k0ste
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd 
update necessary?

I just added here 'class hdd'

rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 0 type osd
        step emit
}


-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday, 13 June 2018 12:30
To: Marc Roos; ceph-users
Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's to 
hdd cluster, crush map class hdd update necessary?

On 06/13/2018 12:06 PM, Marc Roos wrote:
> Shit, I added this class and now everything starts backfilling (10%).
> How is this possible? I only have hdd's.

This is normal when you change your crush and placement rules.
Post your output, I will take a look

ceph osd crush tree
ceph osd crush dump
ceph osd pool ls detail





k




Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
I just added here 'class hdd'

rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 0 type osd
        step emit
}


-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Wednesday, 13 June 2018 12:30
To: Marc Roos; ceph-users
Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's to 
hdd cluster, crush map class hdd update necessary?

On 06/13/2018 12:06 PM, Marc Roos wrote:
> Shit, I added this class and now everything starts backfilling (10%).
> How is this possible? I only have hdd's.

This is normal when you change your crush and placement rules.
Post your output, I will take a look

ceph osd crush tree
ceph osd crush dump
ceph osd pool ls detail





k




Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Konstantin Shalygin

On 06/13/2018 09:01 AM, Marc Roos wrote:

Yes but I already have some sort of test cluster with data in it. I
don’t think there are commands to modify existing rules that are being
used by pools. And the default replicated_ruleset doesn’t have a class
specified. I also have an erasure code rule without any class definition
for the file system.


Yes, before migrating from a multi-root/classless crush map to luminous+ 
classified crush you need to assign classified rulesets to your pools.

This is safe to apply on production clusters, on EC pools too.
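
For example (rule and pool names are only placeholders), switching an
existing replicated pool over is just a matter of creating the classified
rule and pointing the pool at it:

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set rbd crush_rule replicated_hdd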






k


Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
 
Yes but I already have some sort of test cluster with data in it. I 
don’t think there are commands to modify existing rules that are being 
used by pools. And the default replicated_ruleset doesn’t have a class 
specified. I also have an erasure code rule without any class definition 
for the file system.


-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Wednesday, 13 June 2018 5:59
To: ceph-users@lists.ceph.com
Cc: Marc Roos
Subject: *SPAM* Re: [ceph-users] Add ssd's to hdd cluster, crush 
map class hdd update necessary?

> Is it necessary to update the crush map with
>
> class hdd
>
> Before adding ssd's to the cluster?


Of course, if these osds are under one root.

It is not necessary to manually edit crush:

ceph osd crush rule create-replicated replicated_hosts_hdd default host hdd
ceph osd crush rule create-replicated replicated_hosts_nvme default host nvme




k





Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-12 Thread Konstantin Shalygin

Is it necessary to update the crush map with

class hdd

Before adding ssd's to the cluster?



Of course, if these osds are under one root.

It is not necessary to manually edit crush:

ceph osd crush rule create-replicated replicated_hosts_hdd default host hdd
ceph osd crush rule create-replicated replicated_hosts_nvme default host nvme
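
For an erasure coded pool the device class is set in the erasure code
profile instead, and a rule is generated from it, something like (profile
and rule names are only examples):

ceph osd erasure-code-profile set ec21_hdd k=2 m=1 crush-device-class=hdd crush-failure-domain=osd
ceph osd crush rule create-erasure fs_data.ec21_hdd ec21_hdd
ceph osd pool set <pool> crush_rule fs_data.ec21_hdd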





k



Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-12 Thread Marc Roos
 
Eg
# rules
rule replicated_ruleset {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

To

# rules
rule replicated_ruleset {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}

And

rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 0 type osd
        step emit
}

To

rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 0 type osd
        step emit
}
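
For completeness, applying such an edit is the usual decompile/edit/recompile
cycle (file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the rules in crushmap.txt as above, then:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new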




-Original Message-
From: Marc Roos 
Sent: Tuesday, 12 June 2018 17:07
To: ceph-users
Subject: [ceph-users] Add ssd's to hdd cluster, crush map class hdd 
update necessary?


Is it necessary to update the crush map with 

class hdd

Before adding ssd's to the cluster?



[ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-12 Thread Marc Roos


Is it necessary to update the crush map with 

class hdd

Before adding ssd's to the cluster?
