On Thu, Jul 3, 2014 at 11:17 AM, Iban Cabrillo <cabri...@ifca.unican.es> wrote:
> Hi Gregory,
>   Thanks a lot, I'm beginning to understand how Ceph works.
>   I added a couple of OSD servers and balanced the disks between them.
>
>
> [ceph@cephadm ceph-cloud]$ sudo ceph osd tree
> # id    weight    type name    up/down    reweight
> -7    16.2    root 4x1GbFCnlSAS
> -9    5.4        host node02
> 7    2.7            osd.7    up    1
> 8    2.7            osd.8    up    1
> -4    5.4        host node03
> 2    2.7            osd.2    up    1
> 9    2.7            osd.9    up    1
> -3    5.4        host node04
> 1    2.7            osd.1    up    1
> 10    2.7            osd.10    up    1
> -6    16.2    root 4x4GbFCnlSAS
> -5    5.4        host node01
> 3    2.7            osd.3    up    1
> 4    2.7            osd.4    up    1
> -8    5.4        host node02
> 5    2.7            osd.5    up    1
> 6    2.7            osd.6    up    1
> -2    5.4        host node04
> 0    2.7            osd.0    up    1
> 11    2.7            osd.11    up    1
> -1    32.4    root default
> -2    5.4        host node04
> 0    2.7            osd.0    up    1
> 11    2.7            osd.11    up    1
> -3    5.4        host node04
> 1    2.7            osd.1    up    1
> 10    2.7            osd.10    up    1
> -4    5.4        host node03
> 2    2.7            osd.2    up    1
> 9    2.7            osd.9    up    1
> -5    5.4        host node01
> 3    2.7            osd.3    up    1
> 4    2.7            osd.4    up    1
> -8    5.4        host node02
> 5    2.7            osd.5    up    1
> 6    2.7            osd.6    up    1
> -9    5.4        host node02
> 7    2.7            osd.7    up    1
> 8    2.7            osd.8    up    1
>
> The idea is to have at least 4 servers, each with 3 disks (2.7 TB,
> SAN-attached), per pool.
> Now I have to adjust pg_num and pgp_num and run some performance tests.
>
> PS: what is the difference between choose and chooseleaf?

"choose" instructs the system to choose N different buckets of the
given type (where N is specified by the "firstn 0" block to be the
replication level, but could be 1: "firstn 1", or replication - 1:
"firstn -1"). Since you're saying "choose firstn 0 type host", that's
what you're getting out, and then you're emitting those 3 (by default)
hosts. But they aren't valid "devices" (OSDs), so it's not a valid
mapping; you're supposed to then say "choose firstn 1 device" or
similar.
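For illustration, a rule over your 4x1GbFCnlSAS root using explicit
"choose" steps might look roughly like this (the rule name and ruleset
number are made up, not anything from your map):

    rule fc1g_choose {
        ruleset 3                        # hypothetical ruleset id
        type replicated
        min_size 1
        max_size 10
        step take 4x1GbFCnlSAS
        step choose firstn 0 type host   # pick <replica-count> hosts
        step choose firstn 1 type osd    # pick 1 OSD under each host
        step emit
    }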
"chooseleaf" instead tells the system to choose N different buckets,
and then descend from each of those buckets to a leaf ("device") in
the CRUSH hierarchy. It's a little more robust against different
mappings and failure conditions, so generally a better choice than
"choose" if you don't need the finer granularity provided by choose.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
