Hi Gregory,
  Thanks a lot, I am beginning to understand how Ceph works.
  I have added a couple of OSD servers and balanced the disks between them.
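  In case it is useful, this is roughly the kind of workflow I used to place the
new OSDs under the two roots (the file names here are just placeholders, and I
edited the decompiled map by hand):

sudo ceph osd getcrushmap -o crushmap.bin     # dump the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt     # decompile it to text
# ... edit crushmap.txt: add the new hosts and osds under 4x1GbFCnlSAS / 4x4GbFCnlSAS
crushtool -c crushmap.txt -o crushmap.new     # recompile
sudo ceph osd setcrushmap -i crushmap.new     # inject the edited map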

[ceph@cephadm ceph-cloud]$ sudo ceph osd tree
# id    weight    type name    up/down    reweight
-7    16.2    root 4x1GbFCnlSAS
-9    5.4        host node02
7    2.7            osd.7    up    1
8    2.7            osd.8    up    1
-4    5.4        host node03
2    2.7            osd.2    up    1
9    2.7            osd.9    up    1
-3    5.4        host node04
1    2.7            osd.1    up    1
10    2.7            osd.10    up    1
-6    16.2    root 4x4GbFCnlSAS
-5    5.4        host node01
3    2.7            osd.3    up    1
4    2.7            osd.4    up    1
-8    5.4        host node02
5    2.7            osd.5    up    1
6    2.7            osd.6    up    1
-2    5.4        host node04
0    2.7            osd.0    up    1
11    2.7            osd.11    up    1
-1    32.4    root default
-2    5.4        host node04
0    2.7            osd.0    up    1
11    2.7            osd.11    up    1
-3    5.4        host node04
1    2.7            osd.1    up    1
10    2.7            osd.10    up    1
-4    5.4        host node03
2    2.7            osd.2    up    1
9    2.7            osd.9    up    1
-5    5.4        host node01
3    2.7            osd.3    up    1
4    2.7            osd.4    up    1
-8    5.4        host node02
5    2.7            osd.5    up    1
6    2.7            osd.6    up    1
-9    5.4        host node02
7    2.7            osd.7    up    1
8    2.7            osd.8    up    1

The idea is to have at least 4 servers per pool, with 3 disks (2.7 TB, SAN-attached)
per server.
Now I have to adjust pg_num and pgp_num and run some performance tests.
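
Something like this is what I have in mind for the PG counts (256 is just a first
guess, roughly 100 PGs per OSD divided by the replica count for the 6 OSDs in each
pool, so please correct me if it is way off), followed by rados bench as a first
performance test:

sudo ceph osd pool set cloud-4x1GbFCnlSAS pg_num 256
sudo ceph osd pool set cloud-4x1GbFCnlSAS pgp_num 256
sudo ceph osd pool set cloud-4x4GbFCnlSAS pg_num 256
sudo ceph osd pool set cloud-4x4GbFCnlSAS pgp_num 256

rados bench -p cloud-4x4GbFCnlSAS 60 write --no-cleanup
rados bench -p cloud-4x4GbFCnlSAS 60 seq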

PS: What is the difference between "choose" and "chooseleaf"?
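
My tentative understanding from the docs (please correct me if I am wrong):
"step choose firstn 0 type host" selects host buckets and emits them as they are,
so the PG never gets mapped to real OSDs, while "step chooseleaf firstn 0 type host"
selects distinct hosts and then descends to one OSD under each of them. So the
4x1GbFCnlSAS rule from my map below would become, changing only that one step:

rule 4x1GbFCnlSAS {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take 4x1GbFCnlSAS
        step chooseleaf firstn 0 type host
        step emit
}

And to double-check that the rule maps cleanly before benchmarking, I guess
something like this should work:

sudo ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --rule 2 --num-rep 2 --show-bad-mappings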

Thanks a lot!


2014-07-03 19:06 GMT+02:00 Gregory Farnum <g...@inktank.com>:

> The PG in question isn't being properly mapped to any OSDs. There's a
> good chance that those trees (with 3 OSDs in 2 hosts) aren't going to
> map well anyway, but the immediate problem should resolve itself if
> you change the "choose" to "chooseleaf" in your rules.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Thu, Jul 3, 2014 at 4:17 AM, Iban Cabrillo <cabri...@ifca.unican.es>
> wrote:
> > Hi folks,
> >   I am following the test installation step by step, and checking some
> > configuration before trying to deploy a production cluster.
> >
> >   Now I have a healthy cluster with 3 mons + 4 OSDs.
> >   I have created one pool containing all the OSDs, plus two more: one for two
> > of the servers and the other for the other two.
> >
> >   The general pool works fine (I can create images and mount them on remote
> > machines).
> >
> >   But the other two do not work (the commands rados put and rbd ls "pool"
> > hang forever).
> >
> >   this is the tree:
> >
> >    [ceph@cephadm ceph-cloud]$ sudo ceph osd tree
> > # id weight type name up/down reweight
> > -7 5.4 root 4x1GbFCnlSAS
> > -3 2.7 host node04
> > 1 2.7 osd.1 up 1
> > -4 2.7 host node03
> > 2 2.7 osd.2 up 1
> > -6 8.1 root 4x4GbFCnlSAS
> > -5 5.4 host node01
> > 3 2.7 osd.3 up 1
> > 4 2.7 osd.4 up 1
> > -2 2.7 host node04
> > 0 2.7 osd.0 up 1
> > -1 13.5 root default
> > -2 2.7 host node04
> > 0 2.7 osd.0 up 1
> > -3 2.7 host node04
> > 1 2.7 osd.1 up 1
> > -4 2.7 host node03
> > 2 2.7 osd.2 up 1
> > -5 5.4 host node01
> > 3 2.7 osd.3 up 1
> > 4 2.7 osd.4 up 1
> >
> >
> > And this is the crushmap:
> >
> > ...
> > root 4x4GbFCnlSAS {
> >         id -6 #do not change unnecessarily
> >         alg straw
> >         hash 0  # rjenkins1
> >         item node01 weight 5.400
> >         item node04 weight 2.700
> > }
> > root 4x1GbFCnlSAS {
> >         id -7 #do not change unnecessarily
> >         alg straw
> >         hash 0  # rjenkins1
> >         item node04 weight 2.700
> >         item node03 weight 2.700
> > }
> > # rules
> > rule 4x4GbFCnlSAS {
> >         ruleset 1
> >         type replicated
> >         min_size 1
> >         max_size 10
> >         step take 4x4GbFCnlSAS
> >         step choose firstn 0 type host
> >         step emit
> > }
> > rule 4x1GbFCnlSAS {
> >         ruleset 2
> >         type replicated
> >         min_size 1
> >         max_size 10
> >         step take 4x1GbFCnlSAS
> >         step choose firstn 0 type host
> >         step emit
> > }
> > ......
> > Of course I set the crush rulesets:
> > sudo ceph osd pool set cloud-4x1GbFCnlSAS crush_ruleset 2
> > sudo ceph osd pool set cloud-4x4GbFCnlSAS crush_ruleset 1
> >
> > but it seems something is wrong (4x4GbFCnlSAS.pool is a 512 MB file):
> >    sudo rados -p cloud-4x1GbFCnlSAS put 4x4GbFCnlSAS.object 4x4GbFCnlSAS.pool
> > !!HANGS forever!!
> >
> > From the ceph client the same thing happens:
> >  rbd ls cloud-4x1GbFCnlSAS
> >  !!HANGS forever!!
> >
> >
> > [root@cephadm ceph-cloud]# ceph osd map cloud-4x1GbFCnlSAS
> > 4x1GbFCnlSAS.object
> > osdmap e49 pool 'cloud-4x1GbFCnlSAS' (3) object '4x1GbFCnlSAS.object' -> pg
> > 3.114ae7a9 (3.29) -> up ([], p-1) acting ([], p-1)
> >
> > Any idea what I am doing wrong?
> >
> > Thanks in advance, I
> > Bertrand Russell:
> > "The problem with the world is that the stupid are sure of everything and
> > the intelligent are full of doubts"
> >
> >
>



-- 
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
"The problem with the world is that the stupid are sure of everything and the
intelligent are full of doubts"
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
