Hi folks,
  I am following the test installation step by step, and checking some
configuration before trying to deploy a production cluster.

  Right now I have a healthy cluster with 3 mons + 4 OSDs.
  I have created one pool that spans all the osd.x, and two more pools:
one for two of the servers and the other for the other two.

  The general pool works fine (I can create images and mount them on remote
machines).

  But the other two do not work (commands like rados put or rbd ls "pool"
hang forever).
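
  For reference, the pools were created with the standard commands, roughly
like this (the name of the general pool and the pg counts below are
placeholders, not necessarily the exact values I used):

   # hypothetical pool name / pg counts, only to illustrate the setup
   sudo ceph osd pool create cloud-general 128 128
   sudo ceph osd pool create cloud-4x4GbFCnlSAS 128 128
   sudo ceph osd pool create cloud-4x1GbFCnlSAS 128 128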

  This is the tree:

   [ceph@cephadm ceph-cloud]$ sudo ceph osd tree
   # id   weight  type name       up/down reweight
   -7     5.4     root 4x1GbFCnlSAS
   -3     2.7             host node04
   1      2.7                     osd.1   up      1
   -4     2.7             host node03
   2      2.7                     osd.2   up      1
   -6     8.1     root 4x4GbFCnlSAS
   -5     5.4             host node01
   3      2.7                     osd.3   up      1
   4      2.7                     osd.4   up      1
   -2     2.7             host node04
   0      2.7                     osd.0   up      1
   -1     13.5    root default
   -2     2.7             host node04
   0      2.7                     osd.0   up      1
   -3     2.7             host node04
   1      2.7                     osd.1   up      1
   -4     2.7             host node03
   2      2.7                     osd.2   up      1
   -5     5.4             host node01
   3      2.7                     osd.3   up      1
   4      2.7                     osd.4   up      1


And this is the crushmap:

...
root 4x4GbFCnlSAS {
        id -6 #do not change unnecessarily
        alg straw
        hash 0  # rjenkins1
        item node01 weight 5.400
        item node04 weight 2.700
}
root 4x1GbFCnlSAS {
        id -7 #do not change unnecessarily
        alg straw
        hash 0  # rjenkins1
        item node04 weight 2.700
        item node03 weight 2.700
}
# rules
rule 4x4GbFCnlSAS {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take 4x4GbFCnlSAS
        step choose firstn 0 type host
        step emit
}
rule 4x1GbFCnlSAS {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take 4x1GbFCnlSAS
        step choose firstn 0 type host
        step emit
}
......
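
In case it matters, the custom roots and rules were added by decompiling,
editing and recompiling the map in the usual way (file names below are just
examples):

   sudo ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # ... edit crushmap.txt: add the roots and rules shown above ...
   crushtool -c crushmap.txt -o crushmap.new
   sudo ceph osd setcrushmap -i crushmap.new
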
Of course I also set the crush rulesets on the two pools:
sudo ceph osd pool set cloud-4x1GbFCnlSAS crush_ruleset 2
sudo ceph osd pool set cloud-4x4GbFCnlSAS crush_ruleset 1
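
The assignment can be double-checked with, for example:

   sudo ceph osd pool get cloud-4x1GbFCnlSAS crush_ruleset
   sudo ceph osd pool get cloud-4x4GbFCnlSAS crush_ruleset
   sudo ceph osd dump | grep '^pool'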

But it seems something is wrong (4x4GbFCnlSAS.pool is a 512 MB file):
   sudo rados -p cloud-4x1GbFCnlSAS put 4x4GbFCnlSAS.object 4x4GbFCnlSAS.pool
   !!HANGS forever!!
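
While it hangs, these standard checks show whether the PGs behind that pool
are actually active (just the usual health / stuck-PG queries):

   sudo ceph health detail
   sudo ceph pg dump_stuck inactive
   sudo ceph pg dump_stuck unclean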

From the ceph client the same thing happens:
   rbd ls cloud-4x1GbFCnlSAS
   !!HANGS forever!!


[root@cephadm ceph-cloud]# ceph osd map cloud-4x1GbFCnlSAS 4x1GbFCnlSAS.object
osdmap e49 pool 'cloud-4x1GbFCnlSAS' (3) object '4x1GbFCnlSAS.object' ->
pg 3.114ae7a9 (3.29) -> *up ([], p-1) acting ([], p-1)*
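
The rules can also be exercised offline against the current map, to see
whether they return any OSDs at all (the file name is just an example, rule
numbers as above):

   sudo ceph osd getcrushmap -o crushmap.cur
   crushtool --test -i crushmap.cur --rule 1 --num-rep 2 --show-mappings
   crushtool --test -i crushmap.cur --rule 2 --num-rep 2 --show-mappings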

Any idea what I am doing wrong?

Thanks in advance, I
Bertrand Russell:
*"The problem with the world is that the stupid are sure of everything and
the intelligent are full of doubts."*