Hello Vincenzo,

Yes, those 6 OSDs are on different hosts. I've got 3 VMs, each with 2 OSDs, so that should be enough for the requirement to have 3 replicas (even though I set it back to 2, as suggested in the how-tos).

Tomorrow I will try to spread the replicas over the OSDs only, not over the hosts.
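In case it helps, here is the rough plan (a sketch based on the CRUSH map documentation, not tested on my side yet; the file names are just placeholders): change the failure domain of the default rule from host to osd.

ceph osd getcrushmap -o crushmap.bin       # dump the current CRUSH map
crushtool -d crushmap.bin -o crushmap.txt  # decompile it to editable text

# In crushmap.txt, change the default rule's step
#     step chooseleaf firstn 0 type host
# to
#     step chooseleaf firstn 0 type osd

crushtool -c crushmap.txt -o crushmap-new.bin  # recompile the edited map
ceph osd setcrushmap -i crushmap-new.bin       # inject it into the cluster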

Met vriendelijke groet/With kind regards,

Tijn Buijs


t...@cloud.nl | T. 0800-CLOUDNL / +31 (0)162 820 000 | F. +31 (0)162 820 001
Cloud.nl B.V. | Minervum 7092D | 4817 ZK Breda | www.cloud.nl
On 31/07/14 17:18, Vincenzo Pii wrote:
Are the 6 OSDs on different hosts?

The default CRUSH ruleset that Ceph applies to pools requires that object replicas (3 by default) be placed on OSDs of different hosts.
This cannot be satisfied if you don't have OSDs on separate hosts.
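For reference, the rule in question looks roughly like this in a decompiled CRUSH map (the "type host" step is what enforces the host separation):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}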

I ran into this issue myself and wrote down the steps I needed to solve it. If this is your case, you can read them here: http://blog.zhaw.ch/icclab/deploy-ceph-troubleshooting-part-23/ (paragraph: "Check that replication requirements can be met").

Basically, you either specify a different CRUSH ruleset or reduce the replica count (the pool's size) for your pools.
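Reducing the replica count is the quicker option; it looks something like this (using the default rbd pool as an example, repeat for your other pools):

ceph osd pool set rbd size 2      # keep 2 copies of each object
ceph osd pool set rbd min_size 1  # allow I/O with only 1 copy available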

Hope this can help!

Vincenzo.


2014-07-31 16:36 GMT+02:00 Tijn Buijs <t...@cloud.nl>:

    Hello everybody,

    At cloud.nl we are going to use Ceph, so I thought it would be a
    good idea to get some hands-on experience with it before I have to
    work with it :). I'm installing a test cluster in a few VirtualBox
    machines on my iMac, which runs OS X 10.9.4 of course. I know
    performance will be lousy, but that's not the objective here. The
    objective is to get some experience with Ceph and to see how it
    works.

    But I hit an issue during the initial setup of the cluster. After
    installing everything by following the how-tos on ceph.com (the
    preflight <http://ceph.com/docs/master/start/quick-start-preflight/>
    and the Storage Cluster quick start
    <http://ceph.com/docs/master/start/quick-ceph-deploy/>), I ran
    ceph health to check that everything is running perfectly. But it
    isn't; I get the following output:
    ceph@ceph-admin:~$ ceph health
    HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean

    And it stays like this; the output never changes, so everything
    really is stuck. But I don't know what exactly is stuck or how to
    fix it. Some more info about my cluster:
    ceph@ceph-admin:~$ ceph -s
        cluster d31586a5-6dd6-454e-8835-0d6d9e204612
         health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
         monmap e3: 3 mons at {ceph-mon1=10.28.28.18:6789/0,ceph-mon2=10.28.28.31:6789/0,ceph-mon3=10.28.28.50:6789/0}, election epoch 4, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
         osdmap e25: 6 osds: 6 up, 6 in
          pgmap v56: 192 pgs, 3 pools, 0 bytes data, 0 objects
                197 MB used, 30455 MB / 30653 MB avail
                     192 creating+incomplete

    I'm running Ubuntu 14.04.1 LTS Server. I also tried to get it
    running on CentOS 6.5 (CentOS 6.5 is actually my distro of choice,
    but Ceph seems better supported on Ubuntu, so I tried that too),
    and I got exactly the same results.

    But because this is my first install of Ceph, I don't know the
    exact debug commands yet. I'm eager to get this working, but I
    just don't know how :). Any help is appreciated :).
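
    The only inspection commands I've found in the docs so far are
    these (I'm not sure they are the right ones, so corrections are
    welcome):

    ceph health detail           # per-PG detail on what is stuck
    ceph osd tree                # how the OSDs map onto hosts
    ceph pg dump_stuck inactive  # list the stuck placement groups
    ceph pg <pgid> query         # detailed state of a single PG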

    Met vriendelijke groet/With kind regards,

    Tijn Buijs


    t...@cloud.nl | T. 0800-CLOUDNL / +31 (0)162 820 000 | F. +31 (0)162 820 001
    Cloud.nl B.V. | Minervum 7092D | 4817 ZK Breda | www.cloud.nl




--
Vincenzo Pii
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
http://www.cloudcomp.ch/

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
