I've managed to start and mount the cluster by redoing the whole process from scratch. Another thing I'm looking for is documentation on how to add another node (or hard drives) to a running cluster without affecting the mount point and the running service. Can you point me to this?
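
For what it's worth, the manual procedure for adding an OSD to a running cluster goes roughly like the sketch below. The host name ceph3 and the OSD id 2 are hypothetical, and the exact crush-placement syntax varies between Ceph versions, so treat this as an outline rather than a recipe:

    ceph osd create                        # allocates the next free OSD id, e.g. 2
    mkdir -p /var/lib/ceph/osd/ceph-2      # data directory must exist first
    ceph-osd -i 2 --mkfs --mkkey           # initialize the data dir and keyring
    ceph auth add osd.2 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-2/keyring
    ceph osd crush add osd.2 1.0 root=default host=ceph3   # place it in the CRUSH map
    service ceph start osd.2               # start the daemon; data rebalances online

A matching [osd.2] section (host/addr/devs, like the ones further down this thread) also has to go into ceph.conf on every node so 'service ceph -a' knows about it.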


On 06/05/2013 11:20 AM, Igor Laskovy wrote:
>and I'm unable to mount the cluster with the following command:
>root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt

So, what does it say?
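
If the mount hangs or complains about authentication, note that with cephx enabled (the default) the kernel client needs credentials. A guess at the form, assuming the default admin keyring location:

    mount -t ceph 192.168.2.170:6789:/ /mnt \
        -o name=admin,secret=$(ceph-authtool -p /etc/ceph/ceph.client.admin.keyring)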

I also recommend you start from my Russian doc: http://habrahabr.ru/post/179823


On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов <ymari...@neterra.net> wrote:

    That's the exact documentation I'm using. The directory on
    ceph2 is created, and the service starts without any problems
    on both nodes. However, the health of the cluster is WARN
    and I wasn't able to mount the cluster.




    On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
    Yavor,

    I would highly recommend taking a look at the quick install
    guide: http://ceph.com/docs/next/start/quick-start/

    As per the guide, you need to precreate the directories prior to
    starting ceph.
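
    In this setup that would be the OSD data directories, one per host
    (the ceph-1 path is the one from the error below; ceph-0 follows the
    same convention):

        mkdir -p /var/lib/ceph/osd/ceph-0    # on ceph1, for osd.0
        mkdir -p /var/lib/ceph/osd/ceph-1    # on ceph2, for osd.1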

    Andrei
    ------------------------------------------------------------------------
    *From: *"Явор Маринов" <ymari...@neterra.net>
    *To: *ceph-users@lists.ceph.com
    *Sent: *Tuesday, 4 June, 2013 11:03:52 AM
    *Subject: *[ceph-users] ceph configuration


    Hello,

    I'm new to the Ceph mailing list, and I need some advice on our
    testing cluster. I have 2 servers, each with 2 hard disks. On the
    first server I configured a monitor and an OSD, and on the second
    server only an OSD.
    The configuration looks as follows:

    [mon.a]

             host = ceph1
             mon addr = 192.168.2.170:6789

    [osd.0]
             host = ceph1
             addr = 192.168.2.170
             devs = /dev/sdb

    [osd.1]
             host = ceph2
             addr = 192.168.2.114
             devs = /dev/sdb

    Once I initiate 'service ceph -a start', I keep getting the
    following error:

    Mounting xfs on ceph2:/var/lib/ceph/osd/ceph-1
    df: `/var/lib/ceph/osd/ceph-1/.': No such file or directory

    and I'm unable to mount the cluster with the following command:
    root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt

    Also, executing 'ceph health', I'm getting this response:
    HEALTH_WARN 143 pgs degraded; 576 pgs stuck unclean; recovery 15/122
    degraded (12.295%)
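
    A degraded/unclean state on a fresh two-OSD cluster usually means one
    of the OSDs never came up; two standard commands that narrow this down:

        ceph health detail    # lists which pgs are stuck/degraded and why
        ceph osd tree         # shows each OSD's up/down status per host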

    This is a fresh install and no nodes have been added or removed.

    Any help will be much appreciated.





--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
