Hi,

Thank you for the quick reply.
ems is the server from which I ran 'service ceph start'.

These are the steps I followed. Please let me know if anything is missing or
wrong.


wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list
sudo apt-add-repository 'deb http://ceph.com/debian-emperor/ precise main'
sudo apt-get update && sudo apt-get install ceph
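
(As a sanity check, not one of the documented steps: the installed release
can be confirmed with 'ceph -v'; Emperor should report a 0.72.x version:

         ceph -v
)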

Updated /etc/ceph/ceph.conf:

[global]
    # For version 0.54 and earlier, you may enable
    # authentication with the following setting.
    # Specifying `cephx` enables authentication, and
    # specifying `none` disables it.

    #auth supported = cephx

    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].

    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx


[osd]
    osd journal size = 1000
    # uncomment the following line if you are mounting with ext4
    filestore xattr use omap = true


    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system for you. Remove the comment `#` character for
    # the following settings and replace the values in parentheses
    # with appropriate values, or leave the following settings commented
    # out to accept the default values. You must specify the --mkfs
    # option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.

    osd mkfs type = ext4
    #osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"
    osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime
    # default mount options are "rw,noatime"

[mon.a]
    host = ems
    mon addr = <hostip>:6789

[osd.0]
    host = ems
    devs = /dev/sdb1

[osd.1]
    host = ems
    devs = /dev/sdb2

[mds.a]
    host = ems
    #devs = {path-to-device}


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Copied the configuration file to /etc/ceph/ceph.conf on the client host and
set its permissions to 644 on the client machine.
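
(For reference, a minimal way to do this over ssh, assuming the client is
reachable as 'client-host' -- that hostname is a placeholder:

         scp /etc/ceph/ceph.conf root@client-host:/etc/ceph/ceph.conf
         ssh root@client-host chmod 644 /etc/ceph/ceph.conf
)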


  On the Ceph server host, created a directory for each daemon:
         mkdir -p /var/lib/ceph/osd/ceph-0
         mkdir -p /var/lib/ceph/osd/ceph-1
         mkdir -p /var/lib/ceph/mon/ceph-a
         mkdir -p /var/lib/ceph/mds/ceph-a

Executed the following on the Ceph server host:
         cd /etc/ceph
         mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs

Then ran:

         service ceph -a start

Got this error:
=== mon.a ===
Starting Ceph mon.a on ems...already running
=== mds.a ===
Starting Ceph mds.a on ems...already running
=== osd.0 ===
Mounting ext4 on ems:/var/lib/ceph/osd/ceph-0
Error ENOENT: osd.0 does not exist.  create it before updating the crush map
failed: 'timeout 10 /usr/bin/ceph --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.10 root=default host=ems'

This was only a warning in the Dumpling release, so I started the processes
manually.
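
The manual invocations were the following (they match the process list
below):

         /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf
         /usr/bin/ceph-mds -i a --pid-file /var/run/ceph/mds.a.pid -c /etc/ceph/ceph.conf
         /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf
         /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf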

[root@ip-10-68-107-28 ceph]# ps -eaf | grep ceph
root     16130     1  0 11:19 pts/1    00:00:01 /usr/bin/ceph-mon -i a
--pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf
root     16232     1  0 11:19 ?        00:00:00 /usr/bin/ceph-mds -i a
--pid-file /var/run/ceph/mds.a.pid -c /etc/ceph/ceph.conf
root     16367     1  0 11:19 ?        00:00:07 /usr/bin/ceph-osd -i 0
--pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf
root     16531     1  0 11:19 ?        00:00:05 /usr/bin/ceph-osd -i 1
--pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf
root     16722 15658  0 11:46 pts/1    00:00:00 grep ceph

Output of mount:
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type ext4 (rw,noexec,nodev,noatime,nodiratime,user_xattr)
/dev/sdb2 on /var/lib/ceph/osd/ceph-1 type ext4 (rw,noexec,nodev,noatime,nodiratime,user_xattr)


Thanks,
Sahana





On Thu, Dec 5, 2013 at 6:17 PM, Li Wang <liw...@ubuntukylin.com> wrote:

> ems is a remote machine?
> Did you set up the corresponding directories (/var/lib/ceph/osd/ceph-0)
> and call mkcephfs before?
> You can also try starting the osd manually with 'ceph-osd -i 0 -c
> /etc/ceph/ceph.conf', then 'pgrep ceph-osd' to see if it is there, then
> 'ceph -s' to check the health.
>
>
> On 2013/12/5 18:26, Sahana wrote:
>
>> Installed ceph-emperor using apt-get on Ubuntu 12.04 by following the
>> steps in the installation part of the Ceph docs website:
>>
>> http://ceph.com/docs/master/install/get-packages/
>>
>> http://ceph.com/docs/master/install/install-storage-cluster/
>>
>> But got an error when this command was run:
>>
>> service ceph -a start
>> === mon.a ===
>> Starting Ceph mon.a on ems...already running
>> === mds.a ===
>> Starting Ceph mds.a on ems...already running
>> === osd.0 ===
>> Mounting ext4 on ems:/var/lib/ceph/osd/ceph-0
>> Error ENOENT: osd.0 does not exist.  create it before updating the crush
>> map
>> failed: 'timeout 10 /usr/bin/ceph --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.10 root=default host=hostname'
>>
>> "Error ENOENT: osd.0 does not exist.  create it before updating the
>> crush map" this was warning in dumpling but in emperor its been
>> converted as error.
>>
>> Please let me know the steps to solve the problem.
>>
>> Thanks,
>>
>> Sahana
>>
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
