Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Leszek Master
Check firewall rules and selinux. It sometimes is a pain in the ... :)
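
For example, on CentOS 7 a quick check could look something like this (the
ports are the Ceph defaults and the zone is an assumption -- adjust to your
setup):

  # SELinux: check the current mode, and temporarily relax it to rule it out
  getenforce
  sudo setenforce 0

  # firewalld: see what is already open, then open the Ceph ports
  sudo firewall-cmd --list-all
  sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitor
  sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSDs
  sudo firewall-cmd --reload
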
On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:

 I have tried to install ceph using ceph-deploy but sgdisk seems to
 have too many issues so I did a manual install. After mkfs.btrfs on
 the disks and journals and mounted them I then tried to start the osds
 which failed. The first error was:
 #/etc/init.d/ceph start osd.0
 /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
 /var/lib/ceph defines )

 I then manually added the osds to the conf file with the following as
 an example:
 [osd.0]
 osd_host = node01

 Now when I run the command :
 # /etc/init.d/ceph start osd.0

 There is no error or output from the command and in fact when I do a
 ceph -s no osds are listed as being up.
 Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
 no OSDs running.
 I also have done htop to see if any processes are running and none are shown.

 I had this working on SL6.5 with Firefly but Giant on Centos 7 has
 been nothing but a giant pain.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Robert LeBlanc
We use ceph-disk without any issues on CentOS 7. If you want to do a
manual deployment, verify you aren't missing any steps in
http://ceph.com/docs/master/install/manual-deployment/#long-form.
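
For reference, the long form for a single OSD boils down to roughly the
following -- osd.0 and node01 are placeholders, and this is only a sketch of
the doc above, not a substitute for it:

  ceph osd create                            # allocates the next osd id, e.g. 0
  mkdir -p /var/lib/ceph/osd/ceph-0          # mount the OSD's data disk here
  ceph-osd -i 0 --mkfs --mkkey               # initialize the data dir and key
  ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' \
      -i /var/lib/ceph/osd/ceph-0/keyring
  ceph osd crush add-bucket node01 host
  ceph osd crush move node01 root=default
  ceph osd crush add osd.0 1.0 host=node01   # weight 1.0 is arbitrary
  ceph-osd -i 0                              # finally, start the daemon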


On Tue, Feb 24, 2015 at 5:46 PM, Barclay Jameson
almightybe...@gmail.com wrote:
 I have tried to install ceph using ceph-deploy but sgdisk seems to
 have too many issues so I did a manual install. After mkfs.btrfs on
 the disks and journals and mounted them I then tried to start the osds
 which failed. The first error was:
 #/etc/init.d/ceph start osd.0
 /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
 /var/lib/ceph defines )

 I then manually added the osds to the conf file with the following as
 an example:
 [osd.0]
 osd_host = node01

 Now when I run the command :
 # /etc/init.d/ceph start osd.0

 There is no error or output from the command and in fact when I do a
 ceph -s no osds are listed as being up.
 Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
 no OSDs running.
 I also have done htop to see if any processes are running and none are shown.

 I had this working on SL6.5 with Firefly but Giant on Centos 7 has
 been nothing but a giant pain.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Travis Rhoden
Also, did you successfully start your monitor(s), and define/create the
OSDs within the Ceph cluster itself?

There are several steps to creating a Ceph cluster manually.  I'm unsure if
you have done the steps to actually create and register the OSDs with the
cluster.
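
A couple of quick ways to check that, assuming an admin keyring on the node:

  ceph -s          # monitor quorum and overall cluster health
  ceph osd tree    # which OSD ids the cluster knows about and where they sit in CRUSH
  ceph auth list   # which daemon keys have been registered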

 - Travis

On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com wrote:

 Check firewall rules and selinux. It sometimes is a pain in the ... :)
 On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:

 I have tried to install ceph using ceph-deploy but sgdisk seems to
 have too many issues so I did a manual install. After mkfs.btrfs on
 the disks and journals and mounted them I then tried to start the osds
 which failed. The first error was:
 #/etc/init.d/ceph start osd.0
 /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
 /var/lib/ceph defines )

 I then manually added the osds to the conf file with the following as
 an example:
 [osd.0]
 osd_host = node01

 Now when I run the command :
 # /etc/init.d/ceph start osd.0

 There is no error or output from the command and in fact when I do a
 ceph -s no osds are listed as being up.
 Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
 no OSDs running.
 I also have done htop to see if any processes are running and none are
 shown.

 I had this working on SL6.5 with Firefly but Giant on Centos 7 has
 been nothing but a giant pain.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Kyle Hutson
I'm having a similar issue.

I'm following http://ceph.com/docs/master/install/manual-deployment/ to a T.

I have OSDs on the same host deployed with the short-form and they work
fine. I am trying to deploy some more via the long form (because I want
them to appear in a different location in the crush map). Everything
through step 10 (i.e. ceph osd crush add {id-or-name} {weight}
[{bucket-type}={bucket-name} ...] ) works just fine. When I go to step 11 (sudo
/etc/init.d/ceph start osd.{osd-num}) I get:
/etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
mon.hobbit01 osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13
osd.8 osd.12 osd.6 osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines
mon.hobbit01 osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13
osd.8 osd.12 osd.6 osd.11 osd.5 osd.4 osd.0)
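
(For comparison, it may help to look at what ceph-disk left behind for one of
the short-form OSDs that does start -- the paths assume the default cluster
name 'ceph':)

  ls /var/lib/ceph/osd/ceph-15/    # a short-form OSD that works
  ls /var/lib/ceph/osd/ceph-16/    # the long-form OSD that doesn't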



On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com wrote:

 Also, did you successfully start your monitor(s), and define/create the
 OSDs within the Ceph cluster itself?

 There are several steps to creating a Ceph cluster manually.  I'm unsure
 if you have done the steps to actually create and register the OSDs with
 the cluster.

  - Travis

 On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com wrote:

 Check firewall rules and selinux. It sometimes is a pain in the ... :)
 On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:

 I have tried to install ceph using ceph-deploy but sgdisk seems to
 have too many issues so I did a manual install. After mkfs.btrfs on
 the disks and journals and mounted them I then tried to start the osds
 which failed. The first error was:
 #/etc/init.d/ceph start osd.0
 /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
 /var/lib/ceph defines )

 I then manually added the osds to the conf file with the following as
 an example:
 [osd.0]
 osd_host = node01

 Now when I run the command :
 # /etc/init.d/ceph start osd.0

 There is no error or output from the command and in fact when I do a
 ceph -s no osds are listed as being up.
 Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
 no OSDs running.
 I also have done htop to see if any processes are running and none are
 shown.

 I had this working on SL6.5 with Firefly but Giant on Centos 7 has
 been nothing but a giant pain.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Robert LeBlanc
I think that your problem lies with systemd (even though you are using
SysV syntax, systemd is really doing the work). Systemd does not like
multiple arguments and I think this is why it is failing. There is
supposed to be some work done to get systemd working ok, but I think
it has the limitation of only working with a cluster named 'ceph'
currently.

What I did to get around the problem was to run the osd command manually:

ceph-osd -i osd#

Once I understood the under-the-hood stuff, I moved to ceph-disk and
now because of the GPT partition IDs, udev automatically starts up the
OSD process at boot/creation and moves to the appropriate CRUSH
location (configurable in ceph.conf
http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
an example: crush location = host=test rack=rack3 row=row8
datacenter=local region=na-west root=default). To restart an OSD
process, I just kill the PID for the OSD then issue ceph-disk activate
/dev/sdx1 to restart the OSD process. You probably could stop it with
systemctl since I believe udev creates a resource for it (I should
probably look into that now that this system will be going production
soon).
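
Concretely, the workaround looks something like this (osd id 0 and /dev/sdb1
are placeholders):

  ceph-osd -i 0                    # start the daemon by hand, bypassing the init script

  # later, to bounce it: kill the running ceph-osd process for that id, then
  ceph-disk activate /dev/sdb1     # udev/ceph-disk starts it again

The CRUSH placement itself comes from the "crush location = ..." line in
ceph.conf, as in the example above.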

On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu wrote:
 I'm having a similar issue.

 I'm following http://ceph.com/docs/master/install/manual-deployment/ to a T.

 I have OSDs on the same host deployed with the short-form and they work
 fine. I am trying to deploy some more via the long form (because I want them
 to appear in a different location in the crush map). Everything through step
 10 (i.e. ceph osd crush add {id-or-name} {weight}
 [{bucket-type}={bucket-name} ...] ) works just fine. When I go to step 11
 (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
 /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines mon.hobbit01
 osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6
 osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7 osd.15
 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6 osd.11 osd.5
 osd.4 osd.0)



 On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com wrote:

 Also, did you successfully start your monitor(s), and define/create the
 OSDs within the Ceph cluster itself?

 There are several steps to creating a Ceph cluster manually.  I'm unsure
 if you have done the steps to actually create and register the OSDs with the
 cluster.

  - Travis

 On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com wrote:

 Check firewall rules and selinux. It sometimes is a pain in the ... :)

 On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:

 I have tried to install ceph using ceph-deploy but sgdisk seems to
 have too many issues so I did a manual install. After mkfs.btrfs on
 the disks and journals and mounted them I then tried to start the osds
 which failed. The first error was:
 #/etc/init.d/ceph start osd.0
 /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
 /var/lib/ceph defines )

 I then manually added the osds to the conf file with the following as
 an example:
 [osd.0]
 osd_host = node01

 Now when I run the command :
 # /etc/init.d/ceph start osd.0

 There is no error or output from the command and in fact when I do a
 ceph -s no osds are listed as being up.
 Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
 no OSDs running.
 I also have done htop to see if any processes are running and none are
 shown.

 I had this working on SL6.5 with Firefly but Giant on Centos 7 has
 been nothing but a giant pain.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Kyle Hutson
But I already issued that command (back in step 6).

The interesting part is that ceph-disk activate apparently does it
correctly. Even after reboot, the services start as they should.

On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
wrote:

 I think that your problem lies with systemd (even though you are using
 SysV syntax, systemd is really doing the work). Systemd does not like
 multiple arguments and I think this is why it is failing. There is
 supposed to be some work done to get systemd working ok, but I think
 it has the limitation of only working with a cluster named 'ceph'
 currently.

 What I did to get around the problem was to run the osd command manually:

 ceph-osd -i osd#

 Once I understood the under-the-hood stuff, I moved to ceph-disk and
 now because of the GPT partition IDs, udev automatically starts up the
 OSD process at boot/creation and moves to the appropriate CRUSH
 location (configurable in ceph.conf
 http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
 an example: crush location = host=test rack=rack3 row=row8
 datacenter=local region=na-west root=default). To restart an OSD
 process, I just kill the PID for the OSD then issue ceph-disk activate
 /dev/sdx1 to restart the OSD process. You probably could stop it with
 systemctl since I believe udev creates a resource for it (I should
 probably look into that now that this system will be going production
 soon).

 On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  I'm having a similar issue.
 
  I'm following http://ceph.com/docs/master/install/manual-deployment/ to
 a T.
 
  I have OSDs on the same host deployed with the short-form and they work
  fine. I am trying to deploy some more via the long form (because I want
 them
  to appear in a different location in the crush map). Everything through
 step
  10 (i.e. ceph osd crush add {id-or-name} {weight}
  [{bucket-type}={bucket-name} ...] ) works just fine. When I go to step 11
  (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
  /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
 mon.hobbit01
  osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12
 osd.6
  osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
 osd.15
  osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6 osd.11
 osd.5
  osd.4 osd.0)
 
 
 
  On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
 wrote:
 
  Also, did you successfully start your monitor(s), and define/create the
  OSDs within the Ceph cluster itself?
 
  There are several steps to creating a Ceph cluster manually.  I'm unsure
  if you have done the steps to actually create and register the OSDs
 with the
  cluster.
 
   - Travis
 
  On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
 wrote:
 
  Check firewall rules and selinux. It sometimes is a pain in the ... :)
 
  On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
 
  I have tried to install ceph using ceph-deploy but sgdisk seems to
  have too many issues so I did a manual install. After mkfs.btrfs on
  the disks and journals and mounted them I then tried to start the osds
  which failed. The first error was:
  #/etc/init.d/ceph start osd.0
  /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
  /var/lib/ceph defines )
 
  I then manually added the osds to the conf file with the following as
  an example:
  [osd.0]
  osd_host = node01
 
  Now when I run the command :
  # /etc/init.d/ceph start osd.0
 
  There is no error or output from the command and in fact when I do a
  ceph -s no osds are listed as being up.
  Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
  no OSDs running.
  I also have done htop to see if any processes are running and none are
  shown.
 
  I had this working on SL6.5 with Firefly but Giant on Centos 7 has
  been nothing but a giant pain.
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Robert LeBlanc
Step #6 in http://ceph.com/docs/master/install/manual-deployment/#long-form
only sets up the file structure for the OSD; it doesn't start the long
running process.
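
In other words, the same binary is run twice with different flags -- using
osd.16 from earlier in the thread as the example id:

  ceph-osd -i 16 --mkfs --mkkey    # step 6: one-time, prepares the data dir and key
  ceph-osd -i 16                   # afterwards: launches the long-running daemon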

On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
 But I already issued that command (back in step 6).

 The interesting part is that ceph-disk activate apparently does it
 correctly. Even after reboot, the services start as they should.

 On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
 wrote:

 I think that your problem lies with systemd (even though you are using
 SysV syntax, systemd is really doing the work). Systemd does not like
 multiple arguments and I think this is why it is failing. There is
 supposed to be some work done to get systemd working ok, but I think
 it has the limitation of only working with a cluster named 'ceph'
 currently.

 What I did to get around the problem was to run the osd command manually:

 ceph-osd -i osd#

 Once I understood the under-the-hood stuff, I moved to ceph-disk and
 now because of the GPT partition IDs, udev automatically starts up the
 OSD process at boot/creation and moves to the appropriate CRUSH
 location (configurable in ceph.conf
 http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
 an example: crush location = host=test rack=rack3 row=row8
 datacenter=local region=na-west root=default). To restart an OSD
 process, I just kill the PID for the OSD then issue ceph-disk activate
 /dev/sdx1 to restart the OSD process. You probably could stop it with
 systemctl since I believe udev creates a resource for it (I should
 probably look into that now that this system will be going production
 soon).

 On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  I'm having a similar issue.
 
  I'm following http://ceph.com/docs/master/install/manual-deployment/ to
  a T.
 
  I have OSDs on the same host deployed with the short-form and they work
  fine. I am trying to deploy some more via the long form (because I want
  them
  to appear in a different location in the crush map). Everything through
  step
  10 (i.e. ceph osd crush add {id-or-name} {weight}
  [{bucket-type}={bucket-name} ...] ) works just fine. When I go to step
  11
  (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
  /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
  mon.hobbit01
  osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12
  osd.6
  osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
  osd.15
  osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6 osd.11
  osd.5
  osd.4 osd.0)
 
 
 
  On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
  wrote:
 
  Also, did you successfully start your monitor(s), and define/create the
  OSDs within the Ceph cluster itself?
 
  There are several steps to creating a Ceph cluster manually.  I'm
  unsure
  if you have done the steps to actually create and register the OSDs
  with the
  cluster.
 
   - Travis
 
  On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
  wrote:
 
  Check firewall rules and selinux. It sometimes is a pain in the ... :)
 
   On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
 
  I have tried to install ceph using ceph-deploy but sgdisk seems to
  have too many issues so I did a manual install. After mkfs.btrfs on
  the disks and journals and mounted them I then tried to start the
  osds
  which failed. The first error was:
  #/etc/init.d/ceph start osd.0
  /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
  /var/lib/ceph defines )
 
  I then manually added the osds to the conf file with the following as
  an example:
  [osd.0]
  osd_host = node01
 
  Now when I run the command :
  # /etc/init.d/ceph start osd.0
 
  There is no error or output from the command and in fact when I do a
  ceph -s no osds are listed as being up.
   Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
   no OSDs running.
   I also have done htop to see if any processes are running and none are
   shown.
 
  I had this working on SL6.5 with Firefly but Giant on Centos 7 has
  been nothing but a giant pain.
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 


___
ceph-users mailing list
ceph-users@lists.ceph.com

Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Kyle Hutson
So I issue it twice? e.g.
ceph-osd -i X --mkfs --mkkey
...other commands...
ceph-osd -i X

?


On Wed, Feb 25, 2015 at 4:03 PM, Robert LeBlanc rob...@leblancnet.us
wrote:

 Step #6 in
 http://ceph.com/docs/master/install/manual-deployment/#long-form
 only sets up the file structure for the OSD; it doesn't start the long
 running process.

 On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  But I already issued that command (back in step 6).
 
  The interesting part is that ceph-disk activate apparently does it
  correctly. Even after reboot, the services start as they should.
 
  On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
  wrote:
 
  I think that your problem lies with systemd (even though you are using
  SysV syntax, systemd is really doing the work). Systemd does not like
  multiple arguments and I think this is why it is failing. There is
  supposed to be some work done to get systemd working ok, but I think
  it has the limitation of only working with a cluster named 'ceph'
  currently.
 
  What I did to get around the problem was to run the osd command
 manually:
 
  ceph-osd -i osd#
 
  Once I understood the under-the-hood stuff, I moved to ceph-disk and
  now because of the GPT partition IDs, udev automatically starts up the
  OSD process at boot/creation and moves to the appropriate CRUSH
  location (configurable in ceph.conf
  http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
  an example: crush location = host=test rack=rack3 row=row8
  datacenter=local region=na-west root=default). To restart an OSD
  process, I just kill the PID for the OSD then issue ceph-disk activate
  /dev/sdx1 to restart the OSD process. You probably could stop it with
  systemctl since I believe udev creates a resource for it (I should
  probably look into that now that this system will be going production
  soon).
 
  On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu
 wrote:
   I'm having a similar issue.
  
   I'm following http://ceph.com/docs/master/install/manual-deployment/
 to
   a T.
  
   I have OSDs on the same host deployed with the short-form and they
 work
   fine. I am trying to deploy some more via the long form (because I
 want
   them
   to appear in a different location in the crush map). Everything
 through
   step
   10 (i.e. ceph osd crush add {id-or-name} {weight}
   [{bucket-type}={bucket-name} ...] ) works just fine. When I go to step
   11
   (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
   /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
   mon.hobbit01
   osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12
   osd.6
   osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
   osd.15
   osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6 osd.11
   osd.5
   osd.4 osd.0)
  
  
  
   On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
   wrote:
  
   Also, did you successfully start your monitor(s), and define/create
 the
   OSDs within the Ceph cluster itself?
  
   There are several steps to creating a Ceph cluster manually.  I'm
   unsure
   if you have done the steps to actually create and register the OSDs
   with the
   cluster.
  
- Travis
  
   On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
   wrote:
  
   Check firewall rules and selinux. It sometimes is a pain in the ...
 :)
  
   On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
  
   I have tried to install ceph using ceph-deploy but sgdisk seems to
   have too many issues so I did a manual install. After mkfs.btrfs on
   the disks and journals and mounted them I then tried to start the
   osds
   which failed. The first error was:
   #/etc/init.d/ceph start osd.0
   /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
   /var/lib/ceph defines )
  
   I then manually added the osds to the conf file with the following
 as
   an example:
   [osd.0]
   osd_host = node01
  
   Now when I run the command :
   # /etc/init.d/ceph start osd.0
  
   There is no error or output from the command and in fact when I do
 a
   ceph -s no osds are listed as being up.
   Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
   no OSDs running.
   I also have done htop to see if any processes are running and none are
   shown.
  
   I had this working on SL6.5 with Firefly but Giant on Centos 7 has
   been nothing but a giant pain.
   ___
   ceph-users mailing list
   ceph-users@lists.ceph.com
   http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
  
  
   ___
   ceph-users mailing list
   ceph-users@lists.ceph.com
   http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
  
  
  
   ___
   ceph-users mailing list
   ceph-users@lists.ceph.com
   

Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Thomas Foster
I am using the long form and have it working.  The one thing that I saw was
to change from osd_host to just host. See if that works.
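
That is, something along these lines in ceph.conf (node01 is just the hostname
from the earlier example):

  [osd.0]
      host = node01
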
On Feb 25, 2015 5:44 PM, Kyle Hutson kylehut...@ksu.edu wrote:

 I just tried it, and that does indeed get the OSD to start.

 However, it doesn't add it to the appropriate place so it would survive a
 reboot. In my case, running 'service ceph status osd.16' still results in
 the same line I posted above.

 There's still something broken such that 'ceph-disk activate' works
 correctly, but using the long-form version does not.

 On Wed, Feb 25, 2015 at 4:03 PM, Robert LeBlanc rob...@leblancnet.us
 wrote:

 Step #6 in
 http://ceph.com/docs/master/install/manual-deployment/#long-form
 only sets up the file structure for the OSD; it doesn't start the long
 running process.

 On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  But I already issued that command (back in step 6).
 
  The interesting part is that ceph-disk activate apparently does it
  correctly. Even after reboot, the services start as they should.
 
  On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
  wrote:
 
  I think that your problem lies with systemd (even though you are using
  SysV syntax, systemd is really doing the work). Systemd does not like
  multiple arguments and I think this is why it is failing. There is
  supposed to be some work done to get systemd working ok, but I think
  it has the limitation of only working with a cluster named 'ceph'
  currently.
 
  What I did to get around the problem was to run the osd command
 manually:
 
  ceph-osd -i osd#
 
  Once I understood the under-the-hood stuff, I moved to ceph-disk and
  now because of the GPT partition IDs, udev automatically starts up the
  OSD process at boot/creation and moves to the appropriate CRUSH
  location (configurable in ceph.conf
  http://ceph.com/docs/master/rados/operations/crush-map/#crush-location
 ,
  an example: crush location = host=test rack=rack3 row=row8
  datacenter=local region=na-west root=default). To restart an OSD
  process, I just kill the PID for the OSD then issue ceph-disk activate
  /dev/sdx1 to restart the OSD process. You probably could stop it with
  systemctl since I believe udev creates a resource for it (I should
  probably look into that now that this system will be going production
  soon).
 
  On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu
 wrote:
   I'm having a similar issue.
  
   I'm following http://ceph.com/docs/master/install/manual-deployment/
 to
   a T.
  
   I have OSDs on the same host deployed with the short-form and they
 work
   fine. I am trying to deploy some more via the long form (because I
 want
   them
   to appear in a different location in the crush map). Everything
 through
   step
   10 (i.e. ceph osd crush add {id-or-name} {weight}
   [{bucket-type}={bucket-name} ...] ) works just fine. When I go to
 step
   11
   (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
   /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
   mon.hobbit01
   osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8
 osd.12
   osd.6
   osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
   osd.15
   osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6
 osd.11
   osd.5
   osd.4 osd.0)
  
  
  
   On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
   wrote:
  
   Also, did you successfully start your monitor(s), and define/create
 the
   OSDs within the Ceph cluster itself?
  
   There are several steps to creating a Ceph cluster manually.  I'm
   unsure
   if you have done the steps to actually create and register the OSDs
   with the
   cluster.
  
- Travis
  
   On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
   wrote:
  
   Check firewall rules and selinux. It sometimes is a pain in the
 ... :)
  
   On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
  
   I have tried to install ceph using ceph-deploy but sgdisk seems to
   have too many issues so I did a manual install. After mkfs.btrfs
 on
   the disks and journals and mounted them I then tried to start the
   osds
   which failed. The first error was:
   #/etc/init.d/ceph start osd.0
   /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
   /var/lib/ceph defines )
  
   I then manually added the osds to the conf file with the
 following as
   an example:
   [osd.0]
   osd_host = node01
  
   Now when I run the command :
   # /etc/init.d/ceph start osd.0
  
   There is no error or output from the command and in fact when I
 do a
   ceph -s no osds are listed as being up.
   Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
   no OSDs running.
   I also have done htop to see if any processes are running and none are
   shown.
  
   I had this working on SL6.5 with Firefly but Giant on Centos 7 has
   been nothing but a giant pain.
   

Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Kyle Hutson
Thank you Thomas. You at least made me look in the right spot. Their
long-form is showing what to do for a mon, not an osd.

At the bottom of step 11, instead of
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/sysvinit

It should read
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit

Once I did that 'service ceph status' correctly shows that I have that OSD
available and I can start or stop it from there.
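
For the osd.16 example above, with the default cluster name 'ceph', that works
out to roughly:

  sudo touch /var/lib/ceph/osd/ceph-16/sysvinit
  sudo service ceph start osd.16
  sudo service ceph status osd.16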

On Wed, Feb 25, 2015 at 4:55 PM, Thomas Foster thomas.foste...@gmail.com
wrote:

 I am using the long form and have it working.  The one thing that I saw
 was to change from osd_host to just host. See if that works.
 On Feb 25, 2015 5:44 PM, Kyle Hutson kylehut...@ksu.edu wrote:

 I just tried it, and that does indeed get the OSD to start.

 However, it doesn't add it to the appropriate place so it would survive a
 reboot. In my case, running 'service ceph status osd.16' still results in
 the same line I posted above.

 There's still something broken such that 'ceph-disk activate' works
 correctly, but using the long-form version does not.

 On Wed, Feb 25, 2015 at 4:03 PM, Robert LeBlanc rob...@leblancnet.us
 wrote:

 Step #6 in
 http://ceph.com/docs/master/install/manual-deployment/#long-form
 only sets up the file structure for the OSD; it doesn't start the long
 running process.

 On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  But I already issued that command (back in step 6).
 
  The interesting part is that ceph-disk activate apparently does it
  correctly. Even after reboot, the services start as they should.
 
  On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
  wrote:
 
  I think that your problem lies with systemd (even though you are using
  SysV syntax, systemd is really doing the work). Systemd does not like
  multiple arguments and I think this is why it is failing. There is
  supposed to be some work done to get systemd working ok, but I think
  it has the limitation of only working with a cluster named 'ceph'
  currently.
 
  What I did to get around the problem was to run the osd command
 manually:
 
  ceph-osd -i osd#
 
  Once I understood the under-the-hood stuff, I moved to ceph-disk and
  now because of the GPT partition IDs, udev automatically starts up the
  OSD process at boot/creation and moves to the appropriate CRUSH
  location (configurable in ceph.conf
 
 http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
  an example: crush location = host=test rack=rack3 row=row8
  datacenter=local region=na-west root=default). To restart an OSD
  process, I just kill the PID for the OSD then issue ceph-disk activate
  /dev/sdx1 to restart the OSD process. You probably could stop it with
  systemctl since I believe udev creates a resource for it (I should
  probably look into that now that this system will be going production
  soon).
 
  On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu
 wrote:
   I'm having a similar issue.
  
   I'm following
 http://ceph.com/docs/master/install/manual-deployment/ to
   a T.
  
   I have OSDs on the same host deployed with the short-form and they
 work
   fine. I am trying to deploy some more via the long form (because I
 want
   them
   to appear in a different location in the crush map). Everything
 through
   step
   10 (i.e. ceph osd crush add {id-or-name} {weight}
   [{bucket-type}={bucket-name} ...] ) works just fine. When I go to
 step
   11
   (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
   /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
   mon.hobbit01
   osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8
 osd.12
   osd.6
   osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
   osd.15
   osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6
 osd.11
   osd.5
   osd.4 osd.0)
  
  
  
   On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
   wrote:
  
   Also, did you successfully start your monitor(s), and
 define/create the
   OSDs within the Ceph cluster itself?
  
   There are several steps to creating a Ceph cluster manually.  I'm
   unsure
   if you have done the steps to actually create and register the OSDs
   with the
   cluster.
  
- Travis
  
   On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
   wrote:
  
   Check firewall rules and selinux. It sometimes is a pain in the
 ... :)
  
   On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
  
   I have tried to install ceph using ceph-deploy but sgdisk seems
 to
   have too many issues so I did a manual install. After mkfs.btrfs
 on
   the disks and journals and mounted them I then tried to start the
   osds
   which failed. The first error was:
   #/etc/init.d/ceph start osd.0
   /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
   /var/lib/ceph defines )
  
   I then manually added the osds to the conf file with the
 following as
   an example:
   [osd.0]
  

Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Thomas Foster
Here's the doc I used to get the info:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
On Feb 25, 2015 5:55 PM, Thomas Foster thomas.foste...@gmail.com wrote:

 I am using the long form and have it working.  The one thing that I saw
 was to change from osd_host to just host. See if that works.
 On Feb 25, 2015 5:44 PM, Kyle Hutson kylehut...@ksu.edu wrote:

 I just tried it, and that does indeed get the OSD to start.

 However, it doesn't add it to the appropriate place so it would survive a
 reboot. In my case, running 'service ceph status osd.16' still results in
 the same line I posted above.

 There's still something broken such that 'ceph-disk activate' works
 correctly, but using the long-form version does not.

 On Wed, Feb 25, 2015 at 4:03 PM, Robert LeBlanc rob...@leblancnet.us
 wrote:

 Step #6 in
 http://ceph.com/docs/master/install/manual-deployment/#long-form
 only sets up the file structure for the OSD; it doesn't start the long
 running process.

 On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
  But I already issued that command (back in step 6).
 
  The interesting part is that ceph-disk activate apparently does it
  correctly. Even after reboot, the services start as they should.
 
  On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc rob...@leblancnet.us
  wrote:
 
  I think that your problem lies with systemd (even though you are using
  SysV syntax, systemd is really doing the work). Systemd does not like
  multiple arguments and I think this is why it is failing. There is
  supposed to be some work done to get systemd working ok, but I think
  it has the limitation of only working with a cluster named 'ceph'
  currently.
 
  What I did to get around the problem was to run the osd command
 manually:
 
  ceph-osd -i osd#
 
  Once I understood the under-the-hood stuff, I moved to ceph-disk and
  now because of the GPT partition IDs, udev automatically starts up the
  OSD process at boot/creation and moves to the appropriate CRUSH
  location (configurable in ceph.conf
 
 http://ceph.com/docs/master/rados/operations/crush-map/#crush-location,
  an example: crush location = host=test rack=rack3 row=row8
  datacenter=local region=na-west root=default). To restart an OSD
  process, I just kill the PID for the OSD then issue ceph-disk activate
  /dev/sdx1 to restart the OSD process. You probably could stop it with
  systemctl since I believe udev creates a resource for it (I should
  probably look into that now that this system will be going production
  soon).
 
  On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson kylehut...@ksu.edu
 wrote:
   I'm having a similar issue.
  
   I'm following
 http://ceph.com/docs/master/install/manual-deployment/ to
   a T.
  
   I have OSDs on the same host deployed with the short-form and they
 work
   fine. I am trying to deploy some more via the long form (because I
 want
   them
   to appear in a different location in the crush map). Everything
 through
   step
   10 (i.e. ceph osd crush add {id-or-name} {weight}
   [{bucket-type}={bucket-name} ...] ) works just fine. When I go to
 step
   11
   (sudo /etc/init.d/ceph start osd.{osd-num}) I get:
   /etc/init.d/ceph: osd.16 not found (/etc/ceph/ceph.conf defines
   mon.hobbit01
   osd.7 osd.15 osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8
 osd.12
   osd.6
   osd.11 osd.5 osd.4 osd.0 , /var/lib/ceph defines mon.hobbit01 osd.7
   osd.15
   osd.10 osd.9 osd.1 osd.14 osd.2 osd.3 osd.13 osd.8 osd.12 osd.6
 osd.11
   osd.5
   osd.4 osd.0)
  
  
  
   On Wed, Feb 25, 2015 at 11:55 AM, Travis Rhoden trho...@gmail.com
   wrote:
  
   Also, did you successfully start your monitor(s), and
 define/create the
   OSDs within the Ceph cluster itself?
  
   There are several steps to creating a Ceph cluster manually.  I'm
   unsure
   if you have done the steps to actually create and register the OSDs
   with the
   cluster.
  
- Travis
  
   On Wed, Feb 25, 2015 at 9:49 AM, Leszek Master keks...@gmail.com
   wrote:
  
   Check firewall rules and selinux. It sometimes is a pain in the
 ... :)
  
   On 25 Feb 2015 01:46, Barclay Jameson almightybe...@gmail.com wrote:
  
   I have tried to install ceph using ceph-deploy but sgdisk seems
 to
   have too many issues so I did a manual install. After mkfs.btrfs
 on
   the disks and journals and mounted them I then tried to start the
   osds
   which failed. The first error was:
   #/etc/init.d/ceph start osd.0
   /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
   /var/lib/ceph defines )
  
   I then manually added the osds to the conf file with the
 following as
   an example:
   [osd.0]
   osd_host = node01
  
   Now when I run the command :
   # /etc/init.d/ceph start osd.0
  
   There is no error or output from the command and in fact when I
 do a
   ceph -s no osds are listed as being up.
   Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
   no OSDs running.
   I also have done 

Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Brad Hubbard

On 02/26/2015 09:05 AM, Kyle Hutson wrote:

Thank you Thomas. You at least made me look in the right spot. Their long-form 
is showing what to do for a mon, not an osd.

At the bottom of step 11, instead of
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/sysvinit

It should read
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit

Once I did that 'service ceph status' correctly shows that I have that OSD 
available and I can start or stop it from there.



Could you open a bug for this?

Cheers,
Brad

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 OSD silently fail to start

2015-02-25 Thread Brad Hubbard

On 02/26/2015 03:24 PM, Kyle Hutson wrote:

Just did it. Thanks for suggesting it.


No, definitely thank you. Much appreciated.



On Wed, Feb 25, 2015 at 5:59 PM, Brad Hubbard bhubb...@redhat.com wrote:

On 02/26/2015 09:05 AM, Kyle Hutson wrote:

Thank you Thomas. You at least made me look in the right spot. Their 
long-form is showing what to do for a mon, not an osd.

At the bottom of step 11, instead of
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/sysvinit

It should read
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit

Once I did that 'service ceph status' correctly shows that I have that 
OSD available and I can start or stop it from there.


Could you open a bug for this?

Cheers,
Brad




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Centos 7 OSD silently fail to start

2015-02-24 Thread Barclay Jameson
I have tried to install ceph using ceph-deploy but sgdisk seems to
have too many issues, so I did a manual install. After running mkfs.btrfs on
the disks and journals and mounting them, I then tried to start the OSDs,
which failed. The first error was:
#/etc/init.d/ceph start osd.0
/etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines ,
/var/lib/ceph defines )

I then manually added the osds to the conf file with the following as
an example:
[osd.0]
osd_host = node01

Now when I run the command :
# /etc/init.d/ceph start osd.0

There is no error or output from the command, and in fact when I do a
ceph -s no OSDs are listed as being up.
Doing a ps aux | grep -i ceph or ps aux | grep -i osd shows there are
no OSDs running.
I also have done htop to see if any processes are running and none are shown.

I had this working on SL6.5 with Firefly but Giant on Centos 7 has
been nothing but a giant pain.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com