Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Mark Kirkwood

Yep - better to be overly cautious about that :-)

On 06/11/13 14:40, Mark Nelson wrote:
We had a discussion about all of this a year ago (when package purge 
was removing mds data and thus destroying clusters).  I think we have 
to be really careful here as it's rather permanent if you make a bad 
choice. I'd much rather that users be annoyed with me that they have 
to go manually clean up old data vs users who can't get their data 
back without herculean efforts.


Mark

On 11/05/2013 07:19 PM, Mark Kirkwood wrote:

... forgot to add: maybe 'uninstall' should be a target for ceph-deploy
that removes just the actual software daemons...

On 06/11/13 14:16, Mark Kirkwood wrote:

I think the purge of several data-containing packages will ask if you want
to destroy that data too (MySQL comes to mind - it asks if you want to remove
the databases under /var/lib/mysql). So this is possibly reasonable
behaviour.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Mark Nelson
We had a discussion about all of this a year ago (when package purge was 
removing mds data and thus destroying clusters).  I think we have to be 
really careful here as it's rather permanent if you make a bad choice. 
I'd much rather that users be annoyed with me that they have to go 
manually clean up old data vs users who can't get their data back 
without herculean efforts.


Mark

On 11/05/2013 07:19 PM, Mark Kirkwood wrote:

... forgot to add: maybe 'uninstall' should be a target for ceph-deploy
that removes just the actual software daemons...

On 06/11/13 14:16, Mark Kirkwood wrote:

I think the purge of several data-containing packages will ask if you want
to destroy that data too (MySQL comes to mind - it asks if you want to remove
the databases under /var/lib/mysql). So this is possibly reasonable
behaviour.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Mark Kirkwood
... forgot to add: maybe 'uninstall' should be a target for ceph-deploy
that removes just the actual software daemons...


On 06/11/13 14:16, Mark Kirkwood wrote:
I think the purge of several data-containing packages will ask if you want
to destroy that data too (MySQL comes to mind - it asks if you want to remove
the databases under /var/lib/mysql). So this is possibly reasonable
behaviour.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Mark Kirkwood
I think the purge of several data-containing packages will ask if you want
to destroy that data too (MySQL comes to mind - it asks if you want to remove
the databases under /var/lib/mysql). So this is possibly reasonable
behaviour.
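For a concrete example of that pattern (the exact behaviour depends on the distribution's packaging, so treat this as an illustration rather than a guarantee):

    # on Debian/Ubuntu, purging mysql-server typically triggers a debconf prompt
    # asking whether the databases under /var/lib/mysql should also be removed
    sudo apt-get purge mysql-server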


Cheers

Mark

On 06/11/13 13:25, Dan Mick wrote:
Yeah; purge does remove packages and *package config files*; however, 
Ceph data is in a different class, hence the existence of purgedata.


A user might be furious if he did what he thought was "remove the 
packages" and the process also creamed his terabytes of stored data he 
was in the process of moving to a different OSD server, manually 
recovering, or whatever.


On 11/05/2013 03:03 PM, Neil Levine wrote:

In the Debian world, purge does both a removal of the package and a
clean-up of the files, so it might be good to keep semantic consistency here?


On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil <s...@newdream.net> wrote:

Purgedata is only meant to be run *after* the package is
uninstalled.  We should make it do a check to enforce that.
Otherwise we run into these problems...



Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:

On 05/11/13 06:37, Alfredo Deza wrote:

On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
<joseph.r.gru...@intel.com> wrote:

Could these problems be caused by running a purgedata
but not a purge?


It could be, I am not clear on what the expectation was for
just doing
purgedata without a purge.

Purgedata removes /etc/ceph but without the purge ceph
is still installed,
then ceph-deploy install detects ceph as already
installed and does not
(re)create /etc/ceph?


ceph-deploy will not create directories for you, that is
left to the
ceph install process, and just to be clear, the
latest ceph-deploy version (1.3) does not remove /etc/ceph,
just the contents.


Yeah, however purgedata is removing /var/lib/ceph, which means
after
running purgedata you need to either run purge then install or
manually
recreate the various working directories under /var/lib/ceph 
before

attempting any mon, mds or osd creation.

Maybe purgedata should actually leave those top level dirs under
/var/lib/ceph?

regards

Mark




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Dan Mick
Yeah; purge does remove packages and *package config files*; however, 
Ceph data is in a different class, hence the existence of purgedata.


A user might be furious if he did what he thought was "remove the 
packages" and the process also creamed his terabytes of stored data he 
was in the process of moving to a different OSD server, manually 
recovering, or whatever.
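To make the split concrete, a rough sketch of the two sub-commands as they are used elsewhere in this thread (the hostname is a placeholder):

    # remove the ceph packages (and their package config files) from a node,
    # leaving the cluster data in place
    ceph-deploy purge node1

    # remove the cluster data under /var/lib/ceph and the contents of /etc/ceph;
    # intended to be run only after the packages are gone
    ceph-deploy purgedata node1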


On 11/05/2013 03:03 PM, Neil Levine wrote:

In the Debian world, purge does both a removal of the package and a
clean-up of the files, so it might be good to keep semantic consistency here?


On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil <s...@newdream.net> wrote:

Purgedata is only meant to be run *after* the package is
uninstalled.  We should make it do a check to enforce that.
Otherwise we run into these problems...



Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:

On 05/11/13 06:37, Alfredo Deza wrote:

On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
<joseph.r.gru...@intel.com> wrote:

Could these problems be caused by running a purgedata
but not a purge?


It could be, I am not clear on what the expectation was for
just doing
purgedata without a purge.

Purgedata removes /etc/ceph but without the purge ceph
is still installed,
then ceph-deploy install detects ceph as already
installed and does not
(re)create /etc/ceph?


ceph-deploy will not create directories for you, that is
left to the
ceph install process, and just to be clear, the
latest ceph-deploy version (1.3) does not remove /etc/ceph,
just the contents.


Yeah, however purgedata is removing /var/lib/ceph, which means
after
running purgedata you need to either run purge then install or
manually
recreate the various working directories under /var/lib/ceph before
attempting any mon, mds or osd creation.

Maybe purgedata should actually leave those top level dirs under
/var/lib/ceph?

regards

Mark




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Neil Levine
In the Debian world, purge does both a removal of the package and a clean-up
of the files, so it might be good to keep semantic consistency here?
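For reference, the Debian semantics being referred to, sketched with apt-get (the exact behaviour depends on each package's maintainer scripts):

    # remove the binaries but keep the package's configuration files
    apt-get remove ceph

    # remove the binaries *and* the package's registered configuration files
    apt-get purge ceph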


On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil  wrote:

> Purgedata is only meant to be run *after* the package is uninstalled.  We
> should make it do a check to enforce that.  Otherwise we run into these
> problems...
>
>
> Mark Kirkwood  wrote:
>>
>> On 05/11/13 06:37, Alfredo Deza wrote:
>>
>>>  On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
>>>   wrote:
>>>
  Could these problems be caused by running a purgedata but not a purge?

>>>
>>>  It could be, I am not clear on what the expectation was for just doing
>>>  purgedata without a purge.
>>>
>>>  Purgedata removes /etc/ceph but without the purge ceph is still installed,
  then ceph-deploy install detects ceph as already installed and does not
  (re)create /etc/ceph?

>>>
>>>  ceph-deploy will not create directories for you, that is
>>> left to the
>>>  ceph install process, and just to be clear, the
>>>  latest ceph-deploy version (1.3) does not remove /etc/ceph, just the
>>> contents.
>>>
>>
>> Yeah, however purgedata is removing /var/lib/ceph, which means after
>> running purgedata you need to either run purge then install or manually
>> recreate the various working directories under /var/lib/ceph before
>> attempting any mon, mds or osd creation.
>>
>> Maybe purgedata should actually leave those top level dirs under
>> /var/lib/ceph?
>>
>> regards
>>
>> Mark
>> --
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-04 Thread Sage Weil
Purgedata is only meant to be run *after* the package is uninstalled.  We 
should make it do a check to enforce that.  Otherwise we run into these 
problems...
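A rough sketch of what such a check could look like on each remote host (ceph-deploy 1.3 already probes with 'which ceph' and prompts before continuing, per the logs later in this thread; the rm invocations mirror what purgedata actually runs):

    #!/bin/sh
    # refuse to purge data while the ceph package is still installed
    if which ceph >/dev/null 2>&1; then
        echo "ceph is still installed; run 'ceph-deploy purge <host>' first" >&2
        exit 1
    fi
    rm -rf --one-file-system -- /var/lib/ceph
    rm -rf --one-file-system -- /etc/ceph/*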

Mark Kirkwood  wrote:
>On 05/11/13 06:37, Alfredo Deza wrote:
>> On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
>>  wrote:
>>> Could these problems be caused by running a purgedata but not a
>purge?
>>
>> It could be, I am not clear on what the expectation was for just
>doing
>> purgedata without a purge.
>>
>>> Purgedata removes /etc/ceph but without the purge ceph is still
>installed,
>>> then ceph-deploy install detects ceph as already installed and does
>not
>>> (re)create /etc/ceph?
>>
>> ceph-deploy will not create directories for you, that is left to the
>> ceph install process, and just to be clear, the
>> latest ceph-deploy version (1.3) does not remove /etc/ceph, just the
>contents.
>
>Yeah, however purgedata is removing /var/lib/ceph, which means after 
>running purgedata you need to either run purge then install or manually
>
>recreate the various working directories under /var/lib/ceph before 
>attempting any mon, mds or osd creation.
>
>Maybe purgedata should actually leave those top level dirs under 
>/var/lib/ceph?
>
>regards
>
>Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-04 Thread Mark Kirkwood

On 05/11/13 06:37, Alfredo Deza wrote:

On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
 wrote:

Could these problems be caused by running a purgedata but not a purge?


It could be, I am not clear on what the expectation was for just doing
purgedata without a purge.


Purgedata removes /etc/ceph but without the purge ceph is still installed,
then ceph-deploy install detects ceph as already installed and does not
(re)create /etc/ceph?


ceph-deploy will not create directories for you, that is left to the
ceph install process, and just to be clear, the
latest ceph-deploy version (1.3) does not remove /etc/ceph, just the contents.


Yeah, however purgedata is removing /var/lib/ceph, which means after 
running purgedata you need to either run purge then install or manually 
recreate the various working directories under /var/lib/ceph before 
attempting any mon, mds or osd creation.
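A minimal sketch of recreating those working directories by hand, assuming the default 'ceph' cluster layout (the exact set of subdirectories may vary by release):

    sudo mkdir -p /etc/ceph
    sudo mkdir -p /var/lib/ceph/{tmp,mon,mds,osd,bootstrap-osd,bootstrap-mds}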


Maybe purgedata should actually leave those top level dirs under 
/var/lib/ceph?


regards

Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-04 Thread Alfredo Deza
On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
 wrote:
> Could these problems be caused by running a purgedata but not a purge?

It could be, I am not clear on what the expectation was for just doing
purgedata without a purge.

> Purgedata removes /etc/ceph but without the purge ceph is still installed,
> then ceph-deploy install detects ceph as already installed and does not
> (re)create /etc/ceph?

ceph-deploy will not create directories for you, that is left to the
ceph install process, and just to be clear, the
latest ceph-deploy version (1.3) does not remove /etc/ceph, just the contents.
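A quick way to confirm what purgedata has left behind on a node before re-running any ceph-deploy steps (based on the behaviour described above: /var/lib/ceph removed entirely, /etc/ceph kept but emptied):

    ls -ld /var/lib/ceph /etc/ceph
    ls -la /etc/ceph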



>
>
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64
> already installed and latest version
>
>
>
> I wonder if you ran a purge and a purgedata if you might have better luck.
> That always works for me.
>
>
>
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Trivedi, Narendra
> Sent: Saturday, November 02, 2013 10:42 PM
> To: Sage Weil
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Very frustrated with Ceph!
>
>
>
> Thanks a lot Sage for your help :-).
>
>
>
> I started from scratch: See the commands and output below:
>
>
>
> 1) First of all, all the nodes did have /etc/ceph, but in order to start from scratch I
> removed /etc/ceph from each node.
>
>
>
> 2) I issued a ceph-deploy purgedata to each node from the admin node. This
> threw an error towards the end. I am assuming that, since I manually removed /etc/ceph
> from the nodes, the rm command fails:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy purgedata
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
>
> [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy purgedata
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
>
> [ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
>
> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
> ceph-node1-mon-centos-6-4
>
> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote
> host
>
> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
>
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo which ceph
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host:
> ceph-node2-osd0-centos-6-4
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote
> host
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
>
> [ceph-node2-osd0-centos-6-4][INFO  ] Running command: sudo which ceph
>
> [ceph-node3-osd1-centos-6-4][DEBUG ] connected to host:
> ceph-node3-osd1-centos-6-4
>
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote
> host
>
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
>
> [ceph-node3-osd1-centos-6-4][INFO  ] Running command: sudo which ceph
>
> ceph is still installed on:  ['ceph-node1-mon-centos-6-4',
> 'ceph-node2-osd0-centos-6-4', 'ceph-node3-osd1-centos-6-4']
>
> Continue (y/n)y
>
> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
> ceph-node1-mon-centos-6-4
>
> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote
> host
>
> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
>
> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
>
> [ceph-node1-mon-centos-6-4][INFO  ] purging data on
> ceph-node1-mon-centos-6-4
>
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rm -rf
> --one-file-system -- /var/lib/ceph
>
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rm -rf
> --one-file-system -- /etc/ceph/*
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host:
> ceph-node2-osd0-centos-6-4
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote
> host
>
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
>
> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
>
> [ceph-node2-osd0-centos-6-4][INFO  ] purging data on
> ceph-node2-osd0-centos-6-4
>
> [ceph-node2-osd0-centos-6-4][INFO  ] Running command: sudo rm -rf
> --one-file-system -- /var/lib/ceph
>
> [ceph-node2-osd0-centos-6-4][INFO  ] Running command: sudo rm -rf
> --one-file-system -- /etc/ceph/*
>
> Exception in thread Thread-1 (most likely raised during interpreter
> shutdown):
>
> Traceback (most recent call last):
>
>   File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
>
>   File "", line 89, in run
>
> :  1

Re: [ceph-users] Very frustrated with Ceph!

2013-11-04 Thread Alfredo Deza
On Mon, Nov 4, 2013 at 9:55 AM, Trivedi, Narendra
 wrote:
> Are you saying that despite the OSError message (I am pasting again below from
> my posting yesterday) the OSDs are successfully prepared?
>
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> [ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
>

Not these; they do look like genuine errors to me. I meant remote
commands (these appear to be local, as you see `ceph_deploy` as the
host).

For example `wget` is a known offender (from your output):

[ceph-node1-mon-centos-6-4][ERROR ] Proxy request sent, awaiting
response... 200 OK
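A quick local illustration of why those lines get the ERROR prefix: wget writes its progress and status output to stderr even when the download succeeds, and ceph-deploy relays anything on stderr at ERROR level, as explained below.

    # the '200 OK' and progress lines end up in wget.stderr despite a successful download
    wget -O /dev/null http://ceph.com/ 2> wget.stderr
    cat wget.stderr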




> Thanks!
> Narendra
> -Original Message-
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Saturday, November 02, 2013 12:03 PM
> To: Sage Weil
> Cc: Trivedi, Narendra; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Very frustrated with Ceph!
>
> On Fri, Nov 1, 2013 at 11:12 PM, Sage Weil  wrote:
>> On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
>>>
>>> Hi Sage,
>>>
>>> I believe I issued a "ceph-deploy install..." from the admin node as
>>> per the documentation and that was almost ok as per the output of the
>>> command below, except sometimes there's an error followed by an 'OK'
>>> message (see the highlighted item in red below). I eventually ran
>>> into some permission issues, but it seems things went okay:
>
> Maybe what can be confusing here is that ceph-deploy interprets stderr as 
> ERROR logging level. Unfortunately, some tools will output normal informative 
> data to stderr when they are clearly not errors.
>
> stdout, on the other hand, is interpreted by ceph-deploy as DEBUG level, so 
> you will see logging at that level too.
>
> There is no way for ceph-deploy to tell whether you are actually seeing errors
> because the tool is in fact sending error messages, or because it decided to
> use stderr for information that should go to stdout.
>
>
>
>>
>> Hmm, the below output makes it look like it was successfully installed
>> on
>> node1 node2 and node3.  Can you confirm that /etc/ceph exists on all
>> three of those hosts?
>>
>> Oh, looking back at your original message, it looks like you are
>> trying to create OSDs on /tmp/osd*.  I would create directories like
>> /ceph/osd0, /ceph/osd1, or similar.  I believe you need to create the
>> directories beforehand, too.  (In a normal deployment, you are either
>> feeding ceph raw disks (/dev/XXX) or an existing mount point on a
>> dedicated disk you already configured and mounted.)
>>
>> sage
>>
>>
>>  >
>>>
>>>
>>> [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy install
>>> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
>>> ceph-node3-osd1-centos-6-4
>>>
>>> [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy install
>>> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
>>> ceph-node3-osd1-centos-6-4
>>>
>>> [ceph_deploy.install][DEBUG ] Installing stable version dumpling on
>>> cluster ceph hosts ceph-node1-mon-centos-6-4
>>> ceph-node2-osd0-centos-6-4
>>> ceph-node3-osd1-centos-6-4
>>>
>>> [ceph_deploy.install][DEBUG ] Detecting platform for host
>>> ceph-node1-mon-centos-6-4 ...
>>>
>>> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
>>> ceph-node1-mon-centos-6-4
>>>
>>> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from
>>> remote host
>>>
>>> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
>>>
>>> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
>>>
>>> [ceph-node1-mon-centos-6-4][INFO  ] installing ceph on
>>> ceph-node1-mon-centos-6-4
>>>
>>> [ceph-node1-mon-centos-6-4][INFO  ] adding EPEL repository
>>>
>>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo wget
>>> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch
>>> .rpm
>>>
>>> [ceph-node1-mon-centos-6-4][ERROR ] --2013-11-01 19:51:20--
>>> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch
>>> .rpm
>>>
>>> [ceph-node1-mon-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080...
>>> connected.
>>>
>>> [ceph-node1-mon-centos-6-4][ERROR ] Proxy request sent, awaiting response...
>>> 200 OK
>>>
>>> [ceph-node1-mon-centos-6-4][ERROR ] Length: 14540 (14K)
>>> [application/x-rpm]
>>>
>>> [

Re: [ceph-users] Very frustrated with Ceph!

2013-11-04 Thread Trivedi, Narendra
Are you saying that despite the OSError message (I am pasting again below from
my posting yesterday) the OSDs are successfully prepared?  

[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

Thanks!
Narendra 
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com] 
Sent: Saturday, November 02, 2013 12:03 PM
To: Sage Weil
Cc: Trivedi, Narendra; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very frustrated with Ceph!

On Fri, Nov 1, 2013 at 11:12 PM, Sage Weil  wrote:
> On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
>>
>> Hi Sage,
>>
>> I believe I issued a "ceph-deploy install..." from the admin node as 
>> per the documentation and that was almost ok as per the output of the 
>> command below, except sometimes there's an error followed by an 'OK'
>> message (see the highlighted item in red below). I eventually ran
>> into some permission issues, but it seems things went okay:

Maybe what can be confusing here is that ceph-deploy interprets stderr as ERROR 
logging level. Unfortunately, some tools will output normal informative data to 
stderr when they are clearly not errors.

stdout, on the other hand, is interpreted by ceph-deploy as DEBUG level, so you 
will see logging at that level too.

There is no way for ceph-deploy to tell whether you are actually seeing errors
because the tool is in fact sending error messages, or because it decided to
use stderr for information that should go to stdout.



>
> Hmm, the below output makes it look like it was successfully installed 
> on
> node1 node2 and node3.  Can you confirm that /etc/ceph exists on all 
> three of those hosts?
>
> Oh, looking back at your original message, it looks like you are 
> trying to create OSDs on /tmp/osd*.  I would create directories like 
> /ceph/osd0, /ceph/osd1, or similar.  I believe you need to create the
> directories beforehand, too.  (In a normal deployment, you are either 
> feeding ceph raw disks (/dev/XXX) or an existing mount point on a 
> dedicated disk you already configured and mounted.)
>
> sage
>
>
>  >
>>
>>
>> [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy install
>> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
>> ceph-node3-osd1-centos-6-4
>>
>> [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy install
>> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
>> ceph-node3-osd1-centos-6-4
>>
>> [ceph_deploy.install][DEBUG ] Installing stable version dumpling on 
>> cluster ceph hosts ceph-node1-mon-centos-6-4 
>> ceph-node2-osd0-centos-6-4
>> ceph-node3-osd1-centos-6-4
>>
>> [ceph_deploy.install][DEBUG ] Detecting platform for host
>> ceph-node1-mon-centos-6-4 ...
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
>> ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from 
>> remote host
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
>>
>> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] installing ceph on
>> ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] adding EPEL repository
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo wget 
>> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch
>> .rpm
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] --2013-11-01 19:51:20-- 
>> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch
>> .rpm
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080...
>> connected.
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] Proxy request sent, awaiting response...
>> 200 OK
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] Length: 14540 (14K) 
>> [application/x-rpm]
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] Saving to:
>> `epel-release-6-8.noarch.rpm.2'
>>
>> [ceph-node1-mon-centos-6-4][ERROR ]
>>
>> [ceph-node1-mon-centos-6-4][ERROR ]  0K ..
>>    100% 4.79M=0.003s
>>
>> [ceph-node1-mon-centos-6-4][ERROR ]
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] Last-modified header invalid -- 
>> time-stamp ignored.
>>
>> [ceph-node1-mon-centos-6-4][ERROR ] 2013-11-01 19:52:20 (4.79 MB/s) - 
>> `epel-release-6-8.noarch.rpm.2' saved [14540/14540]
>>
>> [ceph-node1-mon-centos-6-4][ERROR ]
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rpm -Uvh 
>> --replacepkgs epel-release-6

Re: [ceph-users] Very frustrated with Ceph!

2013-11-02 Thread Alfredo Deza
r/bin/ceph-deploy mon create
>> ceph-node1-mon-centos-6-4
>>
>> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts
>> ceph-node1-mon-centos-6-4
>>
>> [ceph_deploy.mon][DEBUG ] detecting platform for host
>> ceph-node1-mon-centos-6-4 ...
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
>> ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote
>> host
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
>>
>> [ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] determining if provided host has same
>> hostname in remote
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] get remote short hostname
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] deploying mon to
>> ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] get remote short hostname
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] remote hostname:
>> ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] write cluster configuration to
>> /etc/ceph/{cluster}.conf
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] create the mon path if it does not exist
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] checking for done path:
>> /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4/done
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] done path does not exist:
>> /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4/done
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] creating tmp path: /var/lib/ceph/tmp
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] creating keyring file:
>> /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] create the monitor keyring file
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph-mon --cluster
>> ceph --mkfs -i ceph-node1-mon-centos-6-4 --keyring
>> /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: mon.noname-a 10.12.0.70:6789/0
>> is local, renaming to mon.ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: set fsid to
>> c732fc5f-a656-401a-a8e5-4bfed1f89d20
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: created monfs at
>> /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4 for
>> mon.ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] unlinking keyring file
>> /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] create a done file to avoid re-doing the
>> mon deployment
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] create the init path if it does not
>> exist
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] locating the `service` executable...
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo /sbin/service ceph
>> -c /etc/ceph/ceph.conf start mon.ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] === mon.ceph-node1-mon-centos-6-4 ===
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] Starting Ceph
>> mon.ceph-node1-mon-centos-6-4 on ceph-node1-mon-centos-6-4...
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] Starting ceph-create-keys on
>> ceph-node1-mon-centos-6-4...
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph
>> --cluster=ceph --admin-daemon
>> /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status
>>
>> [ceph-node1-mon-centos-6-4][DEBUG 
>> ]***
>> *
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] status for monitor:
>> mon.ceph-node1-mon-centos-6-4
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] {
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "election_epoch": 2,
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "extra_probe_peers": [],
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "monmap": {
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "created": "0.00",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "epoch": 1,
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "fsid":
>> "c732fc5f-a656-401a-a8e5-4bfed1f89d20",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "modified": "0.00",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "mons": [
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   {
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "addr": "10.12.0.70:6789/0",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "name":
>> "ceph-node1-mon-centos-6-4",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] "rank": 0
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   }
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] ]
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   },
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "name": "ceph-node1-mon-centos-6-4",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "outside_quorum": [],
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "quorum": [
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] 0
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   ],
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "rank": 0,
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "state": "leader",
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ]   "sync_provider": []
>>
>> [ceph-node1-mon-centos-6-4][DEBUG ] }
>>
>> [ceph-node1-mon-centos-6-4][DEBUG 
>> ]***
>> *
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] monitor: mon.ceph-node1-mon-centos-6-4
>> is running
>>
>> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph
>> --cluster=ceph --admin-daemon
>> /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status
>>
>>
>>
>> Thanks!
>>
>> Narendra
>>
>> -Original Message-
>> From: Sage Weil [mailto:s...@inktank.com]
>> Sent: Friday, November 01, 2013 8:37 PM
>> To: Trivedi, Narendra
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Very frustrated with Ceph!
>>
>>
>>
>> On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
>>
>> > [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet,
>>
>> > creating one
>>
>> >
>>
>> > [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
>>
>> >
>>
>> > [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
>>
>>
>>
>> Did you do 'ceph-deploy install ...' on these hosts?
>>
>>
>>
>> sage
>>
>>
>> This message contains information which may be confidential and/or
>> privileged. Unless you are the intended recipient (or authorized to receive
>> for the intended recipient), you may not read, use, copy or disclose to
>> anyone the message or any information contained in the message. If you have
>> received the message in error, please advise the sender by reply e-mail and
>> delete the message and any attachment(s) thereto without retaining any
>> copies.
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
ph-ceph-node1-mon-centos-6-4.mon.keyring
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] create the monitor keyring file
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph-mon --cluster
> ceph --mkfs -i ceph-node1-mon-centos-6-4 --keyring
> /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: mon.noname-a 10.12.0.70:6789/0
> is local, renaming to mon.ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: set fsid to
> c732fc5f-a656-401a-a8e5-4bfed1f89d20
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-mon: created monfs at
> /var/lib/ceph/mon/ceph-ceph-node1-mon-centos-6-4 for
> mon.ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][INFO  ] unlinking keyring file
> /var/lib/ceph/tmp/ceph-ceph-node1-mon-centos-6-4.mon.keyring
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] create a done file to avoid re-doing the
> mon deployment
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] create the init path if it does not
> exist
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] locating the `service` executable...
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo /sbin/service ceph
> -c /etc/ceph/ceph.conf start mon.ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] === mon.ceph-node1-mon-centos-6-4 ===
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Starting Ceph
> mon.ceph-node1-mon-centos-6-4 on ceph-node1-mon-centos-6-4...
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Starting ceph-create-keys on
> ceph-node1-mon-centos-6-4...
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph
> --cluster=ceph --admin-daemon
> /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status
> 
> [ceph-node1-mon-centos-6-4][DEBUG 
> ]***
> *
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] status for monitor:
> mon.ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] {
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "election_epoch": 2,
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "extra_probe_peers": [],
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "monmap": {
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "created": "0.00",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "epoch": 1,
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "fsid":
> "c732fc5f-a656-401a-a8e5-4bfed1f89d20",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "modified": "0.00",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]     "mons": [
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   {
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]     "addr": "10.12.0.70:6789/0",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "name":
> "ceph-node1-mon-centos-6-4",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] "rank": 0
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   }
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ]
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   },
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "name": "ceph-node1-mon-centos-6-4",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "outside_quorum": [],
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "quorum": [
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] 0
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   ],
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "rank": 0,
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "state": "leader",
> 
> [ceph-node1-mon-centos-6-4][DEBUG ]   "sync_provider": []
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] }
> 
> [ceph-node1-mon-centos-6-4][DEBUG 
> ]***
> *
> 
> [ceph-node1-mon-centos-6-4][INFO  ] monitor: mon.ceph-node1-mon-centos-6-4
> is running
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph
> --cluster=ceph --admin-daemon
> /var/run/ceph/ceph-mon.ceph-node1-mon-centos-6-4.asok mon_status
> 
>  
> 
> Thanks!
> 
> Narendra
> 
> -Original Message-
> From: Sage Weil [mailto:s...@inktank.com]
> Sent: Friday, November 01, 2013 8:37 PM
> To: Trivedi, Narendra
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Very frustrated with Ceph!
> 
>  
> 
> On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> 
> > [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet,
> 
> > creating one
> 
> >
> 
> > [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> 
> >
> 
> > [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> 
>  
> 
> Did you do 'ceph-deploy install ...' on these hosts?
> 
>  
> 
> sage
> 
> 
> This message contains information which may be confidential and/or
> privileged. Unless you are the intended recipient (or authorized to receive
> for the intended recipient), you may not read, use, copy or disclose to
> anyone the message or any information contained in the message. If you have
> received the message in error, please advise the sender by reply e-mail and
> delete the message and any attachment(s) thereto without retaining any
> copies.
> 
> ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
ploy osd prepare
> ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
> 
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host:
> ceph-node2-osd0-centos-6-4
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote
> host
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
> 
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> 
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to
> /etc/ceph/{cluster}.conf
> 
> [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet,
> creating one
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> 
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> 
> [ceph-node3-osd1-centos-6-4][DEBUG ] connected to host:
> ceph-node3-osd1-centos-6-4
> 
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote
> host
> 
> [ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
> 
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
> 
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
> 
> [ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to
> /etc/ceph/{cluster}.conf
> 
> [ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet,
> creating one
> 
> [ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
> 
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> 
> [ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
> 
>  
> 
> What are OSError and GenericError in this case?
> 
>  
> 
> Thanks a lot in advance!
> 
> Narendra
> 
>  
> 
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
> Sent: Friday, November 01, 2013 8:28 PM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Very frustrated with Ceph!
> 
>  
> 
> Hey Narenda,
> 
>  
> 
> Sorry to hear you've been having trouble.  Do you mind if I ask what took
> the 3 hours of time?  We definitely don't want the install process to take
> that long.  Unfortunately I'm not familiar with the error you are seeing,
> but the folks that work on ceph-deploy may have some advice.
> 
>   Are you using the newest version of ceph-deploy?
> 
>  
> 
> Thanks,
> 
> Mark
> 
>  
> 
> On 11/01/2013 08:17 PM, Trivedi, Narendra wrote:
> 
> > I created new VMs and re-installed everything from scratch. Took me 3
> 
> > hours. Executed all the steps religiously all over again in the links:
> 
> > 
> 
> > http://ceph.com/docs/master/start/quick-start-preflight/
> 
> > 
> 
> > http://ceph.com/docs/master/start/quick-ceph-deploy/
> 
> > 
> 
> > When the time came to prepare OSDs after 4 long hours, I get the same
> 
> > weird error:
> 
> > 
> 
> > 
> 
> > [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare
> 
> > ceph-node2-osd0-centos-6-4:/tmp/osd0
> 
> > ceph-node3-osd1-centos-6-4:/tmp/osd1
> 
> > 
> 
> > [*ceph_deploy.cli*][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd
> 
> > prepare ceph-node2-osd0-centos-6-4:/tmp/osd0
> 
> > ceph-node3-osd1-centos-6-4:/tmp/osd1
> 
> > 
> 
> > [*ceph_deploy.osd*][DEBUG ] Preparing cluster ceph disks
> 
> > ceph-node2-osd0-centos-6-4:/tmp/osd0:
> ceph-node3-osd1-centos-6-4:/tmp/osd1:
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][DEBUG ] connected to host:
> 
> > ceph-node2-osd0-centos-6-4
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][DEBUG ] detect platform information
> 
> > from remote host
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][DEBUG ] detect machine type
> 
> > 
> 
> > [*ceph_deploy.osd*][INFO  ] Distro info: CentOS 6.4 Final
> 
> > 
> 
> > [*ceph_deploy.osd*][DEBUG ] Deploying osd to
> 
> > ceph-node2-osd0-centos-6-4
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][DEBUG ] write cluster configuration to
> 
> > /etc/ceph/{cluster}.conf
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][WARNIN] osd keyring does not exist yet,
> 
> > creating one
> 
> > 
> 
> > [*ceph-node2-osd0-centos-6-4*][DEBUG ] create a keyring file
> 
> > 
> 
> > [*ceph_deploy.osd*][ERROR ] OSError: [E

Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet,
> creating one
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> 
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory

Did you do 'ceph-deploy install ...' on these hosts?

sage
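For context, a sketch of the typical deployment order against these hosts; the install, mon create, and osd prepare invocations appear in the logs above, while the gatherkeys and osd activate steps are assumed from the quick-start guide linked elsewhere in this thread, and the target directories must already exist on the OSD hosts:

    ceph-deploy install ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
    ceph-deploy mon create ceph-node1-mon-centos-6-4
    ceph-deploy gatherkeys ceph-node1-mon-centos-6-4
    # create the OSD data directories on the OSD hosts first, e.g. sudo mkdir -p /ceph/osd0
    ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
    ceph-deploy osd activate ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1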
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Mark Nelson

Hey Narenda,

Sorry to hear you've been having trouble.  Do you mind if I ask what 
took the 3 hours of time?  We definitely don't want the install process 
to take that long.  Unfortunately I'm not familiar with the error you 
are seeing, but the folks that work on ceph-deploy may have some advice. 
 Are you using the newest version of ceph-deploy?


Thanks,
Mark

On 11/01/2013 08:17 PM, Trivedi, Narendra wrote:

I created new VMs and re-installed everything from scratch. Took me 3
hours. Executed all the steps religiously all over again in the links:

http://ceph.com/docs/master/start/quick-start-preflight/

http://ceph.com/docs/master/start/quick-ceph-deploy/

When the time came to prepare OSDs after 4 long hours, I get the same
weird error:


[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1

[*ceph_deploy.cli*][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd
prepare ceph-node2-osd0-centos-6-4:/tmp/osd0
ceph-node3-osd1-centos-6-4:/tmp/osd1

[*ceph_deploy.osd*][DEBUG ] Preparing cluster ceph disks
ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:

[*ceph-node2-osd0-centos-6-4*][DEBUG ] connected to host:
ceph-node2-osd0-centos-6-4

[*ceph-node2-osd0-centos-6-4*][DEBUG ] detect platform information from
remote host

[*ceph-node2-osd0-centos-6-4*][DEBUG ] detect machine type

[*ceph_deploy.osd*][INFO  ] Distro info: CentOS 6.4 Final

[*ceph_deploy.osd*][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4

[*ceph-node2-osd0-centos-6-4*][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf

[*ceph-node2-osd0-centos-6-4*][WARNIN] osd keyring does not exist yet,
creating one

[*ceph-node2-osd0-centos-6-4*][DEBUG ] create a keyring file

[*ceph_deploy.osd*][ERROR ] OSError: [Errno 2] No such file or directory

[*ceph-node3-osd1-centos-6-4*][DEBUG ] connected to host:
ceph-node3-osd1-centos-6-4

[*ceph-node3-osd1-centos-6-4*][DEBUG ] detect platform information from
remote host

[*ceph-node3-osd1-centos-6-4*][DEBUG ] detect machine type

[*ceph_deploy.osd*][INFO  ] Distro info: CentOS 6.4 Final

[*ceph_deploy.osd*][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4

[*ceph-node3-osd1-centos-6-4*][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf

[*ceph-node3-osd1-centos-6-4*][WARNIN] osd keyring does not exist yet,
creating one

[*ceph-node3-osd1-centos-6-4*][DEBUG ] create a keyring file

[*ceph_deploy.osd*][ERROR ] OSError: [Errno 2] No such file or directory

[*ceph_deploy*][ERROR ] GenericError: Failed to create 2 OSDs

What does it even mean??? It seems Ceph is not production-ready, with a lot of
missing links, error messages that don't make any sense, and a gazillion
problems. Very frustrating!!

Narendra Trivedi | Savvis Cloud


This message contains information which may be confidential and/or
privileged. Unless you are the intended recipient (or authorized to
receive for the intended recipient), you may not read, use, copy or
disclose to anyone the message or any information contained in the
message. If you have received the message in error, please advise the
sender by reply e-mail and delete the message and any attachment(s)
thereto without retaining any copies.





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com