Re: [ceph-users] Ceph Install Guide

2013-11-01 Thread Karan Singh
Hello Raghavendra 

I would recommend you follow the Inktank webinars and the Ceph documentation to
get up to speed on the basics of Ceph first.


To answer your question: you would need an ADMIN-NODE, MONITOR-NODE,
OSD-NODE and CLIENT-NODE (for testing you can configure them in 1 or 2 VMs).

Ceph Documentation : http://ceph.com/docs/master/ 

Regards 
Karan Singh 
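
For example, a minimal test deployment with ceph-deploy might look roughly like
this (hostnames are placeholders, and the exact steps can vary by ceph-deploy
version):

ceph-deploy new mon-node
ceph-deploy install admin-node mon-node osd-node client-node
ceph-deploy mon create mon-node
ceph-deploy gatherkeys mon-node
ceph-deploy osd prepare osd-node:/path/to/osd-dir
ceph-deploy osd activate osd-node:/path/to/osd-dir
ceph-deploy admin client-node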

- Original Message -

From: "Raghavendra Lad"  
To: ceph-users@lists.ceph.com 
Sent: Friday, 1 November, 2013 7:27:26 AM 
Subject: [ceph-users] Ceph Install Guide 

Hi, 

Please can you help with the Ceph Install guide. 

Do we need to install Ceph server or client? 

Regards, 
Raghavendra Lad 
Get your own FREE website, FREE domain & FREE mobile app with Company email. 
Know More > 
___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Install Guide

2013-11-01 Thread Wido den Hollander

On 11/01/2013 06:27 AM, Raghavendra Lad wrote:

Hi,

Please can you help with the Ceph Install guide.

Do we need to install Ceph server or client?



On all machines you probably want the 'ceph' package; that's the easiest
way to start.
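
For example, assuming the ceph.com repository is already configured on the
machine, that usually comes down to something like:

sudo apt-get install ceph     # Debian/Ubuntu
sudo yum install ceph         # RHEL/CentOS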


Wido


Regards,
Raghavendra Lad

Get your own *FREE* website, *FREE* domain & *FREE* mobile app with
Company email.
*Know More >*




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] testing ceph

2013-11-01 Thread charles L
Hi Karan/All,

Thanks. I guess Joao Eduardo misunderstood me too.


1. Yes, there is one SSD on each server. There are also four data drives on
each server, so I will have four OSDs on each server.

2. I want to know whether it is a good idea to use one partition on the SSD for
the journals of all four OSDs.

3. Considering the SSD: if it fails, what happens to the OSDs? And if I have 3
replicas set, what is a good placement group number for the data pool that
ensures the 3 replicas don't end up on the same node?
Because if they do, then the CRUSH map etc. can never recover my data.

Any idea?

/regards

Charles.
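
For a rough illustration of the knobs involved (a sketch only; the values assume
this 24-OSD, 3-replica setup): a commonly cited starting point for placement
groups is (number of OSDs x 100) / replica count, rounded up to a power of two,
which here gives (24 x 100) / 3 = 800, so around 1024 PGs for the pool. Keeping
replicas on separate hosts is controlled by the CRUSH rule rather than the PG
count:

[global]
osd pool default size = 3
# 1 = host: each replica goes to a different host, not just a different OSD
osd crush chooseleaf type = 1

With that failure domain, losing the SSD (and with it all the journals on that
host) takes down that host's OSDs, but the other replicas live on other hosts.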


> Date: Thu, 31 Oct 2013 16:27:02 +0200 
> From: ksi...@csc.fi 
> To: charlesboy...@hotmail.com 
> CC: majord...@vger.kernel.org; ceph-us...@ceph.com 
> Subject: Re: [ceph-users] testing ceph 
> 
> Hello Charles 
> 
> Need some more clarification with your setup , Did you mean 
> 
> 1) There is 1 SSD ( 60 GB ) on each server i.e 6 SSD on all 6 servers ? 
> 
> 2) your osd.3 , osd.4 , osd.5 uses same journal ( /dev/sdf2 ) ? 
> 
> Regards 
> Karan Singh 
> 
>  
> From: "charles L"  
> To: "ceph dev" , ceph-us...@ceph.com 
> Sent: Thursday, 31 October, 2013 6:24:13 AM 
> Subject: [ceph-users] testing ceph 
> 
> Hi, 
> Pls is this a good setup for a production environment test of ceph? My 
> focus is on the SSD ... should it be partitioned(sdf1,2 ,3,4) and 
> shared by the four OSDs on a host? or is this a better configuration 
> for the SSD to be just one partition(sdf1) while all osd uses that one 
> partition? 
> my setup: 
> - 6 Servers with one 250gb boot disk for OS(sda), 
> four-2Tb Disks each for the OSDs i.e Total disks = 6x4 = 24 disks (sdb -sde) 
> and one-60GB SSD for Osd Journal(sdf). 
> -RAM = 32GB on each server with 2 GB network link. 
> hostname for servers: Server1 -Server6 
> 
> [osd.0] 
> host = server1 
> devs = /dev/sdb 
> osd journal = /dev/sdf1 
> [osd.1] 
> host = server1 
> devs = /dev/sdc 
> osd journal = /dev/sdf2 
> 
> [osd.3] 
> host = server1 
> devs = /dev/sdd 
> osd journal = /dev/sdf2 
> 
> [osd.4] 
> host = server1 
> devs = /dev/sde 
> osd journal = /dev/sdf2 
> [osd.5] 
> host = server2 
> devs = /dev/sdb 
> osd journal = /dev/sdf2 
> ... 
> [osd.23] 
> host = server6 
> devs = /dev/sde 
> osd journal = /dev/sdf2 
> 
> Thanks. 
> 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 
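For comparison, a layout that gives each of the four OSDs on a host its own
journal partition on the shared SSD might look like the sketch below (device
names follow the scheme above and are assumptions):

[osd.0]
host = server1
devs = /dev/sdb
osd journal = /dev/sdf1
[osd.1]
host = server1
devs = /dev/sdc
osd journal = /dev/sdf2
[osd.2]
host = server1
devs = /dev/sdd
osd journal = /dev/sdf3
[osd.3]
host = server1
devs = /dev/sde
osd journal = /dev/sdf4

If the single SSD fails, all four journals on it are lost and those OSDs
generally have to be rebuilt, so the replication level and CRUSH failure domain
should assume a whole host can go away at once.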
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] "rbd map" says "bat option at rw"

2013-11-01 Thread nicolasc

Hi every one,

I finally and happily managed to get my Ceph cluster (3 monitors among 8 
nodes, each with 9 OSDs) running on version 0.71, but the "rbd map" 
command shows a weird behaviour.


I can list pools, create images and snapshots, alleluia!
However, mapping to a device with "rbd map" is not working. When I try 
this from one of my nodes, the kernel says:

libceph: bad option at 'rw'
Which "rbd" translates into:
add failed: (22) Invalid argument

Any idea of what that could indicate?

I am using a basic config: no authentication, default crushmap (I just 
changed some weights), and basic network config (public net, cluster 
net). I have tried both image formats, different sizes and pools.


Moreover, I have a client running rbd from Ceph version 0.61.9, and from 
there everything works fine with "rbd map" on the same image. Both nodes 
(Ceph 0.61.9 and 0.71) are running Linux kernel 3.2 for Debian.


Hope you can provide some hints. Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [ANN] ceph-deploy 1.3 released!

2013-11-01 Thread Alfredo Deza
Hi all,

A new version (1.3) of ceph-deploy is now out. A lot of fixes went into this
release, including a more robust library for connecting to remote hosts, and we
removed the one extra dependency we used, so installation should be simpler.

The complete changelog can be found at:

https://github.com/ceph/ceph-deploy/blob/master/docs/source/changelog.rst


The highlights for this release are:


* We now allow using `--username` when connecting to remote hosts, to specify a
user other than the current one or the one set in the SSH config (see the
example after these highlights).

* Global timeouts for remote commands to be able to disconnect if
there is no input received (defaults to 5 minutes), but still allowing
other more granular timeouts for some commands that need to just run a
simple command without output expectation.
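
For example, deploying as a dedicated user rather than the current one might
look like this (hostnames are placeholders):

ceph-deploy --username ceph install node1 node2 node3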


Please make sure you update (install instructions:
http://github.com/ceph/ceph-deploy/#installation) and use the latest
version!


Thanks,


Alfredo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph-deploy 1.3 searching for packages in incorrect path

2013-11-01 Thread Karan Singh
Hello Alfredo / InkTank Team 


I just noticed that deploying a Ceph client using ceph-deploy version 1.3 is
giving problems. However, everything works fine if I use the older version 1.2.7.



1) I ran the "ceph-deploy install ceph-client1" command, and it started
searching for ceph-0.72* packages inside
http://ceph.com/rpm-dumpling/el6/x86_64/, but those packages do not exist
there.



[ceph-client1][DEBUG ] Retrieving 
http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm 
[ceph-client1][DEBUG ] Preparing... 
## 
[ceph-client1][DEBUG ] ceph-release 
## 
[ceph-client1][INFO ] Running command: yum -y -q install ceph 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/ceph-0.72-rc1.el6.x86_64.rpm: [Errno 
14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/libcephfs1-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/librados2-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/librbd1-0.72-rc1.el6.x86_64.rpm: [Errno 
14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/python-ceph-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 













2) After that I tried "ceph-deploy install --stable dumpling ceph-client1",
and again it does the same thing.




[root@ceph-admin ceph]# 
[root@ceph-admin ceph]# ceph-deploy install --stable dumpling ceph-client1 

[ceph-client1][INFO ] Running command: rpm -Uvh --replacepkgs 
http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm 
[ceph-client1][DEBUG ] Retrieving 
http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm 
[ceph-client1][DEBUG ] Preparing... 
## 
[ceph-client1][DEBUG ] ceph-release 
## 
[ceph-client1][INFO ] Running command: yum -y -q install ceph 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/ceph-0.72-rc1.el6.x86_64.rpm: [Errno 
14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/libcephfs1-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/librados2-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/librbd1-0.72-rc1.el6.x86_64.rpm: [Errno 
14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
http://ceph.com/rpm-dumpling/el6/x86_64/python-ceph-0.72-rc1.el6.x86_64.rpm: 
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" 
[ceph-client1][ERROR ] Trying other mirror. 
[ceph-client1][ERROR ] 
[ceph-client1][ERROR ] 
[ceph-client1][ERROR ] Error Downloading Packages: 
[ceph-client1][ERROR ] librbd1-0.72-rc1.el6.x86_64: failure: 
librbd1-0.72-rc1.el6.x86_64.rpm from ceph: [Errno 256] No more mirrors to try. 
[ceph-client1][ERROR ] librados2-0.72-rc1.el6.x86_64: failure: 
librados2-0.72-rc1.el6.x86_64.rpm from ceph: [Errno 256] No more mirrors to 
try. 
[ceph-client1][ERROR ] ceph-0.72-rc1.el6.x86_64: failure: 
ceph-0.72-rc1.el6.x86_64.rpm from ceph: [Errno 256] No more mirrors to try. 
[ceph-client1][ERROR ] libcephfs1-0.72-rc1.el6.x86_64: failure: 
libcephfs1-0.72-rc1.el6.x86_64.rpm from ceph: [Errno 256] No more mirrors to 
try. 












3) After that I tried running "ceph-deploy install --testing ceph-client1"; it
started searching for ceph-release-1-0.el6.noarch.rpm under
http://ceph.com/rpm-testing/noarch, which does not exist.







[ceph-client1][DEBUG ] epel-release 
## 
[ceph-client1][INFO ] Running command: rpm --import 
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc 
[ceph-client1][INFO ] Running command: rpm -Uvh --replacepkgs 
http://ceph.com/rpm-testing/noarch/ceph-release-1-0.el6.noarch.rpm 
[ceph-client1][ERROR ] curl: (22) The requested URL returned error: 404 Not 
Found 
[ce

Re: [ceph-users] Next ceph event ??

2013-11-01 Thread LaSalle, Jurvis
I'm also looking to attend a Ceph or OSS commodity clustered storage specific 
conference.  A couple folks from Inktank mentioned they would be at 
http://sc13.supercomputing.org in a couple weeks. Anyone know of any others?

Thanks,
JL

From: Karan Singh mailto:ksi...@csc.fi>>
Date: Wednesday, October 30, 2013 5:51 AM
To: "ceph-users@lists.ceph.com" 
mailto:ceph-users@lists.ceph.com>>, 
"ceph-users-j...@lists.ceph.com" 
mailto:ceph-users-j...@lists.ceph.com>>
Subject: [ceph-users] Next ceph event ??

Guys, do you know where the next Ceph public event is planned and what the
dates are?

Also, what will the sessions focus on?

Maybe the Inktank geniuses can answer this well.

Regards
Karan Singh
System Specialist Storage | CSC IT center for Science.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Next ceph event ??

2013-11-01 Thread Wido den Hollander
At the end of November there is a CloudStack Collaboration Conference in
Amsterdam; I will be there with one or two people from Inktank.

I'm trying to do a small Ceph workshop at the hackathon, but it's not
certain yet.


Wido

On 11/01/2013 04:21 PM, LaSalle, Jurvis wrote:

I'm also looking to attend a Ceph or OSS commodity clustered storage
specific conference.  A couple folks from Inktank mentioned they would
be at http://sc13.supercomputing.org in a couple weeks. Anyone know of
any others?

Thanks,
JL

From: Karan Singh mailto:ksi...@csc.fi>>
Date: Wednesday, October 30, 2013 5:51 AM
To: "ceph-users@lists.ceph.com "
mailto:ceph-users@lists.ceph.com>>,
"ceph-users-j...@lists.ceph.com "
mailto:ceph-users-j...@lists.ceph.com>>
Subject: [ceph-users] Next ceph event ??

Guys , do you know where is the next ceph public event is planned
and what are the dates.

Also what will be focused in the session .

Might be Inktank genius can answer this well.

Regards
Karan Singh
System Specialist Storage | CSC IT center for Science.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD Modprobe , module not found

2013-11-01 Thread Karan Singh
Hello community members, I need your help with RBD.

Kindly advise how I can resolve this.




[root@ceph-client1 /]# modprobe rbd 
FATAL: Module rbd not found. 
[root@ceph-client1 /]# 
[root@ceph-client1 /]# 
[root@ceph-client1 /]# cat /etc/ce 
centos-release ceph/ 
[root@ceph-client1 /]# cat /etc/centos-release 
CentOS release 6.4 (Final) 
[root@ceph-client1 /]# 
[root@ceph-client1 /]# 
[root@ceph-client1 /]# uname -a 
Linux ceph-client1.csc.fi 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux 
[root@ceph-client1 /]# 




Regards 

Karan singh 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD Modprobe , module not found

2013-11-01 Thread Josh Logan(News)

On 11/1/2013 9:18 AM, Karan Singh wrote:

Hello Community Members Need your help with RBD

Kindly advice how can i resolve this.


[root@ceph-client1 /]# modprobe rbd
FATAL: Module rbd not found.
[root@ceph-client1 /]#
[root@ceph-client1 /]#
[root@ceph-client1 /]# cat /etc/ce
centos-release ceph/
[root@ceph-client1 /]# cat /etc/centos-release
CentOS release 6.4 (Final)
[root@ceph-client1 /]#
[root@ceph-client1 /]#
[root@ceph-client1 /]# uname -a
Linux ceph-client1.csc.fi 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28
17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@ceph-client1 /]#


Regards

Karan singh




Please use a 3.x kernel from http://elrepo.org/
We have been using them with lots of success.

Josh
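
For CentOS 6 that is roughly the following (a sketch; check elrepo.org for the
current repository setup before running this):

# with the elrepo release package installed, pull in a mainline 3.x kernel
sudo yum --enablerepo=elrepo-kernel install kernel-ml
# make sure grub boots the new kernel, then reboot
sudo reboot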


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] "rbd map" says "bat option at rw"

2013-11-01 Thread Gregory Farnum
I think this will be easier to help with if you provide the exact
command you're running. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Fri, Nov 1, 2013 at 3:07 AM, nicolasc  wrote:
> Hi every one,
>
> I finally and happily managed to get my Ceph cluster (3 monitors among 8
> nodes, each with 9 OSDs) running on version 0.71, but the "rbd map" command
> shows a weird behaviour.
>
> I can list pools, create images and snapshots, alleluia!
> However, mapping to a device with "rbd map" is not working. When I try this
> from one of my nodes, the kernel says:
> libceph: bad option at 'rw'
> Which "rbd" translates into:
> add failed: (22) Invalid argument
>
> Any idea of what that could indicate?
>
> I am using a basic config: no authentication, default crushmap (I just
> changed some weights), and basic network config (public net, cluster net). I
> have tried both image formats, different sizes and pools.
>
> Moreover, I have a client running rbd from Ceph version 0.61.9, and from
> there everything works fine with "rbd map" on the same image. Both nodes
> (Ceph 0.61.9 and 0.71) are running Linux kernel 3.2 for Debian.
>
> Hope you can provide some hints. Best regards,
>
> Nicolas Canceill
> Scalable Storage Systems
> SURFsara (Amsterdam, NL)
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw fails to start

2013-11-01 Thread Gruher, Joseph R
Adding some debug arguments has generated output which I believe indicates the 
problem is my keyring is missing, but the keyring seems to be here.  Why would 
this complain about the keyring and fail to start?

[ceph@joceph08 ceph]$ sudo /usr/bin/radosgw -d --debug-rgw 20 --debug-ms 1 start
2013-11-01 10:59:47.015332 7f83978e4820  0 ceph version 0.67.4 
(ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process radosgw, pid 18760
2013-11-01 10:59:47.015338 7f83978e4820 -1 WARNING: libcurl doesn't support 
curl_multi_wait()
2013-11-01 10:59:47.015340 7f83978e4820 -1 WARNING: cross zone / region 
transfer performance may be affected
2013-11-01 10:59:47.018707 7f83978e4820  1 -- :/0 messenger.start
2013-11-01 10:59:47.018773 7f83978e4820 -1 monclient(hunting): ERROR: missing 
keyring, cannot use cephx for authentication
2013-11-01 10:59:47.018774 7f83978e4820  0 librados: client.admin 
initialization error (2) No such file or directory
2013-11-01 10:59:47.018788 7f83978e4820  1 -- :/1018760 mark_down_all
2013-11-01 10:59:47.018932 7f83978e4820  1 -- :/1018760 shutdown complete.
2013-11-01 10:59:47.018967 7f83978e4820 -1 Couldn't init storage provider 
(RADOS)

[ceph@joceph08 ceph]$ sudo service ceph-radosgw status
/usr/bin/radosgw is not running.

[ceph@joceph08 ceph]$ pwd
/etc/ceph

[ceph@joceph08 ceph]$ ls
ceph.client.admin.keyring  ceph.conf  keyring.radosgw.gateway  rbdmap

[ceph@joceph08 ceph]$ cat ceph.client.admin.keyring
[client.admin]
key = AQCYyHJSCFH3BBAA472q80qrAiIIVbvJfK/47A==

[ceph@joceph08 ceph]$ cat keyring.radosgw.gateway
[client.radosgw.gateway]
key = AQBh6nNS0Cu3HxAAMxLsbEYZ3pEbwEBajQb1WA==
caps mon = "allow rw"
caps osd = "allow rwx"

[ceph@joceph08 ceph]$ cat ceph.conf
[client.radosgw.joceph08]
host = joceph08
log_file = /var/log/ceph/radosgw.log
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_socket_path = /tmp/radosgw.sock

[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 10.23.37.142,10.23.37.145,10.23.37.161,10.23.37.165
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03, joceph04
fsid = 74d808db-aaa7-41d2-8a84-7d590327a3c7


From: Gruher, Joseph R
Sent: Wednesday, October 30, 2013 12:24 PM
To: ceph-users@lists.ceph.com
Subject: radosgw fails to start, leaves no clues why

Hi all-

Trying to set up object storage on CentOS.  I've done this successfully on 
Ubuntu but I'm having some trouble on CentOS.  I think I have everything 
configured but when I try to start the radosgw service it reports starting, but 
then the status is not running, with no helpful output as to why on the console 
or in the radosgw log.  I once experienced a similar problem in Ubuntu when the 
hostname was incorrect in ceph.conf but that doesn't seem to be the issue here. 
 Not sure where to go next.  Any suggestions what could be the problem?  Thanks!

[ceph@joceph08 ceph]$ sudo service httpd restart
Stopping httpd:[  OK  ]
Starting httpd:[  OK  ]

[ceph@joceph08 ceph]$ cat ceph.conf
[joceph08.radosgw.gateway]
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_dns_name = joceph08
host = joceph08
log_file = /var/log/ceph/radosgw.log
rgw_socket_path = /tmp/radosgw.sock
[global]
filestore_xattr_use_omap = true
mon_host = 10.23.37.142,10.23.37.145,10.23.37.161
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03
auth_supported = cephx
fsid = 721ea513-e84c-48df-9c8f-f1d9e602b810

[ceph@joceph08 ceph]$ sudo service ceph-radosgw start
Starting radosgw instance(s)...

[ceph@joceph08 ceph]$ sudo service ceph-radosgw status
/usr/bin/radosgw is not running.

[ceph@joceph08 ceph]$ sudo cat /var/log/ceph/radosgw.log
[ceph@joceph08 ceph]$

[ceph@joceph08 ceph]$ sudo cat /etc/ceph/keyring.radosgw.gateway
[client.radosgw.gateway]
key = AQDbUnFSIGT2BxAA5rz9I1HHIG/LJx+XCYot1w==
caps mon = "allow rw"
caps osd = "allow rwx"

[ceph@joceph08 ceph]$ ceph status
  cluster 721ea513-e84c-48df-9c8f-f1d9e602b810
   health HEALTH_OK
   monmap e1: 3 mons at 
{joceph01=10.23.37.142:6789/0,joceph02=10.23.37.145:6789/0,joceph03=10.23.37.161:6789/0},
 election epoch 8, quorum 0,1,2 joceph01,joceph02,joceph03
   osdmap e119: 16 osds: 16 up, 16 in
pgmap v1383: 3200 pgs: 3200 active+clean; 219 GB data, 411 GB used, 10760 
GB / 11172 GB avail
   mdsmap e1: 0/0/1 up
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [ANN] ceph-deploy 1.3 released!

2013-11-01 Thread Trivedi, Narendra
I removed my old OSDs from nodes 2 and 3 since they threw a bunch of errors,  
did a purgedata on nodes 1-3 and updated my ceph-deploy to 1.3. After creating 
/tmp/osd0 and /tmp/osd1 on nodes 2 and 3, now I am issuing a command to prepare 
the OSDs and I get the following error: 

[ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy osd prepare 
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
[ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd prepare 
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: 
ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote 
host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to 
/etc/ceph/{cluster}.conf
[ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet, creating 
one
[ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: 
ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote 
host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to 
/etc/ceph/{cluster}.conf
[ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet, creating 
one
[ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

Does anyone know why? It seems I am going backwards.

Output from nodes 2 and node3: 

[ceph@ceph-node2-osd0-centos-6-4 tmp]$ ls -al /tmp/osd0
total 8
drwxr-xr-x.  2 root root 4096 Nov  1 13:06 .
drwxrwxrwt. 12 root root 4096 Nov  1 13:07 ..

[ceph@ceph-node3-osd1-centos-6-4 tmp]$ ls -al /tmp/osd1
total 8
drwxr-xr-x.  2 root root 4096 Nov  1 13:06 .
drwxrwxrwt. 11 root root 4096 Nov  1 13:07 ..


-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alfredo Deza
Sent: Friday, November 01, 2013 7:54 AM
To: ceph-users@lists.ceph.com; ceph-devel
Subject: [ceph-users] [ANN] ceph-deploy 1.3 released!

Hi all,

A new version (1.3) of ceph-deploy is now out, a lot of fixes went into this 
release including the addition of a more robust library to connect to remote 
hosts and it removed the one extra dependency we used. Installation should be 
simpler.

The complete changelog can be found at:

https://github.com/ceph/ceph-deploy/blob/master/docs/source/changelog.rst


The highlights for this release are:


* We now allow to use `--username` to connect on remote hosts, specifying 
something different than the current user or the SSH config.

* Global timeouts for remote commands to be able to disconnect if there is no 
input received (defaults to 5 minutes), but still allowing other more granular 
timeouts for some commands that need to just run a simple command without 
output expectation.


Please make sure you update (install instructions:
http://github.com/ceph/ceph-deploy/#installation) and use the latest version!


Thanks,


Alfredo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

This message contains information which may be confidential and/or privileged. 
Unless you are the intended recipient (or authorized to receive for the 
intended recipient), you may not read, use, copy or disclose to anyone the 
message or any information contained in the message. If you have received the 
message in error, please advise the sender by reply e-mail and delete the 
message and any attachment(s) thereto without retaining any copies.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw fails to start

2013-11-01 Thread Derek Yarnell
On 11/1/13, 2:07 PM, Gruher, Joseph R wrote:
> Adding some debug arguments has generated output which I believe indicates 
> the problem is my keyring is missing, but the keyring seems to be here.  Why 
> would this complain about the keyring and fail to start?

Hi,

Are you sure the user you are starting radosgw has the permission to
read the keyring file?

Thanks,
derek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw fails to start

2013-11-01 Thread Gruher, Joseph R
>-Original Message-
>From: Derek Yarnell [mailto:de...@umiacs.umd.edu]
>Sent: Friday, November 01, 2013 12:20 PM
>To: Gruher, Joseph R; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] radosgw fails to start
>
>On 11/1/13, 2:07 PM, Gruher, Joseph R wrote:
>> Adding some debug arguments has generated output which I believe
>indicates the problem is my keyring is missing, but the keyring seems to be
>here.  Why would this complain about the keyring and fail to start?
>
>Hi,
>
>Are you sure the user you are starting radosgw has the permission to read the
>keyring file?
>
>Thanks,
>derek

Thanks for the suggestion.  Yup, it should be readable: first of all, I'm
starting radosgw with sudo, so root should be able to read anything, and I also
set the file to be readable by all users just in case.  The problem persists...
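
For reference, the debug output above shows radosgw coming up as client.admin,
so it is not using the gateway keyring at all. Starting it with an explicit name
and keyring (a sketch using the names from the configs above) would rule that
out:

sudo /usr/bin/radosgw -d --debug-rgw 20 --debug-ms 1 \
    -n client.radosgw.gateway -k /etc/ceph/keyring.radosgw.gateway

Note that the ceph.conf section shown earlier is [client.radosgw.joceph08] while
the keyring entity is client.radosgw.gateway; the name radosgw starts with has
to match both the config section and the keyring entry, and that key has to be
registered in the cluster (ceph auth list should show it).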
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [ANN] ceph-deploy 1.3 released!

2013-11-01 Thread Stefan Priebe

Hi,

I didn't find anything in the changelog, so I would just like to ask whether
this is planned.


Right now you can already create a new cluster using hostA:IPA hostB:IPB ...

However, it does not use these IPs as the mon addr, and the hostA/hostB names
need to match the actual hostnames. This is pretty bad, because you cannot
easily change the IPs or hosts of the mons later, so I tend to use dedicated
names and IPs which I can move to different machines later.


The normal ceph config supports:

[mon.a]
   host name = abc
   mon addr = 85.58.34.12

Thanks,
Stefan

Am 01.11.2013 13:54, schrieb Alfredo Deza:

Hi all,

A new version (1.3) of ceph-deploy is now out, a lot of fixes went
into this release including the addition of a more robust library to
connect to remote hosts and it removed the one extra dependency we
used. Installation should be simpler.

The complete changelog can be found at:

https://github.com/ceph/ceph-deploy/blob/master/docs/source/changelog.rst


The highlights for this release are:


* We now allow to use `--username` to connect on remote hosts,
specifying something different than the current user or the SSH
config.

* Global timeouts for remote commands to be able to disconnect if
there is no input received (defaults to 5 minutes), but still allowing
other more granular timeouts for some commands that need to just run a
simple command without output expectation.


Please make sure you update (install instructions:
http://github.com/ceph/ceph-deploy/#installation) and use the latest
version!


Thanks,


Alfredo
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CDS Firefly Schedule Posted

2013-11-01 Thread Patrick McGarry
Greetings all,

The final schedule for the upcoming Firefly Ceph Developer Summit is
now posted.  Sorry it took a little bit longer to get it out to you,
but we wanted to make sure we found a good slot that was less
disruptive to people's holiday season.

http://ceph.com/community/ceph-developer-summit-firefly/

That said, blueprint submission will be open a little longer than
originally anticipated (hooray!), so be sure to get your blueprints
in.  I would really like to stress this next part:

*** Blueprints don't have to be finished plans for work! ***

We really like to gather all ideas, works-in-progress, and crazy
schemes...even if it's only to document them for later aggregation.

If anyone has question, comments, or anything for the good of the
cause please send them my way.  Thanks.


Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph in your neighborhood

2013-11-01 Thread Patrick McGarry
Forget Waldo, Ceph-heads are much more interesting when you find them!

Just wanted to share some of the places (that WE know about) where
Ceph experts might be venturing forth when they aren't sequestered in
a secret lair writing such awesome code.  This is by no means a
completely exhaustive list, and if you have shows, meetups, or other
things to add please reply and tell us about them!

2013
04 Nov :: Santa Clara, CA - Cloud Expo 2013 West (Greg Farnum)
04 Nov :: Hong Kong - OpenStack Summit 2013 (Ross Turk, Bryan
Bogensberger, Jude Fitzgerald, Sage Weil)
18 Nov :: Denver, CO - SC13 (Patrick McGarry, Ross Turk)
20 Nov :: Amsterdam - CloudStack Collaboration Conference EU (Wido den
Hollander)

2014
06 Jan :: Perth - Linux.conf.au (Sage Weil)
01 Feb :: Brussels - FOSDEM (TBD)
21 Feb :: Los Angeles, CA - Scale 12X
19 Mar :: NYC, NY - Structure Data (TBD)
19 Mar :: Greece - European Data Forum (TBD)
03 Apr :: Rust, Germany - World Hosting Days (TBD)
20 Jul :: Portland, OR - OSCON (TBD)

The 2014 events calendar is still in the process of getting ironed out
so there may be changes to those.  There will definitely be a few
additions as well as our plans solidify.  Once we move over to the new
wiki I'll publish all of this data so that anyone attending a
conference that wants to "talk Ceph" can meet up.

Shout if you have any questions.


Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage

2013-11-01 Thread Jeppesen, Nelson
Any update on this issue? I'm running into the same problem, I can get usage 
information but I get 403s when pulling user data, even with user=* caps.

Thanks!

Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage

2013-11-01 Thread Jeppesen, Nelson
Looks like the thread was stripped, so let me add it back in:


-Original Message-

From: Yehuda Sadeh [mailto:yehuda at inktank.com]

Sent: Wednesday, July 10, 2013 16:50

To: Alvaro Izquierdo Jimeno

CC: Bright; ceph-users

Subject: Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage



On Wed, Jul 10, 2013 at 1:07 AM, Alvaro Izquierdo Jimeno wrote:

>>Hi all,

>>

>> I have been able to create an user with --caps="usage=read, write" and

>> bring its usages with  GET /admin/usage?format=json HTTP/1.1

>>  Host: ceph-server

>>  Authorization: AWS {access-key}:{hash-of-header-and-secret}

>>

>> After of this, I have created an user with --caps="user=read, write",

>> but I can't bring its user info with GET /admin/user?format=json

>> HTTP/1.1

>>  Host: ceph-server

>>  Authorization: AWS {access-key}:{hash-of-header-and-secret}

>>

>> 403 forbidden is responsed.



>Any other info in the rgw log?



This is the user info:

{ "user_id": "aij219",

  "display_name": "aij219",

  "email": "",

  "suspended": 0,

  "max_buckets": 1000,

  "auid": 0,

  "subusers": [],

  "keys": [

{ "user": "aij219",

  "access_key": "831Q3WV3H9V3EAJ9ZM77",

  "secret_key": "tfVMA1CGe5ZHMrK+bZfz7v84pvOoTQbgUePnt1T6"}],

  "swift_keys": [],

  "caps": [

{ "type": "user",

  "perm": "*"}]}



Attached you can find the log.



>>

>> And just another comment. When I try to  modify an user, I can't

>> change the caps field. The command radosgw-admin user modify --uid="user"  
>> --caps="usage=read, write"

>> doesn't fail, but doesn't update the user.



>Right. There's the radosgw-admin caps add / rm command to do that for you.



Oops, i didn't know this option. Sorry...



>>

>> Many thanks and best regards,

>> Álvaro.

>>

>>

>>

>> -Original Message-

>> From: ceph-users-bounces at lists.ceph.com
>> [mailto:ceph-users-bounces at lists.ceph.com] On behalf of Yehuda Sadeh

>> Sent: Tuesday, July 9, 2013 18:27

>> To: Bright

>> CC: ceph-users

>> Subject: Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage

>>

>> On Mon, Jul 8, 2013 at 8:51 AM, Bright wrote:

>>> Hello Guys:

>>>

>>>  I am working with ceph nowadys and i want to setup a system

>>> which

>>>

>>>  includes a web page to create the ceph object storage user.

>>>

>>>  So, i tried to use Admin Ops API to fulfill my needs. However,

>>> if i use

>>>

>>>   GET /admin/usage?format=json HTTP/1.1

>>>

>>> Host: ceph-server

>>>

>>>  it will return 403 access denied.

>>>

>>>  Than, i tried to use

>>>

>>> GET /admin/usage?format=json HTTP/1.1

>>> Host: ceph-server

>>> Authorization: AWS {access-key}:{hash-of-header-and-secret}

>>>

>>>  I used the key of client.user to represent access-key

>>

>> You need to create rgw user for that (radosgw-admin user create) and use 
>> it.. The user itself should have the 'usage' caps set ( --caps="usage=read, 
>> write").

>>

>>>

>>>  and get the hash-of-header-and-secret accordingly.



>>> However, it still  returns 403 access denied.

>>>

>>>  Can anyone explain the method to deal with Admin Ops API, thanks!

>>>

>>>

>>> --

>>> Hui Jiang

>>> East China University of Science and Technology

>>> 130 MeiLong Rd. Shanghai, China 200237 Mobile +86 13774493120 E-mail

>>> hjiang at 
>>> foxmail.com

>>>

>>>

>>>

>>> ___

>>> ceph-users mailing list

>>> ceph-users at 
>>> lists.ceph.com

>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>>>

>> ___

>> ceph-users mailing list

>> ceph-users at 
>> lists.ceph.com

>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>> 







-- next part --

An embedded and charset-unspecified text was scrubbed...

Name: get_user_info.txt

URL: 



Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001

___
ceph-users mailing list
ceph-users@l

[ceph-users] migrating to ceph-deploy

2013-11-01 Thread James Harper
I have a cluster already set up, and I'd like to start using ceph-deploy to add 
my next OSD. The cluster currently doesn't have any authentication or anything.

Should I start using ceph-deploy now, or just add the OSD manually? If the 
former, is there anything I need to do to make sure ceph-deploy won't break the 
already running cluster?

Thanks

James
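
For reference, adding a single OSD with ceph-deploy to an existing cluster
usually boils down to something like this (host and device names are
placeholders), run from a directory holding the cluster's ceph.conf:

ceph-deploy install newosdhost
ceph-deploy osd create newosdhost:/dev/sdb

(With cephx enabled you would normally also run "ceph-deploy gatherkeys
<mon-host>" first so the bootstrap keys are available.) Since the running
cluster was set up by hand, the main risk is ceph-deploy pushing a ceph.conf
that does not match the live one, so it is worth comparing the local ceph.conf
against the cluster's before letting it write anything.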
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to use Admin Ops API in Ceph Object Storage

2013-11-01 Thread Jeppesen, Nelson
After further investigation I have noticed I can pull info on ANY user with 
'GET /admin/user?user=user1' but cannot enumerate users with 'GET /admin/user'

Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001
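
For what it's worth, user enumeration appears to go through the metadata part of
the admin API rather than /admin/user, and needs metadata caps on the calling
user; a sketch (the uid is a placeholder):

radosgw-admin caps add --uid=admin-api-user --caps="metadata=read"

GET /admin/metadata/user?format=json HTTP/1.1
Host: ceph-server
Authorization: AWS {access-key}:{hash-of-header-and-secret}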

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Openstack Instances and RBDs

2013-11-01 Thread Gaylord Holder

http://www.sebastien-han.fr/blog/2013/06/03/ceph-integration-in-openstack-grizzly-update-and-roadmap-for-havana/

suggests it is possible to run openstack instances (not only images) off 
of RBDs in grizzly and havana (which I'm running), and to use RBDs in 
lieu of a shared file system.


I've followed

http://ceph.com/docs/next/rbd/libvirt/

but I can only get boot-from-volume to work.  Instances still are being 
housed in /var/lib/nova/instances, making live-migration a non-starter.


Is there a better guide for running openstack instances out of RBDs, or 
is it just not ready yet?


Thanks,

-Gaylord
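
For Havana specifically, the blog post above suggests pointing the ephemeral
disk backend at RBD via nova.conf on the compute nodes; a sketch of the relevant
options (option names as of Havana, pool name assumed, so verify against your
release) is:

libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf

With that backend the instance disks live in RBD instead of as files under
/var/lib/nova/instances, which is what makes live migration workable.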
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Trivedi, Narendra
I created new VMs and re-installed everything from scratch. Took me 3 hours. 
Executed all the steps religiously all over again in the links:

http://ceph.com/docs/master/start/quick-start-preflight/
http://ceph.com/docs/master/start/quick-ceph-deploy/

When the time came to prepare OSDs after 4 long hours, I get the same weird 
error:

[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare 
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
[ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd prepare 
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
[ceph-node2-osd0-centos-6-4][DEBUG ] connected to host: 
ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote 
host
[ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4
[ceph-node2-osd0-centos-6-4][DEBUG ] write cluster configuration to 
/etc/ceph/{cluster}.conf
[ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet, creating 
one
[ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph-node3-osd1-centos-6-4][DEBUG ] connected to host: 
ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] detect platform information from remote 
host
[ceph-node3-osd1-centos-6-4][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4
[ceph-node3-osd1-centos-6-4][DEBUG ] write cluster configuration to 
/etc/ceph/{cluster}.conf
[ceph-node3-osd1-centos-6-4][WARNIN] osd keyring does not exist yet, creating 
one
[ceph-node3-osd1-centos-6-4][DEBUG ] create a keyring file
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs

What does it even mean??? It seems Ceph is not production-ready, with lots of
missing links, error messages that don't make any sense, and a gazillion
problems. Very frustrating!!


Narendra Trivedi | savviscloud


This message contains information which may be confidential and/or privileged. 
Unless you are the intended recipient (or authorized to receive for the 
intended recipient), you may not read, use, copy or disclose to anyone the 
message or any information contained in the message. If you have received the 
message in error, please advise the sender by reply e-mail and delete the 
message and any attachment(s) thereto without retaining any copies.___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Mark Nelson

Hey Narenda,

Sorry to hear you've been having trouble.  Do you mind if I ask what 
took the 3 hours of time?  We definitely don't want the install process 
to take that long.  Unfortunately I'm not familiar with the error you 
are seeing, but the folks that work on ceph-deploy may have some advice. 
 Are you using the newest version of ceph-deploy?


Thanks,
Mark

On 11/01/2013 08:17 PM, Trivedi, Narendra wrote:

I created new VMs and re-installed everything from scratch. Took me 3
hours. Executed all the steps religiously all over again in the links:

http://ceph.com/docs/master/start/quick-start-preflight/

http://ceph.com/docs/master/start/quick-ceph-deploy/

When the time came to prepare OSDs after 4 long hours, I get the same
weird error:


[ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy osd prepare
ceph-node2-osd0-centos-6-4:/tmp/osd0 ceph-node3-osd1-centos-6-4:/tmp/osd1

[*ceph_deploy.cli*][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy osd
prepare ceph-node2-osd0-centos-6-4:/tmp/osd0
ceph-node3-osd1-centos-6-4:/tmp/osd1

[*ceph_deploy.osd*][DEBUG ] Preparing cluster ceph disks
ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:

[*ceph-node2-osd0-centos-6-4*][DEBUG ] connected to host:
ceph-node2-osd0-centos-6-4

[*ceph-node2-osd0-centos-6-4*][DEBUG ] detect platform information from
remote host

[*ceph-node2-osd0-centos-6-4*][DEBUG ] detect machine type

[*ceph_deploy.osd*][INFO  ] Distro info: CentOS 6.4 Final

[*ceph_deploy.osd*][DEBUG ] Deploying osd to ceph-node2-osd0-centos-6-4

[*ceph-node2-osd0-centos-6-4*][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf

[*ceph-node2-osd0-centos-6-4*][WARNIN] osd keyring does not exist yet,
creating one

[*ceph-node2-osd0-centos-6-4*][DEBUG ] create a keyring file

[*ceph_deploy.osd*][ERROR ] OSError: [Errno 2] No such file or directory

[*ceph-node3-osd1-centos-6-4*][DEBUG ] connected to host:
ceph-node3-osd1-centos-6-4

[*ceph-node3-osd1-centos-6-4*][DEBUG ] detect platform information from
remote host

[*ceph-node3-osd1-centos-6-4*][DEBUG ] detect machine type

[*ceph_deploy.osd*][INFO  ] Distro info: CentOS 6.4 Final

[*ceph_deploy.osd*][DEBUG ] Deploying osd to ceph-node3-osd1-centos-6-4

[*ceph-node3-osd1-centos-6-4*][DEBUG ] write cluster configuration to
/etc/ceph/{cluster}.conf

[*ceph-node3-osd1-centos-6-4*][WARNIN] osd keyring does not exist yet,
creating one

[*ceph-node3-osd1-centos-6-4*][DEBUG ] create a keyring file

[*ceph_deploy.osd*][ERROR ] OSError: [Errno 2] No such file or directory

[*ceph_deploy*][ERROR ] GenericError: Failed to create 2 OSDs

What does it even mean??? Seems ceph is not production ready with lot of
missing links, error messages that don’t make any sense and gazillion
problems. Very frustrating!!

*Narendra Trivedi | savvis**cloud*


This message contains information which may be confidential and/or
privileged. Unless you are the intended recipient (or authorized to
receive for the intended recipient), you may not read, use, copy or
disclose to anyone the message or any information contained in the
message. If you have received the message in error, please advise the
sender by reply e-mail and delete the message and any attachment(s)
thereto without retaining any copies.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> [ceph-node2-osd0-centos-6-4][WARNIN] osd keyring does not exist yet,
> creating one
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] create a keyring file
> 
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory

Did you do 'ceph-deploy install ...' on these hosts?

sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> 
> Hi Mark,
> 
> I have been very impressed with ceph's overall philosophy but it seems there
> are many loose ends. Many times a presence of "/etc/ceph" is assumed as is
> evident from the error/log messages but is not documented/assumed in the

This is a bit of a canary.  /etc/ceph is created by the deb or rpm at 
install time, so if you *ever* see that it is missing it is because an 
install step was missed/skipped somewhere on that machine.  e.g.,

 ceph-deploy install HOST

We deliberately avoid creating it in other places to avoid seeing more 
confusing errors later down the line.  (Probably we should have more 
helpful/informative error messages when it's not found, though!)

> official documentation, other times ceph-deploy throws errors that
> apparently ain't a big deal and then logs are not that detailed and leave
> you in lurch . What took me 3 hours? Maybe since I am behind proxy, getting
> 4 nodes ready, making sure networking is all setup on these nodes and then
> installing ceph only to get in the end.  Anyway, I am pasting log of my

BTW, making the private network or proxy installs work is the next major 
item for us to tackle for ceph-deploy.  You're not the only one to feel 
this pain, but hopefully you'll be one of the last!

Hope this helps-
sage



> keystrokes to get everything ready. What do 2 lines towards the end mean??
> Could you please let me know what did I wrong based on what I have pasted
> below?.  
> 
>  
> 
> 1) Create a ceph user. All commands should be executed as a ceph user. This
> should be done on all nodes:
> 
>  
> 
> [root@ceph-node1-mon-centos-6-4 ~]# sudo useradd -d /home/ceph -m ceph
> 
> [root@ceph-node1-mon-centos-6-4 ~]# passwd ceph
> 
> Changing password for user ceph.
> 
> New password:
> 
> Retype new password:
> 
> passwd: all authentication tokens updated successfully.
> 
> [root@ceph-node1-mon-centos-6-4 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" |
> sudo tee /etc/sudoers.d/ceph
> 
> ceph ALL = (root) NOPASSWD:ALL
> 
> [root@ceph-node1-mon-centos-6-4 ~]# sudo chmod 0440 /etc/sudoers.d/ceph
> 
>  
> 
> 2) Login as ceph and from now on execute the commands as the ceph user. Some
> commands need to be executed as root. Use sudo for them.
> 
>  
> 
> 3) For all the nodes add the following lines to ~/.bash_profile and
> /root/.bash_profile
> 
>  
> 
> http_proxy=http://10.12.132.208:8080
> 
> https_proxy=https://10.12.132.208:8080
> 
>  
> 
> export http_proxy
> 
> export https_proxy
> 
>  
> 
> 4) On all the nodes add the following lines to /root/.rpmmacros (use sudo
> vim):
> 
>  
> 
> %_httpproxy 10.12.132.208
> 
> %_httpport 8080
> 
>  
> 
> 5) On all the nodes add the following line to /root/.curlrc (use sudo vim):
> 
>  
> 
> proxy=http://10.12.132.208:8080
> 
>  
> 
> 6) On all the nodes add the following line to /etc/yum.conf (use sudo vim):
> 
>  
> 
> proxy=http://10.12.132.208:8080
> 
>  
> 
> 7) On all the nodes, add dd the following line to /etc/wgetrc (use sudo
> vim):
> 
>  
> 
> http_proxy=http://10.12.132.208:8080
> 
> https_proxy=http://10.12.132.208:8080
> 
> ftp_proxy=http://10.12.132.208:8080
> 
> use_proxy = on (uncomment this one)
> 
>  
> 
> 8) Execute sudo visudo to add the following line to the sudoers file:
> 
>  
> 
> Defaults env_keep += "http_proxy https_proxy"
> 
>  
> 
> 9) In the sudoers file, comment the following line (use sudo visudo):
> 
>  
> 
> Defaults requiretty
> 
>  
> 
> 10) Install ssh on all nodes
> 
>  
> 
> [ceph@ceph-node1-mon-centos-6-4 ~]# sudo yum install openssh-server
> 
> Loaded plugins: fastestmirror, refresh-packagekit, security
> 
> Determining fastest mirrors
> 
> * base: centos.mirror.constant.com
> 
> * extras: bay.uchicago.edu
> 
> * updates: centos.aol.com
> 
> base | 3.7 kB 00:00
> 
> base/primary_db | 4.4 MB 00:05
> 
> extras | 3.4 kB 00:00
> 
> extras/primary_db | 18 kB 00:00
> 
> updates | 3.4 kB 00:00
> 
> updates/primary_db | 5.0 MB 00:07
> 
> Setting up Install Process
> 
> Package openssh-server-5.3p1-84.1.el6.x86_64 already installed and latest
> version
> 
> Nothing to do
> 
>  
> 
> 12) Since the Ceph documentation recommends using hostnames instead of IP
> addresses, we need to for now enter the /etc/hosts of the four nodes:
> 
>  
> 
> admin node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-admin-node-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-admin-node-centos-6-4
> 
> 10.12.0.70 ceph-node1-mon-centos-6-4
> 
> 10.12.0.71 ceph-node2-osd0-centos-6-4
> 
> 10.12.0.72 ceph-node3-osd1-centos-6-4
> 
>  
> 
> monitor node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-node1-mon-centos-6-4
> 
> ::1 localhost.localdomain localhost6 localhost ceph-node1-mon-centos-6-4
> 
> 10.12.0.71 ceph-node2-osd0-centos-6-4
> 
> 10.12.0.72 ceph-node3-osd1-centos-6-4
> 
> 10.12.0.73 ceph-admin-node-centos-6-4
> 
>  
> 
> osd0 node:
> 
>  
> 
> 127.0.0.1 localhost.localdomain localhost ceph-node2-osd0-centos-

Re: [ceph-users] Ceph Install Guide

2013-11-01 Thread Raghavendra Lad
Hi Karan, 

Thank you. I will give it a try.

Regards,
Raghavendra Lad



From: "Karan Singh"ksi...@csc.fi
Sent:Fri, 01 Nov 2013 14:26:24 +0530
To: Raghavendra Lad raghavendra_...@rediffmail.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Install Guide
 Hello Raghavendra
>I would recommend you to follow Inktank Webinar and ceph documentation to go 
>with the basics of ceph first.
>
>As a answer to your question : You would need  ADMIN-NODE , MONITOR-NODE , 
>OSD-NODE and CLIENT-NODE ( as testing you can configure in 1 or 2 VMs )
>Ceph Documentation : http://ceph.com/docs/master/
>Regards
>Karan Singh
>From: "Raghavendra Lad" 
>To: ceph-users@lists.ceph.com
>Sent: Friday, 1 November, 2013 7:27:26 AM
>Subject: [ceph-users] Ceph Install Guide
>
>Hi, 
> 
>Please can you help with the Ceph Install guide. 
> 
>Do we need to install Ceph server or client? 
> 
>Regards, 
>Raghavendra Lad
>Get your own FREE website, FREE domain & FREE mobile app with Company 
>email.  Know More >
>___
>ceph-users mailing list
>ceph-users@lists.ceph.com
>http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>   ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] "rbd map" says "bat option at rw"

2013-11-01 Thread Josh Durgin

On 11/01/2013 03:07 AM, nicolasc wrote:

Hi every one,

I finally and happily managed to get my Ceph cluster (3 monitors among 8
nodes, each with 9 OSDs) running on version 0.71, but the "rbd map"
command shows a weird behaviour.

I can list pools, create images and snapshots, alleluia!
However, mapping to a device with "rbd map" is not working. When I try
this from one of my nodes, the kernel says:
 libceph: bad option at 'rw'
Which "rbd" translates into:
 add failed: (22) Invalid argument

Any idea of what that could indicate?


Thanks for the report! The rw option was added in Linux 3.7. In Ceph
0.71, rbd map passes the 'rw' and 'ro' options to tell the kernel
whether to map the image as read-only or read/write. This will be fixed
in 0.72 by:

https://github.com/ceph/ceph/pull/807

Josh
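
Until 0.72 is out, one possible workaround on an older kernel (an untested
sketch; the monitor address, pool and image names are placeholders, and with
cephx disabled only the client name needs to be passed) is to skip the CLI's
option string and write to the sysfs interface that rbd map uses underneath:

echo "192.168.0.1:6789 name=admin rbd myimage" > /sys/bus/rbd/add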


I am using a basic config: no authentication, default crushmap (I just
changed some weights), and basic network config (public net, cluster
net). I have tried both image formats, different sizes and pools.

Moreover, I have a client running rbd from Ceph version 0.61.9, and from
there everything works fine with "rbd map" on the same image. Both nodes
(Ceph 0.61.9 and 0.71) are running Linux kernel 3.2 for Debian.

Hope you can provide some hints. Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Very frustrated with Ceph!

2013-11-01 Thread Sage Weil
On Sat, 2 Nov 2013, Trivedi, Narendra wrote:
> 
> Hi Sage,
> 
> I believe I issued a "ceph-deploy install..." from the admin node as per the
> documentation and that was almost ok as per the output of the command below
> except sometimes there?s an error followed by an ?OK? message (see the
> highlighted item in the red below). I eventually ran into some permission
> issues but seems things went okay:  

Hmm, the below output makes it look like it was successfully installed on 
node1 node2 and node3.  Can you confirm that /etc/ceph exists on all three 
of those hosts?

Oh, looking back at your original message, it looks like you are trying to 
create OSDs on /tmp/osd*.  I would create directories like /ceph/osd0, 
/ceph/osd1, or similar.  I believe you need to create the directories 
beforehand, too.  (In a normal deployment, you are either feeding Ceph raw 
disks (/dev/XXX) or an existing mount point on a dedicated disk you have 
already configured and mounted.)
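
In other words, something along these lines on each OSD host before re-running
ceph-deploy (the paths are just examples):

sudo mkdir -p /ceph/osd0    # on ceph-node2-osd0-centos-6-4
sudo mkdir -p /ceph/osd1    # on ceph-node3-osd1-centos-6-4

ceph-deploy osd prepare ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1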

sage


 > 
>  
> 
> [ceph@ceph-admin-node-centos-6-4 my-cluster]$ ceph-deploy install
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
> 
> [ceph_deploy.cli][INFO  ] Invoked (1.3): /usr/bin/ceph-deploy install
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
> 
> [ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster
> ceph hosts ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
> 
> [ceph_deploy.install][DEBUG ] Detecting platform for host
> ceph-node1-mon-centos-6-4 ...
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] connected to host:
> ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] detect platform information from remote
> host
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] detect machine type
> 
> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
> 
> [ceph-node1-mon-centos-6-4][INFO  ] installing ceph on
> ceph-node1-mon-centos-6-4
> 
> [ceph-node1-mon-centos-6-4][INFO  ] adding EPEL repository
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo wget
> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
> 
> [ceph-node1-mon-centos-6-4][ERROR ] --2013-11-01 19:51:20-- 
> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
> 
> [ceph-node1-mon-centos-6-4][ERROR ] Connecting to 10.12.132.208:8080...
> connected.
> 
> [ceph-node1-mon-centos-6-4][ERROR ] Proxy request sent, awaiting response...
> 200 OK
> 
> [ceph-node1-mon-centos-6-4][ERROR ] Length: 14540 (14K) [application/x-rpm]
> 
> [ceph-node1-mon-centos-6-4][ERROR ] Saving to:
> `epel-release-6-8.noarch.rpm.2'
> 
> [ceph-node1-mon-centos-6-4][ERROR ]
> 
> [ceph-node1-mon-centos-6-4][ERROR ]  0K ..
>        100% 4.79M=0.003s
> 
> [ceph-node1-mon-centos-6-4][ERROR ]
> 
> [ceph-node1-mon-centos-6-4][ERROR ] Last-modified header invalid --
> time-stamp ignored.
> 
> [ceph-node1-mon-centos-6-4][ERROR ] 2013-11-01 19:52:20 (4.79 MB/s) -
> `epel-release-6-8.noarch.rpm.2' saved [14540/14540]
> 
> [ceph-node1-mon-centos-6-4][ERROR ]
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rpm -Uvh
> --replacepkgs epel-release-6*.rpm
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Preparing...   
> ##
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] epel-release   
> ##
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rpm --import
> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo rpm -Uvh
> --replacepkgs
> http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Retrieving
> http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Preparing...   
> ##
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ceph-release   
> ##
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo yum -y -q install
> ceph
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] Package ceph-0.67.4-0.el6.x86_64 already
> installed and latest version
> 
> [ceph-node1-mon-centos-6-4][INFO  ] Running command: sudo ceph --version
> 
> [ceph-node1-mon-centos-6-4][DEBUG ] ceph version 0.67.4
> (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
> 
> [ceph_deploy.install][DEBUG ] Detecting platform for host
> ceph-node2-osd0-centos-6-4 ...
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] connected to host:
> ceph-node2-osd0-centos-6-4
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform information from remote
> host
> 
> [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine type
> 
> [ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
> 
> [ceph-node2-osd0-centos-6-4][INFO  ] installing