Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Leen Besselink
On Fri, May 10, 2013 at 12:12:45AM +0100, Neil Levine wrote:
> Leen,
> 
> Do you mean you get LIO working with RBD directly? Or are you just
> re-exporting a kernel mounted volume?
> 

Yes, re-exporting a kernel mounted volume on separate gateway machines.
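
Roughly, the gateway side looks like this (a minimal sketch, not our exact
config; image names and IQNs are made up, and older targetcli versions call
the block backstore "iblock" instead of "block"):

 rbd map rbd/vmware-ds01                 # exposes the image as /dev/rbd0
 targetcli /backstores/block create name=vmware-ds01 dev=/dev/rbd0
 targetcli /iscsi create iqn.2013-05.com.example:gw01
 targetcli /iscsi/iqn.2013-05.com.example:gw01/tpg1/luns create /backstores/block/vmware-ds01

Plus the usual portal/ACL setup, and the same thing again on the other gateway.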

> Neil
> 
> On Thu, May 9, 2013 at 11:58 PM, Leen Besselink  
> wrote:
> > On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
> >> Jared,
> >>
> >> As Weiguo says you will need to use a gateway to present a Ceph block
> >> device (RBD) in a format VMware understands. We've contributed the
> >> relevant code to the TGT iSCSI target (see blog:
> >> http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
> >> we haven't done a massive amount of testing on it, I'd love to get
> >> some feedback on it. We will be putting more effort into it this cycle
> >> (including producing a package).
> >>
> >
> > We also have a legacy virtualization setup we are thinking of using with 
> > Ceph
> > and iSCSI. However, we also ended up at LIO, because LIO supports the iSCSI
> > extensions which are needed for clustering.
> >
> > stgt doesn't yet support all the needed extensions as far as I can see.
> >
> > There seems to be exactly one person sporadically working on improving stgt
> > in this area.
> >
> >> If you have a VMware account rep, be sure to ask him to file support
> >> for Ceph as a customer request with the product teams while we
> >> continue to knock on VMware's door :-)
> >>
> >> Neil
> >>
> >> On Thu, May 9, 2013 at 11:30 PM, w sun  wrote:
> >> > RBD is not supported by VMware/vSphere. You will need to build a
> >> > NFS/iSCSI/FC GW to support VMware. Here is a post someone has been trying
> >> > and you may have to contact them directly for status,
> >> >
> >> > http://ceph.com/community/ceph-over-fibre-for-vmware/
> >> >
> >> > --weiguo
> >> >
> >> > 
> >> > To: ceph-users@lists.ceph.com
> >> > From: jaredda...@shelterinsurance.com
> >> > Date: Thu, 9 May 2013 17:25:02 -0500
> >> > Subject: [ceph-users] Using Ceph as Storage for VMware
> >> >
> >> >
> >> > I am investigating using Ceph as a storage target for virtual servers in
> >> > VMware.  We have 3 servers packed with hard drives ready for the proof of
> >> > concept.  I am looking for some direction.  Is this a valid use for Ceph?
> >> > If so, has anybody accomplished this?  Are there any documents on how to 
> >> > set
> >> > this up?  Should I use RBD, NFS, etc.?  Any help would be greatly
> >> > appreciated.
> >> >
> >> >
> >> > Thank You,
> >> >
> >> > JD
> >> >
> >> >
> >> > ___ ceph-users mailing list
> >> > ceph-users@lists.ceph.com
> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >> >
> >> > ___
> >> > ceph-users mailing list
> >> > ceph-users@lists.ceph.com
> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >> >
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Neil Levine
Leen,

Do you mean you get LIO working with RBD directly? Or are you just
re-exporting a kernel mounted volume?

Neil

On Thu, May 9, 2013 at 11:58 PM, Leen Besselink  wrote:
> On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
>> Jared,
>>
>> As Weiguo says you will need to use a gateway to present a Ceph block
>> device (RBD) in a format VMware understands. We've contributed the
>> relevant code to the TGT iSCSI target (see blog:
>> http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
>> we haven't done a massive amount of testing on it, I'd love to get
>> some feedback on it. We will be putting more effort into it this cycle
>> (including producing a package).
>>
>
> We also have a legacy virtualization setup we are thinking of using with Ceph
> and iSCSI. However, we also ended up at LIO, because LIO supports the iSCSI
> extensions which are needed for clustering.
>
> stgt doesn't yet support all the needed extensions as far as I can see.
>
> There seems to be exactly one person sporadically working on improving stgt
> in this area.
>
>> If you have a VMware account rep, be sure to ask him to file support
>> for Ceph as a customer request with the product teams while we
>> continue to knock on VMware's door :-)
>>
>> Neil
>>
>> On Thu, May 9, 2013 at 11:30 PM, w sun  wrote:
>> > RBD is not supported by VMware/vSphere. You will need to build a
>> > NFS/iSCSI/FC GW to support VMware. Here is a post someone has been trying
>> > and you may have to contact them directly for status,
>> >
>> > http://ceph.com/community/ceph-over-fibre-for-vmware/
>> >
>> > --weiguo
>> >
>> > 
>> > To: ceph-users@lists.ceph.com
>> > From: jaredda...@shelterinsurance.com
>> > Date: Thu, 9 May 2013 17:25:02 -0500
>> > Subject: [ceph-users] Using Ceph as Storage for VMware
>> >
>> >
>> > I am investigating using Ceph as a storage target for virtual servers in
>> > VMware.  We have 3 servers packed with hard drives ready for the proof of
>> > concept.  I am looking for some direction.  Is this a valid use for Ceph?
>> > If so, has anybody accomplished this?  Are there any documents on how to 
>> > set
>> > this up?  Should I use RBD, NFS, etc.?  Any help would be greatly
>> > appreciated.
>> >
>> >
>> > Thank You,
>> >
>> > JD
>> >
>> >
>> > ___ ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Leen Besselink
On Thu, May 09, 2013 at 11:51:32PM +0100, Neil Levine wrote:
> Jared,
> 
> As Weiguo says you will need to use a gateway to present a Ceph block
> device (RBD) in a format VMware understands. We've contributed the
> relevant code to the TGT iSCSI target (see blog:
> http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
> we haven't done a massive amount of testing on it, I'd love to get
> some feedback on it. We will be putting more effort into it this cycle
> (including producing a package).
> 

We also have a legacy virtualization setup we are thinking of using with Ceph
and iSCSI. However, we also ended up at LIO, because LIO supports the iSCSI
extensions which are needed for clustering.

stgt doesn't yet support all the needed extensions as far as I can see.

There seems to be exactly one person sporadically working on improving stgt
in this area.

> If you have a VMware account rep, be sure to ask him to file support
> for Ceph as a customer request with the product teams while we
> continue to knock on VMware's door :-)
> 
> Neil
> 
> On Thu, May 9, 2013 at 11:30 PM, w sun  wrote:
> > RBD is not supported by VMware/vSphere. You will need to build a
> > NFS/iSCSI/FC GW to support VMware. Here is a post someone has been trying
> > and you may have to contact them directly for status,
> >
> > http://ceph.com/community/ceph-over-fibre-for-vmware/
> >
> > --weiguo
> >
> > 
> > To: ceph-users@lists.ceph.com
> > From: jaredda...@shelterinsurance.com
> > Date: Thu, 9 May 2013 17:25:02 -0500
> > Subject: [ceph-users] Using Ceph as Storage for VMware
> >
> >
> > I am investigating using Ceph as a storage target for virtual servers in
> > VMware.  We have 3 servers packed with hard drives ready for the proof of
> > concept.  I am looking for some direction.  Is this a valid use for Ceph?
> > If so, has anybody accomplished this?  Are there any documents on how to set
> > this up?  Should I use RBD, NFS, etc.?  Any help would be greatly
> > appreciated.
> >
> >
> > Thank You,
> >
> > JD
> >
> >
> > ___ ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Neil Levine
Jared,

As Weiguo says you will need to use a gateway to present a Ceph block
device (RBD) in a format VMware understands. We've contributed the
relevant code to the TGT iSCSI target (see blog:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/) and though
we haven't done a massive amount of testing on it, I'd love to get
some feedback on it. We will be putting more effort into it this cycle
(including producing a package).
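
For reference, with the rbd backing-store the target talks to RADOS directly,
along the lines of the following (an untested sketch; the target IQN and the
pool/image names here are made up):

 tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2013-05.com.example:rbd
 tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
     --bstype rbd --backing-store rbd/vmware-image
 tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL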

If you have a VMware account rep, be sure to ask him to file support
for Ceph as a customer request with the product teams while we
continue to knock on VMware's door :-)

Neil

On Thu, May 9, 2013 at 11:30 PM, w sun  wrote:
> RBD is not supported by VMware/vSphere. You will need to build a
> NFS/iSCSI/FC GW to support VMware. Here is a post someone has been trying
> and you may have to contact them directly for status,
>
> http://ceph.com/community/ceph-over-fibre-for-vmware/
>
> --weiguo
>
> 
> To: ceph-users@lists.ceph.com
> From: jaredda...@shelterinsurance.com
> Date: Thu, 9 May 2013 17:25:02 -0500
> Subject: [ceph-users] Using Ceph as Storage for VMware
>
>
> I am investigating using Ceph as a storage target for virtual servers in
> VMware.  We have 3 servers packed with hard drives ready for the proof of
> concept.  I am looking for some direction.  Is this a valid use for Ceph?
> If so, has anybody accomplished this?  Are there any documents on how to set
> this up?  Should I use RBD, NFS, etc.?  Any help would be greatly
> appreciated.
>
>
> Thank You,
>
> JD
>
>
> ___ ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread w sun
RBD is not supported by VMware/vSphere. You will need to build a NFS/iSCSI/FC 
GW to support VMware. Here is a post someone has been trying and you may have 
to contact them directly for status,
http://ceph.com/community/ceph-over-fibre-for-vmware/
--weiguo

To: ceph-users@lists.ceph.com
From: jaredda...@shelterinsurance.com
Date: Thu, 9 May 2013 17:25:02 -0500
Subject: [ceph-users] Using Ceph as Storage for VMware

I am investigating using Ceph as a storage target for virtual servers in
VMware.  We have 3 servers packed with hard drives ready for the proof of
concept.  I am looking for some direction.  Is this a valid use for Ceph?
If so, has anybody accomplished this?  Are there any documents on how to set
this up?  Should I use RBD, NFS, etc.?  Any help would be greatly appreciated.

Thank You,

JD


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com  
  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Using Ceph as Storage for VMware

2013-05-09 Thread Jared Davis
I am investigating using Ceph as a storage target for virtual servers in 
VMware.  We have 3 servers packed with hard drives ready for the proof of 
concept.  I am looking for some direction.  Is this a valid use for Ceph? 
If so, has anybody accomplished this?  Are there any documents on how to 
set this up?  Should I use RBD, NFS, etc.?  Any help would be greatly 
appreciated.


Thank You,

JD

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Igor Laskovy
I am confused about the ceph-deploy use case too.

>Modify the ceph.conf in the ceph-deploy directory to add your
>
>cluster network = 1.2.3.0/24

What about mentioning the cluster IP for a specific OSD ("cluster addr = 1.2.3.1")?
Does it grab it automatically?


On Thu, May 9, 2013 at 10:45 PM, Sage Weil  wrote:

> On Thu, 9 May 2013, Danny Luhde-Thompson wrote:
> > Hi everyone,
> > After reading all the research papers and docs over the last few months
> and
> > waiting for Cuttlefish, I finally deployed a test cluster of 18 osds
> across
> > 6 hosts.  It's performing better than I expected so far, all on the
> default
> > single interface.
> >
> > I was also surprised by the minimal ceph.conf and non-functioning
> "service
> > ceph" too, so perhaps worth mentioning in the doc?  ceph-deploy also
> failed
> > to start one of the osds (with activate), but running ceph-osd manually
> > worked.
>
> If you can reproduce this problem, we'd love to hear details!
>
> > I now want to enter a secondary interface for the cluster-side network,
> so I
> > assume I have to enter all the OSD's in ceph.conf manually anyway?
>
> Modify the ceph.conf in the ceph-deploy directory to add your
>
>  cluster network = 1.2.3.0/24
>
> or whatever it is, and then
>
>  ceph-deploy config push HOST [...]
>
> and then restart the daemons.
>
> > Very impressed so far!
>
> Thanks!  ceph-deploy is pretty new and rough around the edges, so any
> feedback (and patches!) are very welcome!
>
> sage
>
>
> >
> > Danny
> >
> > On 9 May 2013 19:29, Sage Weil  wrote:
> >   On Thu, 9 May 2013, Greg Chavez wrote:
> >   > So I feel like I'm missing something.  I just deployed 3
> >   storage nodes
> >   > with ceph-deploy, each with a monitor daemon and 6-8 osd's.
> >   All of
> >   > them seem to be active with health OK.  However, it doesn't
> >   seem that
> >   > I ended up with a useful ceph.conf.
> >   >
> >   > ( running 0.61-113-g61354b2-1raring )
> >   >
> >   > This is all I get:
> >   >
> >   > [global]
> >   > fsid = af1581c1-8c45-4e24-b4f1-9a56e8a62aeb
> >   > mon_initial_members = kvm-cs-sn-10i, kvm-cs-sn-14i,
> >   kvm-cs-sn-15i
> >   > mon_host = 192.168.241.110,192.168.241.114,192.168.241.115
> >   > auth_supported = cephx
> >   > osd_journal_size = 1024
> >   > filestore_xattr_use_omap = true
> >
> > This is correct.  It is no longer necessary to enumerate daemons in
> > ceph.conf when using the default locations (/var/lib/ceph/osd/*).
> >
> > > When you run "service ceph status", it returns nothing because it
> > > can't find any osd stanzas.
> >
> > 'service' invokes the sysvinit scripts, and on Ubuntu we are using
> > upstart.  Try
> >
> >  initctl list | grep ceph
> >
> > to see what is running.
> >
> > sage
> >
> >
> > > Based on the directory names in
> > > /var/lib/ceph/mon and the parsing of the mounted storage volumes I
> > > wrote this out:
> > >
> > > http://pastebin.com/DLgtiC23
> > >
> > > And then pushed it out.
> > >
> > > The monitor id names seem odd to me, with the hostname instead of a,
> > > b, and c, but whatever.
> > >
> > > Now I get this output:
> > >
> > > root@kvm-cs-sn-10i:/etc/ceph# service ceph -a status
> > > === mon.kvm-cs-sn-10i ===
> > > mon.kvm-cs-sn-10i: not running.
> > > === mon.kvm-cs-sn-14i ===
> > > mon.kvm-cs-sn-14i: not running.
> > > === mon.kvm-cs-sn-15i ===
> > > mon.kvm-cs-sn-15i: not running.
> > > === osd.0 ===
> > > osd.0: not running.
> > > === osd.1 ===
> > > osd.1: not running.
> > > === osd.10 ===
> > > osd.10: not running.
> > > ...etc
> > >
> > > Not true!  Even worse, when I try to run "service ceph -a start", It
> > > freaks and complains about missing keys.  So now I have this process
> > > hanging around:
> > >
> > > /usr/bin/python /usr/sbin/ceph-create-keys -i kvm-cs-sn-10i
> > >
> > > Here's the output from that attempt:
> > >
> > > root@kvm-cs-sn-10i:/tmp# service ceph -a start
> > > === mon.kvm-cs-sn-10i ===
> > > Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
> > > failed: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
> > > --pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c
> > /etc/ceph/ceph.conf
> > > '
> > > Starting ceph-create-keys on kvm-cs-sn-10i...
> > >
> > > Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
> > >
> > > Would dearly love some guidance.  Thanks in advance!
> > >
> > > --Greg Chavez
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> > >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> >
> > --
> > Mean Trading Systems LLP
> > http://www.meantradingsystems.com
> >
> >
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> h

Re: [ceph-users] ceph-deploy issue with non-default cluster name?

2013-05-09 Thread w sun
Ah, now I see the only purpose of cluster naming is for adding the same node to 
multiple clusters. Thx for the quick pointer. --weiguo

Date: Thu, 9 May 2013 12:52:42 -0700
From: s...@inktank.com
To: ws...@hotmail.com
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy issue with non-default cluster name?

On Thu, 9 May 2013, w sun wrote:
> I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else
> seen this?
> 
> When creating new monitor, on the server node 1, found the directory
> prepended with default cluster name "ceph" ( was created,
> root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
> total 12
> drwxr-xr-x 3 root root 4096 May  9 12:22 ./
> drwxr-xr-x 9 root root 4096 May  8 12:14 ../
> drwxr-xr-x 2 root root 4096 May  9 12:22 ceph-svl-ceph-01/
 
That prepended 'ceph' is the cluster name (which defaults to ceph).  Leave 
off the --cluster XX arg unless  you plan on having a single host 
participate in multiple ceph clusters.
 
sage
 
 
> 
> ---
> --
> 
> ceph@svl-ceph-deploy:~/svl-ceph-cluster$ ceph-deploy --overwrite-conf
> --cluster svlCephPoc mon create svl-ceph-0{1,2,3} 
> 
> ceph-mon: mon.noname-a 10.33.156.144:6789/0 is local, renaming to
> mon.svl-ceph-01
> ceph-mon: set fsid to 3936e344-5f07-4d1e-b1e8-bc6f14b99bcd
> IO error: /var/lib/ceph/mon/svlCephPoc-svl-ceph-01/store.db/LOCK: No such
> file or directory
> ceph-mon: error opening mon data directory at
> '/var/lib/ceph/mon/svlCephPoc-svl-ceph-01': (22) Invalid argument
> Traceback (most recent call last):
>   File "/home/ceph/ceph-deploy/ceph-deploy", line 9, in 
> load_entry_point('ceph-deploy==0.1', 'console_scripts', 'ceph-deploy')()
>   File "/users/ceph/ceph-deploy/ceph_deploy/cli.py", line 112, in main
> return args.func(args)
>   File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 234, in mon
> mon_create(args)
>   File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 138, in mon_create
> init=init,
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/proxy.py", line 255, in 
> (conn.operator(type_, self, args, kwargs))
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/connection.py", line 66, in operator
> return self.send_request(type_, (object, args, kwargs))
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 323, in
> send_request
> return self.__handle(m)
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 639, in __handle
> raise e
> pushy.protocol.proxy.ExceptionProxy: Command '['ceph-mon', '--cluster',
> 'svlCephPoc', '--mkfs', '-i', 'svl-ceph-01', '--keyring',
> '/var/lib/ceph/tmp/svlCephPoc-svl-ceph-01.mon.keyring']' returned non-zero
> exit status 1
> 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com  
  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy issue with non-default cluster name?

2013-05-09 Thread Sage Weil
On Thu, 9 May 2013, w sun wrote:
> I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else
> seen this?
> 
> When creating new monitor, on the server node 1, found the directory
> prepended with default cluster name "ceph" ( was created,
> root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
> total 12
> drwxr-xr-x 3 root root 4096 May  9 12:22 ./
> drwxr-xr-x 9 root root 4096 May  8 12:14 ../
> drwxr-xr-x 2 root root 4096 May  9 12:22 ceph-svl-ceph-01/

That prepended 'ceph' is the cluster name (which defaults to ceph).  Leave 
off the --cluster XX arg unless  you plan on having a single host 
participate in multiple ceph clusters.

sage


> 
> ---
> --
> 
> ceph@svl-ceph-deploy:~/svl-ceph-cluster$ ceph-deploy --overwrite-conf
> --cluster svlCephPoc mon create svl-ceph-0{1,2,3} 
> 
> ceph-mon: mon.noname-a 10.33.156.144:6789/0 is local, renaming to
> mon.svl-ceph-01
> ceph-mon: set fsid to 3936e344-5f07-4d1e-b1e8-bc6f14b99bcd
> IO error: /var/lib/ceph/mon/svlCephPoc-svl-ceph-01/store.db/LOCK: No such
> file or directory
> ceph-mon: error opening mon data directory at
> '/var/lib/ceph/mon/svlCephPoc-svl-ceph-01': (22) Invalid argument
> Traceback (most recent call last):
>   File "/home/ceph/ceph-deploy/ceph-deploy", line 9, in 
>     load_entry_point('ceph-deploy==0.1', 'console_scripts', 'ceph-deploy')()
>   File "/users/ceph/ceph-deploy/ceph_deploy/cli.py", line 112, in main
>     return args.func(args)
>   File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 234, in mon
>     mon_create(args)
>   File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 138, in mon_create
>     init=init,
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/proxy.py", line 255, in 
>     (conn.operator(type_, self, args, kwargs))
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/connection.py", line 66, in operator
>     return self.send_request(type_, (object, args, kwargs))
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 323, in
> send_request
>     return self.__handle(m)
>   
> File"/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy
> -0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 639, in __handle
>     raise e
> pushy.protocol.proxy.ExceptionProxy: Command '['ceph-mon', '--cluster',
> 'svlCephPoc', '--mkfs', '-i', 'svl-ceph-01', '--keyring',
> '/var/lib/ceph/tmp/svlCephPoc-svl-ceph-01.mon.keyring']' returned non-zero
> exit status 1
> 
> 
> ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs command functional on fuse?

2013-05-09 Thread Sage Weil
On Thu, 9 May 2013, Danny Luhde-Thompson wrote:
> I don't seem to be able to use the cephfs command via a fuse mount.  Is this
> expected?  I saw no mention of it in the doc.  This is on the default
> precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC 2013
> x86_64 x86_64 x86_64 GNU/Linux).
> danny@ceph:/ceph$ cephfs . show_layout
> Error getting layout: Inappropriate ioctl for device

Yeah, the ioctls only work for the kernel client.

Most of what you used to do via the cephfs tool can now be done via 
getting and setting virtual xattrs:

 getfattr -d -m - file
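
For example, something like this for file layouts (a sketch of the idea; the
exact ceph.* attribute names, and whether setting them works on your version,
can vary, so check what the dump above shows first):

 getfattr -n ceph.file.layout somefile
 setfattr -n ceph.file.layout.stripe_unit -v 4194304 somefile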

sage

> 
> strace :-
> 
> open(".", O_RDONLY)                     = 3
> fstat(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
> ioctl(3, 0x80289701, 0x7fff3b7b1890)    = -1 ENOTTY (Inappropriate ioctl for
> device)
> 
> Thanks,
> 
> Danny
> 
> --
> Mean Trading Systems LLP
> http://www.meantradingsystems.com
> 
> ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Sage Weil
On Thu, 9 May 2013, Danny Luhde-Thompson wrote:
> Hi everyone,
> After reading all the research papers and docs over the last few months and
> waiting for Cuttlefish, I finally deployed a test cluster of 18 osds across
> 6 hosts.  It's performing better than I expected so far, all on the default
> single interface.
> 
> I was also surprised by the minimal ceph.conf and non-functioning "service
> ceph" too, so perhaps worth mentioning in the doc?  ceph-deploy also failed
> to start one of the osds (with activate), but running ceph-osd manually
> worked.

If you can reproduce this problem, we'd love to hear details!

> I now want to enter a secondary interface for the cluster-side network, so I
> assume I have to enter all the OSD's in ceph.conf manually anyway?

Modify the ceph.conf in the ceph-deploy directory to add your

 cluster network = 1.2.3.0/24

or whatever it is, and then

 ceph-deploy config push HOST [...]

and then restart the daemons.
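
The pushed-out [global] section would then look something like this (the
addresses are just examples; add a public network line too if you want the
client-facing side pinned to a specific subnet):

 [global]
   ... existing ceph-deploy generated settings ...
   public network = 10.0.0.0/24
   cluster network = 1.2.3.0/24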

> Very impressed so far!

Thanks!  ceph-deploy is pretty new and rough around the edges, so any 
feedback (and patches!) are very welcome!

sage


> 
> Danny
> 
> On 9 May 2013 19:29, Sage Weil  wrote:
>   On Thu, 9 May 2013, Greg Chavez wrote:
>   > So I feel like I'm missing something.  I just deployed 3
>   storage nodes
>   > with ceph-deploy, each with a monitor daemon and 6-8 osd's.
>   All of
>   > them seem to be active with health OK.  However, it doesn't
>   seem that
>   > I ended up with a useful ceph.conf.
>   >
>   > ( running 0.61-113-g61354b2-1raring )
>   >
>   > This is all I get:
>   >
>   > [global]
>   > fsid = af1581c1-8c45-4e24-b4f1-9a56e8a62aeb
>   > mon_initial_members = kvm-cs-sn-10i, kvm-cs-sn-14i,
>   kvm-cs-sn-15i
>   > mon_host = 192.168.241.110,192.168.241.114,192.168.241.115
>   > auth_supported = cephx
>   > osd_journal_size = 1024
>   > filestore_xattr_use_omap = true
> 
> This is correct.  It is no longer necessary to enumerate daemons in
> ceph.conf when using the default locations (/var/lib/ceph/osd/*).
> 
> > When you run "service ceph status", it returns nothing because it
> > can't find any osd stanzas.
> 
> 'service' invokes the sysvinit scripts, and on Ubuntu we are using
> upstart.  Try
> 
>  initctl list | grep ceph
> 
> to see what is running.
> 
> sage
> 
> 
> > Based on the directory names in
> > /var/lib/ceph/mon and the parsing of the mounted storage volumes I
> > wrote this out:
> >
> > http://pastebin.com/DLgtiC23
> >
> > And then pushed it out.
> >
> > The monitor id names seem odd to me, with the hostname instead of a,
> > b, and c, but whatever.
> >
> > Now I get this output:
> >
> > root@kvm-cs-sn-10i:/etc/ceph# service ceph -a status
> > === mon.kvm-cs-sn-10i ===
> > mon.kvm-cs-sn-10i: not running.
> > === mon.kvm-cs-sn-14i ===
> > mon.kvm-cs-sn-14i: not running.
> > === mon.kvm-cs-sn-15i ===
> > mon.kvm-cs-sn-15i: not running.
> > === osd.0 ===
> > osd.0: not running.
> > === osd.1 ===
> > osd.1: not running.
> > === osd.10 ===
> > osd.10: not running.
> > ...etc
> >
> > Not true!  Even worse, when I try to run "service ceph -a start", It
> > freaks and complains about missing keys.  So now I have this process
> > hanging around:
> >
> > /usr/bin/python /usr/sbin/ceph-create-keys -i kvm-cs-sn-10i
> >
> > Here's the output from that attempt:
> >
> > root@kvm-cs-sn-10i:/tmp# service ceph -a start
> > === mon.kvm-cs-sn-10i ===
> > Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
> > failed: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
> > --pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c
> /etc/ceph/ceph.conf
> > '
> > Starting ceph-create-keys on kvm-cs-sn-10i...
> >
> > Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
> >
> > Would dearly love some guidance.  Thanks in advance!
> >
> > --Greg Chavez
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> 
> --
> Mean Trading Systems LLP
> http://www.meantradingsystems.com
> 
> ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy issue with non-default cluster name?

2013-05-09 Thread w sun
I think I ran into a bug with ceph-deploy on cuttlefish? Has anyone else seen 
this?
When creating a new monitor, on server node 1, I found that a directory
prepended with the default cluster name "ceph" was created:

root@svl-ceph-01:/var/lib/ceph# ll /var/lib/ceph/mon/
total 12
drwxr-xr-x 3 root root 4096 May  9 12:22 ./
drwxr-xr-x 9 root root 4096 May  8 12:14 ../
drwxr-xr-x 2 root root 4096 May  9 12:22 ceph-svl-ceph-01/
-
ceph@svl-ceph-deploy:~/svl-ceph-cluster$ ceph-deploy --overwrite-conf --cluster 
svlCephPoc mon create svl-ceph-0{1,2,3} 
ceph-mon: mon.noname-a 10.33.156.144:6789/0 is local, renaming to mon.svl-ceph-01
ceph-mon: set fsid to 3936e344-5f07-4d1e-b1e8-bc6f14b99bcd
IO error: /var/lib/ceph/mon/svlCephPoc-svl-ceph-01/store.db/LOCK: No such file or directory
ceph-mon: error opening mon data directory at '/var/lib/ceph/mon/svlCephPoc-svl-ceph-01': (22) Invalid argument
Traceback (most recent call last):
  File "/home/ceph/ceph-deploy/ceph-deploy", line 9, in 
    load_entry_point('ceph-deploy==0.1', 'console_scripts', 'ceph-deploy')()
  File "/users/ceph/ceph-deploy/ceph_deploy/cli.py", line 112, in main
    return args.func(args)
  File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 234, in mon
    mon_create(args)
  File "/users/ceph/ceph-deploy/ceph_deploy/mon.py", line 138, in mon_create
    init=init,
  File "/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/proxy.py", line 255, in 
    (conn.operator(type_, self, args, kwargs))
  File "/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/connection.py", line 66, in operator
    return self.send_request(type_, (object, args, kwargs))
  File "/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 323, in send_request
    return self.__handle(m)
  File "/users/ceph/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 639, in __handle
    raise e
pushy.protocol.proxy.ExceptionProxy: Command '['ceph-mon', '--cluster', 'svlCephPoc', '--mkfs', '-i', 'svl-ceph-01', '--keyring', '/var/lib/ceph/tmp/svlCephPoc-svl-ceph-01.mon.keyring']' returned non-zero exit status 1
  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs command functional on fuse?

2013-05-09 Thread Danny Luhde-Thompson
I don't seem to be able to use the cephfs command via a fuse mount.  Is
this expected?  I saw no mention of it in the doc.  This is on the default
precise kernel (3.2.0-40-generic #64-Ubuntu SMP Mon Mar 25 21:22:10 UTC
2013 x86_64 x86_64 x86_64 GNU/Linux).

danny@ceph:/ceph$ cephfs . show_layout
Error getting layout: Inappropriate ioctl for device

strace :-

open(".", O_RDONLY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
ioctl(3, 0x80289701, 0x7fff3b7b1890)= -1 ENOTTY (Inappropriate ioctl
for device)

Thanks,

Danny

-- 
Mean Trading Systems LLP
http://www.meantradingsystems.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Danny Luhde-Thompson
Hi everyone,

After reading all the research papers and docs over the last few months and
waiting for Cuttlefish, I finally deployed a test cluster of 18 osds across
6 hosts.  It's performing better than I expected so far, all on the default
single interface.

I was also surprised by the minimal ceph.conf and non-functioning "service
ceph" too, so perhaps worth mentioning in the doc?  ceph-deploy also failed
to start one of the osds (with activate), but running ceph-osd manually
worked.

I now want to enter a secondary interface for the cluster-side network, so
I assume I have to enter all the OSD's in ceph.conf manually anyway?

Very impressed so far!

Danny

On 9 May 2013 19:29, Sage Weil  wrote:

> On Thu, 9 May 2013, Greg Chavez wrote:
> > So I feel like I'm missing something.  I just deployed 3 storage nodes
> > with ceph-deploy, each with a monitor daemon and 6-8 osd's. All of
> > them seem to be active with health OK.  However, it doesn't seem that
> > I ended up with a useful ceph.conf.
> >
> > ( running 0.61-113-g61354b2-1raring )
> >
> > This is all I get:
> >
> > [global]
> > fsid = af1581c1-8c45-4e24-b4f1-9a56e8a62aeb
> > mon_initial_members = kvm-cs-sn-10i, kvm-cs-sn-14i, kvm-cs-sn-15i
> > mon_host = 192.168.241.110,192.168.241.114,192.168.241.115
> > auth_supported = cephx
> > osd_journal_size = 1024
> > filestore_xattr_use_omap = true
>
> This is correct.  It is no longer necessary to enumerate daemons in
> ceph.conf when using the default locations (/var/lib/ceph/osd/*).
>
> > When you run "service ceph status", it returns nothing because it
> > can't find any osd stanzas.
>
> 'service' invokes the sysvinit scripts, and on Ubuntu we are using
> upstart.  Try
>
>  initctl list | grep ceph
>
> to see what is running.
>
> sage
>
>
> > Based on the directory names in
> > /var/lib/ceph/mon and the parsing of the mounted storage volumes I
> > wrote this out:
> >
> > http://pastebin.com/DLgtiC23
> >
> > And then pushed it out.
> >
> > The monitor id names seem odd to me, with the hostname instead of a,
> > b, and c, but whatever.
> >
> > Now I get this output:
> >
> > root@kvm-cs-sn-10i:/etc/ceph# service ceph -a status
> > === mon.kvm-cs-sn-10i ===
> > mon.kvm-cs-sn-10i: not running.
> > === mon.kvm-cs-sn-14i ===
> > mon.kvm-cs-sn-14i: not running.
> > === mon.kvm-cs-sn-15i ===
> > mon.kvm-cs-sn-15i: not running.
> > === osd.0 ===
> > osd.0: not running.
> > === osd.1 ===
> > osd.1: not running.
> > === osd.10 ===
> > osd.10: not running.
> > ...etc
> >
> > Not true!  Even worse, when I try to run "service ceph -a start", It
> > freaks and complains about missing keys.  So now I have this process
> > hanging around:
> >
> > /usr/bin/python /usr/sbin/ceph-create-keys -i kvm-cs-sn-10i
> >
> > Here's the output from that attempt:
> >
> > root@kvm-cs-sn-10i:/tmp# service ceph -a start
> > === mon.kvm-cs-sn-10i ===
> > Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
> > failed: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
> > --pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c /etc/ceph/ceph.conf
> > '
> > Starting ceph-create-keys on kvm-cs-sn-10i...
> >
> > Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
> >
> > Would dearly love some guidance.  Thanks in advance!
> >
> > --Greg Chavez
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Mean Trading Systems LLP
http://www.meantradingsystems.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cephfs/Hadoop/HBase

2013-05-09 Thread Noah Watkins
Mike,

I'm guessing that HBase is creating and deleting its blocks, but that the 
deletes are delayed:

  http://ceph.com/docs/master/dev/delayed-delete/

which would explain the correct reporting at the file system level, but not the 
actual 'data' pool. I'm not as familiar with this level of detail, and copied 
Greg who can probably answer easily.
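
In the meantime, 'rados df' will show the per-pool object counts so you can
watch whether the pool really shrinks as the delayed deletes get processed,
and the mds admin socket can dump its counters (what exactly is in there
varies by version); a rough sketch, assuming the default socket path and an
mds named a:

 rados df
 ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok perf dump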

Thanks,
Noah

On May 9, 2013, at 4:29 AM, Mike Bryant  wrote:

> Hi,
> I'm experimenting with running hbase using the hadoop-ceph java
> filesystem implementation, and I'm having an issue with space usage.
> 
> With the hbase daemons running, The amount of data in the 'data' pool
> grows continuously, at a much higher rate than expected. Doing a du,
> or ls -lh on a mounted copy shows a usage of ~16GB. But the data pool
> has grown to consume ~160GB at times. When I restart the daemons,
> shortly thereafter the data pool shrinks rapidly. If I restart all of
> them it comes down to match the actual space usage.
> 
> My current hypothesis is that the MDS isn't deleting the objects for
> some reason, possibly because there's still an open filehandle?
> 
> My question is, how can I get a report from the MDS on which objects
> aren't visible from the filesystem / why it hasn't deleted them yet /
> what open filehandles there are etc.
> 
> Cheers
> Mike
> 
> --
> Mike Bryant | Systems Administrator | Ocado Technology
> mike.bry...@ocado.com | 01707 382148 | www.ocado.com
> 
> -- 
> Notice:  This email is confidential and may contain copyright material of 
> Ocado Limited (the "Company"). Opinions and views expressed in this message 
> may not necessarily reflect the opinions and views of the Company.
> 
> If you are not the intended recipient, please notify us immediately and 
> delete all copies of this message. Please note that it is your 
> responsibility to scan this message for viruses.
> 
> Company reg. no. 3875000.
> 
> Ocado Limited
> Titan Court
> 3 Bishops Square
> Hatfield Business Park
> Hatfield
> Herts
> AL10 9NE
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Sage Weil
On Thu, 9 May 2013, Greg Chavez wrote:
> So I feel like I'm missing something.  I just deployed 3 storage nodes
> with ceph-deploy, each with a monitor daemon and 6-8 osd's. All of
> them seem to be active with health OK.  However, it doesn't seem that
> I ended up with a useful ceph.conf.
> 
> ( running 0.61-113-g61354b2-1raring )
> 
> This is all I get:
> 
> [global]
> fsid = af1581c1-8c45-4e24-b4f1-9a56e8a62aeb
> mon_initial_members = kvm-cs-sn-10i, kvm-cs-sn-14i, kvm-cs-sn-15i
> mon_host = 192.168.241.110,192.168.241.114,192.168.241.115
> auth_supported = cephx
> osd_journal_size = 1024
> filestore_xattr_use_omap = true

This is correct.  It is no longer necessary to enumerate daemons in 
ceph.conf when using the default locations (/var/lib/ceph/osd/*).

> When you run "service ceph status", it returns nothing because it
> can't find any osd stanzas.

'service' invokes the sysvinit scripts, and on Ubuntu we are using 
upstart.  Try

 initctl list | grep ceph

to see what is running.
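
You can also poke individual daemons through upstart; roughly (a sketch, the
job names and ids come from the initctl listing above):

 status ceph-mon id=kvm-cs-sn-10i
 restart ceph-osd id=0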

sage


> Based on the directory names in
> /var/lib/ceph/mon and the parsing of the mounted storage volumes I
> wrote this out:
> 
> http://pastebin.com/DLgtiC23
> 
> And then pushed it out.
> 
> The monitor id names seem odd to me, with the hostname instead of a,
> b, and c, but whatever.
> 
> Now I get this output:
> 
> root@kvm-cs-sn-10i:/etc/ceph# service ceph -a status
> === mon.kvm-cs-sn-10i ===
> mon.kvm-cs-sn-10i: not running.
> === mon.kvm-cs-sn-14i ===
> mon.kvm-cs-sn-14i: not running.
> === mon.kvm-cs-sn-15i ===
> mon.kvm-cs-sn-15i: not running.
> === osd.0 ===
> osd.0: not running.
> === osd.1 ===
> osd.1: not running.
> === osd.10 ===
> osd.10: not running.
> ...etc
> 
> Not true!  Even worse, when I try to run "service ceph -a start", It
> freaks and complains about missing keys.  So now I have this process
> hanging around:
> 
> /usr/bin/python /usr/sbin/ceph-create-keys -i kvm-cs-sn-10i
> 
> Here's the output from that attempt:
> 
> root@kvm-cs-sn-10i:/tmp# service ceph -a start
> === mon.kvm-cs-sn-10i ===
> Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
> failed: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
> --pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c /etc/ceph/ceph.conf
> '
> Starting ceph-create-keys on kvm-cs-sn-10i...
> 
> Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
> 
> Would dearly love some guidance.  Thanks in advance!
> 
> --Greg Chavez
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy doesn't update ceph.conf

2013-05-09 Thread Greg Chavez
So I feel like I'm missing something.  I just deployed 3 storage nodes
with ceph-deploy, each with a monitor daemon and 6-8 osd's. All of
them seem to be active with health OK.  However, it doesn't seem that
I ended up with a useful ceph.conf.

( running 0.61-113-g61354b2-1raring )

This is all I get:

[global]
fsid = af1581c1-8c45-4e24-b4f1-9a56e8a62aeb
mon_initial_members = kvm-cs-sn-10i, kvm-cs-sn-14i, kvm-cs-sn-15i
mon_host = 192.168.241.110,192.168.241.114,192.168.241.115
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

When you run "service ceph status", it returns nothing because it
can't find any osd stanzas.  Based on the directory names in
/var/lib/ceph/mon and the parsing of the mounted storage volumes I
wrote this out:

http://pastebin.com/DLgtiC23

And then pushed it out.

The monitor id names seem odd to me, with the hostname instead of a,
b, and c, but whatever.

Now I get this output:

root@kvm-cs-sn-10i:/etc/ceph# service ceph -a status
=== mon.kvm-cs-sn-10i ===
mon.kvm-cs-sn-10i: not running.
=== mon.kvm-cs-sn-14i ===
mon.kvm-cs-sn-14i: not running.
=== mon.kvm-cs-sn-15i ===
mon.kvm-cs-sn-15i: not running.
=== osd.0 ===
osd.0: not running.
=== osd.1 ===
osd.1: not running.
=== osd.10 ===
osd.10: not running.
...etc

Not true!  Even worse, when I try to run "service ceph -a start", It
freaks and complains about missing keys.  So now I have this process
hanging around:

/usr/bin/python /usr/sbin/ceph-create-keys -i kvm-cs-sn-10i

Here's the output from that attempt:

root@kvm-cs-sn-10i:/tmp# service ceph -a start
=== mon.kvm-cs-sn-10i ===
Starting Ceph mon.kvm-cs-sn-10i on kvm-cs-sn-10i...
failed: 'ulimit -n 8192;  /usr/bin/ceph-mon -i kvm-cs-sn-10i
--pid-file /var/run/ceph/mon.kvm-cs-sn-10i.pid -c /etc/ceph/ceph.conf
'
Starting ceph-create-keys on kvm-cs-sn-10i...

Luckily I hadn't set up my ssh keys yet, so that's as far as I got.

Would dearly love some guidance.  Thanks in advance!

--Greg Chavez
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v0.61.1 released

2013-05-09 Thread Sage Weil
This release fixes a problem when upgrading a bobtail cluster that had 
snapshots to cuttlefish.  Please use this instead of v0.61 if you are 
upgrading to avoid possible ceph-osd daemon crashes.  There is also fix 
for a problem deploying monitors and generating new authentication keys.

Notable changes:

 * osd: handle upgrade when legacy snap collections are present; repair 
   from previous failed restart
 * ceph-create-keys: fix race with ceph-mon startup (which broke 
   'ceph-deploy gatherkeys ...')
 * ceph-create-keys: gracefully handle bad response from ceph-osd
 * sysvinit: do not assume default osd_data when automatically weighting 
   OSD
 * osd: avoid crash from ill-behaved classes using getomapvals
 * debian: fix squeeze dependency
 * mon: debug options to log or dump leveldb transactions

You can get v0.61.1 from the usual places:

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.61.1.tar.gz
 * For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
 * For RPMs, see http://ceph.com/docs/master/install/rpm
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-09 Thread Dimitri Maziuk
On 05/09/2013 09:57 AM, Tyler Brekke wrote:
> For High availability RGW you would need a load balancer. HA Proxy is
> an example of a load balancer that has been used successfully with
> rados gateway endpoints.

Strictly speaking for HA you need an HA solution. E.g. heartbeat. Main
difference between that and load balancing is that one server serves the
clients until it dies, then another takes over. With load balancing, all
servers get a share of the requests. It can be configured to do HA: set
"main" server's share to 100%, then the backup will get no requests as
long as the main is up.

RRDNS is a load balancing solution. Depending on the implementation it can
simply return a list of IPs instead of a single IP for the host name,
then it's up to the client to pick one. A simple stupid client may
always pick the first one. A simple stupid server may always return the
list in the same order. That could be how all your clients always pick
the same server.
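
With haproxy, for instance, the "backup" keyword gives you exactly that
active/passive behaviour; a minimal sketch (hostnames and ports are made up):

 frontend rgw
     bind *:80
     default_backend rgw_nodes

 backend rgw_nodes
     option httpchk
     server rgw1 ceph01:80 check
     server rgw2 ceph02:80 check backup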

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-09 Thread Tyler Brekke
For High availability RGW you would need a load balancer. HA Proxy is
an example of a load balancer that has been used successfully with
rados gateway endpoints.

On Thu, May 9, 2013 at 5:51 AM, Igor Laskovy  wrote:
> Anybody?
>
>
> On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy  wrote:
>>
> >> I tried to do that and put them behind RR DNS, but unfortunately only one
> >> host can serve requests from clients - the second host does not respond at
> >> all.  I am not too familiar with apache, and the standard log files show
> >> nothing helpful.
> >> Maybe this whole HA design is wrong? Has anybody solved HA for the Rados
> >> Gateway endpoint? How?
>>
>>
>> On Wed, May 1, 2013 at 12:28 PM, Igor Laskovy 
>> wrote:
>>>
>>> Hello,
>>>
> >>> Are there any best practices for making RadosGW highly available?
> >>> For example, is it the right way to create two or three RadosGW instances
> >>> (keys for ceph-auth, directory and so on) and have, for example, this in ceph.conf:
>>>
>>> [client.radosgw.a]
>>> host = ceph01
>>> ...options...
>>>
>>> [client.radosgw.b]
>>> host = ceph02
>>> ...options...
>>>
> >>> Will these rgws run simultaneously?
> >>> Will radosgw.b be able to continue serving the load if the ceph01 host goes down?
>>>
>>> --
>>> Igor Laskovy
>>> facebook.com/igor.laskovy
>>> studiogrizzly.com
>>
>>
>>
>>
>> --
>> Igor Laskovy
>> facebook.com/igor.laskovy
>> studiogrizzly.com
>
>
>
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Developer Summit - Summary and Videos

2013-05-09 Thread Patrick McGarry
Greetings!

The videos, blueprints, etherpads, and irc logs from the developer
summit this week have been posted on both the original wiki page as
well as in an aggregated blog post:

http://ceph.com/events/ceph-developer-summit-summary-and-session-videos/

Thanks to everyone who came and made this a truly great first summit.
If you have questions or feedback please feel free to send them my
way.  See you in a few months for the next one!


Best Regards,

Patrick McGarry
Director, Community || Inktank

http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW High Availability

2013-05-09 Thread Igor Laskovy
Anybody?


On Tue, May 7, 2013 at 1:19 PM, Igor Laskovy  wrote:

> I tried to do that and put them behind RR DNS, but unfortunately only one
> host can serve requests from clients - the second host does not respond at
> all.  I am not too familiar with apache, and the standard log files show
> nothing helpful.
> Maybe this whole HA design is wrong? Has anybody solved HA for the Rados
> Gateway endpoint? How?
>
>
> On Wed, May 1, 2013 at 12:28 PM, Igor Laskovy wrote:
>
>> Hello,
>>
>> Are there any best practices for making RadosGW highly available?
>> For example, is it the right way to create two or three RadosGW instances
>> (keys for ceph-auth, directory and so on) and have, for example, this in ceph.conf:
>>
>> [client.radosgw.a]
>> host = ceph01
>> ...options...
>>
>> [client.radosgw.b]
>> host = ceph02
>> ...options...
>>
>> Will these rgws run simultaneously?
>> Will radosgw.b be able to continue serving the load if the ceph01 host goes down?
>>
>> --
>> Igor Laskovy
>> facebook.com/igor.laskovy
>> studiogrizzly.com
>>
>
>
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cephfs/Hadoop/HBase

2013-05-09 Thread Mike Bryant
Hi,
I'm experimenting with running hbase using the hadoop-ceph java
filesystem implementation, and I'm having an issue with space usage.

With the hbase daemons running, The amount of data in the 'data' pool
grows continuously, at a much higher rate than expected. Doing a du,
or ls -lh on a mounted copy shows a usage of ~16GB. But the data pool
has grown to consume ~160GB at times. When I restart the daemons,
shortly thereafter the data pool shrinks rapidly. If I restart all of
them it comes down to match the actual space usage.

My current hypothesis is that the MDS isn't deleting the objects for
some reason, possibly because there's still an open filehandle?

My question is, how can I get a report from the MDS on which objects
aren't visible from the filesystem / why it hasn't deleted them yet /
what open filehandles there are etc.

Cheers
Mike

--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bry...@ocado.com | 01707 382148 | www.ocado.com

-- 
Notice:  This email is confidential and may contain copyright material of 
Ocado Limited (the "Company"). Opinions and views expressed in this message 
may not necessarily reflect the opinions and views of the Company.

If you are not the intended recipient, please notify us immediately and 
delete all copies of this message. Please note that it is your 
responsibility to scan this message for viruses.

Company reg. no. 3875000.

Ocado Limited
Titan Court
3 Bishops Square
Hatfield Business Park
Hatfield
Herts
AL10 9NE
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting CephFS - mount error 5 = Input/output error

2013-05-09 Thread Matt Chipman
The auth key needs to be copied to all machines in the cluster.  It looks
like the key might not be on the 10.81.2.100 machine.

Check /etc/ceph for the key if you are running Debian or Ubuntu
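
Something along these lines, maybe (just a sketch - adjust the keyring file
name to whatever your ceph.conf points at):

 ceph auth list                # on a working node, check that an mds key exists
 scp /etc/ceph/keyring root@10.81.2.100:/etc/ceph/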

I am just new to all this myself so I may be totally wrong but it seems
plausible in my head :)

-Matt


On Thu, May 9, 2013 at 4:17 AM, Wyatt Gorman  wrote:

> Does anyone have any ideas about the below authentication error?
> -- Forwarded message --
> From: "Wyatt Gorman" 
> Date: May 7, 2013 1:34 PM
> Subject: Re: [ceph-users] Mounting CephFS - mount error 5 = Input/output
> error
> To: "Jens Kristian Søgaard" , <
> ceph-users@lists.ceph.com>
>
> Here's the result of running ceph-mds -i a -d
>
> ceph-mds -i a -d
> 2013-05-07 13:33:11.816963 b732a710  0 starting mds.a at :/0
> ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c), process
> ceph-mds, pid 9900
> 2013-05-07 13:33:11.824077 b4a1bb70  0 mds.-1.0 ms_handle_connect on
> 10.81.2.100:6789/0
> 2013-05-07 13:33:11.825629 b732a710 -1 mds.-1.0 ERROR: failed to
> authenticate: (1) Operation not permitted
> 2013-05-07 13:33:11.825653 b732a710  1 mds.-1.0 suicide.  wanted down:dne,
> now up:boot
> 2013-05-07 13:33:11.825973 b732a710  0 stopped.
>
> This "ERROR: failed to authenticate: (1) Operation not permitted"
> indicates some problem with the authentication, correct? Something about my
> keyring? I created a new one with ceph-authtool -C and it still returns
> that error.
>
>
> On Mon, May 6, 2013 at 1:53 PM, Jens Kristian Søgaard <
> j...@mermaidconsulting.dk> wrote:
>
>> Hi,
>>
>>
>>  how? running ceph-mds just returns the help page, and I'm not sure what
>>> arguments to use.
>>>
>>
>> Try running
>>
>> ceph-mds -i a -d
>>
>> (if the id of your mds is a)
>>
>> The -d means to to into the foreground and output debug information.
>>
>> Normally you would start the mds from the service management system on
>> your platform. On my Fedora system it look like this:
>>
>> service ceph start mds.a
>>
>>
>> --
>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>> j...@mermaidconsulting.dk,
>> http://www.mermaidconsulting.com/
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com