Re: [ceph-users] [Ceph-community] Setting up Ceph calamari :: Made Simple

2014-09-24 Thread Don Talton (dotalton)
Great stuff Karan, thank you.


Don Talton
dotal...@cisco.com
Phone: 602-692-9510






From: Ceph-community [mailto:ceph-community-boun...@lists.ceph.com] On Behalf 
Of Karan Singh
Sent: Wednesday, September 24, 2014 1:16 AM
To: Ceph Community; ceph-users; ceph-calam...@lists.ceph.com
Subject: [Ceph-community] Setting up Ceph calamari :: Made Simple

Hello Cephers,

Here comes my new blog post on setting up Ceph Calamari.

I hope you find this step-by-step guide useful:

http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html


- Karan -

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The Kraken has been released!

2014-08-15 Thread Don Talton (dotalton)
There have been a ton of updates to Kraken over the past few months. Feel free 
to take a look here: http://imgur.com/fDnqpO9

Just as easy to set up as before, with a lot more functionality. OSD+MON+AUTH 
operations are coming in the next release.


-Original Message-
From: Loic Dachary [mailto:l...@dachary.org] 
Sent: Thursday, January 09, 2014 11:32 AM
To: Don Talton (dotalton); ceph-us...@ceph.com
Subject: Re: [ceph-users] The Kraken has been released!

One more incentive to learn django :-)

On 09/01/2014 06:31, Don Talton (dotalton) wrote:
> The first phase of Kraken (free) dashboard for Ceph cluster monitoring is 
> complete. You can grab it here (https://github.com/krakendash/krakendash)
> 
>  
> 
> Pictures here http://imgur.com/a/JoVPy
> 
>  
> 
> Current features:
> 
>   MON statuses
>   OSD statuses
>     OSD detail drilldown
>   Pool statuses
>     Pool detail drilldown
> 
> Upcoming features:
> 
>   Advanced metrics via collectd
>   Cluster management (e.g. write) operations
>   Multi-cluster support
>   Hardware node monitoring
> 
>  
> 
> Dave Simard has contributed a wrapper for the Ceph API here 
> (https://github.com/dmsimard/python-cephclient) which Kraken will begin using 
> shortly.
> 
>  
> 
> Pull requests are welcome! The more the merrier; I'd love to get more 
> features developed.
> 
>  
> 
> Donald Talton
> 
> Cloud Systems Development
> 
> Cisco Systems
> 
>  
> 
>  
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy or manual?

2014-05-27 Thread Don Talton (dotalton)
I'd love to know how people are deploying their production clouds now. I've 
heard mixed answers about whether the "right" way is ceph-deploy or manual 
deployment. Are people using automation tools like Puppet or Ansible?
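
For context, the ceph-deploy flow in question is roughly the following (the 
hostnames are placeholders and the disk arguments will differ per environment):

  ceph-deploy new mon1 mon2 mon3
  ceph-deploy install mon1 mon2 mon3 osd1 osd2
  ceph-deploy mon create-initial
  ceph-deploy osd prepare osd1:/dev/sdb osd2:/dev/sdb
  ceph-deploy osd activate osd1:/dev/sdb1 osd2:/dev/sdb1
  ceph-deploy admin mon1 mon2 mon3 osd1 osd2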


Donald Talton
Cloud Systems Development
Cisco Systems


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] visualizing a ceph cluster automatically

2014-05-16 Thread Don Talton (dotalton)
Have to plug Kraken too!

https://github.com/krakendash/krakendash

Here is a screenshot http://i.imgur.com/fDnqpO9.png


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Drew 
Weaver
Sent: Friday, May 16, 2014 5:01 AM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] visualizing a ceph cluster automatically

Does anyone know of any tools that help you visually monitor a ceph cluster 
automatically?

Something that is host-, OSD-, and MON-aware and shows the status of the various 
components, etc.?

Thanks,
-Drew
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Monitoring ceph statistics using rados python module

2014-05-13 Thread Don Talton (dotalton)
python-cephclient may be of some use to you

https://github.com/dmsimard/python-cephclient
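
If you would rather stay with the rados bindings directly, a rough sketch of 
pulling cluster-wide numbers looks something like this (the conffile path is an 
assumption; adjust it and any cephx settings for your environment):

  import rados

  # Connect using the cluster's config file; credentials are read from it.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()

  # Cluster-wide capacity and object counts.
  print cluster.get_cluster_stats()

  # Sum read/write op counters across all pools.
  reads = writes = 0
  for pool in cluster.list_pools():
      ioctx = cluster.open_ioctx(pool)
      stats = ioctx.get_stats()
      reads += stats['num_rd']
      writes += stats['num_wr']
      ioctx.close()

  print 'num_rd=%d num_wr=%d' % (reads, writes)
  cluster.shutdown()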



> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mike Dawson
> Sent: Tuesday, May 13, 2014 10:04 AM
> To: Adrian Banasiak; Haomai Wang
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] Monitoring ceph statistics using rados python module
> 
> Adrian,
> 
> Yes, it is single OSD oriented.
> 
> Like Haomai, we monitor perf dumps from individual OSD admin sockets. On
> new enough versions of ceph, you can do 'ceph daemon osd.x perf dump',
> which is a shorter way to ask for the same output as 'ceph --admin-daemon
> /var/run/ceph/ceph-osd.x.asok perf dump'. Keep in mind, either version has to
> be run locally on the host where osd.x is running.
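
A minimal sketch of sampling that output from Python, assuming it runs locally 
on the OSD host and that osd.0's admin socket is in the default location:

  import json
  import subprocess

  # Ask the local osd.0 daemon for its perf counters and parse the JSON reply.
  raw = subprocess.check_output(
      ['ceph', '--admin-daemon', '/var/run/ceph/ceph-osd.0.asok', 'perf', 'dump'])
  counters = json.loads(raw)

  # The 'osd' section (when present) holds the OSD's own op counters.
  print counters.get('osd', {})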
> 
> We use Sensu to take samples and push them to Graphite. We can then build
> dashboards showing the whole cluster, units in our CRUSH tree, hosts, or
> individual OSDs.
> 
> I have found that monitoring each OSD's admin daemon is critical. Oftentimes
> a single OSD can affect the performance of the entire cluster. Without
> individual data, these types of issues can be quite difficult to pinpoint.
> 
> Also, note that Inktank has developed Calamari. There are rumors that it may
> be open sourced at some point in the future.
> 
> Cheers,
> Mike Dawson
> 
> 
> On 5/13/2014 12:33 PM, Adrian Banasiak wrote:
> > Thanks for the suggestion about the admin daemon, but it looks single-OSD
> > oriented. I have used perf dump on the mon socket and it outputs some
> > interesting data for monitoring the whole cluster:
> > { "cluster": { "num_mon": 4,
> >"num_mon_quorum": 4,
> >"num_osd": 29,
> >"num_osd_up": 29,
> >"num_osd_in": 29,
> >"osd_epoch": 1872,
> >"osd_kb": 20218112516,
> >"osd_kb_used": 5022202696,
> >"osd_kb_avail": 15195909820,
> >"num_pool": 4,
> >"num_pg": 3500,
> >"num_pg_active_clean": 3500,
> >"num_pg_active": 3500,
> >"num_pg_peering": 0,
> >"num_object": 400746,
> >"num_object_degraded": 0,
> >"num_object_unfound": 0,
> >"num_bytes": 1678788329609,
> >"num_mds_up": 0,
> >"num_mds_in": 0,
> >"num_mds_failed": 0,
> >"mds_epoch": 1},
> >
> > Unfortunately cluster wide IO statistics are still missing.
> >
> >
> > 2014-05-13 17:17 GMT+02:00 Haomai Wang:
> >
> > Not sure your demand.
> >
> > I use "ceph --admin-daemon /var/run/ceph/ceph-osd.x.asok perf dump" to
> > get the monitor infos. And the result can be parsed by simplejson
> > easily via python.
> >
> > On Tue, May 13, 2014 at 10:56 PM, Adrian Banasiak
> > <adr...@banasiak.it> wrote:
> >  > Hi, I am working with a test Ceph cluster and now I want to implement
> >  > Zabbix monitoring with items such as:
> >  >
> >  > - whole cluster IO (for example ceph -s -> recovery io 143 MB/s, 35
> >  >   objects/s)
> >  > - pg statistics
> >  >
> >  > I would like to create a single script in Python to retrieve values
> >  > using the rados Python module, but there is only a little information
> >  > in the documentation about module usage. I've created a single function
> >  > which calculates the current read/write statistics for all pools, but I
> >  > can't find out how to add recovery IO usage and pg statistics:
> >  >
> >  > read = 0
> >  > write = 0
> >  > for pool in conn.list_pools():
> >  >     io = conn.open_ioctx(pool)
> >  >     stats[pool] = io.get_stats()
> >  >     read += int(stats[pool]['num_rd'])
> >  >     write += int(stats[pool]['num_wr'])
> >  >
> >  > Could someone share their knowledge of the rados module for retrieving
> >  > Ceph statistics?
> >  >
> >  > BTW Ceph is awesome!
> >  >
> >  > --
> >  > Best regards, Adrian Banasiak
> >  > email: adr...@banasiak.it 
> >  >
> >  > ___
> >  > ceph-users mailing list
> >  > ceph-users@lists.ceph.com 
> >  > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >  >
> >
> >
> >
> > --
> > Best Regards,
> >
> > Wheat
> >
> >
> >
> >
> > --
> > Pozdrawiam, Adrian Banasiak
> > email: adr...@banasiak.it 
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Contract position available at Cisco for a qualified Ceph/OpenStack engineer

2014-03-19 Thread Don Talton (dotalton)
Cisco is searching for an experienced DevOps engineer to work as part of a team 
characterizing the stability, scale and performance of a large distributed 
cloud architecture. This position focuses on locating the bottlenecks in the 
architecture and developing test suites to add to CI/CD efforts to ensure a 
base level of stability and performance is met with each iteration/build.  

Requirements:
  Automation and familiarity with Jenkins, Git, and Gerrit
  Experience with iSCSI, SAN, block, file, and object storage systems such as 
Swift, Gluster, and Ceph.
  Agile process experience

Puppet experience a plus

Please email me directly at dotal...@cisco.com if you are interested.

Donald Talton
Cloud Systems Development
Cisco Systems


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] qemu non-shared storage migration of nova instances?

2014-03-11 Thread Don Talton (dotalton)
Hi guys and gals,

I'm able to do live migration via 'nova live-migration' as long as my 
instances are sitting on shared storage. However, when they are not, nova 
live-migration fails due to a shared-storage check.

To get around this, I attempted a live migration via libvirt directly. 
Using the "--copy-storage-all" option also fails. Part of the trouble is 
that even though the instance is booted from a volume stored on Ceph, there 
are still support files (e.g. console.log, disk.config) that reside in the 
instance directory. The virsh command (I've tried many combinations of 
different migration approaches) is "virsh migrate --live --copy-storage-all 
instance-000c qemu+ssh://target/system". This fails because libvirt does not 
create the instance directory or copy the support files to the target.

I'm curious if anyone has been able to get something like this to work. I'd 
really love to get ceph-backed live migration going without adding the overhead 
of shared storage for nova too.

Thanks,

Donald Talton
Cloud Systems Development
Cisco Systems


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] The Kraken has been released!

2014-01-08 Thread Don Talton (dotalton)
The first phase of Kraken (free) dashboard for Ceph cluster monitoring is 
complete. You can grab it here (https://github.com/krakendash/krakendash)



Pictures here http://imgur.com/a/JoVPy



Current features:

  MON statuses
  OSD statuses
    OSD detail drilldown
  Pool statuses
    Pool detail drilldown

Upcoming features:

  Advanced metrics via collectd
  Cluster management (e.g. write) operations
  Multi-cluster support
  Hardware node monitoring



Dave Simard has contributed a wrapper for the Ceph API here 
(https://github.com/dmsimard/python-cephclient) which Kraken will begin using 
shortly.



Pull requests are welcome! The more the merrier; I'd love to get more features 
developed.



Donald Talton

Cloud Systems Development

Cisco Systems




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy issues with initial mons that aren't up

2013-12-20 Thread Don Talton (dotalton)
I guess I should add: what if I add OSDs to a mon in this scenario? Do they get 
up and in, and will the crush map from the non-initial mons get merged with the 
initial mon's when it comes online?

> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> Sent: Friday, December 20, 2013 9:17 AM
> To: Gregory Farnum
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph-deploy issues with initial mons that aren't up
> 
> This makes sense. So if other mons come up that are *not* defined as initial
> mons, then they will not be in service until the initial mon is up and ready? 
> At
> which point they can form their quorum and operate?
> 
> 
> > -Original Message-
> > From: Gregory Farnum [mailto:g...@inktank.com]
> > Sent: Thursday, December 19, 2013 10:19 PM
> > To: Don Talton (dotalton)
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] ceph-deploy issues with initial mons that
> > aren't up
> >
> > "mon initial members" is a race prevention mechanism whose purpose is
> > to prevent your monitors from forming separate quorums when they're
> > brought up by automated software provisioning systems (by not allowing
> > monitors to form a quorum unless everybody in the list is a member).
> > If you want to add other monitors at a later time you can do so by
> > specifying them elsewhere (including in mon hosts or whatever, so
> > other daemons will attempt to contact them.) -Greg Software Engineer
> > #42 @ http://inktank.com | http://ceph.com
> >
> >
> > On Thu, Dec 19, 2013 at 9:13 PM, Don Talton (dotalton)
> >  wrote:
> > > I just realized my email is not clear. If the first mon is up and
> > > the additional
> > initials are not, then the process fails.
> > >
> > >> -Original Message-
> > >> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> > >> boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> > >> Sent: Thursday, December 19, 2013 2:44 PM
> > >> To: ceph-users@lists.ceph.com
> > >> Subject: [ceph-users] ceph-deploy issues with initial mons that
> > >> aren't up
> > >>
> > >> Hi all,
> > >>
> > >> I've been working in some ceph-deploy automation and think I've
> > >> stumbled on an interesting behavior. I create a new cluster, and
> > >> specify 3 machines. If all 3 are not up and able to be SSH'd into
> > >> with the account I created for ceph-deploy, then the mon create
> > >> process will fail and the cluster is not properly set up with keys, etc.
> > >>
> > >> This seems odd to me, since I may want to specify initial mons that
> > >> may not yet be up (say they are waiting for cobbler to finish
> > >> loading them for example), but I want them as part of the initial 
> > >> cluster.
> > >>
> > >>
> > >> Donald Talton
> > >> Cloud Systems Development
> > >> Cisco Systems
> > >>
> > >>
> > >>
> > >> ___
> > >> ceph-users mailing list
> > >> ceph-users@lists.ceph.com
> > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy issues with initial mons that aren't up

2013-12-20 Thread Don Talton (dotalton)
This makes sense. So if other mons come up that are *not* defined as initial 
mons, then they will not be in service until the initial mon is up and ready? 
At which point they can form their quorum and operate?
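
For illustration, Greg's distinction below between the bootstrap set and 
monitors added later corresponds roughly to a ceph.conf like this (hostnames 
and addresses are hypothetical):

  [global]
  # Only the bootstrap monitors; an initial quorum will not form without them.
  mon initial members = mon1, mon2, mon3
  # All monitors that clients and daemons should try, including later additions.
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4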


> -Original Message-
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent: Thursday, December 19, 2013 10:19 PM
> To: Don Talton (dotalton)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph-deploy issues with initial mons that aren't up
> 
> "mon initial members" is a race prevention mechanism whose purpose is to
> prevent your monitors from forming separate quorums when they're
> brought up by automated software provisioning systems (by not allowing
> monitors to form a quorum unless everybody in the list is a member).
> If you want to add other monitors at a later time you can do so by specifying
> them elsewhere (including in mon hosts or whatever, so other daemons will
> attempt to contact them.) -Greg Software Engineer #42 @
> http://inktank.com | http://ceph.com
> 
> 
> On Thu, Dec 19, 2013 at 9:13 PM, Don Talton (dotalton)
>  wrote:
> > I just realized my email is not clear. If the first mon is up and the 
> > additional
> initials are not, then the process fails.
> >
> >> -Original Message-
> >> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> >> boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> >> Sent: Thursday, December 19, 2013 2:44 PM
> >> To: ceph-users@lists.ceph.com
> >> Subject: [ceph-users] ceph-deploy issues with initial mons that
> >> aren't up
> >>
> >> Hi all,
> >>
> >> I've been working in some ceph-deploy automation and think I've
> >> stumbled on an interesting behavior. I create a new cluster, and
> >> specify 3 machines. If all 3 are not up and able to be SSH'd into with
> >> the account I created for ceph-deploy, then the mon create process
> >> will fail and the cluster is not properly set up with keys, etc.
> >>
> >> This seems odd to me, since I may want to specify initial mons that
> >> may not yet be up (say they are waiting for cobbler to finish loading
> >> them for example), but I want them as part of the initial cluster.
> >>
> >>
> >> Donald Talton
> >> Cloud Systems Development
> >> Cisco Systems
> >>
> >>
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy issues with initial mons that aren't up

2013-12-19 Thread Don Talton (dotalton)
I just realized my email is not clear. If the first mon is up and the 
additional initials are not, then the process fails.

> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> Sent: Thursday, December 19, 2013 2:44 PM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] ceph-deploy issues with initial mons that aren't up
> 
> Hi all,
> 
> I've been working in some ceph-deploy automation and think I've stumbled
> on an interesting behavior. I create a new cluster and specify 3 machines. If
> all 3 are not up and able to be SSH'd into with the account I created for
> ceph-deploy, then the mon create process will fail and the cluster is not
> properly set up with keys, etc.
> 
> This seems odd to me, since I may want to specify initial mons that may not
> yet be up (say they are waiting for cobbler to finish loading them for
> example), but I want them as part of the initial cluster.
> 
> 
> Donald Talton
> Cloud Systems Development
> Cisco Systems
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy issues with initial mons that aren't up

2013-12-19 Thread Don Talton (dotalton)
Hi all,

I've been working in some ceph-deploy automation and think I've stumbled on an 
interesting behavior. I create a new cluster and specify 3 machines. If all 3 
are not up and able to be SSH'd into with the account I created for ceph-deploy, 
then the mon create process will fail and the cluster is not properly set up 
with keys, etc.

This seems odd to me, since I may want to specify initial mons that may not yet 
be up (say they are waiting for cobbler to finish loading them for example), 
but I want them as part of the initial cluster.


Donald Talton
Cloud Systems Development
Cisco Systems



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Error connecting to ceph cluster in openstack cinder

2013-12-18 Thread Don Talton (dotalton)
Check that cinder has access to read your ceph.conf file. I've had to chmod mine to 644.
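
For example, assuming the default config path:

  chmod 644 /etc/ceph/ceph.conf
  # Also make sure the keyring that cinder.conf points at is readable by the
  # cinder user (chown/chgrp it rather than making it world-readable).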

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of bigbird Lim
Sent: Wednesday, December 18, 2013 10:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Error connecting to ceph cluster in openstack cinder

Hi,
I am trying to get Ceph working with cinder. I have an existing OpenStack setup 
with one nova-controller and five compute nodes. I have set up another three 
separate servers as a Ceph cluster. Following the instructions at 
http://ceph.com/docs/master/rbd/rbd-openstack/, I am getting this error when 
starting cinder-volume:


2013-12-18 09:06:49.756 12380 AUDIT cinder.service [-] Starting cinder-volume 
node (version 2013.2)
2013-12-18 09:06:50.286 12380 INFO cinder.openstack.common.rpc.common 
[req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Connected to AMQP server 
on localhost:5672
2013-12-18 09:06:50.297 12380 INFO cinder.volume.manager 
[req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Starting volume driver 
RBDDriver (1.1.0)
2013-12-18 09:06:50.316 12380 ERROR cinder.volume.drivers.rbd 
[req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] error connecting to ceph 
cluster
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd Traceback (most 
recent call last):
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 262, in 
check_for_setup_error
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd with 
RADOSClient(self):
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 234, in 
__init__
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd self.cluster, 
self.ioctx = driver._connect_to_rados(pool)
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 282, in 
_connect_to_rados
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd 
client.connect()
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd   File 
"/usr/lib/python2.7/dist-packages/rados.py", line 367, in connect
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd raise 
make_ex(ret, "error calling connect")
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd ObjectNotFound: 
error calling connect
2013-12-18 09:06:50.316 12380 TRACE cinder.volume.drivers.rbd
2013-12-18 09:06:50.319 12380 ERROR cinder.volume.manager 
[req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Error encountered during 
initialization of driver: RBDDriver
2013-12-18 09:06:50.319 12380 ERROR cinder.volume.manager 
[req-925fa7e8-1ccf-474d-a3a8-646e0f9ec93e None None] Bad or unexpected response 
from the storage volume backend API: error connecting to ceph cluster
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager Traceback (most 
recent call last):
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 190, in 
init_host
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager 
self.driver.check_for_setup_error()
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager   File 
"/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 267, in 
check_for_setup_error
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager raise 
exception.VolumeBackendAPIException(data=msg)
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager 
VolumeBackendAPIException: Bad or unexpected response from the storage volume 
backend API: error connecting to ceph cluster
2013-12-18 09:06:50.319 12380 TRACE cinder.volume.manager

This is my cinder.conf file:
cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = 
mysql://cinderUser:cinderPass@10.193.0.120/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=10.193.0.120
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version=2
rdb_user=volumes
rdb_secret_uuid=19365acb-10b4-44c9-9a28-f948e8128e91

and ceph.conf file
[global]
fsid = 633accd0-dd09-4d97-ab40-2aca79f44d1c
mon_initial_members = ceph-1
mon_host = 10.193.0.111
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

Thanks for the help

Song

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mon add problem

2013-12-16 Thread Don Talton (dotalton)
[kvm2][WARNIN] kvm2 is not defined in `mon initial members`

The above is why. When you run 'ceph-deploy new', pass it all the machines you 
intend to use as mons, e.g.

'ceph-deploy new mon1 mon2 mon3'

Alternatively, you can modify the ceph.conf file in your bootstrap directory. 
Add the mon and its IP; you'll see where. Do not use the mon's FQDN, only the 
shortname.
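
Roughly, the two lines to touch look like this (assuming kvm1 is your existing 
mon; the addresses are placeholders):

  mon_initial_members = kvm1, kvm2
  mon_host = 192.168.1.10, 192.168.1.11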


From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Umar Draz
Sent: Monday, December 16, 2013 2:28 PM
To: ceph-us...@ceph.com
Subject: [ceph-users] mon add problem

Hi,

I am trying to add a mon host using 'ceph-deploy mon create kvm2', but it's not 
working and gives me an error.

[kvm2][DEBUG ] determining if provided host has same hostname in remote
[kvm2][DEBUG ] get remote short hostname
[kvm2][DEBUG ] deploying mon to kvm2
[kvm2][DEBUG ] get remote short hostname
[kvm2][DEBUG ] remote hostname: kvm2
[kvm2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[kvm2][DEBUG ] create the mon path if it does not exist
[kvm2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-kvm2/done
[kvm2][DEBUG ] create a done file to avoid re-doing the mon deployment
[kvm2][DEBUG ] create the init path if it does not exist
[kvm2][DEBUG ] locating the `service` executable...
[kvm2][INFO  ] Running command: initctl emit ceph-mon cluster=ceph id=kvm2
[kvm2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.kvm2.asok mon_status
[kvm2][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] 
No such file or directory
[kvm2][WARNIN] monitor: mon.kvm2, might not be running yet
[kvm2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.kvm2.asok mon_status
[kvm2][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] 
No such file or directory
[kvm2][WARNIN] kvm2 is not defined in `mon initial members`
[kvm2][WARNIN] monitor kvm2 does not exist in monmap
[kvm2][WARNIN] neither `public_addr` nor `public_network` keys are defined for 
monitors
[kvm2][WARNIN] monitors may not be able to form quorum
root@kvm1:/home/umar/ceph-cluster# ceph-deploy mon create kvm2


Would you please help me figure out how to solve this problem?

Br.

Umar
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph management tools

2013-11-22 Thread Don Talton (dotalton)
Hi Knut,

I have it on my "to-do" list to write a free, open-source Ceph web GUI (in 
django). There is a Ceph admin REST API through which all management 
functionality seems to be exposed. I've yet to write a blueprint, but if you are 
interested in contributing, I'd love the help.

https://github.com/dontalton/kraken
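
As a rough illustration of the kind of thing the admin REST API mentioned above 
exposes (the port, path, and response handling here are assumptions based on 
ceph-rest-api defaults, so check your own setup):

  import json
  import urllib2

  # Ask the ceph-rest-api service for cluster health as JSON.
  req = urllib2.Request('http://localhost:5000/api/v0.1/health',
                        headers={'Accept': 'application/json'})
  print json.loads(urllib2.urlopen(req).read())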

If not, please check the repo for updates. I hope to get a base app done before 
the end of the year.

Other than that, the only tool available that I am aware of is Inktank's Ceph 
dashboard. They just recently launched it themselves, so I don't think there is 
anything else out there yet.


From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Knut Moe
Sent: Friday, November 22, 2013 11:16 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph management tools

I am doing some distributed storage research for a client and was wondering if 
you guys could point me to any Web/GUI tools that can be used to 
configure/manage/monitor Ceph clusters.

Also, in your opinion, how does Ceph stack up to GlusterFS and Apache Hadoop? It 
seems as if Ceph and Gluster use a similar model, employing an algorithm to 
determine storage locations.

Any feedback on this would be very helpful. Thanks.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Don Talton (dotalton)
Hi Karan,

1. Not tested on CentOS at all, but since the work is done using ceph-deploy it 
*should* be the same.
2. Everything supported by ceph-deploy (mon, osd, mds).
3. Change the dpkg command to the equivalent rpm command to test whether or not 
a package is already installed.
  
https://github.com/dontalton/puppet-cephdeploy/blob/master/manifests/baseconfig.pp#L114
  
https://github.com/dontalton/puppet-cephdeploy/blob/master/manifests/init.pp#L122
  


> -Original Message-
> From: Karan Singh [mailto:ksi...@csc.fi]
> Sent: Thursday, November 07, 2013 5:02 AM
> To: Don Talton (dotalton)
> Cc: ceph-users@lists.ceph.com; ceph-users-j...@lists.ceph.com; ceph-
> us...@ceph.com
> Subject: Re: [ceph-users] Puppet Modules for Ceph
> 
> A big thanks, Don, for creating the puppet modules.
> 
> I need your guidance on the following:
> 
> 1) Did you manage to run this on CentOS?
> 2) What can be installed using these modules (mon, osd, mds, or all)?
> 3) What do I need to change in this module?
> 
> 
> Many Thanks
> Karan Singh
> 
> 
> - Original Message -
> From: "Don Talton (dotalton)" 
> To: "Karan Singh" , ceph-users@lists.ceph.com, ceph-users-
> j...@lists.ceph.com, ceph-us...@ceph.com
> Sent: Wednesday, 6 November, 2013 6:49:16 PM
> Subject: RE: [ceph-users] Puppet Modules for Ceph
> 
> This will work https://github.com/dontalton/puppet-cephdeploy
> 
> Just change the unless statements (should only be two) from testing dpkg to
> testing rpm instead.
> I'll add an OS check myself, or you can fork and send me a pull request.
> 
> > -Original Message-
> > From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> > boun...@lists.ceph.com] On Behalf Of Karan Singh
> > Sent: Wednesday, November 06, 2013 7:56 PM
> > To: ceph-users@lists.ceph.com; ceph-users-j...@lists.ceph.com; ceph-
> > us...@ceph.com
> > Subject: Re: [ceph-users] Puppet Modules for Ceph
> >
> > Dear Cephers
> >
> > I have a running ceph cluster that was deployed using ceph-deploy ,
> > our next objective is to build a Puppet setup that can be used for
> > long term scaling of ceph infrastructure.
> >
> > It would be a great help if any one can
> >
> > 1) Provide ceph modules for (centos OS)
> > 2) Guidance on how to proceed
> >
> > Many Thanks
> > Karan Singh
> >
> >
> > - Original Message -
> > From: "Karan Singh" 
> > To: "Loic Dachary" 
> > Cc: ceph-users@lists.ceph.com
> > Sent: Monday, 4 November, 2013 5:01:26 PM
> > Subject: Re: [ceph-users] Ceph deployment using puppet
> >
> > Hello Loic
> >
> > Thanks for your reply; ceph-deploy works well for me.
> >
> > My next objective is to deploy Ceph using Puppet. Can you guide me on how
> > I can proceed?
> >
> > Regards
> > karan
> >
> > - Original Message -
> > From: "Loic Dachary" 
> > To: ceph-users@lists.ceph.com
> > Sent: Monday, 4 November, 2013 4:45:06 PM
> > Subject: Re: [ceph-users] Ceph deployment using puppet
> >
> > Hi,
> >
> > Unless you're forced to use Puppet for some reason, I suggest you give
> > ceph-deploy a try:
> >
> > http://ceph.com/docs/master/start/quick-ceph-deploy/
> >
> > Cheers
> >
> > On 04/11/2013 19:00, Karan Singh wrote:
> > > Hello Everyone
> > >
> > > Can someone guide me on how to get started with Ceph deployment using
> > > Puppet, and what I need to have for this?
> > >
> > > I have no prior experience with Puppet, hence I need your help
> > > getting started with it.
> > >
> > >
> > > Regards
> > > Karan Singh
> > >
> > >
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> >
> > --
> > Loïc Dachary, Artisan Logiciel Libre
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Don Talton (dotalton)
This will work https://github.com/dontalton/puppet-cephdeploy

Just change the unless statements (should only be two) from testing dpkg to 
testing rpm instead.
I'll add an OS check myself, or you can fork and send me a pull request.

> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Karan Singh
> Sent: Wednesday, November 06, 2013 7:56 PM
> To: ceph-users@lists.ceph.com; ceph-users-j...@lists.ceph.com; ceph-
> us...@ceph.com
> Subject: Re: [ceph-users] Puppet Modules for Ceph
> 
> Dear Cephers
> 
> I have a running ceph cluster that was deployed using ceph-deploy , our next
> objective is to build a Puppet setup that can be used for long term scaling of
> ceph infrastructure.
> 
> It would be a great help if any one can
> 
> 1) Provide ceph modules for (centos OS)
> 2) Guidance on how to proceed
> 
> Many Thanks
> Karan Singh
> 
> 
> - Original Message -
> From: "Karan Singh" 
> To: "Loic Dachary" 
> Cc: ceph-users@lists.ceph.com
> Sent: Monday, 4 November, 2013 5:01:26 PM
> Subject: Re: [ceph-users] Ceph deployment using puppet
> 
> Hello Loic
> 
> Thanks for your reply; ceph-deploy works well for me.
> 
> My next objective is to deploy Ceph using Puppet. Can you guide me on how I
> can proceed?
> 
> Regards
> karan
> 
> - Original Message -
> From: "Loic Dachary" 
> To: ceph-users@lists.ceph.com
> Sent: Monday, 4 November, 2013 4:45:06 PM
> Subject: Re: [ceph-users] Ceph deployment using puppet
> 
> Hi,
> 
> Unless you're forced to use Puppet for some reason, I suggest you give
> ceph-deploy a try:
> 
> http://ceph.com/docs/master/start/quick-ceph-deploy/
> 
> Cheers
> 
> On 04/11/2013 19:00, Karan Singh wrote:
> > Hello Everyone
> >
> > Can someone guide me on how to get started with Ceph deployment using
> > Puppet, and what I need to have for this?
> >
> > I have no prior experience with Puppet, hence I need your help getting
> > started with it.
> >
> >
> > Regards
> > Karan Singh
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> 
> --
> Loïc Dachary, Artisan Logiciel Libre
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Inktank Ceph Enterprise Launch

2013-10-30 Thread Don Talton (dotalton)
I actually started a django app (no code pushed yet) for this purpose. I 
guessed that Inktank might come out with a commercial offering and thought a 
FOSS dashboard would be a good thing for the community too.

https://github.com/dontalton/kraken

I'd much rather contribute to an Inktank-backed dashboard if it were FOSS than 
start a new project.


> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Loic Dachary
> Sent: Wednesday, October 30, 2013 10:29 AM
> To: Patrick McGarry; ceph-users@lists.ceph.com; Ceph Devel
> Subject: Re: [ceph-users] Inktank Ceph Enterprise Launch
> 
> Hi Patrick,
> 
> I wish Inktank was able to base its strategy and income on Free Software.
> Like RedHat does, for instance. In addition, as long as Inktank employs the
> majority of Ceph developers, publishing Calamari as a proprietary software is
> a conflict of interest. Should someone from the community bootstrap a Free
> Software alternative to Calamari, it will compete with it. And should Inktank
> employees participate in the development of this alternative, it would be
> against the best interest of Inktank. If that were not true, there would be no
> reason to publish Calamari as proprietary software in the first place.
> 
> Please reconsider your decision to publish Calamari as a proprietary software.
> 
> Now is probably the right time to call for the creation of a Ceph foundation.
> 
> Cheers
> 
> On 30/10/2013 18:01, Patrick McGarry wrote:
> > Salutations Ceph-ers,
> >
> > As many of you have noticed, Inktank has taken the wraps off the
> > latest and greatest magic for enterprise customers.  Wanted to share a
> > few thoughts from a community perspective on Ceph.com and answer any
> > questions/concerns folks might have.
> >
> > http://ceph.com/community/new-inktank-ceph-enterprise-builds-on-
> what-m
> > akes-ceph-great/
> >
> > Just to reiterate, there will be no changes/limitations to Ceph.  All
> > Inktank contributions to Ceph will continue to be open source and
> > useable.  If you have any questions feel free to direct them my way.
> > Thanks.
> >
> >
> > Best Regards,
> >
> > Patrick McGarry
> > Director, Community || Inktank
> > http://ceph.com  ||  http://inktank.com @scuttlemonkey || @ceph ||
> > @inktank
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> > in the body of a message to majord...@vger.kernel.org More
> majordomo
> > info at  http://vger.kernel.org/majordomo-info.html
> >
> 
> --
> Loïc Dachary, Artisan Logiciel Libre

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] new puppet-cephdeploy module

2013-09-16 Thread Don Talton (dotalton)
As weird as it might seem, there is now a puppet module to automate 
ceph-deploy. It came about because Cisco has its own OpenStack installer 
platform, which requires full orchestration. It might be of some use to others, 
so here is the link:

https://github.com/dontalton/puppet-cephdeploy


Donald Talton
Systems Development Unit
dotal...@cisco.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] puppet-cephdeploy module

2013-09-16 Thread Don Talton (dotalton)
As weird as it might seem, there is now a puppet module to automate 
ceph-deploy. It came about because Cisco has its own OpenStack installer 
platform, which requires full orchestration. It might be of some use to others, 
so here is the link:

https://github.com/dontalton/puppet-cephdeploy


Donald Talton
Systems Development Unit
dotal...@cisco.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Deploy a Ceph cluster to play around with

2013-09-16 Thread Don Talton (dotalton)
If you are just playing around, you could roll everything onto a single server. 
Or, if you wanted, put the MON and OSD on a single server and the radosgw on a 
different server. You can accomplish this in a virtual machine if you don't 
have all the hardware you would like to test with.

> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Guang
> Sent: Monday, September 16, 2013 6:14 AM
> To: ceph-users@lists.ceph.com; Ceph Development
> Subject: [ceph-users] Deploy a Ceph cluster to play around with
> 
> Hello ceph-users, ceph-devel,
> Nice to meet you in the community!
> Today I tried to deploy a Ceph cluster to play around with the API, and during
> the deployment, I have a couple of questions for which I may need your help:
>   1) How many hosts do I need if I want to deploy a cluster with RadosGW (so
> that I can try the S3 API)? Is it 3 OSD + 1 MON + 1 GW = 5 hosts at minimum?
> 
>   2) I have a list of hardware; however, my host only has 1 disk with two
> partitions, one for boot and another for LVM members. Is it possible to
> deploy an OSD on such hardware (e.g. make a partition with ext4)? Or will I
> need another disk to do so?
> 
> -bash-4.1$ ceph-deploy disk list myserver.com
> [ceph_deploy.osd][INFO  ] Distro info: RedHatEnterpriseServer 6.3 Santiago
> [ceph_deploy.osd][DEBUG ] Listing disks on myserver.com...
> [repl101.mobstor.gq1.yahoo.com][INFO  ] Running command: ceph-disk list
> [repl101.mobstor.gq1.yahoo.com][INFO  ] /dev/sda :
> [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
> [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda2 other, LVM2_member
> 
> Thanks,
> Guang
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-OSD on compute nodes?

2013-08-28 Thread Don Talton (dotalton)
We're rolling this option out in the Cisco OpenStack Installer. Our testing shows 
that it's okay for smaller-scale clouds, although we have not fully tested it at 
large scale yet. I've personally tested it on Cisco C240s (50GB RAM, 16 cores) 
with 3 OSDs per compute node, with positive results. For this smaller config we 
run the MON on the controller node, with the option of an additional MON on a 
user-specified node.

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sébastien Han
Sent: Wednesday, August 28, 2013 2:02 AM
To: ceph-users@lists.ceph.com; Mark Chaney
Subject: Re: [ceph-users] Ceph-OSD on compute nodes?

The "not recommended statement" is more a general performance concern. I 
believe the main problem here is the RAM consumed by the hypervisor and the RAM 
needed for the OSD (and good buffer cache too).
CPU load is also something to take into account.


Sébastien Han
Cloud Engineer

"Always give 100%. Unless you're giving blood."


Phone: +33 (0)1 49 70 99 72 - Mobile: +33 (0)6 52 84 44 70
Mail: sebastien@enovance.com - Skype: han.sbastien
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance


On August 27, 2013 at 3:06:17 PM, Mark Chaney 
(mcha...@maximalliance.com) wrote:
How does the community feel about running OSDs on the same node as OpenStack 
compute? What if it's only 3 SATA disks? Isn't ceph-osd a bit too CPU- and 
RAM-hungry for such a thing, and wouldn't it leave little left over for VM 
instances? Just curious, as I saw someone in a forum say they were going to do 
that, and I always thought it was not recommended by the Ceph developers.

- Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD never gets marked up?

2013-07-31 Thread Don Talton (dotalton)
So after much testing, it appears there may be some residual data left on a 
disk by a previous ceph installation? I can't think of another explanation.

I've done repeated installs using /dev/sdd; the first install worked, the rest 
failed. Subsequent installs would appear to work and the OSD daemon would start, 
but the OSD would never get marked as up. There were no errors in the logs 
indicating a disk issue. On a whim, I changed my configuration to use /dev/sdb, 
and it came up immediately. I used mkfs.xfs to format the disk. I'll do some 
additional testing to confirm this issue. I plan to completely zero the drive 
and see if that works.
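
Something along these lines should clear any leftover partition table and Ceph 
metadata before the next attempt (hostname and device are from my setup; it is 
destructive, so double-check the target first):

  # Wipe GPT/MBR structures left by the previous OSD deployment.
  sgdisk --zap-all /dev/sdd
  # Or let ceph-deploy do it:
  ceph-deploy disk zap ceph-osd0:sdd
  # Belt and braces: zero out the start of the device as well.
  dd if=/dev/zero of=/dev/sdd bs=1M count=128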

> -Original Message-
> From: Don Talton (dotalton)
> Sent: Monday, July 29, 2013 1:53 PM
> To: Gregory Farnum
> Cc: ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] OSD never gets marked up?
> 
> Sorry, forgot to point out this:
> 
> 2013-07-29 20:46:26.366916 7f4f28c6e700  0 -- 2.4.1.7:6801/13319 >>
> 2.4.1.8:6802/18344 pipe(0x284378 0 sd=30 
> :53729 s=1 pgs=0
> cs=0 l=0).connect claims to be 2.4.1.8:6802/17272 not 2.4.1.8:6802/18344 -
> wrong node!
> 
> Not sure what that means?
> 
> > -Original Message-
> > From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> > boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> > Sent: Monday, July 29, 2013 1:49 PM
> > To: Gregory Farnum
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] OSD never gets marked up?
> >
> > The entirety of the OSD log file is below. I tried this on both
> > Bobtail and Cuttlefish. On Bobtail I noticed errors about all of the XFS
> > features not being supported, which have gone away in Cuttlefish, so I
> > am assuming that issue is resolved. I don't see any other errors.
> >
> > 2013-07-29 20:46:25.813383 7f4f3e909780  0 ceph version 0.61.7
> > (8f010aff684e820ecc837c25ac77c7a05d71 
> > 91ff), process
> ceph-
> > osd, pid 13319
> > 2013-07-29 20:46:25.814051 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my
> > addr
> > 2.4.1.7:0/0
> > 2013-07-29 20:46:25.814068 7f4f3e909780  1 accepter.accepter.bind
> > my_inst.addr is 2.4.1.7:6800/13319  
> > need_addr=0
> > 2013-07-29 20:46:25.814085 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my
> > addr
> > 2.4.1.7:0/0
> > 2013-07-29 20:46:25.814090 7f4f3e909780  1 accepter.accepter.bind
> > my_inst.addr is 2.4.1.7:6801/13319  
> > need_addr=0
> > 2013-07-29 20:46:25.814103 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my
> > addr
> > 2.4.1.7:0/0
> > 2013-07-29 20:46:25.814108 7f4f3e909780  1 accepter.accepter.bind
> > my_inst.addr is 2.4.1.7:6802/13319  
> > need_addr=0
> > 2013-07-29 20:46:25.893876 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is supported and
> > appears to work
> > 2013-07-29 20:46:25.893892 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is disabled via
> > 'filestore fiemap' config option
> > 2013-07-29 20:46:25.894237 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount did NOT detect b trfs
> > 2013-07-29 20:46:25.943640 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount syncfs(2) syscal   
> >   l
> > fully supported (by glibc and kernel)
> > 2013-07-29 20:46:25.943784 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount found snaps <>
> > 2013-07-29 20:46:26.027401 7f4f3e909780  0
> > filestore(/var/lib/ceph/osd/osd.0) mount: enabling WRITEA HEAD journal
> > mode: btrfs not detected
> > 2013-07-29 20:46:26.034732 7f4f3e909780 -1 journal FileJournal::_open:
> > disabling aio for non-block j ournal.  Use 
> > journal_force_aio
> to
> > force use of aio anyway
> > 2013-07-29 20:46:26.034828 7f4f3e909780  1 journal _open
> > /var/lib/ceph/osd/osd.0/journal fd 20: 4294 
> > 967296 bytes,
> > block size 4096 bytes, directio = 1, aio = 0
> > 2013-07-29 20:46:26.035140 7f4f3e909780  1 journal _open
> > /var/lib/ceph/osd/osd.0/journal fd 20: 4294 
> > 967296 bytes,
> > block size 4096 bytes, directio = 1, aio = 0
> > 2013-07-29 20:46:26.036189 7f4f3e909780  1 journal close
> > /var/lib/ceph/osd/osd.0/journal
> > 2013-07-29 20:46:26.036905 7f4f3e909780  1 -- 2.4.1.7:6800/13319
> > messenger.start
> > 2013-07-29 20:46:26.036958 7f4f3e909780  1 -- :/0 messenger.s

Re: [ceph-users] OSD never gets marked up?

2013-07-29 Thread Don Talton (dotalton)
Sorry, forgot to point out this: 

2013-07-29 20:46:26.366916 7f4f28c6e700  0 -- 2.4.1.7:6801/13319 >> 
2.4.1.8:6802/18344 pipe(0x284378 0 sd=30 :53729 
s=1 pgs=0 cs=0 l=0).connect claims to be 2.4.1.8:6802/17272 not 
2.4.1.8:6802/18344 -  wrong node!

Not sure what that means?

> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Don Talton (dotalton)
> Sent: Monday, July 29, 2013 1:49 PM
> To: Gregory Farnum
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] OSD never gets marked up?
> 
> The entirety of the OSD log file is below. I tried this on both Bobtail and
> Cuttlefish. On Bobtail I noticed errors about all of the XFS features not being
> supported, which have gone away in Cuttlefish, so I am assuming that issue is
> resolved. I don't see any other errors.
> 
> 2013-07-29 20:46:25.813383 7f4f3e909780  0 ceph version 0.61.7
> (8f010aff684e820ecc837c25ac77c7a05d71 91ff), 
> process ceph-
> osd, pid 13319
> 2013-07-29 20:46:25.814051 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my addr
> 2.4.1.7:0/0
> 2013-07-29 20:46:25.814068 7f4f3e909780  1 accepter.accepter.bind
> my_inst.addr is 2.4.1.7:6800/13319  
> need_addr=0
> 2013-07-29 20:46:25.814085 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my addr
> 2.4.1.7:0/0
> 2013-07-29 20:46:25.814090 7f4f3e909780  1 accepter.accepter.bind
> my_inst.addr is 2.4.1.7:6801/13319  
> need_addr=0
> 2013-07-29 20:46:25.814103 7f4f3e909780  1 -- 2.4.1.7:0/0 learned my addr
> 2.4.1.7:0/0
> 2013-07-29 20:46:25.814108 7f4f3e909780  1 accepter.accepter.bind
> my_inst.addr is 2.4.1.7:6802/13319  
> need_addr=0
> 2013-07-29 20:46:25.893876 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is
> supported and appears to work
> 2013-07-29 20:46:25.893892 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is
> disabled via 'filestore fiemap' config option
> 2013-07-29 20:46:25.894237 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount did NOT detect b
> trfs
> 2013-07-29 20:46:25.943640 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount syncfs(2) syscal 
> l
> fully supported (by glibc and kernel)
> 2013-07-29 20:46:25.943784 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount found snaps <>
> 2013-07-29 20:46:26.027401 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount: enabling WRITEA
> HEAD journal mode: btrfs not detected
> 2013-07-29 20:46:26.034732 7f4f3e909780 -1 journal FileJournal::_open:
> disabling aio for non-block j ournal.  Use 
> journal_force_aio to
> force use of aio anyway
> 2013-07-29 20:46:26.034828 7f4f3e909780  1 journal _open
> /var/lib/ceph/osd/osd.0/journal fd 20: 4294 
> 967296 bytes,
> block size 4096 bytes, directio = 1, aio = 0
> 2013-07-29 20:46:26.035140 7f4f3e909780  1 journal _open
> /var/lib/ceph/osd/osd.0/journal fd 20: 4294 
> 967296 bytes,
> block size 4096 bytes, directio = 1, aio = 0
> 2013-07-29 20:46:26.036189 7f4f3e909780  1 journal close
> /var/lib/ceph/osd/osd.0/journal
> 2013-07-29 20:46:26.036905 7f4f3e909780  1 -- 2.4.1.7:6800/13319
> messenger.start
> 2013-07-29 20:46:26.036958 7f4f3e909780  1 -- :/0 messenger.start
> 2013-07-29 20:46:26.036975 7f4f3e909780  1 -- 2.4.1.7:6802/13319
> messenger.start
> 2013-07-29 20:46:26.036993 7f4f3e909780  1 -- 2.4.1.7:6801/13319
> messenger.start
> 2013-07-29 20:46:26.102317 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is
> supported and appears to work
> 2013-07-29 20:46:26.102329 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount FIEMAP ioctl is
> disabled via 'filestore fiemap' config option
> 2013-07-29 20:46:26.102643 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount did NOT detect b
> trfs
> 2013-07-29 20:46:26.201852 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount syncfs(2) syscal 
> l
> fully supported (by glibc and kernel)
> 2013-07-29 20:46:26.201928 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount found snaps <>
> 2013-07-29 20:46:26.268809 7f4f3e909780  0
> filestore(/var/lib/ceph/osd/osd.0) mount: enabling WRITEA
> HEAD journal mode: btrfs not detected
> 2013-07-29 20:46:26.272557 7f4f3e909780 -1 journal FileJournal::_open:
> disabling aio for non-block j ournal.  Use 
> journal_force

Re: [ceph-users] OSD never gets marked up?

2013-07-29 Thread Don Talton (dotalton)
5 bytes epoch 0) v1 -- ?+0 0x2d66d80 con 0x285a160
2013-07-29 20:46:26.364569 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 4  auth_reply(proto 2 0 Success) v1  393+0+0 (184374621 0 0) 0x2d68600 con 0x285a160
2013-07-29 20:46:26.364662 7f4f31580700  1 -- 2.4.1.7:6800/13319 --> 2.4.1.4:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x2829700 con 0x285a160
2013-07-29 20:46:26.364690 7f4f31580700  1 -- 2.4.1.7:6800/13319 --> 2.4.1.4:6789/0 -- mon_subscribe({monmap=0+,osd_pg_creates=0}) v2 -- ?+0 0x2829a80 con 0x285a160
2013-07-29 20:46:26.364759 7f4f31580700  1 -- 2.4.1.7:6800/13319 --> 2.4.1.4:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- ?+0 0x2d66b40 con 0x285a160
2013-07-29 20:46:26.364866 7f4f3e909780  5 monclient: authenticate success, global_id 4267
2013-07-29 20:46:26.365466 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 5  mon_map v1  191+0+0 (4283215978 0 0) 0x2d68400 con 0x285a160
2013-07-29 20:46:26.365518 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 6  mon_subscribe_ack(300s) v1  20+0+0 (1055034598 0 0) 0x2829a80 con 0x285a160
2013-07-29 20:46:26.36 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 7  mon_map v1  191+0+0 (4283215978 0 0) 0x2d68000 con 0x285a160
2013-07-29 20:46:26.365581 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0 (1055034598 0 0) 0x2829700 con 0x285a160
2013-07-29 20:46:26.365732 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 9  auth_reply(proto 2 0 Success) v1  194+0+0 (3275840853 0 0) 0x2d68800 con 0x285a160
2013-07-29 20:46:26.366916 7f4f28c6e700  0 -- 2.4.1.7:6801/13319 >> 2.4.1.8:6802/18344 pipe(0x2843780 sd=30 :53729 s=1 pgs=0 cs=0 l=0).connect claims to be 2.4.1.8:6802/17272 not 2.4.1.8:6802/18344 - wrong node!
2013-07-29 20:46:26.367028 7f4f28c6e700  0 -- 2.4.1.7:6801/13319 >> 2.4.1.8:6802/18344 pipe(0x2843780 sd=30 :53729 s=1 pgs=0 cs=0 l=0).fault with nothing to send, going to standby
2013-07-29 20:46:26.369488 7f4f3e909780  1 -- 2.4.1.7:6800/13319 --> 2.4.1.4:6789/0 -- mon_get_version(what=osdmap handle=1) v1 -- ?+0 0x2829540 con 0x285a160
2013-07-29 20:46:26.370168 7f4f31580700  1 -- 2.4.1.7:6800/13319 <== mon.0 2.4.1.4:6789/0 10  mon_check_map_ack(handle=1 version=5) v2  24+0+0 (3346734780 0 0) 0x2829e00 con 0x285a160
2013-07-29 20:46:26.370303 7f4f2d578700  1 -- 2.4.1.7:6800/13319 --> 2.4.1.4:6789/0 -- osd_boot(osd.0 booted 0 v64) v3 -- ?+0 0x2833400 con 0x285a160







> -Original Message-
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent: Monday, July 29, 2013 12:17 PM
> To: Don Talton (dotalton)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] OSD never gets marked up?
> 
> On Mon, Jul 29, 2013 at 11:36 AM, Don Talton (dotalton)
>  wrote:
> > Hello,
> >
> > I have a small test cluster that I deploy using puppet-ceph. Both the MON
> and the OSDs deploy properly, and appear to have all of the correct
> configurations. However, the OSDs are never marked as up. Any input is
> appreciated. The daemons are running on each OSD server, the OSDs are
> listed in the crushmap, and I can see them successfully authenticate with the
> MON eg.
> >
> > 2013-07-29 18:34:19.231905 7fbfeaaa5700  1 -- 2.4.1.7:0/14269 <==
> > mon.0 2.4.1.4:6789/0 8 
> > mon_command_ack([osd,crush,create-or-
> move,0,0.91,root=default,host=cep
> > h-osd0]=0 create-or-move updated item id 0 name 'osd.0' weight 0.91 at
> > location {host=ceph-osd0,root=default} to crush map v8) v1 
> > 223+0+0 (1170550300 0 0) 0x7fbfe00016d0 con 0x16b1770create-or-move
> > updated item id 0 name 'osd.0' weight 0.91 at location
> > {host=ceph-osd0,root=default} to crush map
> 
> This is actually not the OSDs doing this, but the ceph admin tool using the
> bootstrap-osd key. Do you have anything that's from the OSDs? There might
> be something helpful in the OSD logs; if not you can add "debug ms = 1" and
> restart them and there certainly will be.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
> >
> >
> > My ceph.conf
> >
> > [global]
> >   auth cluster required = cephx
> >   auth service requir

[ceph-users] OSD never gets marked up?

2013-07-29 Thread Don Talton (dotalton)
Hello,

I have a small test cluster that I deploy using puppet-ceph. Both the MON and 
the OSDs deploy properly, and appear to have all of the correct configurations. 
However, the OSDs are never marked as up. Any input is appreciated. The daemons 
are running on each OSD server, the OSDs are listed in the crushmap, and I can 
see them successfully authenticate with the MON eg.

2013-07-29 18:34:19.231905 7fbfeaaa5700  1 -- 2.4.1.7:0/14269 <== mon.0 
2.4.1.4:6789/0 8  
mon_command_ack([osd,crush,create-or-move,0,0.91,root=default,host=ceph-osd0]=0 
create-or-move updated item id 0 name 'osd.0' weight 0.91 at location 
{host=ceph-osd0,root=default} to crush map v8) v1  223+0+0 (1170550300 0 0) 
0x7fbfe00016d0 con 0x16b1770create-or-move updated item id 0 name 'osd.0' 
weight 0.91 at location {host=ceph-osd0,root=default} to crush map


My ceph.conf

[global]
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx
  keyring = /etc/ceph/keyring

  fsid = e80afa94-a64c-486c-9e34-d55e85f26406
  debug ms = 1/5

[mon]
  mon data = /var/lib/ceph/mon/mon.$id
  debug mon = 20
  debug paxos = 1/5
  debug auth = 2

[osd]
  osd journal size = 4096
  cluster network = 2.4.1.0/24
  public network = 2.4.1.0/24
  filestore flusher = false
  osd data = /var/lib/ceph/osd/osd.$id
  osd journal = /var/lib/ceph/osd/osd.$id/journal
  osd mkfs type = xfs
  keyring = /var/lib/ceph/osd/osd.$id/keyring
  debug osd = 1/5
  debug filestore = 1/5
  debug journal = 1
  debug monc = 5/20

[mds]
  mds data = /var/lib/ceph/mds/mds.$id
  keyring = /var/lib/ceph/mds/mds.$id/keyring

[mon.0]
  host = ceph-mon0
  mon addr = 2.4.1.4:6789
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com