Re: [ceph-users] [Ceph-maintainers] Debian buster information

2019-05-31 Thread Dan Mick
Péter:

I'm forwarding this to ceph-users for a better answer/discussion


On 5/29/19 6:52 AM, Erdősi Péter wrote:
> Dear CEPH maintainers,
> 
> I would like to ask for some information about Ceph and Debian 10 (Buster).
> We would like to install Ceph on the Buster RC. As far as I can see, the ceph
> packages in Buster are now at 12.2.11; however, I cannot find the
> ceph-deploy package in the repository.
> 
> I've tried to add the repository from the install guide, but there is
> no buster repo there.
> 
> My questions are:
>  - Is there any non-Debian repository for Buster yet?
>  - Does the Buster version from the Debian repository work properly?
> (we are interested in RBD and the libvirt driver; no CephFS or object store
> will be used)
>  - Is any testing happening with the packages in the Debian repository?
> (quality and/or functional)
>  - Why does no ceph-deploy package exist in Debian Buster when the osd and
> mon packages are there?
>  - When will we be able to use the Buster repository at
> download.ceph.com? (after Buster becomes stable, maybe sooner?)
>  - Could you estimate a time range for when the Buster repo will work? (weeks,
> months)
> 
> Thanks,
>  Peter ERDOSI - KIFU
> ___
> Ceph-maintainers mailing list
> ceph-maintain...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-maintainers-ceph.com


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread Dan Mick
On 08/28/2018 06:13 PM, Sage Weil wrote:
> Hi everyone,
> 
> Please help me welcome Mike Perez, the new Ceph community manager!
> 
> Mike has a long history with Ceph: he started at DreamHost working on 
> OpenStack and Ceph back in the early days, including work on the original 
> RBD integration.  He went on to work in several roles in the OpenStack 
> project, doing a mix of infrastructure, cross-project and community 
> related initiatives, including serving as the Project Technical Lead for 
> Cinder.
> 
> Mike lives in Pasadena, CA, and can be reached at mpe...@redhat.com, on 
> IRC as thingee, or twitter as @thingee.
> 
> I am very excited to welcome Mike back to Ceph, and look forward to 
> working together on building the Ceph developer and user communities!
> 
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

Welcome back Mike!

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Dashboard can't activate in Luminous?

2018-08-23 Thread Dan Mick
On 08/23/2018 02:52 PM, Robert Stanford wrote:
> 
>  I just installed a new luminous cluster.  When I run this command:
> ceph mgr module enable dashboard
> 
> I get this response:
> all mgr daemons do not support module 'dashboard'
> 
> All daemons are Luminous (I confirmed this by running ceph version).
> Why would this error appear?
> 
>  Thank you
>  R

Something's prevented the mgr(s) from loading the dashboard module; have
a look at the mgr log(s).
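A hedged sketch of where to look (the commands are standard luminous CLI; the log path is typical for package installs but may differ on your deployment):

```shell
# Check whether the module is present and which daemons are running:
ceph mgr module ls     # "dashboard" should appear among enabled/disabled modules
ceph versions          # every mgr entry should report a luminous version

# Then inspect the active mgr's log for the module load failure
# (adjust the path/hostname for your setup):
tail -n 100 /var/log/ceph/ceph-mgr.$(hostname -s).log
```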

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Ceph Developer Monthly - March 2018

2018-02-28 Thread Dan Mick
Would anyone else appreciate a Google Calendar invitation for the CDMs?
Seems like a natural.

On 02/27/2018 09:37 PM, Leonardo Vaz wrote:
> Hey Cephers,
> 
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
> 
>  http://wiki.ceph.com/Planning
> 
> If you have work in progress that is a feature, significant
> backports, or anything you would like to discuss with the core team,
> please add it to the following page:
> 
>  http://wiki.ceph.com/CDM_07-MAR-2018
> 
> This edition happens on APAC friendly hours (21:00 EST) and we will
> use the following Bluejeans URL for the video conference:
> 
>  https://bluejeans.com/9290089010/
> 
> If you have questions or comments, please let us know.
> 
> Kindest regards,
> 
> Leo
> 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] formatting bytes and object counts in ceph status output

2018-01-02 Thread Dan Mick
On 01/02/2018 08:54 AM, John Spray wrote:
> On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski  wrote:
>> Hi lists,
>> Currently the ceph status output formats all numbers with binary unit
>> prefixes, i.e. 1MB equals 1048576 bytes and an object count of 1M equals
>> 1048576 objects.  I received a bug report from a user that printing object
>> counts with a base 2 multiplier is confusing (I agree) so I opened a bug and
>> https://github.com/ceph/ceph/pull/19117.
>> In the PR discussion a couple of questions arose that I'd like to get some
>> opinions on:
> 
>> - Should we print binary unit prefixes (MiB, GiB, ...) since that would be
>> technically correct?
> 
> I'm not a fan of the technically correct base 2 units -- they're still
> relatively rarely used, and I've spent most of my life using kB to
> mean 1024, not 1000.
> 
>> - Should counters (like object counts) be formatted with a base 10
>> multiplier or a multiplier with base 2?
> 
> I prefer base 2 for any dimensionless quantities (or rates thereof) in
> computing.  Metres and kilograms go in base 10, bytes go in base 2.
> 
> It's all very subjective and a matter of opinion of course, and my
> feelings aren't particularly strong :-)
> 
> John

100% agreed.  "iB" is an affectation IMO.  But I'm grumpy and old.
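Since the thread is about base-2 vs base-10 display, GNU coreutils' `numfmt` shows the conventions under discussion side by side (an illustration only, not part of the ceph tooling):

```shell
# Base-2 prefix, as ceph status historically printed: 1048576 bytes -> "1.0M"
numfmt --to=iec 1048576
# The "technically correct" binary prefix adds the "i": -> "1.0Mi"
numfmt --to=iec-i 1048576
# Base-10 prefix, arguably right for dimensionless counts: 1000000 -> "1.0M"
numfmt --to=si 1000000
```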


Re: [ceph-users] Moderator?

2017-08-23 Thread Dan Mick
On 08/23/2017 07:29 AM, Eric Renfro wrote:
> I sent a message almost 2 days ago, with pasted logs. Since then, it’s 
> been in the moderator’s queue and still not approved (or even declined). 
> 
> Is anyone actually checking that? ;)
> 
> Eric Renfro

As you guessed, no, not really.

I found the message.  It's too large.  There are a lot of users on this
mailing list.  Please cut down its size and try again.

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] updating the documentation

2017-07-17 Thread Dan Mick
On 07/12/2017 11:29 AM, Sage Weil wrote:
> We have a fair-sized list of documentation items to update for the 
> luminous release.  The other day when I starting looking through what is 
> there now, though, I was also immediately struck by how out of date much 
> of the content is.  In addition to addressing the immediate updates for 
> luminous, I think we also need a systematic review of the current docs 
> (including the information structure) and a coordinated effort to make 
> updates and revisions.
> 
> First question is, of course: is anyone is interested in helping 
> coordinate this effort?
> 
> In the meantime, we can also avoid making the problem worse by requiring 
> that all pull requests include any relevant documentation updates.  This 
> means (1) helping educate contributors that doc updates are needed, (2) 
> helping maintainers and reviewers remember that doc updates are part of 
> the merge criteria (it will likely take a bit of time before this is 
> second nature), and (3) generally inducing developers to become aware of 
> the documentation that exists so that they know what needs to be updated 
> when they make a change.

As a reminder, there is a 'needs-doc' tag.  I don't think it's seen much
use and I'm aware of its problems.  Nothing beats discipline.


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Scuttlemonkey signing off...

2017-05-23 Thread Dan Mick
On 05/22/2017 07:36 AM, Patrick McGarry wrote:

> I'm writing to you today to share that my time in the Ceph community
> is coming to an end this year. 

You'll leave a big hole, Patrick.  It's been great having you along for
the ride.

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] clock skew

2017-04-05 Thread Dan Mick

> Just to follow-up on this: we have yet experienced a clock skew since we
> starting using chrony. Just three days ago, I know, bit still...

did you mean "we have not yet..."?

> Perhaps you should try it too, and report if it (seems to) work better
> for you as well.
> 
> But again, just three days, could be I cheer too early.



Re: [ceph-users] ceph-rest-api's behavior

2017-03-29 Thread Dan Mick
It looks like, while the mon allows 'get_command_descriptions' with no
privilege (other than basic auth), the same is not true of osd or mds.
I don't know if that's the only thing that would prevent a 'readonly'
ceph-rest-api (or ceph CLI or other programs that use the
mon_command/osd_command interfaces), but it's certainly a stopper.
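For reference, a minimal-privilege client along the lines Mika asked about could be created like this. The client name and caps are illustrative, and, as noted above, read-only caps may still be insufficient for the osd/mds command-description calls:

```shell
# Create a key limited to read-only caps (the name "client.restapi-ro" is made up):
ceph auth get-or-create client.restapi-ro \
    mon 'allow r' osd 'allow r' mds 'allow r' \
    -o /etc/ceph/ceph.client.restapi-ro.keyring

# Point ceph-rest-api at it; note the file name should contain "keyring"
# so the default keyring search finds it:
ceph-rest-api -n client.restapi-ro
```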


On 03/27/2017 11:07 PM, Brad Hubbard wrote:
> I've copied Dan who may have some thoughts on this and has been
> involved with this code.
> 
> On Tue, Mar 28, 2017 at 3:58 PM, Mika c  wrote:
>> Hi Brad,
>> Thanks for your help. I found that was my problem: I forgot to include
>> the word "keyring" in the file name.
>>
>> And sorry to bother you again. Is it possible to create a minimum privilege
>> client for the api to run?
>>
>>
>>
>> Best wishes,
>> Mika



Re: [ceph-users] Problems with http://tracker.ceph.com/?

2017-01-20 Thread Dan Mick

ns3 is still answering, wrongly, for the record

On 1/20/2017 12:18 PM, David Galloway wrote:
This is resolved.  Apparently ns3 was shut down a while ago and ns4 
just took a while to catch up.


ceph.com, download.ceph.com, and docs.ceph.com all have updated DNS 
records.


Sorry again for the trouble this caused all week.  The steps we've 
taken should allow us to return to a reasonable level of stability and 
uptime.


On 01/20/2017 11:28 AM, Dan Mick wrote:

Only in that we changed the zone and apparently it hasn't propagated
properly.  I'll check with RHIT.

Sent from Nine <http://www.9folders.com/>

*From:* Sean Redmond <sean.redmo...@gmail.com>
*Sent:* Jan 20, 2017 3:07 AM
*To:* Dan Mick
*Cc:* Shinobu Kinjo; Brian Andrus; ceph-users
*Subject:* Re: [ceph-users] Problems with http://tracker.ceph.com/?

Hi,

Is the current strange DNS issue with docs.ceph.com related to this
also? I noticed that docs.ceph.com is getting a different A record
from ns4.redhat.com vs ns{1..3}.redhat.com

dig output here > http://pastebin.com/WapDY9e2

Thanks

On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick <dm...@redhat.com> wrote:

On 01/19/2017 09:57 AM, Shinobu Kinjo wrote:

>> The good news is the tenant delete failed. The bad news is we're looking for
>> the tracker volume now, which is no longer present in the Ceph project.


We've reloaded a new instance of tracker.ceph.com from a backup of the
database, and believe it's back online now.  The backup was taken at
about 12:31 PDT, so the last 8 or so hours of changes are, sadly, gone,
so if you had tracker updates during that time period, you may need to
redo them.

Sorry for the inconvenience.  We've relocated the tracker service to
hopefully mitigate this vulnerability.





Re: [ceph-users] Problems with http://tracker.ceph.com/?

2017-01-20 Thread Dan Mick
Only in that we changed the zone and apparently it hasn't propagated properly.  
I'll check with RHIT.

Sent from Nine

From: Sean Redmond <sean.redmo...@gmail.com>
Sent: Jan 20, 2017 3:07 AM
To: Dan Mick
Cc: Shinobu Kinjo; Brian Andrus; ceph-users
Subject: Re: [ceph-users] Problems with http://tracker.ceph.com/?

Hi,

Is the current strange DNS issue with docs.ceph.com related to this also? I 
noticed that docs.ceph.com is getting a different A record from ns4.redhat.com 
vs ns{1..3}.redhat.com

dig output here > http://pastebin.com/WapDY9e2

Thanks

On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick <dm...@redhat.com> wrote:
>
> On 01/19/2017 09:57 AM, Shinobu Kinjo wrote:
>
> >> The good news is the tenant delete failed. The bad news is we're looking 
> >> for
> >> the tracker volume now, which is no longer present in the Ceph project.
>
> We've reloaded a new instance of tracker.ceph.com from a backup of the
> database, and believe it's back online now.  The backup was taken at
> about 12:31 PDT, so the last 8 or so hours of changes are, sadly, gone,
> so if you had tracker updates during that time period, you may need to
> redo them.
>
> Sorry for the inconvenience.  We've relocated the tracker service to
> hopefully mitigate this vulnerability.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] Problems with http://tracker.ceph.com/?

2017-01-19 Thread Dan Mick
On 01/19/2017 09:57 AM, Shinobu Kinjo wrote:

>> The good news is the tenant delete failed. The bad news is we're looking for
>> the tracker volume now, which is no longer present in the Ceph project.

We've reloaded a new instance of tracker.ceph.com from a backup of the
database, and believe it's back online now.  The backup was taken at
about 12:31 PDT, so the last 8 or so hours of changes are, sadly, gone,
so if you had tracker updates during that time period, you may need to
redo them.

Sorry for the inconvenience.  We've relocated the tracker service to
hopefully mitigate this vulnerability.


[ceph-users] tracker.ceph.com

2016-12-17 Thread Dan Mick

tracker.ceph.com is having issues. I'm looking at it.



Re: [ceph-users] Announcing: Embedded Ceph and Rook

2016-12-02 Thread Dan Mick
On 11/30/2016 03:46 PM, Bassam Tabbara wrote:
> Hello Cephers,
> 
> I wanted to let you know about a new library that is now available in
> master. It's called “libcephd” and it enables the embedding of Ceph
> daemons like MON and OSD (and soon MDS and RGW) into other applications.
> Using libcephd it's possible to create new applications that closely
> integrate Ceph storage without bringing in the full distribution of Ceph
> and its dependencies. For example, you can build storage application
> that runs the Ceph daemons on limited distributions like CoreOS natively
> or along side a hypervisor for hyperconverged scenarios. The goal is to
> enable a broader ecosystem of solutions built around Ceph and reduce
> some of the friction for adopting Ceph today. See
> http://pad.ceph.com/p/embedded-ceph for the blueprint.
> 
> We (Quantum) are using embedded Ceph in a new open-source project called
> Rook (https://github.com/rook/rook and https://rook.io). Rook integrates
> embedded Ceph in a deployment that is targeting cloud-native applications.
> 
> Please feel free to respond with feedback. Also if you’re in the Seattle
> area next week stop by for a meetup on embedded Ceph and its use in Rook
> https://www.meetup.com/Pacific-Northwest-Ceph-Meetup/events/235632106/
> 
> Thanks!
> Bassam
> 


Is there anyplace you explain in more detail about why this design is
attractive?  I'm having a hard time imagining why applications would
want to try to embed the cluster.

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


[ceph-users] test

2016-11-30 Thread Dan Mick

please discard



Re: [ceph-users] ceph website problems?

2016-10-12 Thread Dan Mick
Everything should have been back some time ago ( UTC or thereabouts)

On 10/11/2016 10:41 PM, Brian :: wrote:
> Looks like they are having major challenges getting that ceph cluster
> running again.. Still down.
> 
> On Tuesday, October 11, 2016, Ken Dreyer wrote:
>> I think this may be related:
>>
> http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/
>>
>> On Tue, Oct 11, 2016 at 5:57 AM, Sean Redmond wrote:
>>> Hi,
>>>
>>> Looks like the ceph website and related sub domains are giving errors for
>>> the last few hours.
>>>
>>> I noticed the below that I use are in scope.
>>>
>>> http://ceph.com/
>>> http://docs.ceph.com/
>>> http://download.ceph.com/
>>> http://tracker.ceph.com/
>>>
>>> Thanks


Re: [ceph-users] Chown / symlink issues on download.ceph.com

2016-06-21 Thread Dan Mick
On 06/20/2016 12:54 AM, Wido den Hollander wrote:
> Hi Dan,
> 
> There seems to be a symlink issue on download.ceph.com:
> 
> # rsync -4 -avrn download.ceph.com::ceph /tmp|grep 'rpm-hammer/rhel7'
> rpm-hammer/rhel7 -> /home/dhc-user/repos/rpm-hammer/el7
> 
> Could you take a quick look at that? It breaks the syncs for all the other 
> mirrors who sync from download.ceph.com
> 
> Maybe do a chown (automated, cron?) as well to make sure all the files are 
> readable by rsync?
> 
> Thanks!
> 
> Wido
> 

I've just removed the symlink.  It probably was doing no good.

If there are further perm issues I don't see them, but let me know.
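A quick way a mirror operator can check for this class of problem (the server-side path is an example, not the actual layout on download.ceph.com):

```shell
# Dry-run sync and surface any symlinks in the listing, as Wido did:
rsync -4 -avrn download.ceph.com::ceph /tmp | grep ' -> '

# On the server side, list symlinks whose targets do not resolve:
find /path/to/repo -xtype l -print
```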


Re: [ceph-users] [Ceph-maintainers] download.ceph.com has AAAA record that points to unavailable address

2016-02-25 Thread Dan Mick
Because we thought that the infrastructure did at the time.  We'll get that 
removed; I can see where it could cause hassles. 

Sent from Nine

From: Andy Allan <gravityst...@gmail.com>
Sent: Feb 25, 2016 6:11 AM
To: Dan Mick
Cc: Artem Fokin; ceph-users
Subject: Re: [ceph-users] [Ceph-maintainers] download.ceph.com has AAAA record 
that points to unavailable address

Hi Dan, 

If download.ceph.com doesn't support IPv6, then why is there an AAAA 
record for it? 

Thanks, 
Andy 

On 25 February 2016 at 02:21, Dan Mick <dm...@redhat.com> wrote: 
> Yes.  download.ceph.com does not currently support IPv6 access. 
> 
> On 02/14/2016 11:53 PM, Artem Fokin wrote: 
>> Hi 
>> 
>> It seems like download.ceph.com has some outdated IPv6 address 
>> 
>> ~ curl -v -s download.ceph.com > /dev/null 
>> * About to connect() to download.ceph.com port 80 (#0) 
>> *   Trying 2607:f298:6050:51f3:f816:3eff:fe50:5ec... Connection refused 
>> *   Trying 173.236.253.173... connected 
>> 
>> 
>> 
>> ~ dig AAAA download.ceph.com | grep AAAA 
>> ; <<>> DiG 9.8.1-P1 <<>> AAAA download.ceph.com 
>> ;download.ceph.com.    IN    AAAA 
>> download.ceph.com.    286    IN    AAAA 
>> 2607:f298:6050:51f3:f816:3eff:fe50:5ec 
>> 
>> If this is the wrong mailing list, please refer to the correct one. 
>> 
>> Thanks! 
>> ___ 
>> Ceph-maintainers mailing list 
>> ceph-maintain...@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-maintainers-ceph.com 
> 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 


Re: [ceph-users] [Ceph-maintainers] download.ceph.com has AAAA record that points to unavailable address

2016-02-24 Thread Dan Mick
Yes.  download.ceph.com does not currently support IPv6 access.

On 02/14/2016 11:53 PM, Artem Fokin wrote:
> Hi
> 
> It seems like download.ceph.com has some outdated IPv6 address
> 
> ~ curl -v -s download.ceph.com > /dev/null
> * About to connect() to download.ceph.com port 80 (#0)
> *   Trying 2607:f298:6050:51f3:f816:3eff:fe50:5ec... Connection refused
> *   Trying 173.236.253.173... connected
> 
> 
> 
> ~ dig AAAA download.ceph.com | grep AAAA
> ; <<>> DiG 9.8.1-P1 <<>> AAAA download.ceph.com
> ;download.ceph.com.    IN    AAAA
> download.ceph.com.    286    IN    AAAA
> 2607:f298:6050:51f3:f816:3eff:fe50:5ec
> 
> If this is the wrong mailing list, please refer to the correct one.
> 
> Thanks!
> ___
> Ceph-maintainers mailing list
> ceph-maintain...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-maintainers-ceph.com



Re: [ceph-users] ceph-rest-api's behavior

2016-01-26 Thread Dan Mick
Is the client.test-admin key in the keyring read by ceph-rest-api?

On 01/22/2016 04:05 PM, Shinobu Kinjo wrote:
> Does anyone have any idea about that?
> 
> Rgds,
> Shinobu
> 
> - Original Message -
> From: "Shinobu Kinjo" <ski...@redhat.com>
> To: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Friday, January 22, 2016 7:15:36 AM
> Subject: ceph-rest-api's behavior
> 
> Hello,
> 
> "ceph-rest-api" works greatly with client.admin.
> But with client.test-admin which I created just after building the Ceph 
> cluster , it does not work.
> 
>  ~$ ceph auth get-or-create client.test-admin mon 'allow *' mds 'allow *' osd 
> 'allow *'
> 
>  ~$ sudo ceph auth list
>  installed auth entries:
>...
>  client.test-admin
>   key: AQCOVaFWTYr2ORAAKwruANTLXqdHOchkVvRApg==
>   caps: [mds] allow *
>   caps: [mon] allow *
>   caps: [osd] allow *
> 
>  ~$ ceph-rest-api -n client.test-admin
>  Traceback (most recent call last):
>File "/bin/ceph-rest-api", line 59, in <module>
>  rest,
>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line 504, in 
> generate_app
>  addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line 106, in 
> api_setup
>  app.ceph_cluster.connect()
>File "/usr/lib/python2.7/site-packages/rados.py", line 485, in connect
>  raise make_ex(ret, "error connecting to the cluster")
>  rados.ObjectNotFound: error connecting to the cluster
> 
> # ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
> 
> Is that expected behavior?
> Or if I've missed anything, please point it out to me.
> 
> Rgds,
> Shinobu
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] [Ceph-maintainers] ceph packages link is gone

2015-12-03 Thread Dan Mick
This was sent to the ceph-maintainers list; answering here:

On 11/25/2015 02:54 AM, Alaâ Chatti wrote:
> Hello,
> 
> I used to install qemu-ceph on centos 6 machine from
> http://ceph.com/packages/, but the link has been removed, and there is
> no alternative in the documentation. Would you please update the link so
> I can install the version of qemu that supports rbd.
> 
> Thank you

Packages can be found on http://download.ceph.com/

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


[ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
tracker.ceph.com will be brought down today for upgrade and move to a
new host.  I plan to do this at about 4PM PST (40 minutes from now).
Expect a downtime of about 15-20 minutes.  More notification to follow.

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
tracker.ceph.com down now

On 10/22/2015 03:20 PM, Dan Mick wrote:
> tracker.ceph.com will be brought down today for upgrade and move to a
> new host.  I plan to do this at about 4PM PST (40 minutes from now).
> Expect a downtime of about 15-20 minutes.  More notification to follow.
> 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
It's back.  New DNS info is propagating its way around.  If you
absolutely must get to it, newtracker.ceph.com is the new address, but
please don't bookmark that, as it will be going away after the transition.

Please let me know of any problems you have.

On 10/22/2015 04:09 PM, Dan Mick wrote:
> tracker.ceph.com down now
> 
> On 10/22/2015 03:20 PM, Dan Mick wrote:
>> tracker.ceph.com will be brought down today for upgrade and move to a
>> new host.  I plan to do this at about 4PM PST (40 minutes from now).
>> Expect a downtime of about 15-20 minutes.  More notification to follow.
>>
> 
> 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
Fixed a configuration problem preventing updating issues, and switched
the mailer to use ipv4; if you updated and failed, or missed an email
notification, that may have been why.

On 10/22/2015 04:51 PM, Dan Mick wrote:
> It's back.  New DNS info is propagating its way around.  If you
> absolutely must get to it, newtracker.ceph.com is the new address, but
> please don't bookmark that, as it will be going away after the transition.
> 
> Please let me know of any problems you have.


Re: [ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
Found that issue; reverted the database to the non-backlog-plugin state,
created a test bug.  Retry?

On 10/22/2015 06:54 PM, Dan Mick wrote:
> I see that too.  I suspect this is because of leftover database columns
> from the backlogs plugin, which is removed.  Looking into it.
> 
> On 10/22/2015 06:43 PM, Kyle Bader wrote:
>> I tried to open a new issue and got this error:
>>
>> Internal error
>>
>> An error occurred on the page you were trying to access.
>> If you continue to experience problems please contact your Redmine
>> administrator for assistance.
>>
>> If you are the Redmine administrator, check your log files for details
>> about the error.
>>
>>
>> On Thu, Oct 22, 2015 at 6:15 PM, Dan Mick <dm...@redhat.com> wrote:
>>> Fixed a configuration problem preventing updating issues, and switched
>>> the mailer to use ipv4; if you updated and failed, or missed an email
>>> notification, that may have been why.
>>>
>>> On 10/22/2015 04:51 PM, Dan Mick wrote:
>>>> It's back.  New DNS info is propagating its way around.  If you
>>>> absolutely must get to it, newtracker.ceph.com is the new address, but
>>>> please don't bookmark that, as it will be going away after the transition.
>>>>
>>>> Please let me know of any problems you have.
>>>
>>> ---
>>> Note: This list is intended for discussions relating to Red Hat Storage 
>>> products, customers and/or support. Discussions on GlusterFS and Ceph 
>>> architecture, design and engineering should go to relevant upstream mailing 
>>> lists.
>>
>>
>>
> 



Re: [ceph-users] tracker.ceph.com downtime today

2015-10-22 Thread Dan Mick
I see that too.  I suspect this is because of leftover database columns
from the backlogs plugin, which is removed.  Looking into it.

On 10/22/2015 06:43 PM, Kyle Bader wrote:
> I tried to open a new issue and got this error:
> 
> Internal error
> 
> An error occurred on the page you were trying to access.
> If you continue to experience problems please contact your Redmine
> administrator for assistance.
> 
> If you are the Redmine administrator, check your log files for details
> about the error.
> 
> 
> On Thu, Oct 22, 2015 at 6:15 PM, Dan Mick <dm...@redhat.com> wrote:
>> Fixed a configuration problem preventing updating issues, and switched
>> the mailer to use ipv4; if you updated and failed, or missed an email
>> notification, that may have been why.
>>
>> On 10/22/2015 04:51 PM, Dan Mick wrote:
>>> It's back.  New DNS info is propagating its way around.  If you
>>> absolutely must get to it, newtracker.ceph.com is the new address, but
>>> please don't bookmark that, as it will be going away after the transition.
>>>
>>> Please let me know of any problems you have.
>>
>> ---
>> Note: This list is intended for discussions relating to Red Hat Storage 
>> products, customers and/or support. Discussions on GlusterFS and Ceph 
>> architecture, design and engineering should go to relevant upstream mailing 
>> lists.
> 
> 
> 



Re: [ceph-users] IPv6 connectivity after website changes

2015-09-22 Thread Dan Mick
On 09/22/2015 05:22 AM, Sage Weil wrote:
> On Tue, 22 Sep 2015, Wido den Hollander wrote:
>> Hi,
>>
>> After the recent changes in the Ceph website the IPv6 connectivity got lost.
>>
>> www.ceph.com
>> docs.ceph.com
>> download.ceph.com
>> git.ceph.com
>>
>> The problem I'm now facing with a couple of systems is that they can't
>> download the Package signing key from git.ceph.com or anything from
>> download.ceph.com
>>
>> I see everything is still hosted at Dreamhost which has native IPv6 to
>> all systems, so it's mainly just adding the -records and it should
>> be fixed.
> 
> Yep... we'll get this added today!  (Dan, if you send me the v6 addrs on 
> irc, I can update DNS.)
> 
> sage
> 

I've added ceph.com and download.ceph.com so far, and confirmed they're
answering.  git.ceph.com and docs.ceph.com should not have changed, but
I don't see AAAA records for them either; investigating.
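To verify the records from the outside once they propagate (standard dig/curl usage, nothing ceph-specific):

```shell
# Query only the AAAA record for the host:
dig +short AAAA download.ceph.com

# Force an IPv6 HTTP request to confirm the host actually answers on v6:
curl -6 -sI http://download.ceph.com/ | head -n 1
```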



[ceph-users] Email lgx...@nxtzas.com trying to subscribe to tracker.ceph.com

2015-08-20 Thread Dan Mick
Someone using the email address

lgx...@nxtzas.com

is trying to subscribe to the Ceph Redmine tracker, but neither redmine nor I 
can use that email address; it bounces with 

lgx...@nxtzas.com: Host or domain name not found. Name service error for
name=nxtzas.com type=AAAA: Host not found

If this is you, please email me privately and we'll get you fixed up.


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


[ceph-users] Fw: Ceph problem

2015-07-23 Thread Dan Mick



From: Aaron fjw6...@163.com
Sent: Jul 23, 2015 6:39 AM
To: dan.m...@inktank.com
Subject: Ceph problem

Hello,
I am a user of Ceph, from China.
I have two problems with Ceph and need your help.


>>> import boto
>>> import boto.s3.connection
>>> access_key = '2EOCDA99UCZQFA1CQRCM'
>>> secret_key = 'avxcywxBPMtiDriwBTOk+cO1zrBikHqoSB0GUtqV'
>>> conn = boto.connect_s3(
...     aws_access_key_id = access_key,
...     aws_secret_access_key = secret_key,
...     host = 'localhost',
...     calling_format = boto.s3.connection.OrdinaryCallingFormat(),)
>>> b = conn.list_all_buckets()[0]
>>> list(b.list())
[<Key: my-new-bucket,1/123.txt>, <Key: my-new-bucket,1234.txt>, <Key: 
my-new-bucket,2.txt>, <Key: my-new-bucket,3.txt>, <Key: 
my-new-bucket,N01/hello.txt>, <Key: my-new-bucket,aaa>, <Key: 
my-new-bucket,hello>]


problem 1: an error shows after I run this command


>>> b.get_website_configuration()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/boto/s3/bucket.py", line 1480, in get_website_configuration
    return self.get_website_configuration_with_xml(headers)[0]
  File "/usr/lib/python2.7/site-packages/boto/s3/bucket.py", line 1519, in get_website_configuration_with_xml
    body = self.get_website_configuration_xml(headers=headers)
  File "/usr/lib/python2.7/site-packages/boto/s3/bucket.py", line 1534, in get_website_configuration_xml
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code></Error>


problem 2: I need a URL that starts with N07, but bucket names can't start 
with upper-case. Is there some method that lets me use a name with N07, or can 
I use the name n07 with a URL starting with N07? That is, a URL different from 
the name.


>>> conn.create_bucket("N07")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/boto/s3/connection.py", line 599, in create_bucket
    check_lowercase_bucketname(bucket_name)
  File "/usr/lib/python2.7/site-packages/boto/s3/connection.py", line 59, in check_lowercase_bucketname
    raise BotoClientError("Bucket names cannot contain upper-case " \
boto.exception.BotoClientError: BotoClientError: Bucket names cannot contain 
upper-case characters when using either the sub-domain or virtual hosting 
calling format.
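The rejection in problem 2 happens client-side in boto, before any request reaches the gateway, as the traceback shows. A simplified sketch of that validation (an approximation for illustration, not boto's actual code; the regex here is an assumption):

```python
import re

# Rough approximation of boto's check_lowercase_bucketname: bucket names
# must start with and contain only lowercase alphanumerics, dots, dashes,
# or underscores when sub-domain / virtual-hosted calling formats apply.
VALID_BUCKET = re.compile(r'^[a-z0-9][a-z0-9._-]*$')

def check_lowercase_bucketname(name):
    if not VALID_BUCKET.match(name):
        raise ValueError("Bucket names cannot contain upper-case characters")
    return True

print(check_lowercase_bucketname("n07"))   # lowercase name passes
try:
    check_lowercase_bucketname("N07")      # upper-case name is rejected
except ValueError as e:
    print("rejected:", e)
```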


Thank you very much.


Re: [ceph-users] Any workaround for ImportError: No module named ceph_argparse?

2015-07-16 Thread Dan Mick
On 07/15/2015 11:11 AM, Deneau, Tom wrote:
 I just installed 9.0.2 on Trusty using ceph-deploy install --testing and I am 
 hitting
 the "ImportError: No module named ceph_argparse" issue.
 
 What is the best way to get around this issue and still run a version that is
 compatible with other (non-Ubuntu) nodes in the cluster that are running 
 9.0.1?
 
 -- Tom Deneau

Which command is prompting that error?
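Independent of which ceph subcommand trips it, it can help to check whether the interpreter can locate the module at all, and from where; a mismatch often means the CLI and the python modules come from different installs. A small sketch (the module name is taken from the error; nothing else here is ceph-specific):

```python
import importlib.util

def where_is(modname):
    """Return the file a module would be loaded from, or None if not found."""
    spec = importlib.util.find_spec(modname)
    return spec.origin if spec else None

print("ceph_argparse:", where_is("ceph_argparse"))
```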


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] initially conf calamari to know about my Ceph cluster(s)

2015-02-20 Thread Dan Mick
By the way, you may want to put these sorts of questions on
ceph-calam...@lists.ceph.com, which is specific to calamari.

On 02/16/2015 01:08 PM, Steffen Winther wrote:
 Steffen Winther ceph.user@... writes:
 
 Trying to figure out how to initially configure
 calamari clients to know about my
 Ceph Cluster(s) when they aren't installed through ceph-deploy
 but through Proxmox pveceph.

 Assume I possible need to copy some client admin keys and
 configure my MON hosts somehow, any pointers to doc on this?
 :) stupid me, must have been too tired after struggling with the build...
 
 It was just a question of finishing Karan's guide from step 5 and
 making my salt master and minions work, plus diamond.
 Now everything seems to be working,
 nice dashboard/workbench etc.
 
 
 Step 5 from
 http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html:
 
 5 Calamari would not be able to find the Ceph cluster
 and will ask to add a cluster, for this we need to add Ceph clients
 to dashboard by installing salt-minion and
 diamond packages on them.
 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Calamari build in vagrants

2015-02-20 Thread Dan Mick
On 02/16/2015 12:57 PM, Steffen Winther wrote:
 Dan Mick dmick@... writes:
 

 0cbcfbaa791baa3ee25c4f1a135f005c1d568512 on the 1.2.3 branch has the
 change to yo 1.1.0.  I've just cherry-picked that to v1.3 and master.
 Do you mean that you merged 1.2.3 into master and branch 1.3?

I put just that specific commit onto v1.3 and master.  The branches may
need a little preening to be completely synced, but that commit should
be in all of them now.


 BTW I managed to clone and built branch 1.2.3 in my vagrant env.

\o/


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Calamari build in vagrants

2015-02-13 Thread Dan Mick
0cbcfbaa791baa3ee25c4f1a135f005c1d568512 on the 1.2.3 branch has the
change to yo 1.1.0.  I've just cherry-picked that to v1.3 and master.

On 02/12/2015 11:21 AM, Steffen Winther wrote:
 Steffen Winther ceph.user@... writes:
 

 Trying to build calamari rpm+deb packages following this guide:
 http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html

 Server packages works fine, but fails in clients for:
  dashboard manage admin login due to:

 yo < 1.1.0 seems needed to build the clients,
 but can't found this with npm, what to do about this anyone?

 1.1.0 seems oldest version npm will install, latest says 1.4.5 :(

 build error:
 npm ERR! notarget No compatible version found:
 yo@'>=1.0.0-0 <1.1.0-0'
 npm ERR! notarget Valid install targets:
 npm ERR! notarget ["1.1.0","1.1.1","1.1.2",
 "1.2.0","1.2.1","1.3.0","1.3.2","1.3.3"]

 
 Found a tar ball of yo@1.0.6 which can be installed with either:
 
 npm install -g <tar ball>
 
 or npm install -g <package directory>
 
 :)
 


-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Compilation problem

2015-02-09 Thread Dan Mick
On 02/09/2015 09:14 PM, Ken Dreyer wrote:
 On 02/09/2015 08:17 AM, Gregory Farnum wrote:
 I think there's ongoing work to backport (portions of?) Ceph to RHEL5,
 but it definitely doesn't build out of the box. Even beyond the
 library dependencies you've noticed you'll find more issues with e.g.
 the boost and gcc versions. :/
 
 As far as I know, only librados v0.79 is under development so far. I was
 curious about the location of the current work, and found that it's by
 Rohan Mars, at https://github.com/droneware/ceph/tree/librados-aix-port
 
 - Ken

That tree will be being rebased soon, but yes, the only intent is to get
librados working.


Re: [ceph-users] Compilation problem

2015-02-09 Thread Dan Mick
On 02/09/2015 09:25 PM, Dan Mick wrote:

 That tree will be being rebased soon

will be being?  Wow.


Re: [ceph-users] command to flush rbd cache?

2015-02-04 Thread Dan Mick
On 02/04/2015 10:44 PM, Udo Lembke wrote:
 Hi all,
 is there any command to flush the rbd cache like the
 echo 3 > /proc/sys/vm/drop_caches for the os cache?
 
 Udo

Do you mean the kernel rbd or librbd?  The latter responds to flush
requests from the hypervisor.  The former...I'm not sure it has a
separate cache.

-- 
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs


Re: [ceph-users] command to flush rbd cache?

2015-02-04 Thread Dan Mick
I don't know the details well; I know the device itself supports the
block-device-level cache-flush commands (I know there's a SCSI-specific
one but I don't know offhand if there's a device generic one) so the
guest OS can, and does, request flushing.  I can't remember if there's
also a qemu command to prompt the virtual device to flush without
telling the guest.

On 02/04/2015 11:08 PM, Udo Lembke wrote:
 Hi Dan,
 I mean qemu-kvm, also librbd.
 But how I can kvm told to flush the buffer?
 
 Udo
 
 On 05.02.2015 07:59, Dan Mick wrote:
 On 02/04/2015 10:44 PM, Udo Lembke wrote:
 Hi all,
  is there any command to flush the rbd cache like the
  echo 3 > /proc/sys/vm/drop_caches for the os cache?

 Udo
 Do you mean the kernel rbd or librbd?  The latter responds to flush
 requests from the hypervisor.  The former...I'm not sure it has a
 separate cache.
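The practical way to get librbd to flush, then, is to have the guest issue a flush: anything that calls fsync(2) (or sync) inside the guest propagates a flush request down through the virtual device to the hypervisor and on to librbd, assuming the cache mode honors flushes. This sketch just demonstrates the fsync(2) call itself on an ordinary temp file; inside a guest, the same call against a file on the virtual disk is what generates the flush:

```python
import os
import tempfile

# Write dirty data, then force it to stable storage with fsync().
# Inside a VM this is what pushes a flush request down to the virtual
# device and, with librbd backing it, to the rbd cache.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"dirty data\n")
    os.fsync(fd)   # blocks until the data is flushed
finally:
    os.close(fd)
    os.remove(path)
print("flushed and cleaned up", path)
```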


Re: [ceph-users] Building a ceph from source

2014-12-08 Thread Dan Mick

The hang on ceph command exit is probably issue 8797.

On 12/08/2014 07:47 AM, Loic Dachary wrote:
 Hi Patrick,
 
 Once compiled from sources ceph -s should work. Do you run it from
 sources ? I would check if you don't have Ceph libraries (librados
 ?) installed from a package. Having a mixture of ceph from source
 and another version installed from packages can lead to confusion.
 
 Cheers
 
 On 06/12/2014 12:02, Patrick Darley wrote:
 Hi,
 
 I had a question on the topic of building ceph v0.88 from
 source.
 
 I am using a custom linux system that does not have a package
 manager. and aims to compile all software from source, where
 possible.
 
 I have built several of the software depends and managed to build
 ceph semi-successfully. The problems I am having are commands
 such as `ceph -s` do not terminate, and have to be keyboard
 interrupted to be stopped.
 
 - I wonder if this is because I have not installed some necessary
 software that is stopping it from functioning properly? Is there
 a list of software that ceph requires to build and operate, that
 is not directed towards use with a package manager? - Is there
 another reason why this might be?
 
 Thanks in advance,
 
 Patrick
 
 
 
 
 

-- 
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] [ceph-calamari] Setting up Ceph calamari :: Made Simple

2014-09-25 Thread Dan Mick
Can you explain this a little more, Johan?  I've never even heard of
ipvsadm or its facilities before today, and it ought not be required...
On Sep 25, 2014 7:04 AM, Johan Kooijman m...@johankooijman.com wrote:

 Karan,

 Thanks for the tutorial, great stuff. Please note that in order to get the
 graphs working, I had to install ipvsadm and create a symlink from
 /sbin/ipvsadm to /usr/bin/ipvsadm (CentOS 6).

 On Wed, Sep 24, 2014 at 10:16 AM, Karan Singh karan.si...@csc.fi wrote:

 Hello Cepher’s

 Now here comes my new blog on setting up Ceph Calamari.

 I hope you would like this step-by-step guide

 http://karan-mj.blogspot.fi/2014/09/ceph-calamari-survival-guide.html


 - Karan -


 ___
 ceph-calamari mailing list
 ceph-calam...@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-calamari-ceph.com




 --
 Met vriendelijke groeten / With kind regards,
 Johan Kooijman





Re: [ceph-users] ceph-deploy with mount option like discard, noatime

2014-08-25 Thread Dan Mick
The mounting is actually done by ceph-disk, which can also run from a
udev rule.  It gets options from the ceph configuration option "osd
mount options {fstype}", which you can set globally or per-daemon as
with any other ceph option.
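For example, a ceph.conf fragment for SSD-backed XFS OSDs might look like the following (the option values here are illustrative, not a recommendation):

```ini
[osd]
osd mount options xfs = rw,noatime,discard,inode64
```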


On 08/25/2014 04:11 PM, Somnath Roy wrote:
 Hi,
 
 Ceph-deploy does partition and mount the OSD/journal drive for the user.
 I can’t find any option of supplying mount options like discard,noatime
 etc. suitable for SSDs during ceph-deploy.
 
 Is there a way to control it ? If not, what could be the workaround ?
 
  
 
 Thanks & Regards
 
 Somnath
 
 
 
 
 PLEASE NOTE: The information contained in this electronic mail message
 is intended only for the use of the designated recipient(s) named above.
 If the reader of this message is not the intended recipient, you are
 hereby notified that you have received this message in error and that
 any review, dissemination, distribution, or copying of this message is
 strictly prohibited. If you have received this communication in error,
 please notify the sender by telephone or e-mail (as shown above)
 immediately and destroy any and all copies of this message in your
 possession (whether hard copies or electronically stored copies).
 
 
 
 

-- 
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] ceph-deploy with mount option like discard, noatime

2014-08-25 Thread Dan Mick
Precisely.

On 08/25/2014 05:26 PM, Somnath Roy wrote:
 Thanks Dan !
 Yes, I saw that in the ceph-disk scripts and it is using ceph-conf utility to 
 parse the config option.
 But, while installing with ceph-deploy, the default config file is created by 
 ceph-deploy only. So, I need to do the following while installing I guess. 
 Correct me if I am wrong.
 
 1.  After 'ceph-deploy new' step, modify the config file created with the 
 mount options.
 
 2. Go ahead with subsequent steps and ceph-deploy activate will use the mount 
 config options added in the conf file.
 
 Regards
 Somnath
 
 -Original Message-
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dan 
 Mick
 Sent: Monday, August 25, 2014 5:04 PM
 To: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] ceph-deploy with mount option like discard, noatime
 
 The mounting is actually done by ceph-disk, which can also run from a udev 
 rule.  It gets options from the ceph configuration option osd mount options 
 {fstype}, which you can set globally or per-daemon as with any other ceph 
 option.
 
 
 On 08/25/2014 04:11 PM, Somnath Roy wrote:
 Hi,

 Ceph-deploy does partition and mount the OSD/journal drive for the user.
 I can't find any option of supplying mount options like
  discard,noatime etc. suitable for SSDs during ceph-deploy.

 Is there a way to control it ? If not, what could be the workaround ?



  Thanks & Regards

 Somnath






 
 --
 Dan Mick, Filesystem Engineering
 Inktank Storage, Inc.   http://inktank.com
 Ceph docs: http://ceph.com/docs
 
 
 
 

-- 
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] [ceph-calamari] Issue during ceph-deploy osd activate

2014-07-16 Thread Dan Mick
There is a log kept in ceph.log of every ceph-deploy command.
On Jul 16, 2014 5:21 AM, John Spray john.sp...@redhat.com wrote:

 Hi Shubhendu,

 ceph-deploy is not part of Calamari; the ceph-users list is a better
 place to get help with that.  I have CC'd the list here.

 It will help if you can specify the series of ceph-deploy commands you
 ran before the failing one as well.

 Thanks,
 John

 On Wed, Jul 16, 2014 at 9:29 AM, Shubhendu Tripathi shtri...@redhat.com
 wrote:
  Hi,
 
  I am trying to setup a ceph cluster. But while activating the OSDs the
 admin
  gets timeout while running the command sudo ceph --cluster=ceph osd stat
  --format=json on remote node.
 
  If I try to execute the same command on one of the nodes of the cluster,
 I
  get the below errors -
 
  [ceph@dhcp43-15 ~]$ sudo ceph-disk-activate --mark-init sysvinit --mount
  /var/local/osd0
  INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=mycephcluster
  --show-config-value=fsid
  INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph
  --show-config-value=fsid
  INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name
  client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
 osd
  create --concise 450d3d27-95ce-4925-890a-908b35fc9255
  2014-07-16 13:47:28.073034 7fce166fb700  0 -- 10.70.43.15:0/1031798 >>
  10.70.43.20:6789/0 pipe(0x7fce08000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
  c=0x7fce08000e70).fault
 
 
  It keeps trying for 300 sec and then gives up with error -
 
  2014-07-16 13:02:48.077898 7f71b81dd700  0 monclient(hunting):
 authenticate
  timed out after 300
  2014-07-16 13:02:48.078033 7f71b81dd700  0 librados: client.bootstrap-osd
  authentication error (110) Connection timed out
  Error connecting to cluster: TimedOut
  ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph'
 returned
  non-zero exit status 1:
 
  Kindly help resolving the same.
 
  Regards,
  Shubhendu
 
 



Re: [ceph-users] ceph interactive mode tab completion

2014-02-11 Thread Dan Mick
Neither bash completion nor internal completion is yet functioning, no; 
there was some work to head in that direction but it was never finished.


On 02/03/2014 02:28 PM, Ben Sherman wrote:

Hello all,

I noticed ceph has an interactive mode.

I did quick search and I don't see that tab completion is in there,
but there are some mentions of readline in the source, so I'm
wondering if it is on the horizon.



--ben



--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] PG state diagram

2013-11-25 Thread Dan Mick

Yes


On 11/25/2013 04:25 PM, Mark Kirkwood wrote:

That's rather cool (very easy to change). However, given that the current
generated size is kind of a big thumbnail and too small to be actually
read meaningfully, would it not make sense to generate a larger
resolution version by default and make the current one a link to it?

Cheers

Mark

On 26/11/13 07:17, Gregory Farnum wrote:

It's generated from a .dot file which you can render as you like. :)
Please be aware that that diagram is for developers and will be
meaningless without that knowledge.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Mon, Nov 25, 2013 at 6:42 AM, Regola, Nathan (Contractor)
nathan_reg...@cable.comcast.com wrote:

Is there a vector graphics file (or a higher resolution file of some
type)
of the state diagram on the page below, as I can't read the text.

Thanks,
Nate


http://ceph.com/docs/master/dev/peering/





--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Failed to fetch files

2013-11-21 Thread Dan Mick
Perhaps you mean these instructions, from 
http://ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup?


---clip---
2. Add the Ceph packages to your repository. Replace 
{ceph-stable-release} with a stable Ceph release (e.g., cuttlefish, 
dumpling, etc.).


For example:
echo "deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release 
-sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list

---clip---

See that second sentence there?
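For example, with the dumpling release on Ubuntu 12.04 (precise), the resulting /etc/apt/sources.list.d/ceph.list should contain a single line like:

```
deb http://ceph.com/debian-dumpling/ precise main
```

The literal {ceph-stable-release} in the failed URLs shows the placeholder was never substituted.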


On 11/21/2013 01:01 PM, Knut Moe wrote:

Hi all,

I am trying to install Ceph using the Preflight Checklist and when I
issue the following command

sudo apt-get update  sudo apt-get install ceph-deploy

I get the following error after it goes through a lot different steps:

Failed to fetch
http://ceph.com/debian-{ceph-stable-release}/dists/precise/main/binary-amd64/Packages
404 Not Found

Failed to fetch
http://ceph.com/debian-{ceph-stable-release}/dists/precise/main/binary-i386/Packages
404 Not Found

I am using Ubuntu 12.04, 64-bit.

Thanks,
Kurt








--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Mapping rbd's on boot

2013-11-13 Thread Dan Mick
There is /etc/init.d/rbdmap; although I see no documentation for it, 
there is a sample map file added to /etc/ceph as well.


Looks like it was added in Dumpling.
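For reference, the map file (conventionally /etc/ceph/rbdmap) takes one image per line: pool/image followed by map options. The pool, image, and keyring names below are placeholders:

```
# {pool}/{image}   map options
rbd/myimage        id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```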

On 11/13/2013 01:31 PM, Dane Elwell wrote:

Hi,

Is there a preferable or supported way of having rbd’s mapped on boot?
We have a server that will need to map several rbd’s and then mount
them, and I was wondering if there’s anything out there more elegant
than dumping stuff in /etc/rc.local?

I’ve seen this issue and related commit on the tracker - is anyone using
this in production?

http://tracker.ceph.com/issues/1790

If the answer is to put things in /etc/rc.local, then presumably on
shutdown the rbd won’t be unmapped. Are there any risks associated with
this?

Thanks

Dane





--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] Very frustrated with Ceph!

2013-11-05 Thread Dan Mick
Yeah; purge does remove packages and *package config files*; however, 
Ceph data is in a different class, hence the existence of purgedata.


A user might be furious if he did what he thought was "remove the 
packages" and the process also creamed his terabytes of stored data he 
was in the process of moving to a different OSD server, manually 
recovering, or whatever.


On 11/05/2013 03:03 PM, Neil Levine wrote:

In the Debian world, purge does both a removal of the package and a
clean up the files so might be good to keep semantic consistency here?


On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil s...@newdream.net wrote:

Purgedata is only meant to be run *after* the package is
uninstalled.  We should make it do a check to enforce that.
Otherwise we run into these problems...



Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:

On 05/11/13 06:37, Alfredo Deza wrote:

On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:

Could these problems be caused by running a purgedata
but not a purge?


It could be, I am not clear on what the expectation was for
just doing
purgedata without a purge.

Purgedata removes /etc/ceph but without the purge ceph
is still installed,
then ceph-deploy install detects ceph as already
installed and does not
(re)create /etc/ceph?


ceph-deploy will not create directories for you, that is
left to the
ceph install process, and just to be clear, the
latest ceph-deploy version (1.3) does not remove /etc/ceph,
just the contents.


Yeah, however purgedata is removing /var/lib/ceph, which means
after
running purgedata you need to either run purge then install or
manually
recreate the various working directories under /var/lib/ceph before
attempting any mon. mds or osd creation.

Maybe purgedata should actually leave those top level dirs under
/var/lib/ceph?

regards

Mark












[ceph-users] new version of stgt pushed to ceph-extras

2013-11-04 Thread Dan Mick
Hey, all, we just refreshed stgt to its latest released version 
(1.0.41), and I also tweaked the rbd backend to be a little more

flexible and useful.

stgt is a userspace iSCSI target implementation (using tgtd) that can 
export several types of storage entities as iSCSI LUNs; backends include 
files in synchronous or asynchronous mode, passthrough to
real SCSI devices, tape-device emulation on a file, sheepdog block 
images, and Ceph RBD images.


New bs_rbd features include:

* fixed up tgt-admin to work with rbd images (so .conf files work)

* no more 20-rbd-image-per-tgtd limit

* tgtadm accepts --bsopts for each image
  conf= to set path to ceph.conf file
  id= to set clientid

  This means that each image can have different logging and access
  rights, or even refer to a different cluster.

The stgt source has also been refactored so that packagers can
build with or without bs_rbd, now built into an .so, and distribute
that module separately if desired; thus the base package doesn't
require Ceph libraries librbd and librados.

The source is available upstream at http://github.com/fujita/stgt.
Packages are built and available in the ceph-extras repository at 
http://ceph.com/packages/ceph-extras/.


Enjoy!

--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] saucy salamander support?

2013-10-22 Thread Dan Mick
/etc/ceph should be installed by the package named 'ceph'.  Make sure 
you're using ceph-deploy install to install the Ceph packages before 
trying to use the machines for mon create.


On 10/22/2013 10:32 AM, LaSalle, Jurvis wrote:

thanks for the quick responses.  seems to be working ok for me, but...

[OT]

I keep hitting this issue where ceph-deploy will not mkdir /etc/ceph/
before it tries to write cluster configuration to
/etc/ceph/{cluster}.conf.  Manually creating the dir on each mon node
allows me to issue a ceph-deploy mon create rc-ceph-node1 successfully.
Is this a bug?  a missing step from the quickstart?

I first gave the quickstart a try under raring ringtail last week.  it
didn't nip me the first time through the quickstart, but after ceph-deploy
purgedata …, ceph-deploy forgetkeys, it has plagued me each time since.




--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] use ceph-rest-api without rados-gw

2013-10-10 Thread Dan Mick



On 10/10/2013 06:41 AM, su kucherova wrote:

Hi, I want to use ceph-rest-api, but I don't want to use rados-gw.
I have not set up rados-gw; I have osd+mon+mds.


ceph-rest-api has nothing to do with radosgw, so that should be fine.


I get an error while starting ceph-rest-api. I don't know what I am
missing about ceph-rest-api.
Can we not use ceph-rest-api to manage only osd+mon+mds?

  ceph-rest-api -c /etc/ceph/ceph.conf -n ceph

Traceback (most recent call last):
  File "/usr/bin/ceph-rest-api", line 59, in <module>
    rest,
  File "/usr/lib/python2.6/site-packages/ceph_rest_api.py", line 497, in generate_app
    addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
  File "/usr/lib/python2.6/site-packages/ceph_rest_api.py", line 103, in api_setup
    app.ceph_cluster = rados.Rados(name=clientname, conffile=conf)
  File "/usr/lib/python2.6/site-packages/rados.py", line 211, in __init__
    raise Error("rados_initialize failed with error code: %d" % ret)
rados.Error: rados_initialize failed with error code: -22


when you invoke it with -n ceph, you are saying "I will connect to the 
cluster as a client named client.ceph", so you must have keys, etc. to 
enable that.  Try leaving off -n ceph.





Thanks
Su




--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs


Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick
That looks interesting, but I cannot browse without making an account; 
can you make your source freely available?


On 08/14/2013 10:54 PM, Mikaël Cluseau wrote:

Hi lists,

in this release I see that the ceph command is not compatible with
python 3. The changes were not all trivial so I gave up, but for those
using gentoo, I made my ceph git repository available here with an
ebuild that forces the python version to 2.6 or 2.7:

git clone https://git.isi.nc/cloud/cloud-overlay.git

I upgraded from cuttlefish without any problem so good job
ceph-contributors :)

Have a nice day.




Re: [ceph-users] v0.67 Dumpling released

2013-08-16 Thread Dan Mick

OK, no worries.  Was just after maximum availability.

On 08/16/2013 08:19 PM, Mikaël Cluseau wrote:

On 08/17/2013 02:06 PM, Dan Mick wrote:

That loosk interesting, but I cannot browse without making an account;
can you make your source freely available?


gitlab's policy is the following :

Public access
If checked, this project can be cloned /without any/ authentication. It
will also be listed on the public access directory
(https://git.isi.nc/public). /Any/ user will have Guest permissions
(https://git.isi.nc/help/permissions) on the repository.


We support oauth (gmail has been extensively tested ;)) but accounts are
blocked by default so I don't know if you'll have instant access even
after an oauth login.




Re: [ceph-users] adding osds manually

2013-08-15 Thread Dan Mick
That cluster was not deployed by ceph-deploy; ceph-deploy has never put 
entries for the daemons into ceph.conf.



On 08/06/2013 12:08 PM, Kevin Weiler wrote:

Hi again Ceph devs,

I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph about this and we both
agreed it doesn't seem possible given the documentation. However, I have
an example of a ceph cluster that was deployed using ceph-deploy and it
seems to have non-sequential osds. I've pasted the output of ceph osd
tree and my ceph.conf here:

http://pastebin.com/0zXvwWcc

Is it possible to add osds non-sequentially? It seems like this
deployment is doing it. Cheers!



Re: [ceph-users] Start Stop OSD

2013-08-13 Thread Dan Mick
Adding back ceph-users; try not to turn public threads into private ones 
when the problem hasn't been resolved.


On 08/13/2013 04:42 AM, Joshua Young wrote:

So I put the journals on their own partitions and they worked just
fine. All night they were up doing normal operations. When running
initctl list | grep ceph I would get ...

ceph-mds-all-starter stop/waiting
ceph-mds-all start/running
ceph-osd-all start/running
ceph-osd-all-starter stop/waiting
ceph-all start/running
ceph-mon-all start/running
ceph-mon-all-starter stop/waiting
ceph-mon (ceph/cloud3) start/running, process 1864
ceph-create-keys stop/waiting
ceph-osd (ceph/8) start/running, process 2136
ceph-osd (ceph/20) start/running, process 5281
ceph-osd (ceph/15) start/running, process 5292
ceph-osd (ceph/14) start/running, process 2135
ceph-mds stop/waiting



This is correct. There are 4 OSDs on this server. Now I have come in
today and running ceph -s still says all of my OSDS are up. When I run
the same command as above I only see OSD 14. When I go into the logs of
one of the others (OSD 15 ) I see this...


Does ps agree that only one OSD is left running?


2013-08-13 06:37:48.414775 7ffa2099a7c0  0 ceph version 0.61.7 
(8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 16597
2013-08-13 06:37:48.421208 7ffa2099a7c0  0 filestore(/var/lib/ceph/osd/ceph-15) 
lock_fsid failed to lock /var/lib/ceph/osd/ceph-15/fsid, is another ceph-osd 
still running? (11) Resource temporarily unavailable
2013-08-13 06:37:48.421246 7ffa2099a7c0 -1 filestore(/var/lib/ceph/osd/ceph-15) 
FileStore::mount: lock_fsid failed
2013-08-13 06:37:48.421274 7ffa2099a7c0 -1 ^[[0;31m ** ERROR: error converting 
store /var/lib/ceph/osd/ceph-15: (16) Device or resource busy^[[0m
2013-08-13 06:37:48.445927 7f0fbb6687c0  0 ceph version 0.61.7 
(8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 16659
2013-08-13 06:37:48.447470 7f0fbb6687c0  0 filestore(/var/lib/ceph/osd/ceph-15) 
lock_fsid failed to lock /var/lib/ceph/osd/ceph-15/fsid, is another ceph-osd 
still running? (11) Resource temporarily unavailable
2013-08-13 06:37:48.447480 7f0fbb6687c0 -1 filestore(/var/lib/ceph/osd/ceph-15) 
FileStore::mount: lock_fsid failed
2013-08-13 06:37:48.447500 7f0fbb6687c0 -1 ^[[0;31m ** ERROR: error converting 
store /var/lib/ceph/osd/ceph-15: (16) Device or resource busy^[[0m
2013-08-13 06:37:48.474852 7f28f332c7c0  0 ceph version 0.61.7 
(8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 16752
2013-08-13 06:37:48.476695 7f28f332c7c0  0 filestore(/var/lib/ceph/osd/ceph-15) 
lock_fsid failed to lock /var/lib/ceph/osd/ceph-15/fsid, is another ceph-osd 
still running? (11) Resource temporarily unavailable
2013-08-13 06:37:48.476707 7f28f332c7c0 -1 filestore(/var/lib/ceph/osd/ceph-15) 
FileStore::mount: lock_fsid failed
2013-08-13 06:37:48.476728 7f28f332c7c0 -1 ^[[0;31m ** ERROR: error converting 
store /var/lib/ceph/osd/ceph-15: (16) Device or resource busy^[[0m
2013-08-13 06:37:48.501723 7f84618467c0  0 ceph version 0.61.7 
(8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 16845
2013-08-13 06:37:48.503919 7f84618467c0  0 filestore(/var/lib/ceph/osd/ceph-15) 
lock_fsid failed to lock /var/lib/ceph/osd/ceph-15/fsid, is another ceph-osd 
still running? (11) Resource temporarily unavailable
2013-08-13 06:37:48.503932 7f84618467c0 -1 filestore(/var/lib/ceph/osd/ceph-15) 
FileStore::mount: lock_fsid failed
2013-08-13 06:37:48.503955 7f84618467c0 -1 ^[[0;31m ** ERROR: error converting 
store /var/lib/ceph/osd/ceph-15: (16) Device or resource busy^[[0m
2013-08-13 06:37:48.529665 7f29c2a367c0  0 ceph version 0.61.7 
(8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 16944
2013-08-13 06:37:48.531227 7f29c2a367c0  0 filestore(/var/lib/ceph/osd/ceph-15) 
lock_fsid failed to lock /var/lib/ceph/osd/ceph-15/fsid, is another ceph-osd 
still running? (11) Resource temporarily unavailable
2013-08-13 06:37:48.531239 7f29c2a367c0 -1 filestore(/var/lib/ceph/osd/ceph-15) 
FileStore::mount: lock_fsid failed
2013-08-13 06:37:48.531260 7f29c2a367c0 -1 ^[[0;31m ** ERROR: error converting 
store /var/lib/ceph/osd/ceph-15: (16) Device or resource busy^[[0m



So the OSD can't get a lock on its data.  You aren't attempting to share 
devices/partitions for OSD storage as well, are you?
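The "lock_fsid failed ... (11) Resource temporarily unavailable" lines mean another ceph-osd process already holds an advisory lock on the OSD's fsid file. A minimal Python sketch of the same flock pattern (the lock file path here is made up):

```python
import errno
import fcntl
import os
import tempfile

def try_lock(path):
    """Take an exclusive, non-blocking flock on path.

    Returns the open file on success, or None if another holder already
    has it -- the same EAGAIN (errno 11) the OSD log reports.
    """
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError as e:
        assert e.errno == errno.EAGAIN
        f.close()
        return None

lock_path = os.path.join(tempfile.gettempdir(), "demo-fsid")
first = try_lock(lock_path)   # acquires the lock, like the running OSD
second = try_lock(lock_path)  # fails: "Resource temporarily unavailable"
print(first is not None, second is None)
```

That is why a second ceph-osd started against the same data directory loops on lock_fsid errors while the original daemon is still alive.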


What is your cluster configuration?



Any idea? Thanks



-Original Message-
From: Dan Mick [mailto:dan.m...@inktank.com]
Sent: Monday, August 12, 2013 5:50 PM
To: Joshua Young
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Start Stop OSD



On 08/12/2013 04:49 AM, Joshua Young wrote:

I have 2 issues that I can not find a solution to.

First: I am unable to stop / start any osd by command. I have deployed
with ceph-deploy on Ubuntu 13.04 and everything seems to be working
fine. I have 5 hosts, 5 mons and 20 osds.

Using initctl list | grep ceph gives me



ceph-osd (ceph/15) start/running, process 2122


The fact that only one is output means

Re: [ceph-users] Start Stop OSD

2013-08-12 Thread Dan Mick



On 08/12/2013 04:49 AM, Joshua Young wrote:

I have 2 issues that I can not find a solution to.

First: I am unable to stop / start any osd by command. I have deployed
with ceph-deploy on Ubuntu 13.04 and everything seems to be working
fine. I have 5 hosts, 5 mons and 20 osds.

Using initctl list | grep ceph gives me



ceph-osd (ceph/15) start/running, process 2122


The fact that only one is output means that upstart believes there's 
only one OSD job running.  Are you sure the other daemons are actually 
alive and started by upstart?



However OSD 12 13 14 15 are all on this server.

sudo stop ceph-osd id=12

gives me stop: Unknown instance: ceph/12

Does anyone know what is wrong? Nothing in logs.

Also, when trying to put the journal on an SSD everything works fine. I
can add all 4 disks per host to the same SSD. The issue is when I
restart the server, only 1 out of the 3 OSDs will come back up. Has
anyone else had this issue?


Are you using partitions on the SSD?  If not, that's obviously going to 
be a problem; the device is usable by only one journal at a time.
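In ceph.conf terms that looks like the sketch below (device names and OSD ids here are hypothetical): each OSD points at its own journal partition on the SSD, never at the shared raw device:

```ini
[osd.12]
osd journal = /dev/sdg1

[osd.13]
osd journal = /dev/sdg2

[osd.14]
osd journal = /dev/sdg3

[osd.15]
osd journal = /dev/sdg4
```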



Re: [ceph-users] Ceph instead of RAID

2013-08-12 Thread Dan Mick



On 08/12/2013 06:49 PM, Dmitry Postrigan wrote:

Hello community,

I am currently installing some backup servers with 6x3TB drives in them. I 
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.

Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks will be 
local, so I could simply create
6 local OSDs + a monitor, right? Is there anything I need to watch out for in 
such configuration?


I mean, you can certainly do that.  1 mon and all OSDs on one server is 
not particularly fault-tolerant, perhaps, but if you have multiple such 
servers in the cluster, sure, why not?



Another thing. I am using ceph-deploy and I have noticed that when I do this:

 ceph-deploy --verbose  new localhost

the ceph.conf file is created in the current folder instead of /etc. Is this 
normal?


Yes.  ceph-deploy also distributes ceph.conf where it needs to go.


Also, in the ceph.conf there's a line:
 mon host = ::1
Is this normal or I need to change this to point to localhost?



You want to configure the machines such that they have resolvable 'real' 
IP addresses:


http://ceph.com/docs/master/start/quick-start-preflight/#hostname-resolution
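A quick way to sanity-check that a mon hostname resolves to a real address rather than only loopback (this checker is my own sketch, not part of ceph-deploy):

```python
import socket

def resolves_to_loopback_only(host):
    """True if every address the resolver returns for host is loopback."""
    addrs = {info[4][0] for info in socket.getaddrinfo(host, None)}
    return all(a == "::1" or a.startswith("127.") for a in addrs)

# "localhost" resolves only to loopback, which is why "ceph-deploy new
# localhost" produces mon host = ::1 -- use the machine's real hostname.
print(resolves_to_loopback_only("localhost"))
```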



Thanks for any feedback on this.

Dmitry






Re: [ceph-users] STGT targets.conf example

2013-08-09 Thread Dan Mick
Awesome. Thanks Darryl. Do you want to propose a fix to stgt, or shall I?
On Aug 8, 2013 7:21 PM, Darryl Bond db...@nrggos.com.au wrote:

 Dan,
 I found that the tgt-admin perl script looks for a local file

 if (-e $backing_store && ! -d $backing_store && $can_alloc == 1) {

  A bit nasty, but I created some empty files relative to / of the same
 path as the RBD backing store which worked around the problem.

 mkdir /iscsi-spin
 touch /iscsi-spin/test

 Lets me restart tgtd and have the LUN created properly.

 tgt-admin --dump is also not that useful, doesn't output the backing
 store type.

 # tgt-admin --dump
 default-driver iscsi

 <target iqn.2013.com.ceph:test>
 backing-store iscsi-spin/test
 initiator-address 192.168.6.100
 </target>


 Darryl

 On 08/09/13 07:23, Dan Mick wrote:

 On 08/04/2013 10:15 PM, Darryl Bond wrote:

 I am testing scsi-target-utils tgtd with RBD support.
 I have successfully created an iscsi target using RBD as an iscsi target
 and tested it.
 It backs onto a rados pool iscsi-spin with a RBD called test.
 Now I want it to survive a reboot. I have created a conf file

 <target iqn.2008-09.com.ceph:test>
   <backing-store iscsi-spin/test>
     bs-type rbd
     path iscsi-spin/test
   </backing-store>
 </target>

 When I restart tgtd It creates the target but doesn't connect the
 backing store.
 The tool tgt-admin has a test mode for the configuration file

 [root@cephgw conf.d]# tgt-admin -p -e
 # Adding target: iqn.2008-09.com.ceph:test
 tgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T
 iqn.2008-09.com.ceph:test
 # Skipping device: iscsi-spin/test
 # iscsi-spin/bashful-spin does not exist - please check the
 configuration file
 tgtadm -C 0 --lld iscsi --op bind --mode target --tid 1 -I ALL

 It looks to me like tgtd supports RBD backing stores but the
 configuration utilities don't.

 I have not tried config files or tgt-admin to any great extent, but it
 doesn't look to me like there are backend dependencies in those tools
 (or I would have modified them at the time :)), but, that said, there
 may be some weird problem.  tgt-admin is a Perl script that could be
 instrumented to figure out what's going on.

 I do know that the syntax of the config file is dicey.

  Anyone tried this?
 What have I missed?

 Regards
 Darryl


 The contents of this electronic message and any attachments are intended
 only for the addressee and may contain legally privileged, personal,
 sensitive or confidential information. If you are not the intended
 addressee, and have received this email, any transmission, distribution,
 downloading, printing or photocopying of the contents of this message or
 attachments is strictly prohibited. Any legal privilege or
 confidentiality attached to this message and attachments is not waived,
 lost or destroyed by reason of delivery to any person other than
 intended addressee. If you have received this message and are not the
 intended addressee you should notify the sender by return email and
 destroy all copies of the message and any attachments. Unless expressly
 attributed, the views expressed in this email do not necessarily
 represent the views of the company.






Re: [ceph-users] minimum object size in ceph

2013-08-06 Thread Dan Mick
No minimum object size.  As for "key", not sure what you mean; the closest 
thing to an object 'key' is its name, but it's obvious from routines 
like rados_read() and rados_write() that that's a const char *.  Did you 
mean some other key?


On 08/06/2013 12:13 PM, Nulik Nol wrote:

Hi,

when using the C api (RADOS) what is the minimum object size ? And
what is the key type ? (uint64_t, char[], or something like that ?)

TIA
Nulik





Re: [ceph-users] Optimize Ceph cluster (kernel, osd, rbd)

2013-07-22 Thread Dan Mick
I would get the cluster up and running and do some experiments before I 
spent any time on optimization, much less all this.


On 07/20/2013 09:35 AM, Ta Ba Tuan wrote:

Please help me!


On 07/20/2013 02:11 AM, Ta Ba Tuan wrote:

Hi everyone,

I have *3 nodes (running MON and MDS)*
and *6 data nodes ( 84 OSDs**)*
Each data nodes has configuraions:
  - CPU: 24 processor * Core Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
  - RAM: 32GB
  - Disk: 14*4TB
(14disks *4TB *6 data nodes= 84 OSDs)

To optimize the Ceph cluster, *I adjusted some kernel arguments*
(nr_requests in the queue, and increased read throughput):

#Adjust nr_requests in queue (staying in mem - default is 128)
echo 1024 > /sys/block/sdb/queue/nr_requests
echo noop > /sys/block/sda/queue/scheduler   (default: noop deadline [cfq])
#Increase read throughput  (default: 128)
echo 512 > /sys/block/*/queue/read_ahead_kb

And, *tuning Ceph configuraion options below:*

[client]

 rbd cache = true
 rbd cache size = 536870912
 rbd cache max dirty = 134217728
 rbd cache target dirty = 33554432
 rbd cache max dirty age = 5

[osd]
osd data = /var/lib/ceph/osd/cloud-$id
osd journal = /var/lib/ceph/osd/cloud-$id/journal
osd journal size = 1
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = rw,noatime,inode64,logbsize=250k

keyring = /var/lib/ceph/osd/cloud-$id/keyring.osd.$id
#increasing the number may increase the request processing rate
osd op threads = 24
#The number of disk threads, which are used to perform background disk
intensive OSD operations such as scrubbing and snap trimming
osd disk threads =24
#The number of active recovery requests per OSD at one time. More
requests will accelerate recovery, but the requests places an
increased load on the cluster.
osd recovery max active =1
#writing direct to the journal.
#Allow use of libaio to do asynchronous writes
journal dio = true
journal aio = true
#Synchronization interval:
#The maximum/minimum interval in seconds for synchronizing the filestore.
filestore max sync interval = 100
filestore min sync interval = 50
#Defines the maximum number of in progress operations the file store
accepts before blocking on queuing new operations.
filestore queue max ops = 2000
#The maximum number of bytes for an operation
filestore queue max bytes = 536870912
#The maximum number of operations the filestore can commit.
filestore queue committing max ops = 2000 (default =500)
#The maximum number of bytes the filestore can commit.
filestore queue committing max bytes = 536870912
#When you add or remove Ceph OSD Daemons to a cluster, the CRUSH
algorithm will want to rebalance the cluster by moving placement
groups to or from Ceph OSD Daemons to restore the balance. The process
of migrating placement groups and the objects they contain can reduce
the cluster’s operational performance considerably. To maintain
operational performance, Ceph performs this migration with
‘backfilling’, which allows Ceph to set backfill operations to a lower
priority than requests to read or write data.
osd max backfills = 1


Tomorrow I'm going to deploy the Ceph cluster. I have very little
experience managing Ceph, so I hope someone can give me advice about
the above settings and guide me on how best to optimize the cluster.

Thank you so much!
--tuantaba









Re: [ceph-users] Problems with tgt with ceph support

2013-07-15 Thread Dan Mick
Apologies; I don't really understand the results.  They're labeled "RBD 
Device /dev/rbd/rbd/iscsi-image-part1 exported with tgt" and
"TGT-RBD connector".  Is the former nothing to do with tgt (i.e., just 
the kernel block device), and the latter is stgt/tgtd?


Do you interpret them to say the first run gave avg 258MB/s, and the 
second run 198MB/s?


On 07/15/2013 02:41 AM, Toni F. [ackstorm] wrote:

Here's my results.

The performance test was seq write

On 15/07/13 10:12, Toni F. [ackstorm] wrote:

I'm going to do a performance test with fio to see the difference.

Regards

On 12/07/13 18:15, Dan Mick wrote:


Ceph performance is a very very complicated subject. How does that
compare to other access methods?  Say, rbd import/export for an easy
test?

On Jul 12, 2013 8:22 AM, Toni F. [ackstorm]
toni.fuen...@ackstorm.es wrote:

It works, but the performance is very poor. 100MB/s or less

Which are your performance experience?

Regards

On 12/07/13 13:56, Toni F. [ackstorm] wrote:

It works!

Thanks for all

On 12/07/13 11:23, Toni F. [ackstorm] wrote:

Yes! It seems I hadn't compiled in the rbd support.

System:
State: ready
debug: off
LLDs:
iscsi: ready
Backing stores:
bsg
sg
null
ssc
aio
rdwr (bsoflags sync:direct)
Device types:
disk
cd/dvd
osd
controller
changer
tape
passthrough
iSNS:
iSNS=Off
iSNSServerIP=
iSNSServerPort=3205
iSNSAccessControl=Off

I'm going to recompile it
Thanks a lot!
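For scripting the same check, the backing-store list can be pulled out of the `tgtadm --op show --mode system` output; the parser and the sample text below are my own sketch of that output's shape, not part of tgt:

```python
def backing_stores(tgtadm_output):
    """Extract backing-store names from 'tgtadm --op show --mode system' text."""
    stores, in_section = [], False
    for line in tgtadm_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Backing stores:"):
            in_section = True
            continue
        if in_section:
            # the section ends at the next unindented heading
            if not line.startswith((" ", "\t")):
                break
            stores.append(stripped.split()[0])
    return stores

sample = """System:
    State: ready
Backing stores:
    bsg
    sg
    rdwr (bsoflags sync:direct)
Device types:
    disk
"""
print("rbd" in backing_stores(sample))  # False: this tgtd lacks rbd support
```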

On 11/07/13 07:45, Dan Mick wrote:



On 07/10/2013 04:12 AM, Toni F. [ackstorm] wrote:

Hi all,

I have installed the v0.37 of tgt.

To test this feature i follow the
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
guide

When i launch the command:

tgtadm --lld iscsi --mode logicalunit --op new
--tid 1 --lun 0
--backing-store iscsi-image --bstype rbd

fails

First i think that lun cannot be 0 because lun 0
is used by the
controller (previous command)


This worked when I first wrote the backend, but tgt
may have changed; I'll investigate and change the
blog entry if so.  Thanks.


If i launch the correct command with lun 1 i have
this error:

tgtadm: invalid request

In syslog:

Jul 10 12:54:03 datastore-lnx001 tgtd:
device_mgmt(245) sz:28
params:path=iscsi-image,bstype=rbd
Jul 10 12:54:03 datastore-lnx001 tgtd:
tgt_device_create(532) failed to
find bstype, rbd

What's wrong? not supported?



Where did you get your tgtd?  Was it built with rbd
support (CEPH_RBD defined
in the environment for make)?

sudo ./tgtadm --lld iscsi --op show --mode system

should tell you.

How did you set up access to ceph.conf?








--

Toni Fuentes Rico
toni.fuen...@ackstorm.es
Administración de Sistemas

Oficina central: 902 888 345

ACK STORM, S.L.
ISO 9001:2008 (Cert.nº. 536932)
http://ackstorm.es

Este mensaje electrónico contiene información de ACK STORM, S.L.
que es privada y confidencial, siendo para el uso exclusivo de la
persona(s) o entidades arriba  mencionadas. Si usted no es el
destinatario señalado, le informamos que cualquier divulgación,
copia, distribución o uso de los contenidos está prohibida. Si
usted ha recibido este mensaje por error, por favor borre su
contenido y comuníquenoslo en la dirección ackst...@ackstorm.es







Re: [ceph-users] Problems with tgt with ceph support

2013-07-12 Thread Dan Mick
Ceph performance is a very very complicated subject. How does that compare
to other access methods?  Say, rbd import/export for an easy test?
On Jul 12, 2013 8:22 AM, Toni F. [ackstorm] toni.fuen...@ackstorm.es
wrote:

 It works, but the performance is very poor. 100MB/s or less

 Which are your performance experience?

 Regards

 On 12/07/13 13:56, Toni F. [ackstorm] wrote:

 It works!

 Thanks for all

 On 12/07/13 11:23, Toni F. [ackstorm] wrote:

 Yes! It seems I hadn't compiled in the rbd support.

 System:
 State: ready
 debug: off
 LLDs:
 iscsi: ready
 Backing stores:
 bsg
 sg
 null
 ssc
 aio
 rdwr (bsoflags sync:direct)
 Device types:
 disk
 cd/dvd
 osd
 controller
 changer
 tape
 passthrough
 iSNS:
 iSNS=Off
 iSNSServerIP=
 iSNSServerPort=3205
 iSNSAccessControl=Off

 I'm going to recompile it
 Thanks a lot!

 On 11/07/13 07:45, Dan Mick wrote:



 On 07/10/2013 04:12 AM, Toni F. [ackstorm] wrote:

 Hi all,

 I have installed the v0.37 of tgt.

 To test this feature i follow the
 http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ guide

 When i launch the command:

 tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 0
 --backing-store iscsi-image --bstype rbd

 fails

 First i think that lun cannot be 0 because lun 0 is used by the
 controller (previous command)


 This worked when I first wrote the backend, but tgt may have changed;
 I'll investigate and change the blog entry if so.  Thanks.


 If i launch the correct command with lun 1 i have this error:

 tgtadm: invalid request

 In syslog:

 Jul 10 12:54:03 datastore-lnx001 tgtd: device_mgmt(245) sz:28
 params:path=iscsi-image,bstype=rbd
 Jul 10 12:54:03 datastore-lnx001 tgtd: tgt_device_create(532) failed to
 find bstype, rbd

 What's wrong? not supported?



 Where did you get your tgtd?  Was it built with rbd support (CEPH_RBD
 defined
 in the environment for make)?

 sudo ./tgtadm --lld iscsi --op show --mode system

 should tell you.

 How did you set up access to ceph.conf?












Re: [ceph-users] some problem install ceph-deploy(china)

2013-05-31 Thread Dan Mick
argparse is a standard Python module, and should be available with your 
Python installation, or at least optionally downloadable (on Ubuntu, 
it's part of the python2.7 package). It's strange that you don't already 
have it, but try checking your OS install facilities for it first.
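A quick probe, the same way a bootstrap script might decide whether it needs to download the module (argparse ships in the stdlib from Python 2.7/3.2 on):

```python
# Probe for argparse before falling back to downloading it.
try:
    import argparse  # noqa: F401 -- we only care whether the import works
    have_argparse = True
except ImportError:
    have_argparse = False

print(have_argparse)
```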


On 05/31/2013 06:29 AM, 张鹏 wrote:

hello everyone
I come from China. When I install ceph-deploy on my server I hit a
problem: when I run ./bootstrap I cannot download argparse, and I found
the URL it uses is an http address.

When I enter the same address in my web browser with https:// in front,
it downloads fine, but with plain http it does not. Maybe this problem
only happens in China, so I want to know how I can change the http
address to an https address. Thank you.




[ceph-users] Fwd: Fwd: some problem install ceph-deploy(china)

2013-05-30 Thread Dan Mick

I think you meant this to go to ceph-users:

 Original Message 
Subject:Fwd: some problem install ceph-deploy(china)
Date:   Fri, 31 May 2013 02:54:56 +0800
From:   张鹏 zphj1...@gmail.com
To: dan.m...@inktank.com



hello everyone
I come from China. When I install ceph-deploy on my server I hit a
problem: when I run ./bootstrap I cannot download argparse, and I found
the URL it uses is an http address.

When I enter the same address in my web browser with https:// in front,
it downloads fine, but with plain http it does not. Maybe this problem
only happens in China, so I want to know how I can change the http
address to an https address. Thank you.
[inline image 1]






Re: [ceph-users] ceph v6.1, rbd-fuse issue, rbd_list: error %d Numerical result out of range

2013-05-21 Thread Dan Mick

Hi Sean:

It looks to me like this is the result of the simple-minded[1] strategy 
for allocating a return buffer for rbd_list():


ibuf_len = 1024;
ibuf = malloc(ibuf_len);
actual_len = rbd_list(ioctx, ibuf, &ibuf_len);
if (actual_len < 0) {
	simple_err("rbd_list: error %d\n", actual_len);
	return;
}

An easy fix would be to catch the actual_len < 0 case and reallocate 
ibuf with the returned ibuf_len size.


I also note that ibuf is never freed, which is not great.  In fact that 
whole enumerate_images() routine is not what you'd call very solid. 
Here's a mostly-untested patch you can try if you like (I'll test tomorrow):


diff --git a/src/rbd_fuse/rbd-fuse.c b/src/rbd_fuse/rbd-fuse.c
index 5a4bfe2..5411ff8 100644
--- a/src/rbd_fuse/rbd-fuse.c
+++ b/src/rbd_fuse/rbd-fuse.c
@@ -93,8 +93,14 @@ enumerate_images(struct rbd_image **head)
 	ibuf = malloc(ibuf_len);
 	actual_len = rbd_list(ioctx, ibuf, &ibuf_len);
 	if (actual_len < 0) {
-		simple_err("rbd_list: error %d\n", actual_len);
-		return;
+		/* ibuf_len now set to required length */
+		ibuf = realloc(ibuf, ibuf_len);
+		actual_len = rbd_list(ioctx, ibuf, &ibuf_len);
+		if (actual_len < 0) {
+			/* shouldn't happen */
+			simple_err("rbd_list:", actual_len);
+			return;
+		}
 	}
 
 	fprintf(stderr, "pool %s: ", pool_name);
@@ -102,10 +108,11 @@ enumerate_images(struct rbd_image **head)
 	     ip += strlen(ip) + 1)  {
 		fprintf(stderr, "%s, ", ip);
 		im = malloc(sizeof(*im));
-		im->image_name = ip;
+		im->image_name = strdup(ip);
 		im->next = *head;
 		*head = im;
 	}
+	free(ibuf);
 	fprintf(stderr, "\n");
 	return;
 }


--
[1] it was my simple mind...
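The shape of the fix -- call once with a guessed buffer, and when the callee reports -ERANGE along with the required size, retry at that size -- is the standard pattern for list-into-caller-buffer APIs. A language-neutral sketch, where `fake_rbd_list` is only a stand-in for the real librbd call:

```python
import errno

def fake_rbd_list(names, buf_size):
    """Stand-in for rbd_list(): needs len(payload) bytes for the
    NUL-separated name list; returns (-ERANGE, required_size) when the
    caller's buffer is too small, else (bytes_used, buf_size)."""
    payload = "\0".join(names) + "\0"
    if buf_size < len(payload):
        return -errno.ERANGE, len(payload)
    return len(payload), buf_size

def list_images(names, first_guess=8):
    rc, size = fake_rbd_list(names, first_guess)
    if rc < 0:
        # buffer too small: size now holds the required length, retry once
        rc, size = fake_rbd_list(names, size)
        assert rc >= 0, "shouldn't happen"
    return rc

print(list_images(["img1", "img2", "longer-image-name"]))
```

The retry can only fail if the image list grew between the two calls, which is why a single retry (or a small loop) is usually considered enough.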



On 05/17/2013 02:34 AM, Sean wrote:

Hi everyone

The image files don't show up in the mount point when using the command
rbd-fuse -p poolname -c /etc/ceph/ceph.conf /aa

but other pools' image files do appear with the same command. I have also
created larger and more numerous images than in that pool, and those work fine.

How can I track down the issue?

It reports the below errors after enabling debug output of Fuse options.

root@ceph3:/# rbd-fuse -p qa_vol /aa -d
FUSE library version: 2.8.6
nullpath_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.17
flags=0x047b
max_readahead=0x0002
INIT: 7.12
flags=0x0031
max_readahead=0x0002
max_write=0x0002
unique: 1, success, outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56
getattr /
rbd_list: error %d
: Numerical result out of range
unique: 2, success, outsize: 120
unique: 3, opcode: OPENDIR (27), nodeid: 1, insize: 48
opendir flags: 0x98800 /
rbd_list: error %d
: Numerical result out of range
opendir[0] flags: 0x98800 /
unique: 3, success, outsize: 32
unique: 4, opcode: READDIR (28), nodeid: 1, insize: 80
readdir[0] from 0
unique: 4, success, outsize: 80
unique: 5, opcode: READDIR (28), nodeid: 1, insize: 80
unique: 5, success, outsize: 16
unique: 6, opcode: RELEASEDIR (29), nodeid: 1, insize: 64
releasedir[0] flags: 0x0
unique: 6, success, outsize: 16


thanks.
Sean Cao







Re: [ceph-users] Determining when an 'out' OSD is actually unused

2013-05-21 Thread Dan Mick
Yes, with the proviso that you really mean "kill the OSD when clean".
Marking out is step 1.


Re: [ceph-users] Determining when an 'out' OSD is actually unused

2013-05-20 Thread Dan Mick



On 05/20/2013 01:33 PM, Alex Bligh wrote:

If I want to remove an OSD, I use 'ceph osd out' before taking it down, i.e. 
stopping the OSD process and removing the disk.

How do I (preferably programmatically) tell when it is safe to stop the OSD 
process? The documentation says 'ceph -w', which is not especially helpful (a) 
if I want to do it programmatically, or (b) if there are other problems in the 
cluster so ceph was not reporting HEALTH_OK to start with.

Is there a better way?



We've had some discussions about this recently, but there's no great way 
of doing this right now.  We should probably have a query option that 
returns number of PGs on this OSD or some such.
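In the absence of such a query option, the check can be approximated by parsing `ceph pg dump --format=json` and counting PGs whose acting set still contains the OSD. This is a sketch, not an official tool; the exact JSON layout (the `pg_stats` key in particular) is an assumption that should be verified against your release.

```python
import json
import subprocess

def pgs_on_osd(pg_stats, osd_id):
    """Count PGs whose acting set includes osd_id, and how many of
    those are not yet active+clean."""
    total = 0
    not_clean = 0
    for pg in pg_stats:
        if osd_id in pg.get("acting", []):
            total += 1
            if pg.get("state") != "active+clean":
                not_clean += 1
    return total, not_clean

def safe_to_stop(osd_id):
    """True when no PG still lists the OSD in its acting set.

    Assumes the dump exposes a top-level 'pg_stats' list; field names
    vary between releases, so check against your version's output.
    """
    out = subprocess.check_output(["ceph", "pg", "dump", "--format=json"])
    dump = json.loads(out)
    total, _ = pgs_on_osd(dump.get("pg_stats", []), osd_id)
    return total == 0
```

Marking the OSD out first and polling `safe_to_stop()` until it returns True would approximate the "is it unused yet" check without relying on overall HEALTH_OK.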





Re: [ceph-users] Growing the cluster.

2013-05-20 Thread Dan Mick
What does your crushmap look like?  There's a good chance you're 
choosing first hosts, and then OSDs, which means you can't come up with 
3 replicas (because there are only two hosts).


Try:
ceph -o my.crush.map osd getcrushmap
crushtool -i my.crush.map --test --output-csv

and then look at the .csv files created in that directory; that 
simulates some random object placements, and will let you know which 
OSDs the crushmap chose.  I bet you'll see that the data pool isn't 
replicating to 3 OSDs.
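Concretely, the difference shows up in the rule's chooseleaf step. A hypothetical sketch (rule names and the rest of the rule omitted): the first line caps a 2-host cluster at 2 replicas, while the second lets CRUSH pick 3 OSDs even when two share a host, at the cost of failure isolation.

```
# Separates replicas across hosts: at most 2 replicas on a 2-host cluster
step chooseleaf firstn 0 type host
# Chooses devices directly: 3 replicas possible on 2 hosts
step chooseleaf firstn 0 type osd
```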


On 05/20/2013 11:51 AM, Nicolas Fernandez wrote:

Hello,
I'm deploying a test cluster on 0.61.2 version between two nodes
(OSD/MDS), and another (MON).
I have a problem making my cluster grow: today I've added an OSD to a node
that already had one. I did a reweight and added a replica. The crushmap is
up to date, but now I'm getting some pgs stuck unclean. I've been checking
the tunables options, but that hasn't solved the issue. How can I fix the
health of the cluster?

My cluster status:

# ceph -s
health HEALTH_WARN 192 pgs degraded; 177 pgs stuck unclean; recovery
10910/32838 degraded (33.224%); clock skew detected on mon.b
monmap e1: 3 mons at
{a=192.168.2.144:6789/0,b=192.168.2.194:6789/0,c=192.168.2.145:6789/0},
election epoch 148, quorum 0,1,2 a,b,c
osdmap e576: 3 osds: 3 up, 3 in
 pgmap v17715: 576 pgs: 79 active, 305 active+clean, 98
active+degraded, 94 active+clean+degraded; 1837 MB data, 6778 MB used,
440 GB / 446 GB avail; 10910/32838 degraded (33.224%)
mdsmap e136: 1/1/1 up {0=a=up:active}

The replica configuration is:

pool 0 'data' rep size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 192 pgp_num 192 last_change 576 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash
rjenkins pg_num 192 pgp_num 192 last_change 556 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
pg_num 192 pgp_num 192 last_change 1 owner 0

OSD Tree:

#ceph osd tree

# id    weight  type name           up/down reweight
-1      3       root default
-3      3       rack unknownrack
-2      1       host ceph01
0       1       osd.0               up      1
-4      2       host ceph02
1       1       osd.1               up      1
2       1       osd.2               up      1

Thanks.








Re: [ceph-users] shared images

2013-05-13 Thread Dan Mick



On 05/13/2013 09:55 AM, Gregory Farnum wrote:

On Mon, May 13, 2013 at 9:10 AM, Harald Rößler harald.roess...@btd.de wrote:


Hi Together

is there a description of how a shared image works in detail? Can such
an image be used for a shared file system mounted on two virtual machines
(KVM)? In my case, write on one machine and read-only on the other KVM.
Are the changes visible on the read-only KVM?


The image is just striped across RADOS objects. In general you can
think of it behaving exactly like a hard drive connected to your
computer over iSCSI — a proper shared FS (eg, OCFS2) will work on top
of it, but there's no magic that makes running an ext4 mount on two
machines work...


Also, if you're talking about RBD images, there can be VM-side caching, 
and so there's no guarantee that writes from the writing VM will be seen 
by the readonly VM.  rbd isn't meant for sharing.  You want a filesystem 
for things like that.
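If you do end up layering a cluster filesystem such as OCFS2 on top of a shared RBD image, the VM-side caching mentioned above should be disabled on every client. One way (a sketch; confirm the option name against your release's documentation) is in the client section of ceph.conf:

```
[client]
    rbd cache = false
```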



Re: [ceph-users] Urgent Notice to Users on 0.61.1

2013-05-10 Thread Dan Mick
I also just pushed a fix to the cuttlefish branch, so if you want 
packages that fix this, you can get them from gitbuilders using the 
testing versions, branch cuttlefish.


Thanks, Mike, for pointing this out!

On 05/10/2013 08:27 PM, Mike Dawson wrote:

Anyone running 0.61.1,

Watch out for high disk usage due to a file likely located at
/var/log/ceph/ceph-mon.mon-name.tdump. This file contains debugging
for monitor transactions. This debugging was added in the past week or
so to track down another anomaly. It is not necessary (or useful unless
you are debugging monitor transactions).

Depending on the load of your cluster, the tdump file may become large
quickly. I've seen it grow over 50GB/day.

IMHO, the default setting should be false, but it was merged in prior
to 0.61.1 with a default of true. If you are on 0.61.1, you can
confirm the settings with:

ceph --admin-daemon /var/run/ceph/ceph-mon.mon-name.asok config show |
grep mon_debug_dump

To disable it, add mon debug dump transactions = false to ceph.conf,
and restart your monitors. Alternatively, you can inject the argument or
add the cmdline flag.
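The ceph.conf workaround Mike describes amounts to a fragment like the following, followed by a restart of each monitor:

```
[mon]
    mon debug dump transactions = false
```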

dmick has an issue started at:
http://tracker.ceph.com/issues/5024

Thanks,
Mike



Re: [ceph-users] mon crash

2013-04-15 Thread Dan Mick
Two is a strange choice for number of monitors; you really want an odd 
number.  With two, if either one fails (or you have a network fault),

the cluster is dead because there's no majority.

That said, we certainly don't expect monitors to die when the network 
fault goes away.  Searching the bug database reveals 
http://tracker.ceph.com/issues/4175, but this should have been included 
in v0.60.


I'll ask Joao to have a look.  Joao, can you have a look?  :)



On 04/15/2013 01:56 PM, Craig Lewis wrote:


I'm doing a test of Ceph in two colo facilities.  Since it's just a
test, I only have 2 VMs running, one in each colo.  Both VMs are runing
mon, mds, a single osd, and the RADOS gw.  Cephx is disabled.  I'm
testing if the latency between the two facilities (~20ms) is low enough
that I can run a single Ceph cluster in both locations.  If it doesn't
work out, I'll run two independent Ceph clusters with manual replication.

This weekend, the connection between the two locations was degraded.
The link had 37% packet loss, for less than a minute. When the link
returned to normal, the re-elected mon leader crashed.

Is this a real bug, or did this happen because I'm only running 2
nodes?  I'm trying to avoid bringing more nodes into this test. My VM
infrastructure is pretty weak, and I'm afraid that more nodes would
introduce more noise in the test.

I saw this happen once before (the primary colo had a UPS failure,
causing a switch reboot).  The same process crashed, with the same stack
trace.  When that happened, I ran sudo service ceph restart on the
machine with the crashed mon, and everything started up fine.  I haven't
restarted anything this time.

I tried to recreate the problem by stopping and starting the VPN between
the two locations, but that didn't trigger the crash.  I have some more
ideas on how to trigger, I'll continue trying today.



arnulf@ceph0:~$ lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.2 LTS
Release:        12.04
Codename:       precise

arnulf@ceph0:~$ uname -a
Linux ceph0 3.5.0-27-generic #46~precise1-Ubuntu SMP Tue Mar 26 19:33:21
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

arnulf@ceph0:~$ cat /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-testing/ precise main

arnulf@ceph0:~$ ceph -v
ceph version 0.60 (f26f7a39021dbf440c28d6375222e21c94fe8e5c)


ceph-mon.log from the non-elected master, mon.b:
2013-04-13 07:57:39.445098 7fde958f4700  0 mon.b@1(peon).data_health(20)
update_stats avail 85% total 17295768 used 1679152 avail 14738024
2013-04-13 07:58:35.150603 7fde950f3700  0 log [INF] : mon.b calling new
monitor election
2013-04-13 07:58:35.150876 7fde950f3700  1 mon.b@1(electing).elector(20)
init, last seen epoch 20
2013-04-13 07:58:39.445355 7fde958f4700  0
mon.b@1(electing).data_health(20) update_stats avail 85% total 17295768
used 1679152 avail 14738024
2013-04-13 07:58:40.192514 7fde958f4700  1 mon.b@1(electing).elector(21)
init, last seen epoch 21
2013-04-13 07:58:43.748907 7fde93dee700  0 -- 192.168.22.62:6789/0 
192.168.2.62:6789/0 pipe(0x2c56500 sd=25 :6789 s=2 pgs=108 cs=1
l=0).fault, initiating reconnect
2013-04-13 07:58:43.786209 7fde93ff0700  0 -- 192.168.22.62:6789/0 
192.168.2.62:6789/0 pipe(0x2c56500 sd=8 :6789 s=1 pgs=108 cs=2 l=0).fault
2013-04-13 07:59:13.050245 7fde958f4700  1 mon.b@1(probing) e1
discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client
elsewhere; we are not in quorum
2013-04-13 07:59:13.050277 7fde958f4700  1 mon.b@1(probing) e1
discarding message auth(proto 0 34 bytes epoch 1) v1 and sending client
elsewhere; we are not in quorum
2013-04-13 07:59:13.050285 7fde958f4700  1 mon.b@1(probing) e1
discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client
elsewhere; we are not in quorum
...

ceph-mon.log from the elected master, mon.a:
2013-04-13 07:57:41.756844 7f162be82700  0
mon.a@0(leader).data_health(20) update_stats avail 84% total 17295768
used 1797312 avail 14619864
2013-04-13 07:58:35.210875 7f162b681700  0 log [INF] : mon.a calling new
monitor election
2013-04-13 07:58:35.211081 7f162b681700  1 mon.a@0(electing).elector(20)
init, last seen epoch 20
2013-04-13 07:58:40.270547 7f162be82700  1 mon.a@0(electing).elector(21)
init, last seen epoch 21
2013-04-13 07:58:41.757032 7f162be82700  0
mon.a@0(electing).data_health(20) update_stats avail 84% total 17295768
used 1797312 avail 14619864
2013-04-13 07:58:43.441306 7f162b681700  0 log [INF] : mon.a@0 won
leader election with quorum 0,1
2013-04-13 07:58:43.560319 7f162b681700  0 log [INF] : pgmap v1684: 632
pgs: 632 active+clean; 9982 bytes data, 2079 MB used, 100266 MB / 102346
MB avail; 0B/s rd, 0B/s wr, 0op/s
2013-04-13 07:58:43.561722 7f162b681700 -1 mon/PaxosService.cc: In
function 'void PaxosService::propose_pending()' thread 7f162b681700 time
2013-04-13 07:58:43.560456
mon/PaxosService.cc: 127: FAILED assert(have_pending)

  ceph version 0.60 (f26f7a39021dbf440c28d6375222e21c94fe8e5c)
  1: (PaxosService::propose_pending()+0x46d) 

Re: [ceph-users] Puppet modules for Ceph finally landed!

2013-03-28 Thread Dan Mick

This is pretty cool, Sébastien.

On 03/28/2013 02:34 AM, Sebastien Han wrote:

Hello everybody,

Quite recently François Charlier and I worked together on the Puppet
modules for Ceph on behalf of our employer eNovance. In fact, François
started to work on them last summer, back then he achieved the Monitor
manifests. So basically, we worked on the OSD manifest. Modules are in
pretty good shape thus we thought it was important to communicate to the
community. That's enough talk, let's dive into these modules and explain
what do they do. See below what's available:

* Testing environment is Vagrant ready.
* Bobtail Debian latest stable version will be installed
* The module only supports CephX, at least for now
* Generic deployment for 3 monitors based on a template file
examples/common.sh which respectively includes mon.sh, osd.sh, mds.sh.
* Generic deployment for N OSDs. OSD disks need to be set from the
examples/site.pp file (line 71). Puppet will format specified disks in
XFS (only filesystem implemented) using these options: `-f -d
agcount=cpu-core-number -l size=1024m -n size=64k` and finally mounted
with: `rw,noatime,inode64`. Then it will mount all of them and append
the appropriate lines in the fstab file of each storage node. Finally
the OSDs will be added into Ceph.

All the necessary materials (sources and how-to) are publicly available
(and for free) under AGPL license on Github at
https://github.com/enovance/puppet-ceph . Those manifests do the job
quite nicely, although we still need to work on MDS (90% done, just need
a validation),  RGW (0% done) and a more flexible implementation
(authentication and filesystem support). Obviously comments,
constructive criticism, and feedback are more than welcome. So don't
hesitate to drop an email to either François (f.charl...@enovance.com)
or me (sebast...@enovance.com) if you have further questions.

Cheers!


Sébastien Han
Cloud Engineer

Always give 100%. Unless you're giving blood.

PHONE : +33 (0)1 49 70 99 72 – MOBILE : +33 (0)6 52 84 44 70
EMAIL : sebastien@enovance.com – SKYPE : han.sbastien
ADDRESS : 10, rue de la Victoire – 75009 Paris
WEB : www.enovance.com – TWITTER : @enovance







Re: [ceph-users] ceph.conf include another file option?

2013-03-04 Thread Dan Mick

Nothing I'm able to find.  You can specify a file with -c or
with CEPH_CONF, though, so you could always glue together a temp
file from pieces yourself.  -c and CEPH_CONF can also be a list
of files to try (the first one that parses successfully will
be the configuration), if that's helpful.

You may also be interested to know about
ceph --admin-daemon /path/to/admin/socket config show.
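The "glue together a temp file from pieces" approach can be sketched as below; the fragment paths are whatever you choose (hypothetical here), and the resulting path would then be handed to the daemon via -c or CEPH_CONF.

```python
import os
import tempfile

def build_conf(fragment_paths):
    """Concatenate ceph.conf fragments into one temp file and return
    its path. The caller is responsible for deleting the file."""
    fd, path = tempfile.mkstemp(prefix="ceph-", suffix=".conf")
    with os.fdopen(fd, "w") as out:
        for frag in fragment_paths:
            with open(frag) as f:
                out.write(f.read())
            # Separate fragments so sections never run together
            out.write("\n")
    return path
```

A radosgw-only configuration could then combine a world-readable base fragment with a private fragment holding the keystone admin token, keeping the token out of the shared ceph.conf.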


On 03/04/2013 01:53 PM, Nick Bartos wrote:

Since radosgw requires the keystone admin token stored in ceph.conf,
and certain unprivileged services we run need access to ceph.conf, I'm
going to have to make a separate ceph.conf just for radosgw.  However,
it would be nice if I didn't have to duplicate things in the main
ceph.conf.  Is there any sort of include another file or similar
functionality supported by the ceph.conf config file parser?




Re: [ceph-users] segmentation fault on librbd

2013-03-04 Thread Dan Mick

This turned out to be http://tracker.ceph.com/issues/4122 for
those following along at home.  We were bitten by a change to
boost::spirit.

On 03/02/2013 11:54 AM, Mr. NPP wrote:

Yes, that works; the chatroom helped, and I got this to work:
qemu-img info rbd:libvirt-pool/gentoo-vm

so i got past those issues and now i'm stuck again.

2013-03-02 11:53:44.825985 7f9713fff700  1 -- 10.52.254.3:0/1018805 == mon.0 10.52.254.4:6789/0 6  mon_subscribe_ack(300s) v1  20+0+0 (1461124100 0 0) 0x7f9704001190 con 0x7f9720736f80
2013-03-02 11:53:44.825995 7f9713fff700 10 monclient: handle_subscribe_ack sent 2013-03-02 11:53:44.821937 renew after 2013-03-02 11:56:14.821937
2013-03-02 11:53:44.826129 7f9713fff700  1 -- 10.52.254.3:0/1018805 == mon.0 10.52.254.4:6789/0 7  osd_map(15..15 src has 1..15) v3  4491+0+0 (4288196624 0 0) 0x7f9704000ae0 con 0x7f9720736f80
2013-03-02 11:53:44.826284 7f9713fff700  1 -- 10.52.254.3:0/1018805 == mon.0 10.52.254.4:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0 (1461124100 0 0) 0x7f9704002260 con 0x7f9720736f80
2013-03-02 11:53:44.826299 7f9713fff700 10 monclient: handle_subscribe_ack sent 0.00, ignoring
2013-03-02 11:53:44.826308 7f9713fff700  1 -- 10.52.254.3:0/1018805 == mon.0 10.52.254.4:6789/0 9  osd_map(15..15 src has 1..15) v3  4491+0+0 (4288196624 0 0) 0x7f9704003620 con 0x7f9720736f80
2013-03-02 11:53:44.826349 7f9713fff700  1 -- 10.52.254.3:0/1018805 == mon.0 10.52.254.4:6789/0 10  mon_subscribe_ack(300s) v1  20+0+0 (1461124100 0 0) 0x7f9704003980 con 0x7f9720736f80
2013-03-02 11:53:44.826357 7f9713fff700 10 monclient: handle_subscribe_ack sent 0.00, ignoring
2013-03-02 11:53:44.826376 7f971ebfa900 20 librbd::ImageCtx: enabling writeback caching...
2013-03-02 11:53:44.826433 7f971ebfa900 20 librbd: open_image: ictx = 0x7f9720738f00 name = 'gentoo-vm' id = '' snap_name = ''
2013-03-02 11:53:44.826565 7f971ebfa900  1 -- 10.52.254.3:0/1018805 -- 10.52.254.1:6805/5432 -- osd_op(client.4810.0:1 gentoo-vm.rbd [stat] 3.50d89219) v4 -- ?+0 0x7f972073ad70 con 0x7f972073a9e0
2013-03-02 11:53:44.828473 7f9713fff700  1 -- 10.52.254.3:0/1018805 == osd.6 10.52.254.1:6805/5432 1  osd_op_reply(1 gentoo-vm.rbd [stat] = -1 (Operation not permitted)) v4  112+0+0 (4279049108 0 0) 0x7f9709a0 con 0x7f972073a9e0
2013-03-02 11:53:44.828591 7f971ebfa900  1 -- 10.52.254.3:0/1018805 -- 10.52.254.1:6803/5148 -- osd_op(client.4810.0:2 rbd_id.gentoo-vm [stat] 3.88180738) v4 -- ?+0 0x7f972073b550 con 0x7f972073b1f0
2013-03-02 11:53:44.829989 7f9713fff700  1 -- 10.52.254.3:0/1018805 == osd.4 10.52.254.1:6803/5148 1  osd_op_reply(2 rbd_id.gentoo-vm [stat] = -1 (Operation not permitted)) v4  115+0+0 (3762661955 0 0) 0x7f96f80009a0 con 0x7f972073b1f0
qemu-system-x86_64: -drive file=rbd:libvirt-pool/gentoo-vm:debug_ms=1:debug_rbd=20:debug_monc=10:log_to_stderr=true:id=libvirt:key=AQDhNTFRKBRAFRAAnzrS9QkGa+eKZzkGNU1UOw==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw: error reading header from gentoo-vm
2013-03-02 11:53:44.830025 7f971ebfa900 -1 librbd::ImageCtx: error finding header: (1) Operation not permitted
2013-03-02 11:53:44.830308 7f971ebfa900  1 -- 10.52.254.3:0/1018805 mark_down 0x7f972073b1f0 -- 0x7f972073adc0
2013-03-02 11:53:44.830386 7f971ebfa900  1 -- 10.52.254.3:0/1018805 mark_down 0x7f972073a9e0 -- 0x7f972073a780
2013-03-02 11:53:44.830495 7f971ebfa900  1 -- 10.52.254.3:0/1018805 mark_down_all
2013-03-02 11:53:44.830720 7f971ebfa900  1 -- 10.52.254.3:0/1018805 shutdown complete.
qemu-system-x86_64: -drive file=rbd:libvirt-pool/gentoo-vm:debug_ms=1:debug_rbd=20:debug_monc=10:log_to_stderr=true:id=libvirt:key=AQDhNTFRKBRAFRAAnzrS9QkGa+eKZzkGNU1UOw==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw: could not open disk image rbd:libvirt-pool/gentoo-vm:debug_ms=1:debug_rbd=20:debug_monc=10:log_to_stderr=true:id=libvirt:key=AQDhNTFRKBRAFRAAnzrS9QkGa+eKZzkGNU1UOw==:auth_supported=cephx\;none: Operation not permitted
2013-03-02 19:53:44.979+: shutting down

i'm not sure why i'm getting operation not permitted from the osd's.


