Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Wido den Hollander

On 04/30/2014 10:46 PM, Patrick McGarry wrote:

Hey Danny (and Wido),

WRT the foundation I'm sure you can see why it has been on hold for
the last few weeks.  However, this is not signifying the death of the
effort.  Both Sage and I still feel that this is a discussion worth
having.  However, the discussion hasn't happened yet, so it's far too
early to be able to say anything beyond that.



Ok, thanks for the information. Just a few things that come to mind:

- Repository location and access
- Documentation efforts for non-RHEL platforms
- Support for non-RHEL platforms

I'm confident that Red Hat will make Ceph bigger and better, but I'm just
a bit worried about how that will happen and what will happen to the
existing community.



Once we get all of the acquisition stuff settled we'll start having
those conversations again.  There are obvious pros and cons to both
sides, so the outcome is far from obvious.  I will definitely let you
all know as soon as there is information to be had.



Understood. Keep us posted!

Wido


Sorry I couldn't be more informative, but we're still looking at it!




Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


On Wed, Apr 30, 2014 at 4:34 PM, Danny Al-Gaaf  wrote:

On 30.04.2014 14:18, Sage Weil wrote:

Today we are announcing some very big news: Red Hat is acquiring Inktank.
We are very excited about what this means for Ceph, the community, the
team, our partners, and our customers. Ceph has come a long way in the ten
years since the first line of code has been written, particularly over the
last two years that Inktank has been focused on its development. The fifty
members of the Inktank team, our partners, and the hundreds of other
contributors have done amazing work in bringing us to where we are today.

We believe that, as part of Red Hat, the Inktank team will be able to
build a better quality Ceph storage platform that will benefit the entire
ecosystem. Red Hat brings a broad base of expertise in building and
delivering hardened software stacks as well as a wealth of resources that
will help Ceph become the transformative and ubiquitous storage platform
that we always believed it could be.


What does that mean to the idea/plans to move Ceph to a Foundation? Are
they canceled?

Danny






--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception

2014-04-30 Thread Mike Dawson

Victor,

This is a verified issue reported earlier today:

http://tracker.ceph.com/issues/8260
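
Until a fixed ceph-deploy is released, one possible workaround (a rough,
untested sketch; adjust paths to your layout) is to finish the activation
directly on each OSD node. Your log shows the underlying ceph-disk-activate
call already completed ("added key for osd.0"); only the ceph-deploy wrapper
fails afterwards:

# on node02 / node03, the same command ceph-deploy was running remotely
# (taken from the log in your mail):
sudo ceph-disk-activate --mark-init upstart --mount /var/local/osd0   # node02
sudo ceph-disk-activate --mark-init upstart --mount /var/local/osd1   # node03

# then, from the admin node, check that both OSDs are up and in:
ceph osd tree
ceph -s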

Cheers,
Mike


On 4/30/2014 3:10 PM, Victor Bayon wrote:

Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting a
  error when running the "ceph-deploy osd activate" and I am getting an
exception. See below[2].
I am following the quick tutorial step by step, except that
any help greatly appreciate
"ceph-deploy mon create-initial" does not seem to gather the keys and I
have to execute

manually with

ceph-deploy gatherkeys node01

I am using the same configuration as in the tutorial:
- 1 admin node (myhost)
- 1 monitor node (node01)
- 2 OSD nodes (node02, node03)


I am on Ubuntu Server 12.04 LTS (precise) and using Ceph "emperor"


Any help greatly appreciated

Many thanks

Best regards

/V

[1] http://ceph.com/docs/master/start/quick-ceph-deploy/
[2] Error:
ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0
node03:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd
activate node02:/var/local/osd0 node03:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
node02:/var/local/osd0: node03:/var/local/osd1:
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
upstart --mount /var/local/osd0
[node02][WARNIN] got latest monmap
[node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1
filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1
in index: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created
object store /var/local/osd0 journal /var/local/osd0/journal for osd.0
fsid 76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
[node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error
reading file: /var/local/osd0/keyring: can't open
/var/local/osd0/keyring: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new
key in keyring /var/local/osd0/keyring
[node02][WARNIN] added key for osd.0
Traceback (most recent call last):
   File "/usr/bin/ceph-deploy", line 21, in 
 sys.exit(main())
   File
"/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line
62, in newfunc
 return f(*a, **kw)
   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147,
in main
 return args.func(args)
   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 532,
in osd
 activate(args, cfg)
   File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 338,
in activate
 catch_osd_errors(distro.conn, distro.logger, args)
AttributeError: 'module' object has no attribute 'logger'




Re: [ceph-users] RBD clone for OpenStack Nova ephemeral volumes

2014-04-30 Thread Dmitry Borodaenko
I've re-proposed the rbd-clone-image-handler blueprint via nova-specs:
https://review.openstack.org/91486

In other news, Sebastien has helped me test the most recent
incarnation of this patch series and it seems to be usable now, with
the important exception of live migration of VMs with RBD-backed
ephemeral drives, which will need a bit more work and a separate
blueprint.

On Mon, Apr 28, 2014 at 7:44 PM, Dmitry Borodaenko
 wrote:
> I have decoupled the Nova rbd-ephemeral-clone branch from the
> multiple-image-location patch, the result can be found at the same
> location on GitHub as before:
> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone
>
> I will keep rebasing this over Nova master, I also plan to update the
> rbd-clone-image-handler blueprint and publish it to nova-specs so that
> the patch series could be proposed for Juno.
>
> Icehouse backport of this branch is here:
> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse
>
> I am not going to track every stable/icehouse commit with this branch,
> instead, I will rebase it over stable release tags as they appear.
> Right now it's based on tag:2014.1.
>
> For posterity, I'm leaving the multiple-image-location patch rebased
> over current Nova master here:
> https://github.com/angdraug/nova/tree/multiple-image-location
>
> I don't plan on maintaining multiple-image-location, just leaving it
> out there to save some rebasing effort for whoever decides to pick it
> up.
>
> -DmitryB
>
> On Fri, Mar 21, 2014 at 1:12 PM, Josh Durgin  wrote:
>> On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
>>>
>>> On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin 
>>> wrote:

 On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
>
> The patch series that implemented clone operation for RBD backed
> ephemeral volumes in Nova did not make it into Icehouse. We have tried
> our best to help it land, but it was ultimately rejected. Furthermore,
> an additional requirement was imposed to make this patch series
> dependent on full support of Glance API v2 across Nova (due to its
> dependency on direct_url that was introduced in v2).
>
> You can find the most recent discussion of this patch series in the
> FFE (feature freeze exception) thread on openstack-dev ML:
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029127.html
>
> As I explained in that thread, I believe this feature is essential for
> using Ceph as a storage backend for Nova, so I'm going to try and keep
> it alive outside of OpenStack mainline until it is allowed to land.
>
> I have created rbd-ephemeral-clone branch in my nova repo fork on
> GitHub:
> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone
>
> I will keep it rebased over nova master, and will create an
> rbd-ephemeral-clone-stable-icehouse to track the same patch series
> over nova stable/icehouse once it's branched. I also plan to make sure
> that this patch series is included in Mirantis OpenStack 5.0 which
> will be based on Icehouse.
>
> If you're interested in this feature, please review and test. Bug
> reports and patches are welcome, as long as their scope is limited to
> this patch series and is not applicable for mainline OpenStack.


 Thanks for taking this on Dmitry! Having rebased those patches many
 times during icehouse, I can tell you it's often not trivial.
>>>
>>>
>>> Indeed, I get conflicts every day lately, even in the current
>>> bugfixing stage of the OpenStack release cycle. I have a feeling it
>>> will not get easier when Icehouse is out and Juno is in full swing.
>>>
 Do you think the imagehandler-based approach is best for Juno? I'm
 leaning towards the older way [1] for simplicity of review, and to
 avoid using glance's v2 api by default.
 [1] https://review.openstack.org/#/c/46879/
>>>
>>>
>>> Excellent question, I have thought long and hard about this. In
>>> retrospect, requiring this change to depend on the imagehandler patch
>>> back in December 2013 has proven to be a poor decision.
>>> Unfortunately, now that it's done, porting your original patch from
>>> Havana to Icehouse is more work than keeping the new patch series up
>>> to date with Icehouse, at least short term. Especially if we decide to
>>> keep the rbd_utils refactoring, which I've grown to like.
>>>
>>> As far as I understand, your original code made use of the same v2 api
>>> call even before it was rebased over imagehandler patch:
>>>
>>> https://github.com/jdurgin/nova/blob/8e4594123b65ddf47e682876373bca6171f4a6f5/nova/image/glance.py#L304
>>>
>>> If I read this right, imagehandler doesn't create the dependency on v2
>>> api, the only reason it caused a problem was because it exposed the
>>> output of the same Glance API call to a code path that assumed a v1
>>> data structure. If so, decoupling rbd clone patch from imagehandler
>>> will not h

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Mark Nelson

On 04/30/2014 05:33 PM, Gandalf Corvotempesta wrote:

2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :

Hi,

Sure, that's planned for integration in Giant (see Blueprints).


Great. Any ETA? Firefly was planned for February :)



At least on the plus side you can download the code whenever you want, 
even if we decide we want to do more testing/fixing. :D


Mark


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :
> Hi,
>
> Sure, that's planned for integration in Giant (see Blueprints).

Great. Any ETA? Firefly was planned for February :)


Re: [ceph-users] Mons deadlocked after they all died

2014-04-30 Thread Marc
On 30/04/2014 00:42, Gregory Farnum wrote:
> On Tue, Apr 29, 2014 at 3:28 PM, Marc  wrote:
>> Thank you for the help so far! I went for option 1 and that did solve
>> that problem. However quorum has not been restored. Here's the
>> information I can get:
>>
>> mon a+b are in state Electing and have been for more than 2 hours now.
>> mon c does reply to "help" by using the socket, but it does not respond
>> to mon_status nor sync_status (even though help lists them, so they
>> should be available). The logs of mon.c show a loop that contains
>>
>> peer paxos version 15329444 vs my version 0 (too far ahead)
> Yeah, "version 0" means the store got entirely reset somehow. You're
> going to need to either manually copy an existing monitor into the
> right location, or edit the monmaps on the existing monitors so they
> believe they're the only two, and then add a new monitor in the third
> location using the standard tooling for that. (The second might be
> easier to find.)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com

Okay, I would agree that option 2 seems more straight forward. But with
--extract-monmap not being available and the cluster not having quorum,
by "editing" the monmap you mean create a fresh one, right? Would
creating a new monmap that only contains the 2 monitors that appear to
be fine be a safe procedure as far as the stored data is concerned?
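
Concretely, something like the following is what I have in mind (a sketch
only; I have not run this yet, and the mon names, addresses and fsid are
placeholders for my actual values):

# with all three monitors stopped:
monmaptool --create --clobber --fsid <cluster-fsid> \
    --add a 192.168.0.1:6789 --add b 192.168.0.2:6789 /tmp/monmap
ceph-mon -i a --inject-monmap /tmp/monmap
ceph-mon -i b --inject-monmap /tmp/monmap
# then start mon.a and mon.b and, once they have quorum, re-add the third
# monitor with the normal add-a-monitor procedure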


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-05-01 0:11 GMT+02:00 Mark Nelson :
> Usable is such a vague word.  I imagine it's testable after a fashion. :D

Ok, but I'd prefer "official" support, with IB integrated into the main Ceph repo.


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Mark Nelson

On 04/30/2014 05:05 PM, Gandalf Corvotempesta wrote:

2014-04-30 22:27 GMT+02:00 Mark Nelson :

Check out the xio work that the linuxbox/mellanox folks are working on.
Matt Benjamin has posted quite a bit of info to the list recently!


Is that usable ?



Usable is such a vague word.  I imagine it's testable after a fashion. :D

Mark


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-04-30 22:27 GMT+02:00 Mark Nelson :
> Check out the xio work that the linuxbox/mellanox folks are working on.
> Matt Benjamin has posted quite a bit of info to the list recently!

Is that usable ?


[ceph-users] OSD not starting at boot time

2014-04-30 Thread Andrija Panic
Hi,

I was wondering why the OSDs do not start at boot time; this happens on 1
server (2 OSDs).

If I check with "chkconfig ceph --list", I can see that it should start;
indeed, the MON on this server does start at boot, but the OSDs do not.

I can start them manually with: service ceph start osd.X

This is CentOS 6.5 and Ceph 0.72.2, deployed with the ceph-deploy tool.

I did not forget the ceph osd activate... for sure.
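
Is there anything obvious I should be checking here? For example, something
along these lines (just generic checks, nothing Ceph-specific):

chkconfig --list ceph                  # init script enabled for the current runlevel?
grep -A2 '\[osd' /etc/ceph/ceph.conf   # are the osd.X sections / hosts listed here?
mount | grep -i osd                    # were the OSD data dirs mounted before the init script ran?
ls /var/lib/ceph/osd/ceph-*/           # do the data dirs look complete?
service ceph status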

Thanks
-- 

Andrija Panić


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Patrick McGarry
Hey Danny (and Wido),

WRT the foundation I'm sure you can see why it has been on hold for
the last few weeks.  However, this is not signifying the death of the
effort.  Both Sage and I still feel that this is a discussion worth
having.  However, the discussion hasn't happened yet, so it's far too
early to be able to say anything beyond that.

Once we get all of the acquisition stuff settled we'll start having
those conversations again.  There are obvious pros and cons to both
sides, so the outcome is far from obvious.  I will definitely let you
all know as soon as there is information to be had.

Sorry I couldn't be more informative, but we're still looking at it!




Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


On Wed, Apr 30, 2014 at 4:34 PM, Danny Al-Gaaf  wrote:
> On 30.04.2014 14:18, Sage Weil wrote:
>> Today we are announcing some very big news: Red Hat is acquiring Inktank.
>> We are very excited about what this means for Ceph, the community, the
>> team, our partners, and our customers. Ceph has come a long way in the ten
>> years since the first line of code has been written, particularly over the
>> last two years that Inktank has been focused on its development. The fifty
>> members of the Inktank team, our partners, and the hundreds of other
>> contributors have done amazing work in bringing us to where we are today.
>>
>> We believe that, as part of Red Hat, the Inktank team will be able to
>> build a better quality Ceph storage platform that will benefit the entire
>> ecosystem. Red Hat brings a broad base of expertise in building and
>> delivering hardened software stacks as well as a wealth of resources that
>> will help Ceph become the transformative and ubiquitous storage platform
>> that we always believed it could be.
>
> What does that mean to the idea/plans to move Ceph to a Foundation? Are
> they canceled?
>
> Danny
>
>


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Danny Al-Gaaf
On 30.04.2014 14:18, Sage Weil wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank. 
> We are very excited about what this means for Ceph, the community, the 
> team, our partners, and our customers. Ceph has come a long way in the ten 
> years since the first line of code has been written, particularly over the 
> last two years that Inktank has been focused on its development. The fifty 
> members of the Inktank team, our partners, and the hundreds of other 
> contributors have done amazing work in bringing us to where we are today.
> 
> We believe that, as part of Red Hat, the Inktank team will be able to 
> build a better quality Ceph storage platform that will benefit the entire 
> ecosystem. Red Hat brings a broad base of expertise in building and 
> delivering hardened software stacks as well as a wealth of resources that 
> will help Ceph become the transformative and ubiquitous storage platform 
> that we always believed it could be.

What does that mean to the idea/plans to move Ceph to a Foundation? Are
they canceled?

Danny




Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Mark Nelson

On 04/30/2014 03:21 PM, Gandalf Corvotempesta wrote:

2014-04-30 14:18 GMT+02:00 Sage Weil :

Today we are announcing some very big news: Red Hat is acquiring Inktank.


Great news.
Any changes to get native Infiniband support in ceph like in GlusterFS ?


Check out the xio work that the linuxbox/mellanox folks are working on. 
 Matt Benjamin has posted quite a bit of info to the list recently!


Thanks,
Mark




Re: [ceph-users] SSD journal overload?

2014-04-30 Thread Indra Pramana
Hi Irek,

Good day to you.

Any updates/comments on below?

Looking forward to your reply, thank you.

Cheers.



On Tue, Apr 29, 2014 at 12:47 PM, Indra Pramana  wrote:

> Hi Irek,
>
> Good day to you, and thank you for your e-mail.
>
> Is there a better way other than patching the kernel? I would like to
> avoid having to compile a custom kernel for my OS. I read that I can
> disable write-caching on the drive using hdparm:
>
> hdparm -W0 /dev/sdf
> hdparm -W0 /dev/sdg
>
> I tested on one of my test servers and it seems I can disable it using the
> command.
>
> Current setup, write-caching is on:
>
> 
> root@ceph-osd-09:/home/indra# hdparm -W /dev/sdg
>
> /dev/sdg:
>  write-caching =  1 (on)
> 
>
> I tried to disable write-caching and it's successful:
>
> 
> root@ceph-osd-09:/home/indra# hdparm -W0 /dev/sdg
>
> /dev/sdg:
>  setting drive write-caching to 0 (off)
>  write-caching =  0 (off)
> 
>
> I check again, and now write-caching is disabled.
>
> 
> root@ceph-osd-09:/home/indra# hdparm -W /dev/sdg
>
> /dev/sdg:
>  write-caching =  0 (off)
> 
>
> Would the above give the same result? If yes, I will try to do that on our
> running cluster tonight.
>
> May I also know how I can confirm if my SSD comes with "volatile cache" as
> mentioned on your article? I tried to check my SSD's data sheet and there's
> no information on whether it comes with volatile cache or not. I also read
> that disabling write-caching will also increase the risk of data-loss. Can
> you comment on that?
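>
> For reference, besides "hdparm -W" these are the extra checks I have come
> across for seeing what the drive and the kernel report (device name is just
> an example):
>
> hdparm -W /dev/sdg                            # current write-cache state (1 = on)
> hdparm -I /dev/sdg | grep -i 'write cache'    # does the drive report a write cache feature?
> cat /sys/class/scsi_disk/*/cache_type         # kernel view: "write back" vs "write through"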
>
> Looking forward to your reply, thank you.
>
> Cheers.
>
>
>
> On Mon, Apr 28, 2014 at 7:49 PM, Irek Fasikhov  wrote:
>
>> This is my article :).
>> You need to apply the patch to the kernel
>> (http://www.theirek.com/downloads/code/CMD_FLUSH.diff).
>> After rebooting, run the following commands:
>> echo temporary write through > /sys/class/scsi_disk//cache_type
>>
>>
>> 2014-04-28 15:44 GMT+04:00 Indra Pramana :
>>
>> Hi Irek,
>>>
>>> Thanks for the article. Do you have any other web sources pertaining to
>>> the same issue, which is in English?
>>>
>>> Looking forward to your reply, thank you.
>>>
>>> Cheers.
>>>
>>>
>>> On Mon, Apr 28, 2014 at 7:40 PM, Irek Fasikhov wrote:
>>>
 Most likely you need to apply a patch to the kernel.


 http://www.theirek.com/blog/2014/02/16/patch-dlia-raboty-s-enierghoniezavisimym-keshiem-ssd-diskov


 2014-04-28 15:20 GMT+04:00 Indra Pramana :

 Hi Udo and Irek,
>
> Good day to you, and thank you for your emails.
>
>
> >perhaps due IOs from the journal?
> >You can test with iostat (like "iostat -dm 5 sdg").
>
> Yes, I have shared the iostat result earlier on this same thread. At
> times the utilisation of the 2 journal drives will hit 100%, especially
> when I simulate writing data using rados bench command. Any suggestions
> what could be the cause of the I/O issue?
>
>
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            1.85    0.00    1.65    3.14    0.00   93.36
>
> Device:  rrqm/s  wrqm/s  r/s   w/s    rkB/s  wkB/s     avgrq-sz  avgqu-sz  await   r_await  w_await  svctm  %util
> sdg      0.00    0.00    0.00  55.00  0.00   25365.33  922.38    34.22     568.90  0.00     568.90   17.82   98.00
> sdf      0.00    0.00    0.00  55.67  0.00   25022.67  899.02    29.76     500.57  0.00     500.57   17.60   98.00
>
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            2.10    0.00    1.37    2.07    0.00   94.46
>
> Device:  rrqm/s  wrqm/s  r/s   w/s    rkB/s  wkB/s     avgrq-sz  avgqu-sz  await   r_await  w_await  svctm  %util
> sdg      0.00    0.00    0.00  56.67  0.00   25220.00  890.12    23.60     412.14  0.00     412.14   17.62   99.87
> sdf      0.00    0.00    0.00  52.00  0.00   24637.33  947.59    33.65     587.41  0.00     587.41   19.23  100.00
>
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            2.21    0.00    1.77    6.75    0.00   89.27
>
> Device:  rrqm/s  wrqm/s  r/s   w/s    rkB/s  wkB/s     avgrq-sz  avgqu-sz  await   r_await  w_await  svctm  %util
> sdg      0.00    0.00    0.00  54.33  0.00   24802.67  912.98    25.75     486.36  0.00     486.36   18.40  100.00
> sdf      0.00    0.00    0.00  53.00  0.00   24716.00  932.68    35.26     669.89  0.00     669.89   18.87  100.00
>
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            1.87    0.00    1.67    5.25    0.00   91.21
>
> Device:  rrqm/s  wrqm/s  r/s   w/s    rkB/s  wkB/s     avgrq-sz  avgqu-sz  await   r_await  w_await  svctm  %util
> sdg      0.00    0.00    0.00  94.33  0.00   26257.33  556.69    18.29     208.44  0.00     208.44   10.50   99.07
> sdf

Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Gandalf Corvotempesta
2014-04-30 14:18 GMT+02:00 Sage Weil :
> Today we are announcing some very big news: Red Hat is acquiring Inktank.

Great news.
Any changes to get native Infiniband support in ceph like in GlusterFS ?


Re: [ceph-users] Unable to bring cluster up

2014-04-30 Thread Gandalf Corvotempesta
2014-04-30 22:11 GMT+02:00 Andrey Korolyov :
> regarding this one and previous you told about memory consumption -
> there are too much PGs, so memory consumption is so high as you are
> observing. Dead loop of osd-never-goes-up is probably because of
> suicide timeout of internal queues. It is may be not good but
> expected.

You are right. I don't know why I created roughly 100,000 PGs; I probably
did something wrong.

I've created a new cluster from scratch with just 512 PGs, and it seems
to work fine.
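
For the record, the rule of thumb I'm going by now (if I read the docs
correctly) is roughly 100 PGs per OSD divided by the replica count, rounded
up to a power of two. For example (the numbers are placeholders, not my
actual cluster):

osds=6
size=3
target=$(( osds * 100 / size ))   # ~200 PGs in total for this example
pg=1; while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"                        # -> 256; I went with 512 to leave some headroom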


Re: [ceph-users] Unable to bring cluster up

2014-04-30 Thread Andrey Korolyov
Gandalf,

regarding this one and the previous issue you mentioned about memory
consumption: there are too many PGs, which is why memory consumption is
as high as you are observing. The dead loop of OSDs never coming up is
probably caused by the suicide timeout of the internal queues. It may
not be good, but it is expected.

OSD behaviour ultimately depends on all kinds of knobs you can change;
e.g., I recently found a funny issue where collection warm-up (bringing
collections into the RAM cache) actually slows down OSD rejoin (typical
post-peering I/O delays) compared with the regular situation where
collections are read from disk upon OSD launch.
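
If you want to double-check how many PGs the cluster actually carries,
something like this is usually enough (a rough sketch):

ceph osd stat                    # number of OSDs up/in
ceph osd dump | grep pg_num      # pg_num / pgp_num per pool
ceph -s | grep pgmap             # total PG count across all pools

As far as I remember the docs, total PGs multiplied by the replica count
and divided by the number of OSDs should land somewhere around 100 per
OSD, not in the thousands.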


On Tue, Apr 29, 2014 at 9:22 PM, Gregory Farnum  wrote:
> You'll need to go look at the individual OSDs to determine why they
> aren't on. All the cluster knows is that the OSDs aren't communicating
> properly.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Apr 29, 2014 at 3:06 AM, Gandalf Corvotempesta
>  wrote:
>> After a simple "service ceph restart" on a server, i'm unable to get
>> my cluster up again
>> http://pastebin.com/raw.php?i=Wsmfik2M
>>
>> suddenly, some OSDs goes UP and DOWN randomly.
>>
>> I don't see any network traffic on cluster interface.
>> How can I detect what ceph is doing ? From the posted output there is
>> no way to detect if ceph is recovering or not. Showing just a bunch of
>> increasing/decreasing numbers doesn't help.
>>
>> I can see this:
>>
>> 2014-04-29 12:03:49.013808 mon.0 [INF] pgmap v1047121: 98432 pgs: 241
>> inactive, 33138 peering, 25 remapped, 60067 down+peering, 3489
>> remapped+peering, 1472 down+remapped+peering; 66261 bytes data, 1647
>> MB used, 5582 GB / 5583 GB avail
>>
>> so what, is it recovering? Is it sleeping? Why is it not recovering?
>>
>> http://pastebin.com/raw.php?i=2EdugwQa
>> why all OSDs from host osd12 and osd13 are down ? Both hosts are up and 
>> running.


[ceph-users] "ceph-deploy osd activate" error: AttributeError: 'module' object has no attribute 'logger' exception

2014-04-30 Thread Victor Bayon
Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting a
 error when running the "ceph-deploy osd activate" and I am getting an
exception. See below[2].
I am following the quick tutorial step by step, except that
any help greatly appreciate
"ceph-deploy mon create-initial" does not seem to gather the keys and I
have to execute

manually with

ceph-deploy gatherkeys node01

I am using the same configuration as in the tutorial:
- 1 admin node (myhost)
- 1 monitor node (node01)
- 2 OSD nodes (node02, node03)


I am on Ubuntu Server 12.04 LTS (precise) and using Ceph "emperor"


Any help greatly appreciated

Many thanks

Best regards

/V

[1] http://ceph.com/docs/master/start/quick-ceph-deploy/
[2] Error:
ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0
node03:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd
activate node02:/var/local/osd0 node03:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
node02:/var/local/osd0: node03:/var/local/osd1:
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
upstart --mount /var/local/osd0
[node02][WARNIN] got latest monmap
[node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1
filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1 in
index: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created object
store /var/local/osd0 journal /var/local/osd0/journal for osd.0 fsid
76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
[node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error
reading file: /var/local/osd0/keyring: can't open /var/local/osd0/keyring:
(2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new key
in keyring /var/local/osd0/keyring
[node02][WARNIN] added key for osd.0
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 21, in 
sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py",
line 62, in newfunc
return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147, in
main
return args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 532, in
osd
activate(args, cfg)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 338, in
activate
catch_osd_errors(distro.conn, distro.logger, args)
AttributeError: 'module' object has no attribute 'logger'


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Zach Hill
Congrats to the Inktank team and Sage!


On Wed, Apr 30, 2014 at 5:18 AM, Sage Weil  wrote:

> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code has been written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Wido den Hollander

On 04/30/2014 02:18 PM, Sage Weil wrote:

Today we are announcing some very big news: Red Hat is acquiring Inktank.
We are very excited about what this means for Ceph, the community, the
team, our partners, and our customers. Ceph has come a long way in the ten
years since the first line of code has been written, particularly over the
last two years that Inktank has been focused on its development. The fifty
members of the Inktank team, our partners, and the hundreds of other
contributors have done amazing work in bringing us to where we are today.



Congratulations Sage! What a great achievement for you personally as 
well as the whole Inktank team.


Who ever thought, when Ceph started as your 'pet project' ~9 years
ago, that it would turn into something which is changing the world!



We believe that, as part of Red Hat, the Inktank team will be able to
build a better quality Ceph storage platform that will benefit the entire
ecosystem. Red Hat brings a broad base of expertise in building and
delivering hardened software stacks as well as a wealth of resources that
will help Ceph become the transformative and ubiquitous storage platform
that we always believed it could be.

For existing Inktank customers, this is going to mean turning a reliable
and robust storage system into something that delivers even more value. In
particular, joining forces with the Red Hat team will improve our ability
to address problems at all layers of the storage stack, including in the
kernel. We naturally recognize that many customers and users have built
platforms based on other Linux distributions. We will continue to support
these installations while we determine how to provide the best customer
experience moving forward and how the next iteration of the enterprise
Ceph product will be structured. In the meantime, our team remains
committed to keeping Ceph an open, multiplatform project that works in any
environment where it makes sense, including other Linux distributions and
non-Linux operating systems.

Red Hat is one of only a handful of companies that I trust to steward the
Ceph project. When we started Inktank two years ago, our goal was to build
the business by making Ceph successful as a broad-based, collaborative
open source project with a vibrant user, developer, and commercial
community. Red Hat shares this vision. They are passionate about open
source, and have demonstrated that they are strong and fair stewards with
other critical projects (like KVM). Red Hat intends to administer the Ceph
trademark in a manner that protects the ecosystem as a whole and creates a
level playing field where everyone is held to the same standards of use.
Similarly, policies like "upstream first" ensure that bug fixes and
improvements that go into Ceph-derived products are always shared with the
community to streamline development and benefit all members of the
ecosystem.

One important change that will take place involves Inktank's product
strategy, in which some add-on software we have developed is proprietary.
In contrast, Red Hat favors a pure open source model. That means that
Calamari, the monitoring and diagnostics tool that Inktank has developed
as part of the Inktank Ceph Enterprise product, will soon be open sourced.



Great to hear this! I think a lot of people will be very happy to use 
Calamari, but also to contribute to Calamari!



This is a big step forward for the Ceph community. Very little will change
on day one as it will take some time to integrate the Inktank business and
for any significant changes to happen with our engineering activities.
However, we are very excited about what is coming next for Ceph and are
looking forward to this new chapter.



Any ideas yet on the foundation for Ceph, since the current code is
LGPLv2 with commits from various contributors?


What effect will this have on the trademark for Ceph? There was talk
that this would go to a Ceph foundation.



I'd like to thank everyone who has helped Ceph get to where we are today:
the amazing research group at UCSC where it began, DreamHost for
supporting us for so many years, the incredible Inktank team, and the many
contributors and users that have helped shape the system. We continue to
believe that robust, scalable, and completely open storage platforms like
Ceph will transform a storage industry that is still dominated by
proprietary systems. Let's make it happen!



Yes! Let's keep changing the storage world and make Ceph bigger and
better than it already is!


Wido


sage




--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Alex Crow

On 30/04/14 18:05, Mike Hanby wrote:

Congrats, any possible conflict with RedHat's earlier acquisition of GlusterFS?




They are similar projects but with slightly different targets, I guess,
with Ceph+OpenStack and Gluster+[RHEV|oVirt] as natural partners. That
said, I'd love to see Ceph+[RHEV|oVirt]; that would really be the way I'd
like to run my virtualisation setup (too small, really, for a private cloud).


One really good thing that RH can bring to both projects is better 
documentation and top-notch support.


Alex







Re: [ceph-users] pgmap version increasing

2014-04-30 Thread John Spray
Yes, this is normal.  The pgmap version updates continuously even on an
idle system, because it is incremented when the periodic reports on PG
status are received by the mon from the osds.

It's a bit annoying if you want to set something else up to update when the
pg status changes - in that case you have to actually examine the parts of
the pg status you care about (e.g. 'pgs brief') to see if values are
different  :-/
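
For example, something along these lines (an untested sketch) rather than
watching the pgmap version:

ceph pg dump pgs_brief 2>/dev/null | sort | md5sum > /tmp/pgs.prev
while sleep 10; do
    ceph pg dump pgs_brief 2>/dev/null | sort | md5sum > /tmp/pgs.cur
    cmp -s /tmp/pgs.prev /tmp/pgs.cur || echo "pg states changed at $(date)"
    mv /tmp/pgs.cur /tmp/pgs.prev
done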

Cheers,
John


On Wed, Apr 30, 2014 at 4:21 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> I'm testing an idle ceph cluster.
> my pgmap version is always increasing, is this normal ?
>
> 2014-04-30 17:20:41.934127 mon.0 [INF] pgmap v281: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:42.962033 mon.0 [INF] pgmap v282: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:35.373060 osd.4 [INF] 0.179 scrub ok
> 2014-04-30 17:20:37.373338 osd.4 [INF] 0.7a scrub ok
> 2014-04-30 17:20:38.373606 osd.4 [INF] 0.1ba scrub ok
> 2014-04-30 17:20:43.990160 mon.0 [INF] pgmap v283: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:46.361545 mon.0 [INF] pgmap v284: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:48.438894 mon.0 [INF] pgmap v285: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:44.297707 osd.2 [INF] 2.26 scrub ok
> 2014-04-30 17:20:46.297851 osd.2 [INF] 2.27 scrub ok
> 2014-04-30 17:20:48.298423 osd.2 [INF] 2.29 scrub ok
> 2014-04-30 17:20:51.931978 mon.0 [INF] pgmap v286: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:46.374796 osd.4 [INF] 0.3e scrub ok
> 2014-04-30 17:20:48.375078 osd.4 [INF] 1.2 scrub ok
> 2014-04-30 17:20:50.375458 osd.4 [INF] 1.3d scrub ok
> 2014-04-30 17:20:51.375821 osd.4 [INF] 2.1 scrub ok
> 2014-04-30 17:20:52.376033 osd.4 [INF] 2.3c scrub ok
> 2014-04-30 17:20:53.954350 mon.0 [INF] pgmap v287: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:56.364735 mon.0 [INF] pgmap v288: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:53.299142 osd.2 [INF] 2.2c scrub ok
> 2014-04-30 17:20:58.299835 osd.2 [INF] 2.3d scrub ok
> 2014-04-30 17:21:01.932738 mon.0 [INF] pgmap v289: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
>
>
>
> The cluster does nothing at this time.


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Mike Hanby
Congrats, any possible conflict with RedHat's earlier acquisition of GlusterFS?

> On Apr 30, 2014, at 7:18, "Sage Weil"  wrote:
> 
> Today we are announcing some very big news: Red Hat is acquiring Inktank. 
> We are very excited about what this means for Ceph, the community, the 
> team, our partners, and our customers. Ceph has come a long way in the ten 
> years since the first line of code has been written, particularly over the 
> last two years that Inktank has been focused on its development. The fifty 
> members of the Inktank team, our partners, and the hundreds of other 
> contributors have done amazing work in bringing us to where we are today.
> 
> We believe that, as part of Red Hat, the Inktank team will be able to 
> build a better quality Ceph storage platform that will benefit the entire 
> ecosystem. Red Hat brings a broad base of expertise in building and 
> delivering hardened software stacks as well as a wealth of resources that 
> will help Ceph become the transformative and ubiquitous storage platform 
> that we always believed it could be.
> 
> For existing Inktank customers, this is going to mean turning a reliable 
> and robust storage system into something that delivers even more value. In 
> particular, joining forces with the Red Hat team will improve our ability 
> to address problems at all layers of the storage stack, including in the 
> kernel. We naturally recognize that many customers and users have built 
> platforms based on other Linux distributions. We will continue to support 
> these installations while we determine how to provide the best customer 
> experience moving forward and how the next iteration of the enterprise 
> Ceph product will be structured. In the meantime, our team remains 
> committed to keeping Ceph an open, multiplatform project that works in any 
> environment where it makes sense, including other Linux distributions and 
> non-Linux operating systems.
> 
> Red Hat is one of only a handful of companies that I trust to steward the 
> Ceph project. When we started Inktank two years ago, our goal was to build 
> the business by making Ceph successful as a broad-based, collaborative 
> open source project with a vibrant user, developer, and commercial 
> community. Red Hat shares this vision. They are passionate about open 
> source, and have demonstrated that they are strong and fair stewards with 
> other critical projects (like KVM). Red Hat intends to administer the Ceph 
> trademark in a manner that protects the ecosystem as a whole and creates a 
> level playing field where everyone is held to the same standards of use. 
> Similarly, policies like "upstream first" ensure that bug fixes and 
> improvements that go into Ceph-derived products are always shared with the 
> community to streamline development and benefit all members of the 
> ecosystem.
> 
> One important change that will take place involves Inktank's product 
> strategy, in which some add-on software we have developed is proprietary. 
> In contrast, Red Hat favors a pure open source model. That means that 
> Calamari, the monitoring and diagnostics tool that Inktank has developed 
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
> 
> This is a big step forward for the Ceph community. Very little will change 
> on day one as it will take some time to integrate the Inktank business and 
> for any significant changes to happen with our engineering activities. 
> However, we are very excited about what is coming next for Ceph and are 
> looking forward to this new chapter.
> 
> I'd like to thank everyone who has helped Ceph get to where we are today: 
> the amazing research group at UCSC where it began, DreamHost for 
> supporting us for so many years, the incredible Inktank team, and the many 
> contributors and users that have helped shape the system. We continue to 
> believe that robust, scalable, and completely open storage platforms like 
> Ceph will transform a storage industry that is still dominated by 
> proprietary systems. Let's make it happen!
> 
> sage


Re: [ceph-users] Re: Hey, Where can I find the source code of "class ObjectOperationImpl"?

2014-04-30 Thread Kai Zhang
Hi Peng,
If you are interested in the code path of Ceph, these blogs may help:
How does a Ceph OSD handle a read message ? (in Firefly and up)
How does a Ceph OSD handle a write message ? (up to Emperor)


Here is the note about a rados write that I took when I read the source code:

rados put <obj-name> [infile]
 L [tools/rados/rados.cc] main()
   L rados_tool_common()
     L do_put()
       L io_ctx.write()
         L io_ctx_impl->write()
           L [librados/IoCtxImpl.cc] write()
             L operate()
               L op_submit()
                 L [osdc/Objecter.cc] op_submit()
                   L _op_submit()
                     L recalc_op_target()
                     L send_op()
                       L messenger->send_message()
                         L [msg/SimpleMessenger.cc] _send_message()
                           L submit_message()
                             L pipe->_send()
                               L [msg/Pipe.h] _send()
                                 L [msg/Pipe.cc] writer()
                                   L write_message()
                                     L do_sendmsg()
                                       L sendmsg()
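
If you want to trigger exactly this path from the command line, the easiest
way is the command at the top of the tree (the pool and object names below
are just examples):

dd if=/dev/zero of=/tmp/infile bs=4K count=1
rados -p data put myobject /tmp/infile    # enters do_put() -> io_ctx.write() above
rados -p data stat myobject               # confirm the object was written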

Hope these could help.

Regards,
Kai Zhang



At 2014-04-30 00:04:55, peng wrote:

In librados.cc, I found the following code:

Step 1.
File: librados.cc
void librados::ObjectWriteOperation::write(uint64_t off, const bufferlist& bl)
{
  ::ObjectOperation *o = (::ObjectOperation *)impl;
  bufferlist c = bl;
  o->write(off, c);
}

Step 2. To find ::ObjectOperation
File: Objecter.h
struct ObjectOperation {
  void write(..) {}     // calls add_data
  void add_data(..) {}  // calls add_op
  void add_op(...) {}   // needs OSDOp
}

Step 3. To find OSDOp
File: osd_types.h
struct OSDOp { ... }

But the question is: how is the data transferred to the RADOS cluster? I
assume that there is some socket connection (TCP, etc.) to transfer the
data, but I can find nothing about a socket connection.

Besides, I found something in IoCtxImpl.cc, and through it I found
ceph_tid_t Objecter::_op_submit(Op *op) in Objecter.cc. It looks like the
real operation happens there.

Confused... Appreciate any help!


-- Original Message --
From: "John Spray";
Date: Tuesday, April 29, 2014, 5:59 PM
To: "peng";
Cc: "ceph-users";
Subject: Re: [ceph-users] Hey, Where can I find the source code of
"class ObjectOperationImpl"?


It's not a real class, just a type definition used for the 
ObjectOperation::impl pointer.  The actual object is an ObjectOperation.


src/librados/librados.cc
1797:  impl = (ObjectOperationImpl *)new ::ObjectOperation;


John



On Tue, Apr 29, 2014 at 10:49 AM, peng  wrote:

Hey,
I can find a declaration in librados.hpp, but when I try to find the source
code of ObjectOperationImpl, I find nothing.

Is it a ghost class??

Confused... Appreciate any help.



Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Pawel Stefanski
On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil  wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code has been written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
[...]

Hello!!!

Congratulations! Glad to hear that RH will continue development as
OSS and even open source Calamari!
I also admire RH's work on KVM and the Linux kernel, so I'm very excited about this news!

best regards!
-- 
Pawel


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Pawel Stefanski
Hello!!!

Congratulations! Glad to hear that RH will continue development as OSS and
even open source Calamari!
I also admire RH's work on KVM and the Linux kernel, so I'm very excited
about this news!

best regards!
-- 
pawel


On Wed, Apr 30, 2014 at 2:18 PM, Sage Weil  wrote:

> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code has been written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Mark Nelson

On 04/30/2014 10:19 AM, Loic Dachary wrote:

Hi Sage,

Congratulations, this is good news.

On 30/04/2014 14:18, Sage Weil wrote:


One important change that will take place involves Inktank's product
strategy, in which some add-on software we have developed is proprietary.
In contrast, Red Hat favors a pure open source model. That means that
Calamari, the monitoring and diagnostics tool that Inktank has developed
as part of the Inktank Ceph Enterprise product, will soon be open sourced.


I'm glad to hear that this acquisition puts an end to the proprietary software 
created by the Inktank Ceph developers. And I assume they are also happy about 
the change :-)


I for one am excited about an open source Calamari! :)



Cheers



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] pgmap version increasing

2014-04-30 Thread Gandalf Corvotempesta
I'm testing an idle ceph cluster.
my pgmap version is always increasing, is this normal ?

2014-04-30 17:20:41.934127 mon.0 [INF] pgmap v281: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:42.962033 mon.0 [INF] pgmap v282: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:35.373060 osd.4 [INF] 0.179 scrub ok
2014-04-30 17:20:37.373338 osd.4 [INF] 0.7a scrub ok
2014-04-30 17:20:38.373606 osd.4 [INF] 0.1ba scrub ok
2014-04-30 17:20:43.990160 mon.0 [INF] pgmap v283: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:46.361545 mon.0 [INF] pgmap v284: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:48.438894 mon.0 [INF] pgmap v285: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:44.297707 osd.2 [INF] 2.26 scrub ok
2014-04-30 17:20:46.297851 osd.2 [INF] 2.27 scrub ok
2014-04-30 17:20:48.298423 osd.2 [INF] 2.29 scrub ok
2014-04-30 17:20:51.931978 mon.0 [INF] pgmap v286: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:46.374796 osd.4 [INF] 0.3e scrub ok
2014-04-30 17:20:48.375078 osd.4 [INF] 1.2 scrub ok
2014-04-30 17:20:50.375458 osd.4 [INF] 1.3d scrub ok
2014-04-30 17:20:51.375821 osd.4 [INF] 2.1 scrub ok
2014-04-30 17:20:52.376033 osd.4 [INF] 2.3c scrub ok
2014-04-30 17:20:53.954350 mon.0 [INF] pgmap v287: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:56.364735 mon.0 [INF] pgmap v288: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
2014-04-30 17:20:53.299142 osd.2 [INF] 2.2c scrub ok
2014-04-30 17:20:58.299835 osd.2 [INF] 2.3d scrub ok
2014-04-30 17:21:01.932738 mon.0 [INF] pgmap v289: 640 pgs: 640
active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail



The cluster does nothing at this time.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Loic Dachary
Hi Sage,

Congratulations, this is good news.

On 30/04/2014 14:18, Sage Weil wrote:

> One important change that will take place involves Inktank's product 
> strategy, in which some add-on software we have developed is proprietary. 
> In contrast, Red Hat favors a pure open source model. That means that 
> Calamari, the monitoring and diagnostics tool that Inktank has developed 
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.

I'm glad to hear that this acquisition puts an end to the proprietary software 
created by the Inktank Ceph developers. And I assume they are also happy about 
the change :-)

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-30 Thread Yan, Zheng
On Wed, Apr 30, 2014 at 3:07 PM, Mohd Bazli Ab Karim
 wrote:
> Hi Zheng,
>
> Sorry for the late reply. For sure, I will try this again after we have completely 
> verified all content in the file system. Hopefully all will be good.
> And, please confirm this: I will set debug_mds=10 for the ceph-mds, and do 
> you want me to send the ceph-mon log too?

yes please.

>
> BTW, how to confirm that the mds has passed the beacon to mon or not?
>
read monitor's log
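
For reference, a rough sketch of both steps (paths and daemon ids below are
defaults and examples from this thread; yours may differ):

    # check whether the monitor is receiving MDS beacons
    grep -i beacon /var/log/ceph/ceph-mon.<id>.log | tail

    # raise MDS debugging before restarting ceph-mds, in ceph.conf:
    [mds]
        debug mds = 10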

Regards
Yan, Zheng


> Thank you so much Zheng!
>
> Bazli
>
> -Original Message-
> From: Yan, Zheng [mailto:uker...@gmail.com]
> Sent: Tuesday, April 29, 2014 10:13 PM
> To: Mohd Bazli Ab Karim
> Cc: Luke Jing Yuan; Wong Ming Tat
> Subject: Re: [ceph-users] Ceph mds laggy and failed assert in function replay 
> mds/journal.cc
>
> On Tue, Apr 29, 2014 at 5:30 PM, Mohd Bazli Ab Karim  
> wrote:
>> Hi Zheng,
>>
>> The other issue that Luke mentioned just now was like this.
>> At first, we ran one mds (mon01) with the newly compiled ceph-mds. It worked 
>> fine with only one MDS running at that time. However, when we ran two more 
>> MDSes (mon02, mon03) with the newly compiled ceph-mds, things started acting weird.
>> Mon01, which became active at first, would hit the error and start 
>> respawning. Once respawning happened, mon03 would take over from mon01 as 
>> master mds, and replay happened again.
>> Again, when mon03 became active, it would hit the same error as below and 
>> respawn again. So it seems to me that replay will continue to happen 
>> from one mds to another as they get respawned.
>>
>> 2014-04-29 15:36:24.917798 7f5c36476700  1 mds.0.server
>> reconnect_clients -- 1 sessions
>> 2014-04-29 15:36:24.919620 7f5c2fb3e700  0 -- 10.4.118.23:6800/26401
>> >> 10.1.64.181:0/1558263174 pipe(0x2924f5780 sd=41 :6800 s=0 pgs=0
>> cs=0 l=0 c=0x37056e0).accept peer addr is really
>> 10.1.64.181:0/1558263174 (socket is 10.1.64.181:57649/0)
>> 2014-04-29 15:36:24.921661 7f5c36476700  0 log [DBG] : reconnect by
>> client.884169 10.1.64.181:0/1558263174 after 0.003774
>> 2014-04-29 15:36:24.921786 7f5c36476700  1 mds.0.12858 reconnect_done
>> 2014-04-29 15:36:25.109391 7f5c36476700  1 mds.0.12858 handle_mds_map
>> i am now mds.0.12858
>> 2014-04-29 15:36:25.109413 7f5c36476700  1 mds.0.12858 handle_mds_map
>> state change up:reconnect --> up:rejoin
>> 2014-04-29 15:36:25.109417 7f5c36476700  1 mds.0.12858 rejoin_start
>> 2014-04-29 15:36:26.918067 7f5c36476700  1 mds.0.12858
>> rejoin_joint_start
>> 2014-04-29 15:36:33.520985 7f5c36476700  1 mds.0.12858 rejoin_done
>> 2014-04-29 15:36:36.252925 7f5c36476700  1 mds.0.12858 handle_mds_map
>> i am now mds.0.12858
>> 2014-04-29 15:36:36.252927 7f5c36476700  1 mds.0.12858 handle_mds_map
>> state change up:rejoin --> up:active
>> 2014-04-29 15:36:36.252932 7f5c36476700  1 mds.0.12858 recovery_done -- 
>> successful recovery!
>> 2014-04-29 15:36:36.745833 7f5c36476700  1 mds.0.12858 active_start
>> 2014-04-29 15:36:36.987854 7f5c36476700  1 mds.0.12858 cluster recovered.
>> 2014-04-29 15:36:40.182604 7f5c36476700  0 mds.0.12858
>> handle_mds_beacon no longer laggy
>> 2014-04-29 15:36:57.947441 7f5c2fb3e700  0 -- 10.4.118.23:6800/26401
>> >> 10.1.64.181:0/1558263174 pipe(0x2924f5780 sd=41 :6800 s=2 pgs=156
>> cs=1 l=0 c=0x37056e0).fault with nothing to send, going to standby
>> 2014-04-29 15:37:10.534593 7f5c36476700  1 mds.-1.-1 handle_mds_map i
>> (10.4.118.23:6800/26401) dne in the mdsmap, respawning myself
>> 2014-04-29 15:37:10.534604 7f5c36476700  1 mds.-1.-1 respawn
>> 2014-04-29 15:37:10.534609 7f5c36476700  1 mds.-1.-1  e: '/usr/bin/ceph-mds'
>> 2014-04-29 15:37:10.534612 7f5c36476700  1 mds.-1.-1  0: '/usr/bin/ceph-mds'
>> 2014-04-29 15:37:10.534616 7f5c36476700  1 mds.-1.-1  1: '--cluster=ceph'
>> 2014-04-29 15:37:10.534619 7f5c36476700  1 mds.-1.-1  2: '-i'
>> 2014-04-29 15:37:10.534621 7f5c36476700  1 mds.-1.-1  3: 'mon03'
>> 2014-04-29 15:37:10.534623 7f5c36476700  1 mds.-1.-1  4: '-f'
>> 2014-04-29 15:37:10.534641 7f5c36476700  1 mds.-1.-1  cwd /
>> 2014-04-29 15:37:12.155458 7f8907c8b780  0 ceph version  (), process
>> ceph-mds, pid 26401
>> 2014-04-29 15:37:12.249780 7f8902d10700  1 mds.-1.0 handle_mds_map
>> standby
>>
>> p/s. we ran ceph-mon and ceph-mds on same servers, (mon01,mon02,mon03)
>>
>> I sent to you two log files, mon01 and mon03 where the scenario of mon03 
>> have state->standby->replay->active->respawned. And also, mon01 which is now 
>> running as active as a single MDS at this moment.
>>
>
> After the MDS became active, it did not send a beacon to the monitor. It seems 
> like the MDS was busy doing something else. If this issue still happens, set 
> debug_mds=10 and send the log to me.
>
> Regards
> Yan, Zheng
>
>> Regards,
>> Bazli
>> -Original Message-
>> From: Luke Jing Yuan
>> Sent: Tuesday, April 29, 2014 4:46 PM
>> To: Yan, Zheng
>> Cc: Mohd Bazli Ab Karim; Wong Ming Tat
>> Subject: RE: [ceph-users] Ceph mds laggy and failed assert in function
>> replay md

Re: [ceph-users] Possible repo packaging regression

2014-04-30 Thread Jens Kristian Søgaard

Hi Jeff,


or something else. Downgrading back to leveldb 1.7.0-2 resolved my problem.
Is anyone else seeing this?


It sounds a bit like what I reported here:

http://tracker.ceph.com/issues/7918

--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidconsulting.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Possible repo packaging regression

2014-04-30 Thread Jeff Bachtel
Per http://tracker.ceph.com/issues/6022 leveldb-1.12 was pulled out of 
the ceph-extras repo due to patches applied by a leveldb fork (Basho 
patch). It's back in ceph-extras (since the 28th at least), and on 
CentOS 6 it is causing an abort on mon start when run with the Firefly 
release candidate:


# /etc/init.d/ceph start
=== mon.m1 ===
Starting Ceph mon.m1 on m1...
pthread lock: Invalid argument
*** Caught signal (Aborted) **
 in thread 7fa7524f67a0
 ceph version 0.80-rc1-28-ga027100 
(a0271000c12486d3c5adb2b0732e1c70c3789a4f)

 1: /usr/bin/ceph-mon() [0x86bcb1]
 2: /lib64/libpthread.so.0() [0x35cac0f710]
 3: (gsignal()+0x35) [0x35ca432925]
 4: (abort()+0x175) [0x35ca434105]
 5: (()+0x34d71) [0x7fa752532d71]
 6: (leveldb::DBImpl::Get(leveldb::ReadOptions const&, leveldb::Slice 
const&, leveldb::Value*)+0x50) [0x7fa7

52518120]
 7: (LevelDBStore::_get_iterator()+0x41) [0x826d71]
 8: (MonitorDBStore::exists(std::string const&, std::string 
const&)+0x28) [0x539e88]

 9: (main()+0x13f8) [0x533d78]
 10: (__libc_start_main()+0xfd) [0x35ca41ed1d]
 11: /usr/bin/ceph-mon() [0x530f19]
2014-04-30 10:32:40.397243 7fa7524f67a0 -1 *** Caught signal (Aborted) **
 in thread 7fa7524f67a0

The SRPM for what ended up on ceph-extras wasn't uploaded to the repo, 
so I didn't check to see if it was the Basho patch being applied again 
or something else. Downgrading back to leveldb 1.7.0-2 resolved my problem.


Is anyone else seeing this?
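
In case it helps anyone hitting the same thing, the workaround can be scripted
roughly like this on CentOS 6 (the exact package NVR and the repo file name are
assumptions, adjust to your system):

    # roll back to the known-good leveldb
    yum downgrade leveldb-1.7.0-2

    # keep yum from pulling 1.12 from ceph-extras again
    echo "exclude=leveldb*" >> /etc/yum.repos.d/ceph-extras.repo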

Jeff
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph 0.80 and Ubuntu 14.04

2014-04-30 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 30/04/14 06:32, Henrik Korkuc wrote:
> 
> Ubuntu 14.04 currently ships ceph 0.79. After firefly release
> ubuntu maintainer will update ceph version in ubuntu's repos.

Thanks Henrik - you beat me to it :-)

> 
> On 2014.04.30 07:08, Kenneth wrote:
>> 
>> Latest Ceph release is Firefly v0.80 right? Or is it still in
>> beta? And Ubuntu is on 14.04.
>> 
>> Will I be able to install ceph 0.80 on Ubuntu 14.04 for
>> production? If not, what is the time frame for when I can install
>> the ceph v0.80 on ubuntu 14.04?
>> 


- -- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCAAGBQJTYQntAAoJEL/srsug59jD57sP+gNdjti/98IRQSGDtVaSK58Z
Pn2XB278f9pcalZCu3tEVls1ZLTSvs6Mr5pZP7edNWMZOEK84Zr2B9j77N9qNCmD
AXxPFzD+qViru6xyUC4zjy6Q0TW74Ko42nKsTEXHbQSYTL0+lyUZIYb0NcPZMpYO
5DJ6RdJmjtdEdisJiUnNEONj/AbKnsM4QvdzUXmNxXVkpl8EGMcsWu2lx8eyIp9e
XyCCVjKr3zcAwBZXPl0qp4+9OeYBsOQmE3Bg/gkH9UcYiGhIhQVrghSfayu/0jHt
zLWscucxapt2UxRRh5GazAwEWcPhnYdrdcGi/XMr88rrHIAKLayAT/RywhazrZrV
ypMyNsCW+y2hFjx1BraNme67U/Gi5BnultapIZ/MnbkOvju40vupWxE7K/SdBOOC
T0nVtHN/QfXcGUkXr88YgA8/M50y7O4wsxrGxlk8Nv5EHUAZnBk7VxfLjp7l8597
EvtZ2ctDlP0Pi3ltMzo3FyK2rNSZ/6mN0Bl3ua0W7feElP2LeZhSgSgeayOdsk53
sXakeKaMe4drpEdZs0WeG4pNlij93ks6StscVfV5HKf+vZFvSDXF3dpp/xsif0vo
e8fH8Lkrp/9SrkurMbblIzKL9VgIKgU0RyP6QHtaVWT+6JXZwJhvqe6iPRmm67A4
FEGqGlKfR9JOKa21RbXl
=tRbw
-END PGP SIGNATURE-
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Some PG stay in "incomplete" state

2014-04-30 Thread ??
Two days ago my cluster went down because a hard drive was damaged. Now I have 
fixed the disk and copied the data to a new disk, but some PGs 
stay in the "incomplete" state. How can I solve it? Thank you
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Travis Rhoden
Sage,

Congrats to you and Inktank!

 - Travis


On Wed, Apr 30, 2014 at 9:27 AM, Haomai Wang  wrote:

> Congratulation!
>
> On Wed, Apr 30, 2014 at 8:18 PM, Sage Weil  wrote:
> > Today we are announcing some very big news: Red Hat is acquiring Inktank.
> > We are very excited about what this means for Ceph, the community, the
> > team, our partners, and our customers. Ceph has come a long way in the
> ten
> > years since the first line of code has been written, particularly over
> the
> > last two years that Inktank has been focused on its development. The
> fifty
> > members of the Inktank team, our partners, and the hundreds of other
> > contributors have done amazing work in bringing us to where we are today.
> >
> > We believe that, as part of Red Hat, the Inktank team will be able to
> > build a better quality Ceph storage platform that will benefit the entire
> > ecosystem. Red Hat brings a broad base of expertise in building and
> > delivering hardened software stacks as well as a wealth of resources that
> > will help Ceph become the transformative and ubiquitous storage platform
> > that we always believed it could be.
> >
> > For existing Inktank customers, this is going to mean turning a reliable
> > and robust storage system into something that delivers even more value.
> In
> > particular, joining forces with the Red Hat team will improve our ability
> > to address problems at all layers of the storage stack, including in the
> > kernel. We naturally recognize that many customers and users have built
> > platforms based on other Linux distributions. We will continue to support
> > these installations while we determine how to provide the best customer
> > experience moving forward and how the next iteration of the enterprise
> > Ceph product will be structured. In the meantime, our team remains
> > committed to keeping Ceph an open, multiplatform project that works in
> any
> > environment where it makes sense, including other Linux distributions and
> > non-Linux operating systems.
> >
> > Red Hat is one of only a handful of companies that I trust to steward the
> > Ceph project. When we started Inktank two years ago, our goal was to
> build
> > the business by making Ceph successful as a broad-based, collaborative
> > open source project with a vibrant user, developer, and commercial
> > community. Red Hat shares this vision. They are passionate about open
> > source, and have demonstrated that they are strong and fair stewards with
> > other critical projects (like KVM). Red Hat intends to administer the
> Ceph
> > trademark in a manner that protects the ecosystem as a whole and creates
> a
> > level playing field where everyone is held to the same standards of use.
> > Similarly, policies like "upstream first" ensure that bug fixes and
> > improvements that go into Ceph-derived products are always shared with
> the
> > community to streamline development and benefit all members of the
> > ecosystem.
> >
> > One important change that will take place involves Inktank's product
> > strategy, in which some add-on software we have developed is proprietary.
> > In contrast, Red Hat favors a pure open source model. That means that
> > Calamari, the monitoring and diagnostics tool that Inktank has developed
> > as part of the Inktank Ceph Enterprise product, will soon be open
> sourced.
> >
> > This is a big step forward for the Ceph community. Very little will
> change
> > on day one as it will take some time to integrate the Inktank business
> and
> > for any significant changes to happen with our engineering activities.
> > However, we are very excited about what is coming next for Ceph and are
> > looking forward to this new chapter.
> >
> > I'd like to thank everyone who has helped Ceph get to where we are today:
> > the amazing research group at UCSC where it began, DreamHost for
> > supporting us for so many years, the incredible Inktank team, and the
> many
> > contributors and users that have helped shape the system. We continue to
> > believe that robust, scalable, and completely open storage platforms like
> > Ceph will transform a storage industry that is still dominated by
> > proprietary systems. Let's make it happen!
> >
> > sage
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Best Regards,
>
> Wheat
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Haomai Wang
Congratulation!

On Wed, Apr 30, 2014 at 8:18 PM, Sage Weil  wrote:
> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code has been written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Alexandre DERUMIER
This is a very good news, congratulations !

(do you know if the Ceph Enterprise subscription price will remain the same?
 I'm looking to take support next year)

- Mail original - 

De: "Sage Weil"  
À: ceph-de...@vger.kernel.org, ceph-us...@ceph.com 
Envoyé: Mercredi 30 Avril 2014 14:18:48 
Objet: Red Hat to acquire Inktank 

Today we are announcing some very big news: Red Hat is acquiring Inktank. 
We are very excited about what this means for Ceph, the community, the 
team, our partners, and our customers. Ceph has come a long way in the ten 
years since the first line of code has been written, particularly over the 
last two years that Inktank has been focused on its development. The fifty 
members of the Inktank team, our partners, and the hundreds of other 
contributors have done amazing work in bringing us to where we are today. 

We believe that, as part of Red Hat, the Inktank team will be able to 
build a better quality Ceph storage platform that will benefit the entire 
ecosystem. Red Hat brings a broad base of expertise in building and 
delivering hardened software stacks as well as a wealth of resources that 
will help Ceph become the transformative and ubiquitous storage platform 
that we always believed it could be. 

For existing Inktank customers, this is going to mean turning a reliable 
and robust storage system into something that delivers even more value. In 
particular, joining forces with the Red Hat team will improve our ability 
to address problems at all layers of the storage stack, including in the 
kernel. We naturally recognize that many customers and users have built 
platforms based on other Linux distributions. We will continue to support 
these installations while we determine how to provide the best customer 
experience moving forward and how the next iteration of the enterprise 
Ceph product will be structured. In the meantime, our team remains 
committed to keeping Ceph an open, multiplatform project that works in any 
environment where it makes sense, including other Linux distributions and 
non-Linux operating systems. 

Red Hat is one of only a handful of companies that I trust to steward the 
Ceph project. When we started Inktank two years ago, our goal was to build 
the business by making Ceph successful as a broad-based, collaborative 
open source project with a vibrant user, developer, and commercial 
community. Red Hat shares this vision. They are passionate about open 
source, and have demonstrated that they are strong and fair stewards with 
other critical projects (like KVM). Red Hat intends to administer the Ceph 
trademark in a manner that protects the ecosystem as a whole and creates a 
level playing field where everyone is held to the same standards of use. 
Similarly, policies like "upstream first" ensure that bug fixes and 
improvements that go into Ceph-derived products are always shared with the 
community to streamline development and benefit all members of the 
ecosystem. 

One important change that will take place involves Inktank's product 
strategy, in which some add-on software we have developed is proprietary. 
In contrast, Red Hat favors a pure open source model. That means that 
Calamari, the monitoring and diagnostics tool that Inktank has developed 
as part of the Inktank Ceph Enterprise product, will soon be open sourced. 

This is a big step forward for the Ceph community. Very little will change 
on day one as it will take some time to integrate the Inktank business and 
for any significant changes to happen with our engineering activities. 
However, we are very excited about what is coming next for Ceph and are 
looking forward to this new chapter. 

I'd like to thank everyone who has helped Ceph get to where we are today: 
the amazing research group at UCSC where it began, DreamHost for 
supporting us for so many years, the incredible Inktank team, and the many 
contributors and users that have helped shape the system. We continue to 
believe that robust, scalable, and completely open storage platforms like 
Ceph will transform a storage industry that is still dominated by 
proprietary systems. Let's make it happen! 

sage 
-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majord...@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Asif Murad Khan
Literally, it is very good news for Ceph, but Ceph is now an expensive product
:)


On Wed, Apr 30, 2014 at 6:18 PM, Sage Weil  wrote:

> Today we are announcing some very big news: Red Hat is acquiring Inktank.
> We are very excited about what this means for Ceph, the community, the
> team, our partners, and our customers. Ceph has come a long way in the ten
> years since the first line of code has been written, particularly over the
> last two years that Inktank has been focused on its development. The fifty
> members of the Inktank team, our partners, and the hundreds of other
> contributors have done amazing work in bringing us to where we are today.
>
> We believe that, as part of Red Hat, the Inktank team will be able to
> build a better quality Ceph storage platform that will benefit the entire
> ecosystem. Red Hat brings a broad base of expertise in building and
> delivering hardened software stacks as well as a wealth of resources that
> will help Ceph become the transformative and ubiquitous storage platform
> that we always believed it could be.
>
> For existing Inktank customers, this is going to mean turning a reliable
> and robust storage system into something that delivers even more value. In
> particular, joining forces with the Red Hat team will improve our ability
> to address problems at all layers of the storage stack, including in the
> kernel. We naturally recognize that many customers and users have built
> platforms based on other Linux distributions. We will continue to support
> these installations while we determine how to provide the best customer
> experience moving forward and how the next iteration of the enterprise
> Ceph product will be structured. In the meantime, our team remains
> committed to keeping Ceph an open, multiplatform project that works in any
> environment where it makes sense, including other Linux distributions and
> non-Linux operating systems.
>
> Red Hat is one of only a handful of companies that I trust to steward the
> Ceph project. When we started Inktank two years ago, our goal was to build
> the business by making Ceph successful as a broad-based, collaborative
> open source project with a vibrant user, developer, and commercial
> community. Red Hat shares this vision. They are passionate about open
> source, and have demonstrated that they are strong and fair stewards with
> other critical projects (like KVM). Red Hat intends to administer the Ceph
> trademark in a manner that protects the ecosystem as a whole and creates a
> level playing field where everyone is held to the same standards of use.
> Similarly, policies like "upstream first" ensure that bug fixes and
> improvements that go into Ceph-derived products are always shared with the
> community to streamline development and benefit all members of the
> ecosystem.
>
> One important change that will take place involves Inktank's product
> strategy, in which some add-on software we have developed is proprietary.
> In contrast, Red Hat favors a pure open source model. That means that
> Calamari, the monitoring and diagnostics tool that Inktank has developed
> as part of the Inktank Ceph Enterprise product, will soon be open sourced.
>
> This is a big step forward for the Ceph community. Very little will change
> on day one as it will take some time to integrate the Inktank business and
> for any significant changes to happen with our engineering activities.
> However, we are very excited about what is coming next for Ceph and are
> looking forward to this new chapter.
>
> I'd like to thank everyone who has helped Ceph get to where we are today:
> the amazing research group at UCSC where it began, DreamHost for
> supporting us for so many years, the incredible Inktank team, and the many
> contributors and users that have helped shape the system. We continue to
> believe that robust, scalable, and completely open storage platforms like
> Ceph will transform a storage industry that is still dominated by
> proprietary systems. Let's make it happen!
>
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Asif Murad Khan
Cell: +880-1713-114230
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Red Hat to acquire Inktank

2014-04-30 Thread Sage Weil
Today we are announcing some very big news: Red Hat is acquiring Inktank. 
We are very excited about what this means for Ceph, the community, the 
team, our partners, and our customers. Ceph has come a long way in the ten 
years since the first line of code has been written, particularly over the 
last two years that Inktank has been focused on its development. The fifty 
members of the Inktank team, our partners, and the hundreds of other 
contributors have done amazing work in bringing us to where we are today.

We believe that, as part of Red Hat, the Inktank team will be able to 
build a better quality Ceph storage platform that will benefit the entire 
ecosystem. Red Hat brings a broad base of expertise in building and 
delivering hardened software stacks as well as a wealth of resources that 
will help Ceph become the transformative and ubiquitous storage platform 
that we always believed it could be.

For existing Inktank customers, this is going to mean turning a reliable 
and robust storage system into something that delivers even more value. In 
particular, joining forces with the Red Hat team will improve our ability 
to address problems at all layers of the storage stack, including in the 
kernel. We naturally recognize that many customers and users have built 
platforms based on other Linux distributions. We will continue to support 
these installations while we determine how to provide the best customer 
experience moving forward and how the next iteration of the enterprise 
Ceph product will be structured. In the meantime, our team remains 
committed to keeping Ceph an open, multiplatform project that works in any 
environment where it makes sense, including other Linux distributions and 
non-Linux operating systems.

Red Hat is one of only a handful of companies that I trust to steward the 
Ceph project. When we started Inktank two years ago, our goal was to build 
the business by making Ceph successful as a broad-based, collaborative 
open source project with a vibrant user, developer, and commercial 
community. Red Hat shares this vision. They are passionate about open 
source, and have demonstrated that they are strong and fair stewards with 
other critical projects (like KVM). Red Hat intends to administer the Ceph 
trademark in a manner that protects the ecosystem as a whole and creates a 
level playing field where everyone is held to the same standards of use. 
Similarly, policies like "upstream first" ensure that bug fixes and 
improvements that go into Ceph-derived products are always shared with the 
community to streamline development and benefit all members of the 
ecosystem.

One important change that will take place involves Inktank's product 
strategy, in which some add-on software we have developed is proprietary. 
In contrast, Red Hat favors a pure open source model. That means that 
Calamari, the monitoring and diagnostics tool that Inktank has developed 
as part of the Inktank Ceph Enterprise product, will soon be open sourced.

This is a big step forward for the Ceph community. Very little will change 
on day one as it will take some time to integrate the Inktank business and 
for any significant changes to happen with our engineering activities. 
However, we are very excited about what is coming next for Ceph and are 
looking forward to this new chapter.

I'd like to thank everyone who has helped Ceph get to where we are today: 
the amazing research group at UCSC where it began, DreamHost for 
supporting us for so many years, the incredible Inktank team, and the many 
contributors and users that have helped shape the system. We continue to 
believe that robust, scalable, and completely open storage platforms like 
Ceph will transform a storage industry that is still dominated by 
proprietary systems. Let's make it happen!

sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-mon is taking too much memory. It's a bug?

2014-04-30 Thread Joao Eduardo Luis

On 04/30/2014 10:41 AM, Gonzalo Aguilar Delgado wrote:

Hi,

I've found my system with memory almost full. I see

   PID USUARIO   PR  NIVIRTRESSHR S  %CPU %MEM HORA+ ORDEN
  2317 root  20   0  824860 647856   3532 S   0,7  5,3  29:46.51
ceph-mon

I think it's too much. But what do you think?


Definitely too much.

We've seen this behavior before and there is a ticket currently open for 
it (see http://tracker.ceph.com/issues/8036), as this increasing memory 
consumption will eventually trigger an OOM on machines with low amounts 
of RAM.  However I haven't been able to reproduce it once to actually 
investigate what the problem might be.


Do you have any idea how you got here?

Also, could you please run 'ceph heap stats -m IP:PORT', with IP and PORT of 
that monitor, and send us back the result?


  -Joao


--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to submit bug and patch for ceph?

2014-04-30 Thread Haomai Wang
How to submit patch: https://github.com/ceph/ceph/blob/master/SubmittingPatches

You can register a bug on tracker.ceph.com/projects/ceph/issues
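
For what it's worth, the usual flow looks roughly like this (just a sketch,
assuming a GitHub fork of ceph/ceph; the SubmittingPatches document above has
the authoritative details, including the required Signed-off-by line):

    git clone https://github.com/<your-user>/ceph.git
    cd ceph
    git checkout -b wip-my-fix
    # ... edit, build, test ...
    git commit -s -a -m "osd: fix xyz"    # -s adds the Signed-off-by line
    git push origin wip-my-fix
    # then open a pull request against ceph/ceph on GitHub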

On Wed, Apr 30, 2014 at 4:30 PM, You, Ji  wrote:
> Hi,
>
> A simple question: how do I submit a patch for ceph? I could not find the steps for 
> submitting a patch in the ceph documents.
>
> Thanks
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how can I repair the pg

2014-04-30 Thread Haomai Wang
Oh, sorry, I did not notice it's an incomplete state.

What's the result of "ceph -s"? There should be some OSDs down.

On Wed, Apr 30, 2014 at 5:23 PM, vernon1...@126.com  wrote:
> Hello,
> The pg was "incomplete", and I had tried to repair it before. But it did
> nothing.
>
> 
> vernon1...@126.com
>
> From: Haomai Wang
> Date: 2014-04-30 17:14
> To: vernon1...@126.com
> CC: ceph-users
> Subject: Re: [ceph-users] how can I repair the pg
> You can find the inconsistent pg via "ceph pg dump" and then run
> "ceph pg repair <pgid>"
>
> On Wed, Apr 30, 2014 at 5:00 PM, vernon1...@126.com 
> wrote:
>> Hi,
>> I have some problem now. A large number of osds have down before. When
>> some
>> of them become up, I found a pg was "incomplete". Now this pg's map is
>> [35,29,42].
>> the pg's folders in osd.35 and osd.29 are empty. But there are 9.2G
>> capacity
>> in osd.42.  Like this:
>>
>> here is osd.35
>> [root@ceph952 49.6_head]# ls
>> [root@ceph952 49.6_head]#
>>
>> here is osd.42
>> [root@ceph960 49.6_head]# ls
>> DIR_6  DIR_E
>> [root@ceph960 49.6_head]#
>>
>> I want to know how to repair this pg?
>> And I found, when i stop osd.35, the map change like [0,29,42]. I run
>> "ceph
>> pg 49.6 query", and it show me:
>>
>> [root@ceph960 ~]# ceph pg 49.6 query
>> ... ...
>> "probing_osds": [
>> "(0,255)",
>> "(7,255)",
>> "(20,255)",
>> "(21,255)",
>> "(25,255)",
>> "(26,255)",
>> "(29,255)",
>> "(33,255)",
>> "(34,255)",
>> "(35,255)",
>> "(39,255)",
>> "(41,255)",
>> "(42,255)"],
>>   "down_osds_we_would_probe": [
>> 38],
>>   "peering_blocked_by": []},
>> { "name": "Started",
>>   "enter_time": "2014-04-30 16:52:24.181956"}]}
>>
>> Can I delete all this "probing_osds" but 42, and set the osd.42 as the
>> up_primary ?
>>
>> Thanks.
>>
>> 
>> vernon1...@126.com
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-mon is taking too much memory. It's a bug?

2014-04-30 Thread Gonzalo Aguilar Delgado

Hi,

I've found my system with memory almost full. I see

  PID USUARIO   PR  NIVIRTRESSHR S  %CPU %MEM HORA+ ORDEN
 2317 root  20   0  824860 647856   3532 S   0,7  5,3  29:46.51 
ceph-mon


I think it's too much. But what do you think?

Best regards,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how can I repair the pg

2014-04-30 Thread Haomai Wang
You can find the inconsistent pg via "ceph pg dump" and then run
"ceph pg repair <pgid>"

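For reference, a rough sketch of that workflow from a shell (the pg id 49.6 is
simply the one from your mail; this assumes an admin keyring is available):

    # show PGs that are not active+clean, with their state and acting set
    ceph pg dump | grep -v active+clean

    # ask the primary OSD of a specific placement group to scrub and repair it
    ceph pg repair 49.6

    # watch the cluster while it converges
    ceph -w
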
On Wed, Apr 30, 2014 at 5:00 PM, vernon1...@126.com  wrote:
> Hi,
> I have some problem now. A large number of osds have down before. When some
> of them become up, I found a pg was "incomplete". Now this pg's map is
> [35,29,42].
> the pg's folders in osd.35 and osd.29 are empty. But there are 9.2G capacity
> in osd.42.  Like this:
>
> here is osd.35
> [root@ceph952 49.6_head]# ls
> [root@ceph952 49.6_head]#
>
> here is osd.42
> [root@ceph960 49.6_head]# ls
> DIR_6  DIR_E
> [root@ceph960 49.6_head]#
>
> I want to know how to repair this pg?
> And I found, when i stop osd.35, the map change like [0,29,42]. I run "ceph
> pg 49.6 query", and it show me:
>
> [root@ceph960 ~]# ceph pg 49.6 query
> ... ...
> "probing_osds": [
> "(0,255)",
> "(7,255)",
> "(20,255)",
> "(21,255)",
> "(25,255)",
> "(26,255)",
> "(29,255)",
> "(33,255)",
> "(34,255)",
> "(35,255)",
> "(39,255)",
> "(41,255)",
> "(42,255)"],
>   "down_osds_we_would_probe": [
> 38],
>   "peering_blocked_by": []},
> { "name": "Started",
>   "enter_time": "2014-04-30 16:52:24.181956"}]}
>
> Can I delete all this "probing_osds" but 42, and set the osd.42 as the
> up_primary ?
>
> Thanks.
>
> 
> vernon1...@126.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how can I repair the pg

2014-04-30 Thread vernon1...@126.com
Hi,
I have some problem now. A large number of osds have down before. When some of 
them become up, I found a pg was "incomplete". Now this pg's map is [35,29,42]. 
the pg's folders in osd.35 and osd.29 are empty. But there are 9.2G capacity in 
osd.42.  Like this:

here is osd.35
[root@ceph952 49.6_head]# ls
[root@ceph952 49.6_head]# 

here is osd.42
[root@ceph960 49.6_head]# ls
DIR_6  DIR_E
[root@ceph960 49.6_head]# 

I want to know how to repair this pg?
And I found, when i stop osd.35, the map change like [0,29,42]. I run "ceph pg 
49.6 query", and it show me: 

[root@ceph960 ~]# ceph pg 49.6 query 
... ...
"probing_osds": [
"(0,255)",
"(7,255)",
"(20,255)",
"(21,255)",
"(25,255)",
"(26,255)",
"(29,255)",
"(33,255)",
"(34,255)",
"(35,255)",
"(39,255)",
"(41,255)",
"(42,255)"],
  "down_osds_we_would_probe": [
38],
  "peering_blocked_by": []},
{ "name": "Started",
  "enter_time": "2014-04-30 16:52:24.181956"}]}

Can I delete all this "probing_osds" but 42, and set the osd.42 as the 
up_primary ?

Thanks.




vernon1...@126.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mkcephfs questions

2014-04-30 Thread Haomai Wang
OK, I see now. It looks OK.

According to the log, many OSDs try to boot repeatedly. I think
the problem may be on the monitor side. Could you check the monitor node?
The ceph-mon.log that was provided is blank.

On Wed, Apr 30, 2014 at 3:59 PM, Cao, Buddy  wrote:
> Yes, I set "osd journal size = 0" on purpose; I'd like to use all of the 
> space of the journal device. I think I got the idea from the Ceph website... Yes, I 
> do run "mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin" to 
> create the ceph cluster, and it succeeded.
>
> Do you think "osd journal size=0" would cause any problems?
>
>
> Wei Cao (Buddy)
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Wednesday, April 30, 2014 3:48 PM
> To: Cao, Buddy
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mkcephfs questions
>
> I found "osd journal size = 0" in your ceph.conf?
> Do you really run mkcephfs with this? I think it will fail.
>
> On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy  wrote:
>> Here you go... I did not see any stuck clean related log...
>>
>>
>>
>> Wei Cao (Buddy)
>>
>> -Original Message-
>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>> Sent: Wednesday, April 30, 2014 2:12 PM
>> To: Cao, Buddy
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] mkcephfs questions
>>
>> Hmm, it should be another problem plays. Maybe more logs could explain it.
>>
>> ceph.log
>> ceph-mon.log
>>
>> On Wed, Apr 30, 2014 at 12:06 PM, Cao, Buddy  wrote:
>>> Thanks your reply, Haomai. What I don't understand is that, why the stuck 
>>> unclean pgs keep the same numbers after 12 hours. It's the common behavior 
>>> or not?
>>>
>>>
>>> Wei Cao (Buddy)
>>>
>>> -Original Message-
>>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>>> Sent: Wednesday, April 30, 2014 11:36 AM
>>> To: Cao, Buddy
>>> Cc: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] mkcephfs questions
>>>
>>> The result of "ceph -s" should tell you the reason. There are only
>>> 21 OSDs up but we need 24 OSDs.
>>>
>>> On Wed, Apr 30, 2014 at 11:21 AM, Cao, Buddy  wrote:
 Hi,



 I setup ceph cluster thru mkcephfs command, after I enter “ceph –s”,
 it always returns 4950 stuck unclean pgs. I tried the same “ceph -s”
 after 12 hrs,  there still returns the same unclean pgs number, nothing 
 changed.
 Does mkcephfs always has the problem or I did something wrong? I
 attached the result of “ceph -s”, “ceph osd tree” and ceph.conf I
 have, please kindly help.





 [root@ceph]# ceph -s

 cluster 99fd4ff8-0fb8-47b9-8179-fefbba1c2503

  health HEALTH_WARN 4950 pgs degraded; 4950 pgs stuck unclean;
 recovery
 21/42 objects degraded (50.000%); 3/24 in osds are down; clock skew
 detected on mon.1, mon.2

  monmap e1: 3 mons at
 {0=192.168.0.2:6789/0,1=192.168.0.3:6789/0,2=192.168.0.4:6789/0},
 election epoch 6, quorum 0,1,2 0,1,2

  mdsmap e4: 1/1/1 up {0=0=up:active}

  osdmap e6019: 24 osds: 21 up, 24 in

   pgmap v16445: 4950 pgs, 6 pools, 9470 bytes data, 21 objects

 4900 MB used, 93118 MB / 98019 MB avail

 21/42 objects degraded (50.000%)

 4950 active+degraded



 [root@ceph]# ceph osd tree //part of returns

 # idweight  type name   up/down reweight

 -36 25  root vsm

 -31 3.2 storage_group ssd

 -16 3   zone zone_a_ssd

 -1  1   host vsm2_ssd_zone_a

 2   1   osd.2   up  1

 -6  1   host vsm3_ssd_zone_a

 10  1   osd.10  up  1

 -11 1   host vsm4_ssd_zone_a

 18  1   osd.18  up  1

 -21 0.0 zone zone_c_ssd

 -26 0.0 zone zone_b_ssd

 -33 3.2 storage_group sata

 -18 3   zone zone_a_sata

 -3  1   host vsm2_sata_zone_a

 1   1   osd.1   up  1

 -8  1   host vsm3_sata_zone_a

 9   1   osd.9   up  1

 -13 1   host vsm4_sata_zone_a

 17  1   osd.17  up  1

 -23 0.0 zone zone_c_sata

 -28 0.0 zone zone_b_sata





 Wei Cao (Buddy)




 ___
 ceph-users ma

[ceph-users] How to submit bug and patch for ceph?

2014-04-30 Thread You, Ji
Hi,

A simple question: how do I submit a patch for ceph? I could not find the steps for 
submitting a patch in the ceph documents.

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hey, Where can I find the source code of "class ObjectOperationImpl"?

2014-04-30 Thread peng
In librados.cc, I found the following code:

Step 1.
File: librados.cc
void librados::ObjectWriteOperation::write(uint64_t off, const bufferlist& bl)
{
  ::ObjectOperation *o = (::ObjectOperation *)impl;
  bufferlist c = bl;
  o->write(off, c);
}

Step 2. To find ::ObjectOperation
File: Objecter.h
struct ObjectOperation {
  void write(..) {}     // calls add_data
  void add_data(..) {}  // calls add_op
  void add_op(...) {}   // needs OSDOp
}

Step 3. To find OSDOp
File: osd_types.h
struct OSDOp { ... }

But the question is: how is the data transferred to the RADOS cluster? I
assume there is some socket connection (TCP, etc.) to transfer the data,
but I found nothing about socket connections.

Besides, I found something in IoCtxImpl.cc, and through it I found
ceph_tid_t Objecter::_op_submit(Op *op) in Objecter.cc. It looks like the
real operation is here.

Confused... I'd appreciate any help!
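
For anyone following along, here is a minimal, illustrative sketch of the write
path being discussed (assumptions: a reachable cluster, a readable ceph.conf and
admin keyring in the default locations, and an existing pool named "data"):

#include <rados/librados.hpp>
#include <iostream>

int main() {
  librados::Rados cluster;
  cluster.init("admin");           // connect as client.admin
  cluster.conf_read_file(NULL);    // read the default ceph.conf
  if (cluster.connect() < 0) { std::cerr << "connect failed" << std::endl; return 1; }

  librados::IoCtx io;
  cluster.ioctx_create("data", io);

  librados::bufferlist bl;
  bl.append("hello");

  librados::ObjectWriteOperation op;  // wraps ::ObjectOperation (the "impl" pointer)
  op.write(0, bl);                    // only queues an OSDOp; no I/O happens yet

  // operate() hands the queued ops to IoCtxImpl, which calls
  // Objecter::_op_submit(); the Objecter builds an MOSDOp message and the
  // Messenger layer sends it to the primary OSD. That is where the actual
  // TCP traffic lives, not in ObjectOperation itself.
  int r = io.operate("myobject", &op);
  std::cout << "operate returned " << r << std::endl;

  cluster.shutdown();
  return 0;
}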
  

------------------ Original Message ------------------
From: "John Spray"
Date: 2014-04-29 (Tue) 5:59
To: "peng"
Cc: "ceph-users"
Subject: Re: [ceph-users] Hey, Where can I find the source code of "class ObjectOperationImpl"?

 

  It's not a real class, just a type definition used for the 
ObjectOperation::impl pointer.  The actual object is an ObjectOperation.
 

  src/librados/librados.cc
 1797:  impl = (ObjectOperationImpl *)new ::ObjectOperation;

 

 John

 

 On Tue, Apr 29, 2014 at 10:49 AM, peng  wrote:
  Hey, 
I can find a declaration in librados.hpp, but when I try to find the source
code of ObjectOperationImpl, I find nothing.
  
  
 Is it a ghost class?? 
  
 Confused.. Appreciate any help . 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Cao, Buddy
You mean the "[osd]" and "[osd.x]" information is not necessary anymore? What 
if all the ceph nodes plus the monitor nodes go down: after I reboot all the 
nodes, will the ceph cluster come back to its normal status? And where does the 
ceph cluster read the configuration info that used to be in ceph.conf?


Wei Cao (Buddy)

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Robert Sander
Sent: Wednesday, April 30, 2014 3:47 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

On 30.04.2014 09:38, Cao, Buddy wrote:
> Thanks Robert. The auto-created ceph.conf file in the local working directory is 
> too simple, with almost nothing inside it. How do I know which osd.x were created by 
> ceph-deploy, and how do I populate this kind of necessary information into ceph.conf? 

This information is not necessary any more.

The important information are the monitors' addresses and network addresses of 
public and cluster networks. Plus the cluster fsid.

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mkcephfs questions

2014-04-30 Thread Cao, Buddy
Yes, I set "osd journal size = 0" on purpose; I'd like to use all of the space 
of the journal device. I think I got the idea from the Ceph website... Yes, I do run 
"mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin" to create the 
ceph cluster, and it succeeded.

Do you think "osd journal size=0" would cause any problems?


Wei Cao (Buddy)

-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com] 
Sent: Wednesday, April 30, 2014 3:48 PM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mkcephfs questions

I found "osd journal size = 0" in your ceph.conf?
Do you really run mkcephfs with this? I think it will fail.

On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy  wrote:
> Here you go... I did not see any stuck clean related log...
>
>
>
> Wei Cao (Buddy)
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Wednesday, April 30, 2014 2:12 PM
> To: Cao, Buddy
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mkcephfs questions
>
> Hmm, it should be another problem plays. Maybe more logs could explain it.
>
> ceph.log
> ceph-mon.log
>
> On Wed, Apr 30, 2014 at 12:06 PM, Cao, Buddy  wrote:
>> Thanks your reply, Haomai. What I don't understand is that, why the stuck 
>> unclean pgs keep the same numbers after 12 hours. It's the common behavior 
>> or not?
>>
>>
>> Wei Cao (Buddy)
>>
>> -Original Message-
>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>> Sent: Wednesday, April 30, 2014 11:36 AM
>> To: Cao, Buddy
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] mkcephfs questions
>>
>> The result of "ceph -s" should tell you the reason. There are only
>> 21 OSDs up but we need 24 OSDs.
>>
>> On Wed, Apr 30, 2014 at 11:21 AM, Cao, Buddy  wrote:
>>> Hi,
>>>
>>>
>>>
>>> I setup ceph cluster thru mkcephfs command, after I enter “ceph –s”, 
>>> it always returns 4950 stuck unclean pgs. I tried the same “ceph -s”
>>> after 12 hrs,  there still returns the same unclean pgs number, nothing 
>>> changed.
>>> Does mkcephfs always has the problem or I did something wrong? I 
>>> attached the result of “ceph -s”, “ceph osd tree” and ceph.conf I 
>>> have, please kindly help.
>>>
>>>
>>>
>>>
>>>
>>> [root@ceph]# ceph -s
>>>
>>> cluster 99fd4ff8-0fb8-47b9-8179-fefbba1c2503
>>>
>>>  health HEALTH_WARN 4950 pgs degraded; 4950 pgs stuck unclean; 
>>> recovery
>>> 21/42 objects degraded (50.000%); 3/24 in osds are down; clock skew 
>>> detected on mon.1, mon.2
>>>
>>>  monmap e1: 3 mons at
>>> {0=192.168.0.2:6789/0,1=192.168.0.3:6789/0,2=192.168.0.4:6789/0},
>>> election epoch 6, quorum 0,1,2 0,1,2
>>>
>>>  mdsmap e4: 1/1/1 up {0=0=up:active}
>>>
>>>  osdmap e6019: 24 osds: 21 up, 24 in
>>>
>>>   pgmap v16445: 4950 pgs, 6 pools, 9470 bytes data, 21 objects
>>>
>>> 4900 MB used, 93118 MB / 98019 MB avail
>>>
>>> 21/42 objects degraded (50.000%)
>>>
>>> 4950 active+degraded
>>>
>>>
>>>
>>> [root@ceph]# ceph osd tree //part of returns
>>>
>>> # idweight  type name   up/down reweight
>>>
>>> -36 25  root vsm
>>>
>>> -31 3.2 storage_group ssd
>>>
>>> -16 3   zone zone_a_ssd
>>>
>>> -1  1   host vsm2_ssd_zone_a
>>>
>>> 2   1   osd.2   up  1
>>>
>>> -6  1   host vsm3_ssd_zone_a
>>>
>>> 10  1   osd.10  up  1
>>>
>>> -11 1   host vsm4_ssd_zone_a
>>>
>>> 18  1   osd.18  up  1
>>>
>>> -21 0.0 zone zone_c_ssd
>>>
>>> -26 0.0 zone zone_b_ssd
>>>
>>> -33 3.2 storage_group sata
>>>
>>> -18 3   zone zone_a_sata
>>>
>>> -3  1   host vsm2_sata_zone_a
>>>
>>> 1   1   osd.1   up  1
>>>
>>> -8  1   host vsm3_sata_zone_a
>>>
>>> 9   1   osd.9   up  1
>>>
>>> -13 1   host vsm4_sata_zone_a
>>>
>>> 17  1   osd.17  up  1
>>>
>>> -23 0.0 zone zone_c_sata
>>>
>>> -28 0.0 zone zone_b_sata
>>>
>>>
>>>
>>>
>>>
>>> Wei Cao (Buddy)
>>>
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> Best Regards,
>
> Wheat



--
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mkcephfs questions

2014-04-30 Thread Haomai Wang
I found "osd journal size = 0" in your ceph.conf?
Do you really run mkcephfs with this? I think it will fail.

On Wed, Apr 30, 2014 at 2:42 PM, Cao, Buddy  wrote:
> Here you go... I did not see any stuck clean related log...
>
>
>
> Wei Cao (Buddy)
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Wednesday, April 30, 2014 2:12 PM
> To: Cao, Buddy
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mkcephfs questions
>
> Hmm, it should be another problem plays. Maybe more logs could explain it.
>
> ceph.log
> ceph-mon.log
>
> On Wed, Apr 30, 2014 at 12:06 PM, Cao, Buddy  wrote:
>> Thanks your reply, Haomai. What I don't understand is that, why the stuck 
>> unclean pgs keep the same numbers after 12 hours. It's the common behavior 
>> or not?
>>
>>
>> Wei Cao (Buddy)
>>
>> -Original Message-
>> From: Haomai Wang [mailto:haomaiw...@gmail.com]
>> Sent: Wednesday, April 30, 2014 11:36 AM
>> To: Cao, Buddy
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] mkcephfs questions
>>
>> The result of "ceph -s" should tell you the reason. There are only
>> 21 OSDs up but we need 24 OSDs.
>>
>> On Wed, Apr 30, 2014 at 11:21 AM, Cao, Buddy  wrote:
>>> Hi,
>>>
>>>
>>>
>>> I setup ceph cluster thru mkcephfs command, after I enter “ceph –s”,
>>> it always returns 4950 stuck unclean pgs. I tried the same “ceph -s”
>>> after 12 hrs,  there still returns the same unclean pgs number, nothing 
>>> changed.
>>> Does mkcephfs always has the problem or I did something wrong? I
>>> attached the result of “ceph -s”, “ceph osd tree” and ceph.conf I
>>> have, please kindly help.
>>>
>>>
>>>
>>>
>>>
>>> [root@ceph]# ceph -s
>>>
>>> cluster 99fd4ff8-0fb8-47b9-8179-fefbba1c2503
>>>
>>>  health HEALTH_WARN 4950 pgs degraded; 4950 pgs stuck unclean;
>>> recovery
>>> 21/42 objects degraded (50.000%); 3/24 in osds are down; clock skew
>>> detected on mon.1, mon.2
>>>
>>>  monmap e1: 3 mons at
>>> {0=192.168.0.2:6789/0,1=192.168.0.3:6789/0,2=192.168.0.4:6789/0},
>>> election epoch 6, quorum 0,1,2 0,1,2
>>>
>>>  mdsmap e4: 1/1/1 up {0=0=up:active}
>>>
>>>  osdmap e6019: 24 osds: 21 up, 24 in
>>>
>>>   pgmap v16445: 4950 pgs, 6 pools, 9470 bytes data, 21 objects
>>>
>>> 4900 MB used, 93118 MB / 98019 MB avail
>>>
>>> 21/42 objects degraded (50.000%)
>>>
>>> 4950 active+degraded
>>>
>>>
>>>
>>> [root@ceph]# ceph osd tree   // partial output
>>>
>>> # id    weight  type name               up/down reweight
>>>
>>> -36 25  root vsm
>>>
>>> -31 3.2 storage_group ssd
>>>
>>> -16 3   zone zone_a_ssd
>>>
>>> -1  1   host vsm2_ssd_zone_a
>>>
>>> 2   1   osd.2   up  1
>>>
>>> -6  1   host vsm3_ssd_zone_a
>>>
>>> 10  1   osd.10  up  1
>>>
>>> -11 1   host vsm4_ssd_zone_a
>>>
>>> 18  1   osd.18  up  1
>>>
>>> -21 0.0 zone zone_c_ssd
>>>
>>> -26 0.0 zone zone_b_ssd
>>>
>>> -33 3.2 storage_group sata
>>>
>>> -18 3   zone zone_a_sata
>>>
>>> -3  1   host vsm2_sata_zone_a
>>>
>>> 1   1   osd.1   up  1
>>>
>>> -8  1   host vsm3_sata_zone_a
>>>
>>> 9   1   osd.9   up  1
>>>
>>> -13 1   host vsm4_sata_zone_a
>>>
>>> 17  1   osd.17  up  1
>>>
>>> -23 0.0 zone zone_c_sata
>>>
>>> -28 0.0 zone zone_b_sata
>>>
>>>
>>>
>>>
>>>
>>> Wei Cao (Buddy)
>>>
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Robert Sander
On 30.04.2014 09:38, Cao, Buddy wrote:
> Thanks, Robert. The auto-created ceph.conf file in the local working directory 
> is very minimal; there is almost nothing in it. How do I know which osd.x 
> entries were created by ceph-deploy, and how do I get that kind of necessary 
> information into ceph.conf? 

This information is not necessary any more.

The important pieces of information are the monitors' addresses and the
network addresses of the public and cluster networks, plus the cluster fsid.
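
For example, a minimal ceph.conf along those lines could look like the sketch
below. All values are placeholders: the fsid and monitor addresses are reused
from the "ceph -s" output in the other thread just for illustration, and the
subnets are made up. Use your own cluster's fsid (as shown by "ceph fsid").

    [global]
        fsid = 99fd4ff8-0fb8-47b9-8179-fefbba1c2503
        mon initial members = mon01, mon02, mon03
        mon host = 192.168.0.2, 192.168.0.3, 192.168.0.4
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        public network = 192.168.0.0/24
        cluster network = 10.0.0.0/24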

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [SOLVED] RE: Ceph 0.72.2 installation on Ubuntu 12.04.4 LTS never got active + сlean

2014-04-30 Thread Robert van Leeuwen
>71260 kB used,
> 1396 GB / 1396 GB avail
> 192 active+clean
>
> The only thing that puzzles me is the available space. I have 2x750 GB
> drives and the total amount of available space is indeed 1396 GB, but if
> Ceph automatically creates 2 replicas of every object, then this space
> should be divided by 2, shouldn't it? The correct number should be
> ~700 GB.

Vadim,

Ceph reports the total raw space.
Since different pools can have different replication counts, Ceph always
reports the total space and usage without taking replica counts into account.
When you write something with a replica count of 2, it will show up as using
twice the amount of space.
So writing 1 GB will result in:
1394 GB / 1396 GB avail
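
As a rough sketch of the arithmetic, and of how to check the replica count
yourself (the pool name "rbd" is just the usual default and is an assumption
for your setup):

    # raw totals, as shown by "ceph -s":   2 x 750 GB   -> ~1396 GB raw
    # usable client data with size=2:      ~1396 GB / 2 -> ~698 GB
    # writing 1 GB into a size=2 pool uses ~2 GB raw    -> "1394 GB / 1396 GB avail"
    ceph osd pool get rbd size   # replica count ("size") of the rbd pool
    rados df                     # per-pool usage, not divided by replicas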

Cheers,
Robert van Leeuwen




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Cao, Buddy
Thanks, Robert. The auto-created ceph.conf file in the local working directory 
is very minimal; there is almost nothing in it. How do I know which osd.x 
entries were created by ceph-deploy, and how do I get that kind of necessary 
information into ceph.conf? 


Wei Cao (Buddy)

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Robert Sander
Sent: Wednesday, April 30, 2014 3:32 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

On 30.04.2014 08:18, Cao, Buddy wrote:
> Thanks for your reply, Haomai. There is no /etc/ceph/ceph.conf on any of the 
> ceph nodes; that is why I raised the question in the first place.

ceph-deploy creates the ceph.conf file in the local working directory.
You can distribute that with "ceph-deploy admin".

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Robert Sander
On 30.04.2014 08:18, Cao, Buddy wrote:
> Thanks for your reply, Haomai. There is no /etc/ceph/ceph.conf on any of the 
> ceph nodes; that is why I raised the question in the first place.

ceph-deploy creates the ceph.conf file in the local working directory.
You can distribute that with "ceph-deploy admin".
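
For example, from the working directory that contains ceph.conf (the hostnames
below are placeholders):

    # push ceph.conf plus the admin keyring to /etc/ceph on each node
    ceph-deploy admin node1 node2 node3
    # or push only the config file
    ceph-deploy config push node1 node2 node3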

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-30 Thread Mohd Bazli Ab Karim
Hi Zheng,

Sorry for the late reply. For sure, I will try this again after we have
completely verified all of the content in the file system. Hopefully all will
be good. And, please confirm: I will set debug_mds=10 for the ceph-mds; do you
want me to send the ceph-mon log too?

BTW, how can I confirm whether the mds has sent its beacon to the mon or not?
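
In case it helps, this is roughly how I plan to raise the mds debug level; the
exact command syntax may vary a little between releases, so treat it as a
sketch (the mds name "mon01" is just our local naming):

    # persistent: in ceph.conf on the MDS hosts, then restart ceph-mds
    [mds]
        debug mds = 10

    # or at runtime, without a restart, addressing the mds by rank
    ceph mds tell 0 injectargs '--debug-mds 10'

As far as I remember, with debug mds at 10 the mds log also records the
beacons it sends to the monitors (grep the log for "beacon"), which would be
one way to see whether the beacon is going out at all.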

Thank you so much Zheng!

Bazli

-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Tuesday, April 29, 2014 10:13 PM
To: Mohd Bazli Ab Karim
Cc: Luke Jing Yuan; Wong Ming Tat
Subject: Re: [ceph-users] Ceph mds laggy and failed assert in function replay 
mds/journal.cc

On Tue, Apr 29, 2014 at 5:30 PM, Mohd Bazli Ab Karim  
wrote:
> Hi Zheng,
>
> The another issue that Luke mentioned just now was like this.
> At first, we ran one mds (mon01) with the new compiled ceph-mds. It works 
> fine with only one MDS running at that time. However, when we ran two more 
> MDSes mon02 mon03 with the new compiled ceph-mds, it started acting weird.
> Mon01 which was became active at first, will have the error and started to 
> respawning. Once respawning happened, mon03 will take over from mon01 as 
> master mds, and replay happened again.
> Again, when mon03 became active, it will have the same error like below, and 
> respawning again. So, it seems to me that replay will continue to happen from 
> one mds to another when they got respawned.
>
> 2014-04-29 15:36:24.917798 7f5c36476700  1 mds.0.server
> reconnect_clients -- 1 sessions
> 2014-04-29 15:36:24.919620 7f5c2fb3e700  0 -- 10.4.118.23:6800/26401
> >> 10.1.64.181:0/1558263174 pipe(0x2924f5780 sd=41 :6800 s=0 pgs=0
> cs=0 l=0 c=0x37056e0).accept peer addr is really
> 10.1.64.181:0/1558263174 (socket is 10.1.64.181:57649/0)
> 2014-04-29 15:36:24.921661 7f5c36476700  0 log [DBG] : reconnect by
> client.884169 10.1.64.181:0/1558263174 after 0.003774
> 2014-04-29 15:36:24.921786 7f5c36476700  1 mds.0.12858 reconnect_done
> 2014-04-29 15:36:25.109391 7f5c36476700  1 mds.0.12858 handle_mds_map
> i am now mds.0.12858
> 2014-04-29 15:36:25.109413 7f5c36476700  1 mds.0.12858 handle_mds_map
> state change up:reconnect --> up:rejoin
> 2014-04-29 15:36:25.109417 7f5c36476700  1 mds.0.12858 rejoin_start
> 2014-04-29 15:36:26.918067 7f5c36476700  1 mds.0.12858
> rejoin_joint_start
> 2014-04-29 15:36:33.520985 7f5c36476700  1 mds.0.12858 rejoin_done
> 2014-04-29 15:36:36.252925 7f5c36476700  1 mds.0.12858 handle_mds_map
> i am now mds.0.12858
> 2014-04-29 15:36:36.252927 7f5c36476700  1 mds.0.12858 handle_mds_map
> state change up:rejoin --> up:active
> 2014-04-29 15:36:36.252932 7f5c36476700  1 mds.0.12858 recovery_done -- 
> successful recovery!
> 2014-04-29 15:36:36.745833 7f5c36476700  1 mds.0.12858 active_start
> 2014-04-29 15:36:36.987854 7f5c36476700  1 mds.0.12858 cluster recovered.
> 2014-04-29 15:36:40.182604 7f5c36476700  0 mds.0.12858
> handle_mds_beacon no longer laggy
> 2014-04-29 15:36:57.947441 7f5c2fb3e700  0 -- 10.4.118.23:6800/26401
> >> 10.1.64.181:0/1558263174 pipe(0x2924f5780 sd=41 :6800 s=2 pgs=156
> cs=1 l=0 c=0x37056e0).fault with nothing to send, going to standby
> 2014-04-29 15:37:10.534593 7f5c36476700  1 mds.-1.-1 handle_mds_map i
> (10.4.118.23:6800/26401) dne in the mdsmap, respawning myself
> 2014-04-29 15:37:10.534604 7f5c36476700  1 mds.-1.-1 respawn
> 2014-04-29 15:37:10.534609 7f5c36476700  1 mds.-1.-1  e: '/usr/bin/ceph-mds'
> 2014-04-29 15:37:10.534612 7f5c36476700  1 mds.-1.-1  0: '/usr/bin/ceph-mds'
> 2014-04-29 15:37:10.534616 7f5c36476700  1 mds.-1.-1  1: '--cluster=ceph'
> 2014-04-29 15:37:10.534619 7f5c36476700  1 mds.-1.-1  2: '-i'
> 2014-04-29 15:37:10.534621 7f5c36476700  1 mds.-1.-1  3: 'mon03'
> 2014-04-29 15:37:10.534623 7f5c36476700  1 mds.-1.-1  4: '-f'
> 2014-04-29 15:37:10.534641 7f5c36476700  1 mds.-1.-1  cwd /
> 2014-04-29 15:37:12.155458 7f8907c8b780  0 ceph version  (), process
> ceph-mds, pid 26401
> 2014-04-29 15:37:12.249780 7f8902d10700  1 mds.-1.0 handle_mds_map
> standby
>
> p/s. we ran ceph-mon and ceph-mds on same servers, (mon01,mon02,mon03)
>
> I sent you two log files, mon01 and mon03, where mon03 goes through the 
> sequence standby->replay->active->respawned, and mon01 is the one now 
> running as the active (and only) MDS at this moment.
>

After the MDS became active, it did not send a beacon to the monitor. It seems 
like the MDS was busy doing something else. If this issue still happens, set 
debug_mds=10 and send the log to me.

Regards
Yan, Zheng

> Regards,
> Bazli
> -Original Message-
> From: Luke Jing Yuan
> Sent: Tuesday, April 29, 2014 4:46 PM
> To: Yan, Zheng
> Cc: Mohd Bazli Ab Karim; Wong Ming Tat
> Subject: RE: [ceph-users] Ceph mds laggy and failed assert in function
> replay mds/journal.cc
>
> Hi Zheng,
>
> Thanks for the information. Actually, we encountered another issue. In our 
> original setup we have 3 MDSes running (say mon01, mon02 and mon03), and when 
> we did the replay/recovery we did it on mon01. After we compl