Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Henrik Korkuc
On 17-09-07 02:42, Deepak Naidu wrote: Hope collective feedback helps. So here's one. - Not a lot of people seem to run the "odd" releases (e.g., infernalis, kraken). I think the more obvious reason is that companies/users wanting to use Ceph will stick with LTS versions, as it models the 3yr

Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Brad Hubbard
These error logs look like they are being generated here, https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L8987-L8993 or possibly here, https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L9230-L9236. Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs

[ceph-users] RGW snapshot

2017-09-06 Thread donglifec...@gmail.com
Yehuda, Is there any way to create snapshots of individual buckets? I don't find this feature now. Can you give me some ideas? Thanks a lot. donglifec...@gmail.com

Re: [ceph-users] PCIe journal benefit for SSD OSDs

2017-09-06 Thread Christian Balzer
Hello, On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote: > We are planning a Jewel filestore based cluster for a performance > sensitive healthcare client, and the conservative OSD choice is > Samsung SM863A. > While I totally see where you're coming from and me having stated that I'll

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Deepak Naidu
Hope collective feedback helps. So here's one. >>- Not a lot of people seem to run the "odd" releases (e.g., infernalis, >>kraken). I think the more obvious reason is that companies/users wanting to use Ceph will stick with LTS versions, as it models the 3yr support cycle. >>* Drop the odd releases,

[ceph-users] Client features by IP?

2017-09-06 Thread Bryan Stillwell
I was reading this post by Josh Durgin today and was pretty happy to see we can get a summary of features that clients are using with the 'ceph features' command: http://ceph.com/community/new-luminous-upgrade-complete/ However, I haven't found an option to display the IP address of those
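For reference, 'ceph features' only aggregates counts; one way to tie features to addresses (a sketch, assuming you can reach the mon's admin socket and that the mon id matches the short hostname) is the mon's sessions dump, which lists each client's IP together with its feature bits:

    # Aggregate view of connected client features:
    ceph features
    # Per-session view, including client addresses and feature bits
    # (mon id is an assumption; use your actual mon name):
    ceph daemon mon.$(hostname -s) sessions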

Re: [ceph-users] RadosGW ADMIN API

2017-09-06 Thread Robin H. Johnson
On Wed, Sep 06, 2017 at 02:08:14PM +, Engelmann Florian wrote: > we are running a luminous cluster and three radosgw to serve an S3-compatible > objectstore. As we are (currently) not using Openstack we have to use the > RadosGW Admin API to get our billing data. I tried to access the API

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Eric Eastman
I have been working with Ceph for the last several years and I help support multiple Ceph clusters. I would like to have the team drop the Even/Odd release schedule, and go to an all production release schedule. I would like releases on no more than a 9-month schedule, with smaller incremental

Re: [ceph-users] Modification Time of RBD Images

2017-09-06 Thread Jason Dillaman
No support for that yet -- it's being tracked by a backlog ticket [1]. [1] https://trello.com/c/npmsOgM5 On Wed, Sep 6, 2017 at 12:27 PM, Christoph Adomeit wrote: > Now that we are 2 years and some ceph releases further along and have bluestore: > > Are there meanwhile

Re: [ceph-users] Modification Time of RBD Images

2017-09-06 Thread Christoph Adomeit
Now that we are 2 years and some ceph releases further along and have bluestore: Are there meanwhile any better ways to find out the mtime of an rbd image? Thanks Christoph On Thu, Nov 26, 2015 at 06:50:46PM +0100, Jan Schermer wrote: > Find in which block the filesystem on your RBD image stores
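No native mtime exists for RBD images at this point, but a rough workaround (a sketch with example pool/image names, not an official method) is to stat the image's RADOS objects, since rados reports per-object modification times:

    # Find the image's internal id / block name prefix:
    rbd info rbd/myimage | grep block_name_prefix
    # Stat the header object; 'rados stat' prints its mtime.
    # Note the header mtime only moves on metadata changes; data
    # writes touch the rbd_data.* objects instead:
    rados -p rbd stat rbd_header.<id>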

Re: [ceph-users] ceph OSD journal (with dmcrypt) replacement

2017-09-06 Thread M Ranga Swami Reddy
Thank you. I am able to replace the dmcrypt journal successfully. On Sep 5, 2017 18:14, "David Turner" wrote: > Did the journal drive fail during operation? Or was it taken out during > pre-failure. If it fully failed, then most likely you can't guarantee the > consistency
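For reference, a sketch of the pre-failure (clean) journal swap on filestore, assuming the OSD can be stopped and the new dmcrypt-mapped journal device already exists; paths and the OSD id are examples:

    # Stop the OSD and flush its journal into the filestore:
    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal
    # Repoint the journal symlink at the new device, then
    # initialize a fresh journal and restart:
    ln -sf /dev/mapper/new-journal /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal
    systemctl start ceph-osd@12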

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Joao Eduardo Luis
On 09/06/2017 04:23 PM, Sage Weil wrote: * Keep even/odd pattern, but force a 'train' model with a more regular cadence + predictable schedule - some features will miss the target and be delayed a year Personally, I think a predictable schedule is the way to go. Two major reasons come

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Ken Dreyer
On Wed, Sep 6, 2017 at 9:23 AM, Sage Weil wrote: > * Keep even/odd pattern, but force a 'train' model with a more regular > cadence > > + predictable schedule > - some features will miss the target and be delayed a year This one (#2, regular release cadence) is the one I

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Jack
Hi Sage, The one option I do not want for Ceph is the last one: support upgrade across multiple LTS versions. I'd rather wait 3 months for a better release (both in terms of functions and quality) than seeing the Ceph team exhausted, having to maintain for years a lot more releases and code

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Alex Gorbachev
On Wed, Sep 6, 2017 at 11:23 AM Sage Weil wrote: > Hi everyone, > > Traditionally, we have done a major named "stable" release twice a year, > and every other such release has been an "LTS" release, with fixes > backported for 1-2 years. > > With kraken and luminous we missed

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Kingsley Tart
On Wed, 2017-09-06 at 15:23 +, Sage Weil wrote: > Hi everyone, > > Traditionally, we have done a major named "stable" release twice a year, > and every other such release has been an "LTS" release, with fixes > backported for 1-2 years. > > With kraken and luminous we missed our schedule

Re: [ceph-users] RBD: How many snapshots is too many?

2017-09-06 Thread Florian Haas
Hi Greg, thanks for your insight! I do have a few follow-up questions. On 09/05/2017 11:39 PM, Gregory Farnum wrote: >> It seems to me that there still isn't a good recommendation along the >> lines of "try not to have more than X snapshots per RBD image" or "try >> not to have more than Y

Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Bryan Banister
Very new to Ceph but a long-time sys admin who is jaded/opinionated. My 2 cents: 1) This sounds like a perfect thing to put in a poll and ask/beg people to vote. Hopefully that will get you more of a response from a larger number of users. 2) Given that the value of the odd

[ceph-users] Ceph release cadence

2017-09-06 Thread Sage Weil
Hi everyone, Traditionally, we have done a major named "stable" release twice a year, and every other such release has been an "LTS" release, with fixes backported for 1-2 years. With kraken and luminous we missed our schedule by a lot: instead of releasing in October and April we released in

[ceph-users] Changing RGW pool default

2017-09-06 Thread Bruno Carvalho
Hello friends, I have a question. Is it possible to rename the default.rgw pools used by radosgw when they already contain stored data? Tested on versions: ceph 10.2.7 and 10.2.9. I already tried to change the metadata of the region and zone and renamed the pools, but I
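The general shape of that rename dance, as a sketch rather than a verified recipe (pool names are examples; the radosgw daemons should be stopped or restarted around it):

    # Rename the pools at the RADOS level:
    ceph osd pool rename default.rgw.buckets.data new.rgw.buckets.data
    # Update the zone's pool references to match:
    radosgw-admin zone get --rgw-zone=default > zone.json
    # (edit zone.json so each pool entry points at the new names)
    radosgw-admin zone set --rgw-zone=default --infile zone.json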

Re: [ceph-users] ceph mgr unknown version

2017-09-06 Thread Piotr Dzionek
Oh, I see that this is probably a bug: http://tracker.ceph.com/issues/21260 I also noticed the following errors in mgr logs: 2017-09-06 16:41:08.537577 7f34c0a7a700 1 mgr send_beacon active 2017-09-06 16:41:08.539161 7f34c0a7a700 1 mgr[restful] Unknown request '' 2017-09-06

Re: [ceph-users] Ceph Developers Monthly - September

2017-09-06 Thread Haomai Wang
Oh, I'm on a flight at that time On Wed, Sep 6, 2017 at 6:28 PM, Joao Eduardo Luis wrote: > On 09/06/2017 06:06 AM, Leonardo Vaz wrote: >> >> Hey cephers, >> >> The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm >> Eastern Time (EDT), in an APAC-friendly time

[ceph-users] ceph mgr unknown version

2017-09-06 Thread Piotr Dzionek
Hi, I ran a small test two-node ceph cluster - version 12.2.0. It has 28 osds, 1 mon and 2 mgrs. It runs fine, however I noticed this strange thing in the output of the ceph versions command: # ceph versions { "mon": { "ceph version 12.2.0

Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Thomas Coelho
Hi, I have the same problem. A bug [1] has been reported for months, but unfortunately it is not fixed yet. I hope that if more people are having this problem the developers can reproduce and fix it. I was using Kernel-RBD with a Cache Tier. So long, Thomas Coelho [1]

[ceph-users] RadosGW ADMIN API

2017-09-06 Thread Engelmann Florian
Hi, we are running a luminous cluster and three radosgw to serve an S3-compatible objectstore. As we are (currently) not using Openstack we have to use the RadosGW Admin API to get our billing data. I tried to access the API with python like: [...] import rgwadmin [...] Users =
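A minimal sketch of that approach with the python rgwadmin library, assuming an admin user with the relevant Admin API caps (e.g. usage=read, users=read); the credentials, endpoint, and exact method signature below are assumptions worth checking against the installed version:

    from rgwadmin import RGWAdmin

    # Placeholder credentials and endpoint -- substitute your own.
    rgw = RGWAdmin(access_key='ACCESS_KEY',
                   secret_key='SECRET_KEY',
                   server='rgw.example.com')

    # Per-user usage data, the usual basis for billing.
    usage = rgw.get_usage(show_entries=True, show_summary=True)
    print(usage)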

Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Henrik Korkuc
On 17-09-06 16:24, Jean-Francois Nadeau wrote: Hi, On a 4 node / 48 OSDs Luminous cluster I'm giving RBD on EC pools + Bluestore a try. Setup went fine, but after a few bench runs several OSDs are failing and many won't even restart. ceph osd erasure-code-profile set myprofile \ k=2 \

[ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Jean-Francois Nadeau
Hi, On a 4 node / 48 OSDs Luminous cluster I'm giving RBD on EC pools + Bluestore a try. Setup went fine, but after a few bench runs several OSDs are failing and many won't even restart. ceph osd erasure-code-profile set myprofile \ k=2 \ m=1 \ crush-failure-domain=host ceph osd pool
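For comparison, a sketch of the full Luminous-era setup this describes (pool names, sizes, and PG counts are examples): RBD on erasure coding needs allow_ec_overwrites on the EC data pool, and the image itself must live in a replicated pool with --data-pool pointing at the EC pool:

    ceph osd erasure-code-profile set myprofile \
        k=2 m=1 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure myprofile
    ceph osd pool set ecpool allow_ec_overwrites true
    # Metadata in the replicated 'rbd' pool, data in the EC pool:
    rbd create --size 100G --data-pool ecpool rbd/myimage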

[ceph-users] PCIe journal benefit for SSD OSDs

2017-09-06 Thread Alex Gorbachev
We are planning a Jewel filestore based cluster for a performance sensitive healthcare client, and the conservative OSD choice is Samsung SM863A. I am going to put an 8GB Areca HBA in front of it to cache small metadata operations, but was wondering if anyone has seen a positive impact from also

[ceph-users] MDS crashes shortly after startup while trying to purge stray files.

2017-09-06 Thread Micha Krause
Hi, I was deleting a lot of hard linked files, when "something" happened. Now my mds starts for a few seconds, writes a lot of these lines: -43> 2017-09-06 13:51:43.396588 7f9047b21700 10 log_client will send 2017-09-06 13:51:40.531563 mds.0 10.210.32.12:6802/2735447218 4963 : cluster

Re: [ceph-users] Ceph Developers Monthly - September

2017-09-06 Thread Joao Eduardo Luis
On 09/06/2017 06:06 AM, Leonardo Vaz wrote: Hey cephers, The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm Eastern Time (EDT), in an APAC-friendly time slot. As much as I would love to attend and discuss some topics (especially the RADOS replication stuff), this is an

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
Okie thanks all, will hold off. -Original Message- From: Ilya Dryomov [mailto:idryo...@gmail.com] Sent: 06 September 2017 17:58 To: Ashley Merrick Cc: Henrik Korkuc; ceph-us...@ceph.com Subject: Re: [ceph-users] Luminous Upgrade KRBD On Wed,

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ilya Dryomov
On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick wrote: > Only driver for it was to be able to use this: > > http://docs.ceph.com/docs/master/rados/operations/upmap/ > > To see if it would help with the current very uneven PG map across 100+ OSDs, > something that can wait if

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
Only driver for it was to be able to use this: http://docs.ceph.com/docs/master/rados/operations/upmap/ To see if it would help with the current very uneven PG map across 100+ OSDs, something that can wait if the current kernel isn't ready. ,Ashley -Original Message- From: Ilya Dryomov
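For context, the mons refuse upmap until every connected client reports Luminous-level features; a sketch of the gating steps (the balancer commands assume the Luminous mgr balancer module):

    # Check connected clients' feature levels first:
    ceph features
    # Only once everything is luminous-capable:
    ceph osd set-require-min-compat-client luminous
    # Then let the balancer use upmap:
    ceph balancer mode upmap
    ceph balancer on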

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread ceph
Quick drop-in, if this is a suitable solution: rbd-nbd. This will give you, for a small performance cost, a block device using librbd (in userspace). On 06/09/2017 11:08, Ilya Dryomov wrote: > On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc wrote: >> On 17-09-06 09:10, Ashley
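A usage sketch (pool/image names are examples): rbd-nbd maps the image through librbd in userspace, so kernel feature support stops being the constraint:

    rbd-nbd map rbd/myimage      # prints the device, e.g. /dev/nbd0
    mkfs.ext4 /dev/nbd0          # use it like any block device
    rbd-nbd unmap /dev/nbd0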

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ilya Dryomov
On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc wrote: > On 17-09-06 09:10, Ashley Merrick wrote: > > I was just going by: docs.ceph.com/docs/master/start/os-recommendations/ > > > Which states 4.9 > > > docs.ceph.com/docs/master/rados/operations/crush-map > > > Only goes as far

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Henrik Korkuc
On 17-09-06 09:10, Ashley Merrick wrote: I was just going by: docs.ceph.com/docs/master/start/os-recommendations/ Which states 4.9 docs.ceph.com/docs/master/rados/operations/crush-map Only goes as far as Jewel and states 4.5 Not sure where else I can find a concrete answer as to whether 4.10

Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
I was just going by: docs.ceph.com/docs/master/start/os-recommendations/ Which states 4.9 docs.ceph.com/docs/master/rados/operations/crush-map Only goes as far as Jewel and states 4.5 Not sure where else I can find a concrete answer as to whether 4.10 is new enough. ,Ashley