Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Henrik Korkuc

On 17-09-07 02:42, Deepak Naidu wrote:

Hope collective feedback helps. So here's one.


- Not a lot of people seem to run the "odd" releases (e.g., infernalis, kraken).

I think the more obvious reason is that companies/users wanting to use Ceph will
stick with LTS versions, as those model the typical 3yr support cycle.
Maybe I missed something, but I think Ceph does not support LTS releases 
for 3 years.



* Drop the odd releases, and aim for a ~9 month cadence. This splits the 
difference between the current even/odd pattern we've been doing.

Yes, provided an easy upgrade process.


--
Deepak




-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage 
Weil
Sent: Wednesday, September 06, 2017 8:24 AM
To: ceph-de...@vger.kernel.org; ceph-maintain...@ceph.com; ceph-us...@ceph.com
Subject: [ceph-users] Ceph release cadence

Hi everyone,

Traditionally, we have done a major named "stable" release twice a year, and every other 
such release has been an "LTS" release, with fixes backported for 1-2 years.

With kraken and luminous we missed our schedule by a lot: instead of releasing 
in October and April we released in January and August.

A few observations:

- Not a lot of people seem to run the "odd" releases (e.g., infernalis, kraken).  
This limits the value of actually making them.  It also means that those who *do* run them 
are running riskier code (fewer users -> more bugs).

- The more recent requirement that upgrading clusters must make a stop at each LTS
(e.g., hammer -> luminous not supported, must go hammer -> jewel -> luminous) has
been hugely helpful on the development side by reducing the amount of cross-version
compatibility code to maintain and reducing the number of upgrade combinations to test.

- When we try to do a time-based "train" release cadence, there always seems to be some 
"must-have" thing that delays the release a bit.  This doesn't happen as much with the 
odd releases, but it definitely happens with the LTS releases.  When the next LTS is a year away, 
it is hard to suck it up and wait that long.

A couple of options:

* Keep even/odd pattern, and continue being flexible with release dates

   + flexible
   - unpredictable
   - odd releases of dubious value

* Keep even/odd pattern, but force a 'train' model with a more regular cadence

   + predictable schedule
   - some features will miss the target and be delayed a year

* Drop the odd releases but change nothing else (i.e., 12-month release
cadence)

   + eliminate the confusing odd releases with dubious value
  
* Drop the odd releases, and aim for a ~9 month cadence. This splits the difference between the current even/odd pattern we've been doing.


   + eliminate the confusing odd releases with dubious value
   + waiting for the next release isn't quite as bad
   - required upgrades every 9 months instead of every 12 months

* Drop the odd releases, but relax the "must upgrade through every LTS" to allow 
upgrades across 2 versions (e.g., luminous -> mimic or luminous -> nautilus).  Shorten 
release cycle (~6-9 months).

   + more flexibility for users
   + downstreams have greater choice in adopting an upstream release
   - more LTS branches to maintain
   - more upgrade paths to consider

Other options we should consider?  Other thoughts?

Thanks!
sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Brad Hubbard
These error logs look like they are being generated here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L8987-L8993
or possibly here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L9230-L9236.

Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
2017-09-05 17:02:58.686723 7fe1871ac700 -1
bluestore(/var/lib/ceph/osd/ceph-12) _txc_add_transaction error (2) No
such file or directory not handled on operation 15 (op 0, counting
from 0)

The table of operations is here,
https://github.com/ceph/ceph/blob/master/src/os/ObjectStore.h#L370

Operation 15 is OP_SETATTRS so it appears to be some extended
attribute operation that is failing.

Can someone run the ceph-osd under strace and find the last system
call (probably a call that manipulates xattrs) that returns -2 in the
thread that crashes (or that outputs the above messages)?

strace -fvttyyTo /tmp/strace.out -s 1024 ceph-osd [system specific
arguments]

Capturing logs with "debug_bluestore = 20" may tell us more as well.
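
If it helps, one way to bump that logging is sketched below (the osd id is a
placeholder, and a daemon that crashes at startup will need the ceph.conf
route, since injectargs only works while it is running):

    # at runtime, while the daemon is up:
    ceph tell osd.12 injectargs '--debug_bluestore 20'
    # or persistently, in ceph.conf before restarting the OSD:
    #   [osd]
    #   debug bluestore = 20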

We need to work out what resource it is trying to access when it
receives the error '2' (No such file or directory).


On Thu, Sep 7, 2017 at 12:13 AM, Thomas Coelho
 wrote:
> Hi,
>
> I have the same problem. A bug [1] has been open for months, but
> unfortunately it is not fixed yet. I hope that if more people are having
> this problem the developers can reproduce and fix it.
>
> I was using Kernel-RBD with a Cache Tier.
>
> so long
> Thomas Coelho
>
> [1] http://tracker.ceph.com/issues/20222
>
>
> On 06.09.2017 at 15:41, Henrik Korkuc wrote:
>> On 17-09-06 16:24, Jean-Francois Nadeau wrote:
>>> Hi,
>>>
>>> On a 4 node / 48 OSDs Luminous cluster I'm giving a try at RBD on EC
>>> pools + Bluestore.
>>>
>>> Setup went fine but after a few bench runs several OSDs are failing and
>>> many won't even restart.
>>>
>>> ceph osd erasure-code-profile set myprofile \
>>>k=2\
>>>m=1 \
>>>crush-failure-domain=host
>>> ceph osd pool create mypool 1024 1024 erasure myprofile
>>> ceph osd pool set mypool allow_ec_overwrites true
>>> rbd pool init mypool
>>> ceph -s
>>> ceph health detail
>>> ceph osd pool create metapool 1024 1024 replicated
>>> rbd create --size 1024G --data-pool mypool --image metapool/test1
>>> rbd bench -p metapool test1 --io-type write --io-size 8192
>>> --io-pattern rand --io-total 10G
>>> ...
>>>
>>>
>>> One of many OSD failing logs
>>>
>>> Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs systemd[1]:
>>> Started Ceph object storage daemon osd.12.
>>> Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> starting osd.12 at - osd_data /var/lib/ceph/osd/ceph-12
>>> /var/lib/ceph/osd/ceph-12/journal
>>> Sep 05 17:02:56 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> 2017-09-05 17:02:56.627301 7fe1a2e42d00 -1 osd.12 2219 log_to_monitors
>>> {default=true}
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> 2017-09-05 17:02:58.686723 7fe1871ac700 -1
>>> bluestore(/var/lib/ceph/osd/ceph-12) _txc_add_transac
>>> tion error (2) No such file or directory not handled on operation 15
>>> (op 0, counting from 0)
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> 2017-09-05 17:02:58.686742 7fe1871ac700 -1
>>> bluestore(/var/lib/ceph/osd/ceph-12) unexpected error
>>>  code
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
>>> centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
>>> In function 'void BlueStore::_txc_add_transaction(Blu
>>> eStore::TransContext*, ObjectStore::Transaction*)' thread 7fe1871ac700
>>> time 2017-09-05 17:02:58.686821
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
>>> centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
>>> 9282: FAILED assert(0 == "unexpected error")
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>>> ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c)
>>> luminous (rc)
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 1:
>>> (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>> const*)+0x110) [0x7fe1a38bf510]
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 2:
>>> (BlueStore::_txc_add_transaction(BlueStore::TransContext*,
>>> ObjectStore::Transaction*)+0x1487)
>>>  [0x7fe1a3796057]
>>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 3:
>>> (BlueStore::queue_transactions(ObjectStore::Sequencer*,
>>> std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&,
>>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0)
>>> [0x7fe1a37970a0]
>>> Sep 

[ceph-users] RGW snapshot

2017-09-06 Thread donglifec...@gmail.com
Yehuda,

Is there any way to create snapshots of individual buckets? I don't find this 
feature now.  
Can you  give me some ideas?

Thanks a lot.



donglifec...@gmail.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] PCIe journal benefit for SSD OSDs

2017-09-06 Thread Christian Balzer

Hello,

On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:

> We are planning a Jewel filestore based cluster for a performance
> sensitive healthcare client, and the conservative OSD choice is
> Samsung SM863A.
> 

While I totally see where you're coming from, and I have stated that
I'll give Luminous and Bluestore some time to mature, I'd also be looking
into them if I were in the planning phase now, with like 3 months
before deployment.
The inherent performance increase with Bluestore (and having something
that hopefully won't need touching/upgrading for a while) shouldn't be
ignored. 

The SSDs are fine, I've been starting to use those recently (though not
with Ceph yet) as Intel DC S36xx or 37xx are impossible to get.
They're a bit slower in the write IOPS department, but good enough for me.

I'm sure you considered this, but you really want to have a good grip on
the data (write) volume, since with in-line journals and other overheads
that 3.6 DWPD endurance is going to wind up closer to 1, 1.5 at best
(in-line journals alone write everything twice, halving the effective
endurance before any other overhead).

> I am going to put an 8GB Areca HBA in front of it to cache small
> metadata operations, 

I see we share the same taste in HW, I vastly prefer Areca over anything
Adaptec (whatever their latest owner is) or LSI (same) have ever done.

An 8GB cache will be helpful, but it boils down to how much data and IOPS
are going through it and how many SSDs are behind it.
At least with JBOD SSDs as the storage, the performance hit won't be as
dramatic compared to a HDD based RAID6 when the cache is overloaded. 

> but was wondering if anyone has seen a positive
> impact from also using PCIe journals (e.g. Intel P3700 or even the
> older 910 series) in front of such SSDs?
> 
NVMe journals (or WAL and DB space for Bluestore) are nice and can
certainly help, especially if Ceph is tuned accordingly.
Avoid non-DC NVMes; I doubt you can still get 910s, as they are officially
EOL.
You want to match capabilities and endurances, a DC P3700 800GB would be
an OK match for 3-4 SM863a 960GB for example. 

There are people here who have actually done this, hopefully some will
speak up. 

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Deepak Naidu
Hope collective feedback helps. So here's one.

>>- Not a lot of people seem to run the "odd" releases (e.g., infernalis, 
>>kraken).  
I think the more obvious reason is that companies/users wanting to use Ceph will stick
with LTS versions, as those model the typical 3yr support cycle.

>>* Drop the odd releases, and aim for a ~9 month cadence. This splits the 
>>difference between the current even/odd pattern we've been doing.
Yes, provided an easy upgrade process.


--
Deepak




-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage 
Weil
Sent: Wednesday, September 06, 2017 8:24 AM
To: ceph-de...@vger.kernel.org; ceph-maintain...@ceph.com; ceph-us...@ceph.com
Subject: [ceph-users] Ceph release cadence

Hi everyone,

Traditionally, we have done a major named "stable" release twice a year, and 
every other such release has been an "LTS" release, with fixes backported for 
1-2 years.

With kraken and luminous we missed our schedule by a lot: instead of releasing 
in October and April we released in January and August.

A few observations:

- Not a lot of people seem to run the "odd" releases (e.g., infernalis, 
kraken).  This limits the value of actually making them.  It also means that 
those who *do* run them are running riskier code (fewer users -> more bugs).

- The more recent requirement that upgrading clusters must make a stop at each
LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel -> luminous)
has been hugely helpful on the development side by reducing the amount of
cross-version compatibility code to maintain and reducing the number of upgrade
combinations to test.

- When we try to do a time-based "train" release cadence, there always seems to 
be some "must-have" thing that delays the release a bit.  This doesn't happen 
as much with the odd releases, but it definitely happens with the LTS releases. 
 When the next LTS is a year away, it is hard to suck it up and wait that long.

A couple of options:

* Keep even/odd pattern, and continue being flexible with release dates

  + flexible
  - unpredictable
  - odd releases of dubious value

* Keep even/odd pattern, but force a 'train' model with a more regular cadence

  + predictable schedule
  - some features will miss the target and be delayed a year

* Drop the odd releases but change nothing else (i.e., 12-month release
cadence)

  + eliminate the confusing odd releases with dubious value
 
* Drop the odd releases, and aim for a ~9 month cadence. This splits the 
difference between the current even/odd pattern we've been doing.

  + eliminate the confusing odd releases with dubious value
  + waiting for the next release isn't quite as bad
  - required upgrades every 9 months instead of every 12 months

* Drop the odd releases, but relax the "must upgrade through every LTS" to 
allow upgrades across 2 versions (e.g., luminous -> mimic or luminous -> 
nautilus).  Shorten release cycle (~6-9 months).

  + more flexibility for users
  + downstreams have greater choice in adopting an upstream release
  - more LTS branches to maintain
  - more upgrade paths to consider

Other options we should consider?  Other thoughts?

Thanks!
sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Client features by IP?

2017-09-06 Thread Bryan Stillwell
I was reading this post by Josh Durgin today and was pretty happy to see we can 
get a summary of features that clients are using with the 'ceph features' 
command:

http://ceph.com/community/new-luminous-upgrade-complete/

However, I haven't found an option to display the IP address of those clients 
with the older feature sets.  Is there a flag I can pass to 'ceph features' to 
list the IPs associated with each feature set?
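
If not, one fallback might be the monitor's session list, which includes
client addresses alongside their feature bits (hedged; the exact output
format may vary by release). Run on a mon host:

    ceph daemon mon.$(hostname -s) sessions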

Thanks,
Bryan 


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW ADMIN API

2017-09-06 Thread Robin H. Johnson
On Wed, Sep 06, 2017 at 02:08:14PM +, Engelmann Florian wrote:
> we are running a luminous cluster and three radosgw to serve an s3-compatible 
> objectstore. As we are (currently) not using Openstack we have to use the 
> RadosGW Admin API to get our billing data. I tried to access the API with 
> python like:
> 
> [...]
> import rgwadmin
> [...]
> Users = radosgw.get_users()
> [...]
> 
> But I get a 403 "AccessDenied" using python 2.7.13.
> 
> What's the easiest method to access the Admin API from a remote host?
You can have a look at why it's generating the 403, if you increase the
debug level of rgw & civetweb.
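
For example, something like this in ceph.conf on the gateway host (a sketch;
the client section name is hypothetical and depends on how your gateways are
named):

    [client.rgw.gateway-1]
    debug rgw = 20
    debug civetweb = 20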

The user associated with the access key & secret key tuple you're using
DOES need to have user capabilities for reading users.

$ sudo radosgw-admin metadata get user:MYADMINUSER-REDACTED
{
"key": "user:MYADMINUSER-REDACTED",
...
"data": {
"user_id": "MYADMINUSER-REDACTED",
"display_name": "MYADMINUSER-REDACTED",
...,
"caps": [
{
"type": "buckets",
"perm": "read"
},
{
"type": "usage",
"perm": "read"
},
{
"type": "users",
"perm": "*"
}
],
"system": "true",
...,
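
If your user lacks those caps, something like this should add them (hedged;
adjust the uid and the exact caps to taste):

$ sudo radosgw-admin caps add --uid=MYADMINUSER-REDACTED \
    --caps="users=*;buckets=read;usage=read"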

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Asst. Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


signature.asc
Description: Digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Eric Eastman
I have been working with Ceph for the last several years and I help
support multiple Ceph clusters. I would like to have the team drop the
Even/Odd release schedule, and go to an all production release
schedule.  I would like releases on no more than a 9 month schedule,
with smaller incremental changes and predictable dates.  It would be
nice to be able to upgrade from at least the last 2 releases.

I would also like to see the RC schedule be more like the Linux kernel
or Samba releases, where we have an idea on how many RCs to expect and
how often they would come out, so we can schedule our testing, and
provide more helpful feedback during the RC period.

Eric

On Wed, Sep 6, 2017 at 9:23 AM, Sage Weil  wrote:
> Hi everyone,
>
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
>
> With kraken and luminous we missed our schedule by a lot: instead of
> releasing in October and April we released in January and August.
>
> A few observations:
>
> - Not a lot of people seem to run the "odd" releases (e.g., infernalis,
> kraken).  This limits the value of actually making them.  It also means
> that those who *do* run them are running riskier code (fewer users -> more
> bugs).
>
> - The more recent requirement that upgrading clusters must make a stop at
> each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel
> -> luminous) has been hugely helpful on the development side by reducing
> the amount of cross-version compatibility code to maintain and reducing
> the number of upgrade combinations to test.
>
> - When we try to do a time-based "train" release cadence, there always
> seems to be some "must-have" thing that delays the release a bit.  This
> doesn't happen as much with the odd releases, but it definitely happens
> with the LTS releases.  When the next LTS is a year away, it is hard to
> suck it up and wait that long.
>
> A couple of options:
>
> * Keep even/odd pattern, and continue being flexible with release dates
>
>   + flexible
>   - unpredictable
>   - odd releases of dubious value
>
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
>   + predictable schedule
>   - some features will miss the target and be delayed a year
>
> * Drop the odd releases but change nothing else (i.e., 12-month release
> cadence)
>
>   + eliminate the confusing odd releases with dubious value
>
> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
> difference between the current even/odd pattern we've been doing.
>
>   + eliminate the confusing odd releases with dubious value
>   + waiting for the next release isn't quite as bad
>   - required upgrades every 9 months instead of every 12 months
>
> * Drop the odd releases, but relax the "must upgrade through every LTS" to
> allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
> nautilus).  Shorten release cycle (~6-9 months).
>
>   + more flexibility for users
>   + downstreams have greater choice in adopting an upstream release
>   - more LTS branches to maintain
>   - more upgrade paths to consider
>
> Other options we should consider?  Other thoughts?
>
> Thanks!
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Modification Time of RBD Images

2017-09-06 Thread Jason Dillaman
No support for that yet -- it's being tracked by a backlog ticket [1].

[1] https://trello.com/c/npmsOgM5
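
In the meantime, Greg's suggestion below (scan the image's objects and take
the newest RADOS mtime) can be sketched roughly like this -- an untested
sketch that assumes pool "rbd", image "myimage", and the usual one-line
"rados stat" output format:

    prefix=$(rbd info rbd/myimage | awk '/block_name_prefix/ {print $2}')
    rados -p rbd ls | grep "^${prefix}" | while read -r obj; do
        rados -p rbd stat "$obj"   # "<pool>/<obj> mtime <date> <time>, size <n>"
    done | awk '{print $3, $4}' | sort | tail -1   # newest mtime sorts last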

On Wed, Sep 6, 2017 at 12:27 PM, Christoph Adomeit
 wrote:
> Now that we are 2 years and some ceph releases farther and have bluestor:
>
> Are there meanwhile any better ways to find out the mtime of an rbd image ?
>
> Thanks
>   Christoph
>
> On Thu, Nov 26, 2015 at 06:50:46PM +0100, Jan Schermer wrote:
>> Find in which block the filesystem on your RBD image stores journal, find 
>> the object hosting this block in rados and use its mtime :-)
>>
>> Jan
>>
>>
>> > On 26 Nov 2015, at 18:49, Gregory Farnum  wrote:
>> >
>> > I don't think anything tracks this explicitly for RBD, but each RADOS 
>> > object does maintain an mtime you can check via the rados tool. You could 
>> > write a script to iterate through all the objects in the image and find 
>> > the most recent mtime (although a custom librados binary will be faster if 
>> > you want to do this frequently).
>> > -Greg
>> >
>> > On Thursday, November 26, 2015, Christoph Adomeit 
>> > > 
>> > wrote:
>> > Hi there,
>> >
>> > I am using Ceph-Hammer and I am wondering about the following:
>> >
>> > What is the recommended way to find out when an rbd-Image was last 
>> > modified ?
>> >
>> > Thanks
>> >   Christoph
>> >
>> > --
>> > Christoph Adomeit
>> > GATWORKS GmbH
>> > Reststrauch 191
>> > 41199 Moenchengladbach
>> > Sitz: Moenchengladbach
>> > Amtsgericht Moenchengladbach, HRB 6303
>> > Geschaeftsfuehrer:
>> > Christoph Adomeit, Hans Wilhelm Terstappen
>> >
>> > christoph.adom...@gatworks.de  Internetloesungen vom 
>> > Feinsten
>> > Fon. +49 2166 9149-32  Fax. +49 2166 9149-10
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com 
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
>> > 
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Modification Time of RBD Images

2017-09-06 Thread Christoph Adomeit
Now that we are 2 years and some ceph releases farther and have bluestor:

Are there meanwhile any better ways to find out the mtime of an rbd image ?

Thanks
  Christoph

On Thu, Nov 26, 2015 at 06:50:46PM +0100, Jan Schermer wrote:
> Find in which block the filesystem on your RBD image stores journal, find the 
> object hosting this block in rados and use its mtime :-)
> 
> Jan
> 
> 
> > On 26 Nov 2015, at 18:49, Gregory Farnum  wrote:
> > 
> > I don't think anything tracks this explicitly for RBD, but each RADOS 
> > object does maintain an mtime you can check via the rados tool. You could 
> > write a script to iterate through all the objects in the image and find the 
> > most recent mtime (although a custom librados binary will be faster if you 
> > want to do this frequently).
> > -Greg
> > 
> > On Thursday, November 26, 2015, Christoph Adomeit 
> > > 
> > wrote:
> > Hi there,
> > 
> > I am using Ceph-Hammer and I am wondering about the following:
> > 
> > What is the recommended way to find out when an rbd-Image was last modified 
> > ?
> > 
> > Thanks
> >   Christoph
> > 
> > --
> > Christoph Adomeit
> > GATWORKS GmbH
> > Reststrauch 191
> > 41199 Moenchengladbach
> > Sitz: Moenchengladbach
> > Amtsgericht Moenchengladbach, HRB 6303
> > Geschaeftsfuehrer:
> > Christoph Adomeit, Hans Wilhelm Terstappen
> > 
> > christoph.adom...@gatworks.de  Internetloesungen vom 
> > Feinsten
> > Fon. +49 2166 9149-32  Fax. +49 2166 9149-10
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> > 
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph OSD journal (with dmcrypt) replacement

2017-09-06 Thread M Ranga Swami Reddy
Thank you. I am able to replace the dmcrypt journal successfully.
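
For the archive, the process David describes below might look roughly like
this for an osd whose data survived -- a sketch only, with the osd id and
the dmcrypt/partition step left as placeholders:

    ceph osd set noout
    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal   # only if the old journal is still readable
    # create the new journal partition and dmcrypt volume here, reusing the
    # old journal uuid so /var/lib/ceph/osd/ceph-12/journal still resolves
    ceph-osd -i 12 --mkjournal
    systemctl start ceph-osd@12
    ceph osd unset noout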

On Sep 5, 2017 18:14, "David Turner"  wrote:

> Did the journal drive fail during operation? Or was it taken out during
> pre-failure. If it fully failed, then most likely you can't guarantee the
> consistency of the underlying osds. In this case, you just remove the affected
> osds and add them back in as new osds.
>
> In the case of having good data on the osds, you follow the standard
> process of closing the journal, create the new partition, set up all of the
> partition metadata so that the ceph udev rules will know what the journal
> is, and just create a new dmcrypt volume on it. I would recommend using the
> same uuid as the old journal so that you don't need to update the symlinks
> and such on the osd. After everything is done, run the journal create
> command for the osd and start the osd.
>
> On Tue, Sep 5, 2017, 2:47 AM M Ranga Swami Reddy 
> wrote:
>
>> Hello,
>> How to replace an OSD's journal created with dmcrypt, from one drive
>> to another drive, in case of current journal drive failed.
>>
>> Thanks
>> Swami
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Joao Eduardo Luis

On 09/06/2017 04:23 PM, Sage Weil wrote:

* Keep even/odd pattern, but force a 'train' model with a more regular
cadence

   + predictable schedule
   - some features will miss the target and be delayed a year


Personally, I think a predictable schedule is the way to go. Two major 
reasons come to mind:


1. Developers are actually aware of what the cut-off date is, and will 
plan accordingly; and,


2. Downstreams will have a better notion of when the next release is to 
be expected, and plan accordingly.


However, a one year wait for a release may be a can of worms waiting to 
be opened. Even though it would ideally provide us a lot more time to 
merge stuff and test it, there's also the downside that some stuff may 
be pushed further and further down the line, and eventually merged just 
before the window closes.


For that, I'd argue having an intermediate (staging?) release could be 
helpful, but I fear it would not be anything more than any other 
dev-release. Therefore, if we stick to a one-year cadence, let's have 
frequent dev-releases.


I would also like to argue for a hard cut-off date considerably before 
major holiday seasons. Because no one really wants to be dealing with 
bugs or releasing software while a considerable portion of developers 
are away.


Additionally,


* Drop the odd releases, but relax the "must upgrade through every LTS" to
allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
nautilus).  Shorten release cycle (~6-9 months).


I can be on board with this too. As long as we have a very clear cut-off 
date regardless.


  -Joao
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Ken Dreyer
On Wed, Sep 6, 2017 at 9:23 AM, Sage Weil  wrote:
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
>   + predictable schedule
>   - some features will miss the target and be delayed a year

This one (#2, regular release cadence) is the one I will value the most.

- Ken
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Jack
Hi Sage,

The one option I do not want for Ceph is the last one: support upgrade
across multiple LTS versions

I'd rather wait 3 months for a better release (both in terms of
features and quality) than see the Ceph team exhausted, having to
maintain a lot more releases and code for years


Others thoughts:
- I do not think time-based releases are a must-have. As a user, I
prefer quality over short-term releases, especially for software as
critical as Ceph (storage and stuff, I do not want creepy code here):
this eliminates #2
- for the same reason, I do not care if there is a release every 12
months, or every 9 months: a couple of months without the new release
is not business-critical, whereas buggy software / not-well-tested
features are
- odd releases still allow bug squashing, and I guess they give real-world
feedback too. Some people do have a "not so important" cluster that
may use the odd releases

So:
#2 and especially #5: nope
#1, #3 or #4: I prefer #1, the others are fine too (if odd releases are
somehow a burden, for instance)

Regards,




On 06/09/2017 18:35, Alex Gorbachev wrote:
> On Wed, Sep 6, 2017 at 11:23 AM Sage Weil  wrote:
> 
>> Hi everyone,
>>
>> Traditionally, we have done a major named "stable" release twice a year,
>> and every other such release has been an "LTS" release, with fixes
>> backported for 1-2 years.
>>
>> With kraken and luminous we missed our schedule by a lot: instead of
>> releasing in October and April we released in January and August.
>>
>> A few observations:
>>
>> - Not a lot of people seem to run the "odd" releases (e.g., infernalis,
>> kraken).  This limits the value of actually making them.  It also means
>> that those who *do* run them are running riskier code (fewer users -> more
>> bugs).
>>
>> - The more recent requirement that upgrading clusters must make a stop at
>> each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel
>> -> luminous) has been hugely helpful on the development side by reducing
>> the amount of cross-version compatibility code to maintain and reducing
>> the number of upgrade combinations to test.
>>
>> - When we try to do a time-based "train" release cadence, there always
>> seems to be some "must-have" thing that delays the release a bit.  This
>> doesn't happen as much with the odd releases, but it definitely happens
>> with the LTS releases.  When the next LTS is a year away, it is hard to
>> suck it up and wait that long.
>>
>> A couple of options:
>>
>> * Keep even/odd pattern, and continue being flexible with release dates
>>
>>   + flexible
>>   - unpredictable
>>   - odd releases of dubious value
>>
>> * Keep even/odd pattern, but force a 'train' model with a more regular
>> cadence
>>
>>   + predictable schedule
>>   - some features will miss the target and be delayed a year
>>
>> * Drop the odd releases but change nothing else (i.e., 12-month release
>> cadence)
>>
>>   + eliminate the confusing odd releases with dubious value
>>
>> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
>> difference between the current even/odd pattern we've been doing.
>>
>>   + eliminate the confusing odd releases with dubious value
>>   + waiting for the next release isn't quite as bad
>>   - required upgrades every 9 months instead of every 12 months
>>
>> * Drop the odd releases, but relax the "must upgrade through every LTS" to
>> allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
>> nautilus).  Shorten release cycle (~6-9 months).
>>
>>   + more flexibility for users
>>   + downstreams have greater choice in adopting an upstream release
>>   - more LTS branches to maintain
>>   - more upgrade paths to consider
>>
>> Other options we should consider?  Other thoughts?
> 
> 
> As a mission critical system user, I am in favor of dropping odd releases
> and going to a 9 month cycle.  We never run the odd releases as they are too risky.
> A good deal of functionality comes in updates, and usually the Ceph team
> brings them in gently, with the more experimental features off by default.
> 
> I suspect the 9 month even cycle will also make it easier to perform more
> incremental upgrades, i.e. small jumps rather than big leaps.
> 
> 
> 
>>
>> Thanks!
>> sage
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Alex Gorbachev
On Wed, Sep 6, 2017 at 11:23 AM Sage Weil  wrote:

> Hi everyone,
>
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
>
> With kraken and luminous we missed our schedule by a lot: instead of
> releasing in October and April we released in January and August.
>
> A few observations:
>
> - Not a lot of people seem to run the "odd" releases (e.g., infernalis,
> kraken).  This limits the value of actually making them.  It also means
> that those who *do* run them are running riskier code (fewer users -> more
> bugs).
>
> - The more recent requirement that upgrading clusters must make a stop at
> each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel
> -> luminous) has been hugely helpful on the development side by reducing
> the amount of cross-version compatibility code to maintain and reducing
> the number of upgrade combinations to test.
>
> - When we try to do a time-based "train" release cadence, there always
> seems to be some "must-have" thing that delays the release a bit.  This
> doesn't happen as much with the odd releases, but it definitely happens
> with the LTS releases.  When the next LTS is a year away, it is hard to
> suck it up and wait that long.
>
> A couple of options:
>
> * Keep even/odd pattern, and continue being flexible with release dates
>
>   + flexible
>   - unpredictable
>   - odd releases of dubious value
>
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
>   + predictable schedule
>   - some features will miss the target and be delayed a year
>
> * Drop the odd releases but change nothing else (i.e., 12-month release
> cadence)
>
>   + eliminate the confusing odd releases with dubious value
>
> * Drop the odd releases, and aim for a ~9 month cadence. This splits the
> difference between the current even/odd pattern we've been doing.
>
>   + eliminate the confusing odd releases with dubious value
>   + waiting for the next release isn't quite as bad
>   - required upgrades every 9 months instead of every 12 months
>
> * Drop the odd releases, but relax the "must upgrade through every LTS" to
> allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
> nautilus).  Shorten release cycle (~6-9 months).
>
>   + more flexibility for users
>   + downstreams have greater choice in adopting an upstream release
>   - more LTS branches to maintain
>   - more upgrade paths to consider
>
> Other options we should consider?  Other thoughts?


As a mission critical system user, I am in favor of dropping odd releases
and going to a 9 month cycle.  We never run the odd releases as they are too risky.
A good deal of functionality comes in updates, and usually the Ceph team
brings them in gently, with the more experimental features off by default.

I suspect the 9 month even cycle will also make it easier to perform more
incremental upgrades, i.e. small jumps rather than big leaps.



>
> Thanks!
> sage
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
-- 
--
Alex Gorbachev
Storcium
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Kingsley Tart
On Wed, 2017-09-06 at 15:23 +, Sage Weil wrote:
> Hi everyone,
> 
> Traditionally, we have done a major named "stable" release twice a year, 
> and every other such release has been an "LTS" release, with fixes 
> backported for 1-2 years.
> 
> With kraken and luminous we missed our schedule by a lot: instead of 
> releasing in October and April we released in January and August.
> 
> A few observations:
[snip]

Firstly, I'd like to qualify my comments by saying that I haven't yet
tried Ceph[1], though I have been loosely following its progress. This
is partly because I've been busy doing other things.

[1] OK, this is a slight fib - I had a very brief play with it a few
years back but didn't really get anywhere with it and then got diverted
onto other things.

Unless I absolutely have to deploy now, I find myself doing this:

10 not long for new release, wait a bit
20 new release is here, but there's talk of a new one
30 goto 10

Having frequent minor updates and fixes is reassuring, but having
frequent major update changes with the "L" in "LTS" not being
particularly long tends to put me off a bit, largely because I find the
thought of upgrading something so mission critical quite daunting. I
can't speak from any Ceph experience on this one, obviously, but if
there's an easy rollback (even if it's never needed) without having to
rebuild the entire cluster then that would make me more willing to do
it.

-- 
Cheers,
Kingsley.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD: How many snapshots is too many?

2017-09-06 Thread Florian Haas
Hi Greg,

thanks for your insight! I do have a few follow-up questions.

On 09/05/2017 11:39 PM, Gregory Farnum wrote:
>> It seems to me that there still isn't a good recommendation along the
>> lines of "try not to have more than X snapshots per RBD image" or "try
>> not to have more than Y snapshots in the cluster overall". Or is the
>> "correct" recommendation actually "create as many snapshots as you
>> might possibly want, none of that is allowed to create any instability
>> nor performance degradation and if it does, that's a bug"?
> 
> I think we're closer to "as many snapshots as you want", but there are
> some known shortages there.
> 
> First of all, if you haven't seen my talk from the last OpenStack
> summit on snapshots and you want a bunch of details, go watch that. :p
> https://www.openstack.org/videos/boston-2017/ceph-snapshots-for-fun-and-profit-1

OK so I just rewatched that to see if I had missed anything regarding
recommendations for how many snapshots are sane. For anyone else
following this thread, there are two items I could make out, and I'm
taking the liberty to include the direct links here:

- From the talk itself: https://youtu.be/rY0OWtllkn8?t=26m29s

This says don't do a snapshot every minute on each RBD, but one per day
is probably OK. That is rather *very* vague, unfortunately, since as you
point out in the talk the overhead associated with snapshots is strongly
related to how many RADOS-level snapshots there are in the cluster
overall, and clearly it makes a big difference whether you're taking one
daily snapshot of 10 RBD images, or of 100,000.

So, can you refine that estimate a bit? As in, can you give at least an
order-of-magnitude estimate for "this many snapshots overall is probably
OK, but multiply by 10 and you're in trouble"?

- From the Q: https://youtu.be/rY0OWtllkn8?t=36m58s

Here, you talk about how having many holes in the interval set governing
the snap trim queue can be a problem. That one is rather tricky too,
because as far as I can tell there is really no way for users to
influence this (other than, of course, deleting *all* snapshots or never
creating or deleting any at all).

> There are a few dimensions there can be failures with snapshots:
> 1) right now the way we mark snapshots as deleted is suboptimal — when
> deleted they go into an interval_set in the OSDMap. So if you have a
> bunch of holes in your deleted snapshots, it is possible to inflate
> the osdmap to a size which causes trouble. But I'm not sure if we've
> actually seen this be an issue yet — it requires both a large cluster,
> and a large map, and probably some other failure causing osdmaps to be
> generated very rapidly.

Can you give an estimate as to what a "large" map is in this context? In
other words, when is a map sufficiently inflated with that interval set
to be a problem?

> 2) There may be issues with how rbd records what snapshots it is
> associated with? No idea about this; haven't heard of any.
> 
> 3) Trimming snapshots requires IO. This is where most (all?) of the
> issues I've seen have come from; either in it being unscheduled IO
> that the rest of the system doesn't account for or throttle (as in the
> links you highlighted) or in admins overwhelming the IO capacity of
> their clusters.

Again, I think (correct me if I'm wrong here) that trimming does factor
into your "one snapshot per RBD image per day" recommendation, but would
you be able to express that in terms of overall RADOS-level snapshots?

Thanks again!

Cheers,
Florian



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Bryan Banister
Very new to Ceph but a long-time sys admin who is jaded/opinionated.

My 2 cents:
1) This sounds like a perfect thing to put in a poll and ask/beg people to 
vote.  Hopefully that will get you more of a response from a larger number of 
users.
2) Given that the value of the odd releases is "dubious", maybe those that are 
using these releases can give reason why they feel they need/want them?

I, personally, think having the even releases on a shorter cadence with the 
most users on each would be best, but I'm still new to this game,
-Bryan

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage 
Weil
Sent: Wednesday, September 06, 2017 10:24 AM
To: ceph-de...@vger.kernel.org; ceph-maintain...@ceph.com; ceph-us...@ceph.com
Subject: [ceph-users] Ceph release cadence

Note: External Email
-

Hi everyone,

Traditionally, we have done a major named "stable" release twice a year,
and every other such release has been an "LTS" release, with fixes
backported for 1-2 years.

With kraken and luminous we missed our schedule by a lot: instead of
releasing in October and April we released in January and August.

A few observations:

- Not a lot of people seem to run the "odd" releases (e.g., infernalis,
kraken).  This limits the value of actually making them.  It also means
that those who *do* run them are running riskier code (fewer users -> more
bugs).

- The more recent requirement that upgrading clusters must make a stop at
each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel
-> luminous) has been hugely helpful on the development side by reducing
the amount of cross-version compatibility code to maintain and reducing
the number of upgrade combinations to test.

- When we try to do a time-based "train" release cadence, there always
seems to be some "must-have" thing that delays the release a bit.  This
doesn't happen as much with the odd releases, but it definitely happens
with the LTS releases.  When the next LTS is a year away, it is hard to
suck it up and wait that long.

A couple of options:

* Keep even/odd pattern, and continue being flexible with release dates

  + flexible
  - unpredictable
  - odd releases of dubious value

* Keep even/odd pattern, but force a 'train' model with a more regular
cadence

  + predictable schedule
  - some features will miss the target and be delayed a year

* Drop the odd releases but change nothing else (i.e., 12-month release
cadence)

  + eliminate the confusing odd releases with dubious value

* Drop the odd releases, and aim for a ~9 month cadence. This splits the
difference between the current even/odd pattern we've been doing.

  + eliminate the confusing odd releases with dubious value
  + waiting for the next release isn't quite as bad
  - required upgrades every 9 months instead of every 12 months

* Drop the odd releases, but relax the "must upgrade through every LTS" to
allow upgrades across 2 versions (e.g., luminous -> mimic or luminous ->
nautilus).  Shorten release cycle (~6-9 months).

  + more flexibility for users
  + downstreams have greater choice in adopting an upstream release
  - more LTS branches to maintain
  - more upgrade paths to consider

Other options we should consider?  Other thoughts?

Thanks!
sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Note: This email is for the confidential use of the named addressee(s) only and 
may contain proprietary, confidential or privileged information. If you are not 
the intended recipient, you are hereby notified that any review, dissemination 
or copying of this email is strictly prohibited, and to please notify the 
sender immediately and destroy this email and any attachments. Email 
transmission cannot be guaranteed to be secure or error-free. The Company, 
therefore, does not make any guarantees as to the completeness or accuracy of 
this email or any attachments. This email is for informational purposes only 
and does not constitute a recommendation, offer, request or solicitation of any 
kind to buy, sell, subscribe, redeem or perform any type of transaction of a 
financial product.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph release cadence

2017-09-06 Thread Sage Weil
Hi everyone,

Traditionally, we have done a major named "stable" release twice a year, 
and every other such release has been an "LTS" release, with fixes 
backported for 1-2 years.

With kraken and luminous we missed our schedule by a lot: instead of 
releasing in October and April we released in January and August.

A few observations:

- Not a lot of people seem to run the "odd" releases (e.g., infernalis, 
kraken).  This limits the value of actually making them.  It also means 
that those who *do* run them are running riskier code (fewer users -> more 
bugs).

- The more recent requirement that upgrading clusters must make a stop at 
each LTS (e.g., hammer -> luminous not supported, must go hammer -> jewel 
-> luminous) has been hugely helpful on the development side by reducing
the amount of cross-version compatibility code to maintain and reducing 
the number of upgrade combinations to test.

- When we try to do a time-based "train" release cadence, there always 
seems to be some "must-have" thing that delays the release a bit.  This 
doesn't happen as much with the odd releases, but it definitely happens 
with the LTS releases.  When the next LTS is a year away, it is hard to 
suck it up and wait that long.

A couple of options:

* Keep even/odd pattern, and continue being flexible with release dates

  + flexible
  - unpredictable
  - odd releases of dubious value

* Keep even/odd pattern, but force a 'train' model with a more regular 
cadence

  + predictable schedule
  - some features will miss the target and be delayed a year

* Drop the odd releases but change nothing else (i.e., 12-month release 
cadence)

  + eliminate the confusing odd releases with dubious value
 
* Drop the odd releases, and aim for a ~9 month cadence. This splits the 
difference between the current even/odd pattern we've been doing.

  + eliminate the confusing odd releases with dubious value
  + waiting for the next release isn't quite as bad
  - required upgrades every 9 months instead of every 12 months

* Drop the odd releases, but relax the "must upgrade through every LTS" to 
allow upgrades across 2 versions (e.g., luminous -> mimic or luminous -> 
nautilus).  Shorten release cycle (~6-9 months).

  + more flexibility for users
  + downstreams have greater choice in adopting an upstream release
  - more LTS branches to maintain
  - more upgrade paths to consider

Other options we should consider?  Other thoughts?

Thanks!
sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Changing RGW pool default

2017-09-06 Thread Bruno Carvalho
Hello friends, I have a question.

Is it possible to rename the default.rgw pools used by radosgw
when they already contain stored data?

Tested in version: ceph version 10.2.7 and 10.2.9

I already tried changing the metadata of the region and zone and renamed the
pools; I also changed the bucket metadata, and even so, when I execute the
command below it still presents the information of the old default.rgw pool:

# radosgw-admin metadata get bucket.instance: ddd:
aacbdf4d-2458-41ad-84f4-afc46a29a77b.7965272.1

I also tried to change the bucket.instance information, but without success:
after the "put" it does not change the bucket information, and in my
debugging I see that the objects created in the default.rgw pool do
not change.

Has anyone managed this operation yet?
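
For reference, my understanding is that the usual approach is to repoint the
zone at the new pool names rather than renaming pools underneath rgw -- a
hedged sketch (the zone name and file are placeholders):

    radosgw-admin zone get --rgw-zone=default > zone.json
    # edit zone.json, replacing each default.rgw.* pool with the new name
    radosgw-admin zone set --rgw-zone=default --infile zone.json
    radosgw-admin period update --commit   # only if a realm/period is configured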



*Att,*


*Bruno Carvalho*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph mgr unknown version

2017-09-06 Thread Piotr Dzionek

Oh, I see that this is probably a bug: http://tracker.ceph.com/issues/21260

I also noticed the following error in the mgr logs:

2017-09-06 16:41:08.537577 7f34c0a7a700  1 mgr send_beacon active
2017-09-06 16:41:08.539161 7f34c0a7a700  1 mgr[restful] Unknown request ''
2017-09-06 16:41:08.543830 7f34a77de700  0 mgr[restful] Traceback (most recent call last):
  File "/usr/lib64/ceph/mgr/restful/module.py", line 248, in serve
    self._serve()
  File "/usr/lib64/ceph/mgr/restful/module.py", line 299, in _serve
    raise RuntimeError('no certificate configured')
RuntimeError: no certificate configured
Probably not related, but what kind of certificate might it refer to?
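
Judging by the traceback, it looks like the HTTPS certificate for the
restful module itself; if so, I believe one can be generated with the
following (not verified on 12.2.0):

    ceph restful create-self-signed-cert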


On 06.09.2017 at 16:31, Piotr Dzionek wrote:


Hi,
I ran a small test two-node ceph cluster - 12.2.0 version. It has 28 
osds, 1 mon and 2 mgr. It runs fine, however I noticed this strange 
thing in the output of the ceph versions command:


# ceph versions
{
    "mon": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 1
    },
    "mgr": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 1,
        "unknown": 1
    },
    "osd": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 28
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 30,
        "unknown": 1
    }
}

As you can see, one ceph manager is in an unknown state. Why is that? 
FYI, I checked the rpm versions and restarted all mgrs, and I still 
get the same result.


Kind regards,
Piotr Dzionek



--
Piotr Dzionek
System Administrator

SEQR Poland Sp. z o.o.
ul. Łąkowa 29, 90-554 Łódź, Poland
Mobile: +48 79687
Mail: piotr.dzio...@seqr.com
www.seqr.com | www.seamless.se

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Developers Monthly - September

2017-09-06 Thread Haomai Wang
Oh, I'm on a flight at that time

On Wed, Sep 6, 2017 at 6:28 PM, Joao Eduardo Luis  wrote:
> On 09/06/2017 06:06 AM, Leonardo Vaz wrote:
>>
>> Hey cephers,
>>
>> The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm
>> Eastern Time (EDT), in an APAC-friendly time slot.
>
>
> As much as I would love to attend and discuss some topics (especially the
> RADOS replication stuff), this is an ungodly hour to expect my brain to work
> properly ;)
>
> Looking forward to the recording though!
>
>   -Joao
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph mgr unknown version

2017-09-06 Thread Piotr Dzionek

Hi,
I ran a small test two-node ceph cluster - 12.2.0 version. It has 28 
osds, 1 mon and 2 mgr. It runs fine, however I noticed this strange 
thing in the output of the ceph versions command:


# ceph versions
{
    "mon": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 1
    },
    "mgr": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 1,
        "unknown": 1
    },
    "osd": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 28
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)": 30,
        "unknown": 1
    }
}

As you can see, one ceph manager is in an unknown state. Why is that? FYI, 
I checked the rpm versions and restarted all mgrs, and I still get the 
same result.


Kind regards,
Piotr Dzionek

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Thomas Coelho
Hi,

I have the same problem. A bug [1] has been open for months, but
unfortunately it is not fixed yet. I hope that if more people are having
this problem the developers can reproduce and fix it.

I was using Kernel-RBD with a Cache Tier.

so long
Thomas Coelho

[1] http://tracker.ceph.com/issues/20222


On 06.09.2017 at 15:41, Henrik Korkuc wrote:
> On 17-09-06 16:24, Jean-Francois Nadeau wrote:
>> Hi, 
>>
>> On a 4 node / 48 OSDs Luminous cluster I'm giving a try at RBD on EC
>> pools + Bluestore.  
>>
>> Setup went fine but after a few bench runs several OSDs are failing and
>> many won't even restart.
>>
>> ceph osd erasure-code-profile set myprofile \
>>k=2\
>>m=1 \
>>crush-failure-domain=host
>> ceph osd pool create mypool 1024 1024 erasure myprofile   
>> ceph osd pool set mypool allow_ec_overwrites true
>> rbd pool init mypool
>> ceph -s
>> ceph health detail
>> ceph osd pool create metapool 1024 1024 replicated
>> rbd create --size 1024G --data-pool mypool --image metapool/test1
>> rbd bench -p metapool test1 --io-type write --io-size 8192
>> --io-pattern rand --io-total 10G
>> ...
>>
>>
>> One of many OSD failing logs
>>
>> Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs systemd[1]:
>> Started Ceph object storage daemon osd.12.
>> Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> starting osd.12 at - osd_data /var/lib/ceph/osd/ceph-12
>> /var/lib/ceph/osd/ceph-12/journal
>> Sep 05 17:02:56 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> 2017-09-05 17:02:56.627301 7fe1a2e42d00 -1 osd.12 2219 log_to_monitors
>> {default=true}
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> 2017-09-05 17:02:58.686723 7fe1871ac700 -1
>> bluestore(/var/lib/ceph/osd/ceph-12) _txc_add_transac
>> tion error (2) No such file or directory not handled on operation 15
>> (op 0, counting from 0)
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> 2017-09-05 17:02:58.686742 7fe1871ac700 -1
>> bluestore(/var/lib/ceph/osd/ceph-12) unexpected error
>>  code
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
>> centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
>> In function 'void BlueStore::_txc_add_transaction(Blu
>> eStore::TransContext*, ObjectStore::Transaction*)' thread 7fe1871ac700
>> time 2017-09-05 17:02:58.686821
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
>> centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
>> 9282: FAILED assert(0 == "unexpected error")
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c)
>> luminous (rc)
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 1:
>> (ceph::__ceph_assert_fail(char const*, char const*, int, char
>> const*)+0x110) [0x7fe1a38bf510]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 2:
>> (BlueStore::_txc_add_transaction(BlueStore::TransContext*,
>> ObjectStore::Transaction*)+0x1487)
>>  [0x7fe1a3796057]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 3:
>> (BlueStore::queue_transactions(ObjectStore::Sequencer*,
>> std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&,
>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0)
>> [0x7fe1a37970a0]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 4:
>> (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction,
>> std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65)
>> [0x7fe1a3508745]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 5:
>> (ECBackend::handle_sub_write(pg_shard_t,
>> boost::intrusive_ptr<OpRequest>, ECSubWrite&, ZTracer::Trace const&,
>> Context*)+0x631) [0x7fe1a3628711]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 6:
>> (ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x327)
>> [0x7fe1a36392b7]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 7:
>> (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50)
>> [0x7fe1a353da10]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 8:
>> (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
>> ThreadPool::TPHandle&)+0x58e) [0x7fe1a34a9a7e]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 9:
>> (OSD::dequeue_op(boost::intrusive_ptr<PG>,
>> boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3f9)
>> [0x7fe1a333c729]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
>> 10: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
>> const&)+0x57) [0x7fe1a35ac197]
>> Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs 

[ceph-users] RadosGW ADMIN API

2017-09-06 Thread Engelmann Florian
Hi,

we are running a luminous cluster and three radosgw instances to serve an
S3-compatible object store. As we are (currently) not using OpenStack, we have
to use the RadosGW Admin API to get our billing data. I tried to access the
API with Python like:

[...]
from rgwadmin import RGWAdmin
[...]
# connection details elided; the keys belong to an RGW user
radosgw = RGWAdmin(access_key=access_key, secret_key=secret_key, server=server)
users = radosgw.get_users()
[...]

But I get a 403 "AccessDenied" using python 2.7.13.

What's the easiest method to access the Admin API from a remote host?
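
In case it helps: a 403 from the admin API often just means the user whose
keys sign the requests has no admin caps. A sketch of granting them -- the
uid "admin" and the exact cap list here are illustrative, not prescriptive:

radosgw-admin user create --uid=admin --display-name="admin api user"
radosgw-admin caps add --uid=admin \
    --caps="users=read,write;buckets=read,write;usage=read;metadata=read"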

All the best,
Florian



EveryWare AG
Florian Engelmann
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Henrik Korkuc

On 17-09-06 16:24, Jean-Francois Nadeau wrote:

Hi,

On a 4 node / 48 OSDs Luminous cluster Im giving a try at RBD on EC 
pools + Bluestore.


Setup went fine but after a few bench runs several OSD are failing and 
many wont even restart.


ceph osd erasure-code-profile set myprofile \
   k=2\
   m=1 \
   crush-failure-domain=host
ceph osd pool create mypool 1024 1024 erasure myprofile
ceph osd pool set mypool allow_ec_overwrites true
rbd pool init mypool
ceph -s
ceph health detail
ceph osd pool create metapool 1024 1024 replicated
rbd create --size 1024G --data-pool mypool --image metapool/test1
rbd bench -p metapool test1 --io-type write --io-size 8192 
--io-pattern rand --io-total 10G

...


One of many OSD failing logs

Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs systemd[1]: 
Started Ceph object storage daemon osd.12.
Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
starting osd.12 at - osd_data /var/lib/ceph/osd/ceph-12 
/var/lib/ceph/osd/ceph-12/journal
Sep 05 17:02:56 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
2017-09-05 17:02:56.627301 7fe1a2e42d00 -1 osd.12 2219 log_to_monitors 
{default=true}
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
2017-09-05 17:02:58.686723 7fe1871ac700 -1 
bluestore(/var/lib/ceph/osd/ceph-12) _txc_add_transac
tion error (2) No such file or directory not handled on operation 15 
(op 0, counting from 0)
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
2017-09-05 17:02:58.686742 7fe1871ac700 -1 
bluestore(/var/lib/ceph/osd/ceph-12) unexpected error

 code
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc: 
In function 'void BlueStore::_txc_add_transaction(Blu
eStore::TransContext*, ObjectStore::Transaction*)' thread 7fe1871ac700 
time 2017-09-05 17:02:58.686821
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/
centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc: 
9282: FAILED assert(0 == "unexpected error")
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) 
luminous (rc)
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 1: 
(ceph::__ceph_assert_fail(char const*, char const*, int, char 
const*)+0x110) [0x7fe1a38bf510]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 2: 
(BlueStore::_txc_add_transaction(BlueStore::TransContext*, 
ObjectStore::Transaction*)+0x1487)

 [0x7fe1a3796057]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 3:
(BlueStore::queue_transactions(ObjectStore::Sequencer*,
std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&,
boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0)
[0x7fe1a37970a0]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 4:
(PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction,
std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65)
[0x7fe1a3508745]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 5:
(ECBackend::handle_sub_write(pg_shard_t,
boost::intrusive_ptr<OpRequest>, ECSubWrite&,
ZTracer::Trace const&, Context*)+0x631) [0x7fe1a3628711]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 6:
(ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x327)
[0x7fe1a36392b7]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 7:
(PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50)
[0x7fe1a353da10]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 8:
(PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
ThreadPool::TPHandle&)+0x58e) [0x7fe1a34a9a7e]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 9:
(OSD::dequeue_op(boost::intrusive_ptr<PG>,
boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3f9) [0x7fe1a333c729]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
10: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
const&)+0x57) [0x7fe1a35ac197]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
11: (OSD::ShardedOpWQ::_process(unsigned int, 
ceph::heartbeat_handle_d*)+0xfce) [0x7fe1a3367c8e]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
12: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x839) 
[0x7fe1a38c5029]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
13: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7fe1a38c6fc0]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
14: (()+0x7dc5) [0x7fe1a0484dc5]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 
15: 

[ceph-users] Multiple OSD crashing on 12.2.0. Bluestore / EC pool / rbd

2017-09-06 Thread Jean-Francois Nadeau
Hi,

On a 4 node / 48 OSD Luminous cluster I'm giving RBD on EC pools + Bluestore
a try.

Setup went fine, but after a few bench runs several OSDs are failing and many
won't even restart.

ceph osd erasure-code-profile set myprofile \
   k=2 \
   m=1 \
   crush-failure-domain=host
ceph osd pool create mypool 1024 1024 erasure myprofile
ceph osd pool set mypool allow_ec_overwrites true
rbd pool init mypool
ceph -s
ceph health detail
ceph osd pool create metapool 1024 1024 replicated
rbd create --size 1024G --data-pool mypool --image metapool/test1
rbd bench -p metapool test1 --io-type write --io-size 8192 --io-pattern
rand --io-total 10G
...


One of many OSD failing logs

Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs systemd[1]: Started
Ceph object storage daemon osd.12.
Sep 05 17:02:54 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
starting osd.12 at - osd_data /var/lib/ceph/osd/ceph-12
/var/lib/ceph/osd/ceph-12/journal
Sep 05 17:02:56 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
2017-09-05 17:02:56.627301 7fe1a2e42d00 -1 osd.12 2219 log_to_monitors
{default=true}
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
2017-09-05 17:02:58.686723 7fe1871ac700 -1 bluestore(/var/lib/ceph/osd/ceph-12)
_txc_add_transaction error (2) No such file or directory not handled on
operation 15 (op 0, counting from 0)
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
2017-09-05 17:02:58.686742 7fe1871ac700 -1 bluestore(/var/lib/ceph/osd/ceph-12)
unexpected error code
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*,
ObjectStore::Transaction*)' thread 7fe1871ac700 time 2017-09-05 17:02:58.686821
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]:
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueStore.cc:
9282: FAILED assert(0 == "unexpected error")
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: ceph
version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 1:
(ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x110) [0x7fe1a38bf510]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 2:
(BlueStore::_txc_add_transaction(BlueStore::TransContext*,
ObjectStore::Transaction*)+0x1487)
 [0x7fe1a3796057]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 3:
(BlueStore::queue_transactions(ObjectStore::Sequencer*,
std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&,
boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x3a0) [0x7fe1a37970a0]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 4:
(PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction,
std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65)
[0x7fe1a3508745]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 5:
(ECBackend::handle_sub_write(pg_shard_t, boost::intrusive_ptr<OpRequest>,
ECSubWrite&, ZTracer::Trace const&, Context*)+0x631) [0x7fe1a3628711]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 6:
(ECBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x327)
[0x7fe1a36392b7]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 7:
(PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x50)
[0x7fe1a353da10]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 8:
(PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
ThreadPool::TPHandle&)+0x58e) [0x7fe1a34a9a7e]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 9:
(OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>,
ThreadPool::TPHandle&)+0x3f9) [0x7fe1a333c729]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 10:
(PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
const&)+0x57) [0x7fe1a35ac197]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 11:
(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xfce)
[0x7fe1a3367c8e]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 12:
(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x839)
[0x7fe1a38c5029]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 13:
(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7fe1a38c6fc0]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 14:
(()+0x7dc5) [0x7fe1a0484dc5]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: 15:
(clone()+0x6d) [0x7fe19f57876d]
Sep 05 17:02:58 r72-k7-06-01.k8s.ash1.cloudsys.tmcs ceph-osd[4775]: NOTE: a
copy of the 

[ceph-users] PCIe journal benefit for SSD OSDs

2017-09-06 Thread Alex Gorbachev
We are planning a Jewel filestore-based cluster for a performance-sensitive
healthcare client, and the conservative OSD choice is the Samsung SM863A.

I am going to put an 8GB Areca HBA in front of it to cache small
metadata operations, but was wondering if anyone has seen a positive
impact from also using PCIe journals (e.g. Intel P3700 or even the
older 910 series) in front of such SSDs?
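
For completeness, pointing a filestore OSD's journal at the NVMe device is
done at prepare time. A minimal sketch, assuming /dev/sdb is the SM863A and
/dev/nvme0n1 the P3700 (ceph-disk partitions the journal device itself):

ceph-disk prepare /dev/sdb /dev/nvme0n1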

Thanks for any info you can share.
--
Alex Gorbachev
Storcium
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] MDS crashes shortly after startup while trying to purge stray files.

2017-09-06 Thread Micha Krause

Hi,

I was deleting a lot of hard-linked files when "something" happened.

Now my mds starts for a few seconds, writes a lot of these lines:

   -43> 2017-09-06 13:51:43.396588 7f9047b21700 10 log_client  will send
2017-09-06 13:51:40.531563 mds.0 10.210.32.12:6802/2735447218 4963 : cluster [ERR]
loaded dup inode 17d6511 [2,head] v17234443 at ~mds0/stray8/17d6511,
but inode 17d6511.head v17500983 already exists at ~mds0/stray7/17d6511


And finally this:


-3> 2017-09-06 13:51:43.396762 7f9047b21700 10 monclient: _send_mon_message
to mon.2 at 10.210.34.11:6789/0
-2> 2017-09-06 13:51:43.396770 7f9047b21700  1 -- 10.210.32.12:6802/2735447218
--> 10.210.34.11:6789/0 -- log(1000 entries from seq 4003 at 2017-09-06
13:51:38.718139) v1 -- ?+0 0x7f905c5d5d40 con 0x7f905902c600
-1> 2017-09-06 13:51:43.399561 7f9047b21700  1 -- 10.210.32.12:6802/2735447218
<== mon.2 10.210.34.11:6789/0 26 ==== mdsbeacon(152160002/0 up:active seq 8
v47532) v7 ==== 126+0+0 (20071477 0 0) 0x7f90591b2080 con 0x7f905902c600
 0> 2017-09-06 13:51:43.401125 7f9043b19700 -1 *** Caught signal (Aborted) **
 in thread 7f9043b19700 thread_name:mds_rank_progr

 ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
 1: (()+0x5087b7) [0x7f904ed547b7]
 2: (()+0xf890) [0x7f904e156890]
 3: (gsignal()+0x37) [0x7f904c5e1067]
 4: (abort()+0x148) [0x7f904c5e2448]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char 
const*)+0x256) [0x7f904ee5e386]
 6: (StrayManager::eval_remote_stray(CDentry*, CDentry*)+0x492) [0x7f904ebaad12]
 7: (StrayManager::__eval_stray(CDentry*, bool)+0x5f5) [0x7f904ebaefd5]
 8: (StrayManager::eval_stray(CDentry*, bool)+0x1e) [0x7f904ebaf7ae]
 9: (MDCache::scan_stray_dir(dirfrag_t)+0x165) [0x7f904eb04145]
 10: (MDCache::populate_mydir()+0x7fc) [0x7f904eb73acc]
 11: (MDCache::open_root()+0xef) [0x7f904eb7447f]
 12: (MDSInternalContextBase::complete(int)+0x203) [0x7f904ecad5c3]
 13: (MDSRank::_advance_queues()+0x382) [0x7f904ea689e2]
 14: (MDSRank::ProgressThread::entry()+0x4a) [0x7f904ea68e6a]
 15: (()+0x8064) [0x7f904e14f064]
 16: (clone()+0x6d) [0x7f904c69462d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 newstore
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   1/ 5 kinetic
   1/ 5 fuse
  99/99 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent 1
  max_new 1000
  log_file /var/log/ceph/ceph-mds.0.log
--- end dump of recent events ---

Looking at daemonperf, it seems the mds crashes when trying to write something:

root@mds01:~ # /etc/init.d/ceph restart
[ ok ] Restarting ceph (via systemctl): ceph.service.

root@mds01:~ # ceph daemonperf mds.0
---objecter---
writ read actv
   0    0    0
   0    0    0
   0    0    0
   6   12    0
   0    0    0
   0    0    0
   0    0    0
   0    3    1
   0    1    1
   0    0    0
   0    1    0
   0    1    1
   0    1    1
   0    1    1
   0    1    1
   0    0    0
   0    1    0
   0    1    0
   0    1    1
   0    0    0
  64    0    0
Traceback (most recent call last):
  File "/usr/bin/ceph", line 948, in <module>
    retval = main()
  File "/usr/bin/ceph", line 638, in main
    DaemonWatcher(sockpath).run(interval, count)
  File "/usr/lib/python2.7/dist-packages/ceph_daemon.py", line 265, in run
    dump = json.loads(admin_socket(self.asok_path, ["perf", "dump"]))
  File "/usr/lib/python2.7/dist-packages/ceph_daemon.py", line 60, in admin_socket
    raise RuntimeError('exception getting command descriptions: ' + str(e))
RuntimeError: exception getting command descriptions: [Errno 111] Connection refused


And indeed, I am able to prevent the crash by running:

root@mds02:~ # ceph --admin-daemon /var/run/ceph/ceph-mds.1.asok force_readonly

during startup of the mds.
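
Since the window between mds startup and the crash is short, one way is to
loop the admin-socket call until the socket appears -- a sketch:

while ! ceph --admin-daemon /var/run/ceph/ceph-mds.1.asok force_readonly 2>/dev/null; do
    sleep 0.2
done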

Any advice on how to repair the filesystem?

I already tried this without success:

http://docs.ceph.com/docs/jewel/cephfs/disaster-recovery/
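
For reference, the sequence on that page is roughly the sketch below -- the
journal export comes first so there is something to fall back to:

cephfs-journal-tool journal export backup.bin
cephfs-journal-tool event recover_dentries summary
cephfs-journal-tool journal reset
cephfs-table-tool all reset session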

Ceph Version used is Jewel 10.2.9.


Micha Krause
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Developers Monthly - September

2017-09-06 Thread Joao Eduardo Luis

On 09/06/2017 06:06 AM, Leonardo Vaz wrote:

Hey cephers,

The Ceph Developer Monthly is confirmed for tonight, September 6 at 9pm
Eastern Time (EDT), in an APAC-friendly time slot.


As much as I would love to attend and discuss some topics (especially 
the RADOS replication stuff), this is an ungodly hour to expect my brain 
to work properly ;)


Looking forward to the recording though!

  -Joao
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
Okie thanks all, will hold off 

-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com] 
Sent: 06 September 2017 17:58
To: Ashley Merrick 
Cc: Henrik Korkuc ; ceph-us...@ceph.com
Subject: Re: [ceph-users] Luminous Upgrade KRBD

On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick  wrote:
> Only drive for it was to be able to use this:
>
> http://docs.ceph.com/docs/master/rados/operations/upmap/
>
> To see if would help with the current very uneven PG MAP across 100+ OSD's, 
> something that can wait if current kernel isn't ready.

I guess it depends on the meaning of "current".  Proxmox's latest is indeed 
4.10.  The just-released upstream kernel 4.13 does support upmap exception 
table and other luminous features.

That said, the balancer isn't fully baked yet, so I'd wait.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ilya Dryomov
On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick  wrote:
> Only drive for it was to be able to use this:
>
> http://docs.ceph.com/docs/master/rados/operations/upmap/
>
> To see if would help with the current very uneven PG MAP across 100+ OSD's, 
> something that can wait if current kernel isn't ready.

I guess it depends on the meaning of "current".  Proxmox's latest is
indeed 4.10.  The just-released upstream kernel 4.13 does support upmap
exception table and other luminous features.

That said, the balancer isn't fully baked yet, so I'd wait.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
The only driver for it was to be able to use this:

http://docs.ceph.com/docs/master/rados/operations/upmap/

To see if it would help with the current very uneven PG map across 100+ OSDs,
something that can wait if the current kernel isn't ready.
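
For reference, the upmap balancer flow on luminous is roughly the sketch
below; it presumes every client really is luminous-capable, which is exactly
what needs confirming first:

ceph osd set-require-min-compat-client luminous
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on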

,Ashley

-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com] 
Sent: 06 September 2017 17:09
To: Henrik Korkuc 
Cc: Ashley Merrick ; ceph-us...@ceph.com
Subject: Re: [ceph-users] Luminous Upgrade KRBD

On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc  wrote:
> On 17-09-06 09:10, Ashley Merrick wrote:
>
> I was just going by : 
> docs.ceph.com/docs/master/start/os-recommendations/
>
>
> Which states 4.9
>
>
> docs.ceph.com/docs/master/rados/operations/crush-map
>
>
> Only goes as far as Jewel and states 4.5
>
>
> Not sure where else I can find a concrete answer to if 4.10 is new enough.
>
> Well, it looks like docs may need to be revisited as I was unable to 
> use kcephfs on 4.9 with luminous before downgrading tunables, not sure 
> about 4.10.
>
>
>
> ,Ashley
>
> 
> From: Henrik Korkuc 
> Sent: 06 September 2017 06:58:52
> To: Ashley Merrick; ceph-us...@ceph.com
> Subject: Re: [ceph-users] Luminous Upgrade KRBD
>
> On 17-09-06 07:33, Ashley Merrick wrote:
>
> Hello,
>
> Have recently upgraded a cluster to Luminous (Running Proxmox), at the 
> same time I have upgraded the Compute Cluster to 5.x meaning we now 
> run the latest kernel version (Linux 4.10.15-1) Looking to do the following :
>
> ceph osd set-require-min-compat-client luminous

This would effectively require 4.13 kernel for krbd and kcephfs.

You won't gain much by requiring luminous clients, so unless you have a 
well-formulated reason to do it, I'd advise against it.  That's also the reason 
os-recommendations page still lists 4.9 -- it is the latest jewel-compatible 
LTS kernel.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread ceph
Quick drop-in, if this is a suitable solution for you: rbd-nbd.
This will give you, for a small performance cost, a block device backed by
librbd (in userspace), so the kernel client's feature support is no longer
a concern.
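
A minimal usage sketch, assuming an image rbd/myimage:

rbd-nbd map rbd/myimage      # prints the attached device, e.g. /dev/nbd0
rbd-nbd unmap /dev/nbd0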

On 06/09/2017 11:08, Ilya Dryomov wrote:
> On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc  wrote:
>> On 17-09-06 09:10, Ashley Merrick wrote:
>>
>> I was just going by : docs.ceph.com/docs/master/start/os-recommendations/
>>
>>
>> Which states 4.9
>>
>>
>> docs.ceph.com/docs/master/rados/operations/crush-map
>>
>>
>> Only goes as far as Jewel and states 4.5
>>
>>
>> Not sure where else I can find a concrete answer to if 4.10 is new enough.
>>
>> Well, it looks like docs may need to be revisited as I was unable to use
>> kcephfs on 4.9 with luminous before downgrading tunables, not sure about
>> 4.10.
>>
>>
>>
>> ,Ashley
>>
>> 
>> From: Henrik Korkuc 
>> Sent: 06 September 2017 06:58:52
>> To: Ashley Merrick; ceph-us...@ceph.com
>> Subject: Re: [ceph-users] Luminous Upgrade KRBD
>>
>> On 17-09-06 07:33, Ashley Merrick wrote:
>>
>> Hello,
>>
>> Have recently upgraded a cluster to Luminous (Running Proxmox), at the same
>> time I have upgraded the Compute Cluster to 5.x meaning we now run the
>> latest kernel version (Linux 4.10.15-1) Looking to do the following :
>>
>> ceph osd set-require-min-compat-client luminous
> 
> This would effectively require 4.13 kernel for krbd and kcephfs.
> 
> You won't gain much by requiring luminous clients, so unless you have
> a well-formulated reason to do it, I'd advise against it.  That's also
> the reason os-recommendations page still lists 4.9 -- it is the latest
> jewel-compatible LTS kernel.
> 
> Thanks,
> 
> Ilya
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ilya Dryomov
On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc  wrote:
> On 17-09-06 09:10, Ashley Merrick wrote:
>
> I was just going by : docs.ceph.com/docs/master/start/os-recommendations/
>
>
> Which states 4.9
>
>
> docs.ceph.com/docs/master/rados/operations/crush-map
>
>
> Only goes as far as Jewel and states 4.5
>
>
> Not sure where else I can find a concrete answer to if 4.10 is new enough.
>
> Well, it looks like docs may need to be revisited as I was unable to use
> kcephfs on 4.9 with luminous before downgrading tunables, not sure about
> 4.10.
>
>
>
> ,Ashley
>
> 
> From: Henrik Korkuc 
> Sent: 06 September 2017 06:58:52
> To: Ashley Merrick; ceph-us...@ceph.com
> Subject: Re: [ceph-users] Luminous Upgrade KRBD
>
> On 17-09-06 07:33, Ashley Merrick wrote:
>
> Hello,
>
> Have recently upgraded a cluster to Luminous (Running Proxmox), at the same
> time I have upgraded the Compute Cluster to 5.x meaning we now run the
> latest kernel version (Linux 4.10.15-1) Looking to do the following :
>
> ceph osd set-require-min-compat-client luminous

This would effectively require 4.13 kernel for krbd and kcephfs.

You won't gain much by requiring luminous clients, so unless you have
a well-formulated reason to do it, I'd advise against it.  That's also
the reason the os-recommendations page still lists 4.9 -- it is the latest
jewel-compatible LTS kernel.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Henrik Korkuc

On 17-09-06 09:10, Ashley Merrick wrote:


I was just going by : docs.ceph.com/docs/master/start/os-recommendations/


Which states 4.9


docs.ceph.com/docs/master/rados/operations/crush-map


Only goes as far as Jewel and states 4.5


Not sure where else I can find a concrete answer to if 4.10 is new enough.

Well, it looks like the docs may need to be revisited, as I was unable to
use kcephfs on 4.9 with luminous until I downgraded the tunables; not sure
about 4.10.
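
The tunables change in question is a one-liner -- shown as a sketch; note
that changing tunables on a populated cluster triggers data movement:

ceph osd crush tunables jewel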




,Ashley


*From:* Henrik Korkuc 
*Sent:* 06 September 2017 06:58:52
*To:* Ashley Merrick; ceph-us...@ceph.com
*Subject:* Re: [ceph-users] Luminous Upgrade KRBD
On 17-09-06 07:33, Ashley Merrick wrote:

Hello,

Have recently upgraded a cluster to Luminous (running Proxmox); at the same
time I have upgraded the compute cluster to 5.x, meaning we now run the
latest kernel version (Linux 4.10.15-1). Looking to do the following:

ceph osd set-require-min-compat-client luminous

Does the 4.10 kernel support luminous features? I am afraid (but do not have
info to back it up) that 4.10 is too old for luminous features.

Below is the output of ceph features. The 4 next to the luminous row is as
expected for the 4 compute nodes; are the other 4, spread across hammer &
jewel, just records of when the nodes last connected before they were
upgraded to Proxmox 5.0? Am I safe to run the above command? No other RBD
resources are connected to this cluster.


  "client": {
        "group": {
            "features": "0x106b84a842a42",
            "release": "hammer",
            "num": 1
        },
        "group": {
            "features": "0x40106b84a842a52",
            "release": "jewel",
            "num": 3
        },
        "group": {
            "features": "0x1ffddff8eea4fffb",
            "release": "luminous",
            "num": 4
        }




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous Upgrade KRBD

2017-09-06 Thread Ashley Merrick
I was just going by : docs.ceph.com/docs/master/start/os-recommendations/


Which states 4.9


docs.ceph.com/docs/master/rados/operations/crush-map


Only goes as far as Jewel and states 4.5


Not sure where else I can find a concrete answer to if 4.10 is new enough.


,Ashley


From: Henrik Korkuc 
Sent: 06 September 2017 06:58:52
To: Ashley Merrick; ceph-us...@ceph.com
Subject: Re: [ceph-users] Luminous Upgrade KRBD

On 17-09-06 07:33, Ashley Merrick wrote:
Hello,

Have recently upgraded a cluster to Luminous (running Proxmox); at the same
time I have upgraded the compute cluster to 5.x, meaning we now run the latest
kernel version (Linux 4.10.15-1). Looking to do the following:

ceph osd set-require-min-compat-client luminous

Does the 4.10 kernel support luminous features? I am afraid (but do not have
info to back it up) that 4.10 is too old for luminous features.

Below is the output of ceph features. The 4 next to the luminous row is as
expected for the 4 compute nodes; are the other 4, spread across hammer &
jewel, just records of when the nodes last connected before they were upgraded
to Proxmox 5.0? Am I safe to run the above command? No other RBD resources are
connected to this cluster.

  "client": {
"group": {
"features": "0x106b84a842a42",
"release": "hammer",
"num": 1
},
"group": {
"features": "0x40106b84a842a52",
"release": "jewel",
"num": 3
},
"group": {
"features": "0x1ffddff8eea4fffb",
"release": "luminous",
"num": 4
}
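
Before flipping set-require-min-compat-client it may help to double-check
what is actually connected. A sketch -- the osd dump line shows the current
requirement, and the mon admin socket's session dump lists each client's
address and feature bits (run on a mon host; the daemon name is illustrative):

ceph osd dump | grep require_min_compat_client
ceph daemon mon.$(hostname -s) sessions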





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com