Re: [ceph-users] Ceph Day Germany :)

2018-02-11 Thread Kai Wagner
On 12.02.2018 00:33, c...@elchaka.de wrote: > I absolutely agree, too. This was really great! It would be fantastic if the > Ceph Days happened again in Darmstadt - or Düsseldorf ;) > > Btw, will the slides and perhaps videos of the presentations be available > online? AFAIK Danny is working on

[ceph-users] rbd feature overheads

2018-02-11 Thread Blair Bethwaite
Hi all, wondering if anyone can clarify whether there are any significant overheads from rbd features like object-map, fast-diff, etc. I'm interested in overheads from both a latency and a space perspective, e.g. can object-map be sanely deployed on a 100TB volume or does the client
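
For anyone wanting to experiment, the per-image features can be inspected and toggled at runtime; the pool/image names below are hypothetical and the commands are only a sketch of how such a test might look:

  # Show which features are currently enabled on an image:
  rbd info rbd/bigvolume
  # object-map and fast-diff can be enabled dynamically (object-map needs exclusive-lock):
  rbd feature enable rbd/bigvolume exclusive-lock object-map fast-diff
  # If the object map is flagged invalid afterwards, rebuild it:
  rbd object-map rebuild rbd/bigvolume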

[ceph-users] ceph mons de-synced from rest of cluster?

2018-02-11 Thread Chris Apsey
All, we recently doubled the number of OSDs in our cluster, and towards the end of the rebalancing I noticed that recovery IO fell to nothing and that the ceph mons eventually looked like this when I ran ceph -s:

  cluster:
    id: 6a65c3d0-b84e-4c89-bbf7-a38a1966d780
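
Not taken from the thread itself, but when the monitors look out of step with the rest of the cluster, a few standard commands are usually the first stop; these are generic suggestions only:

  # Check quorum membership and monitor state:
  ceph quorum_status --format json-pretty
  ceph mon stat
  # Check for clock skew between monitors:
  ceph time-sync-status
  # Get the full health breakdown:
  ceph health detail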

Re: [ceph-users] max number of pools per cluster

2018-02-11 Thread Konstantin Shalygin
And if for any reason even a single PG were damaged and, for example, stuck inactive, then all RBDs would be affected. The first thing that comes to mind is to create a separate pool for every RBD. I think this is insane. It is better to think about how Kipod saves data in CRUSH. Plan your failure domains and perform
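
As a rough illustration of the "plan your failure domains" advice (assuming Luminous), a single replicated rule with a rack-level failure domain can serve many RBD images in one pool; the rule and pool names here are hypothetical:

  # Create a replicated rule that spreads copies across racks:
  ceph osd crush rule create-replicated rack-replicated default rack
  # Point an existing pool at that rule:
  ceph osd pool set rbd crush_rule rack-replicated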

Re: [ceph-users] Ceph Day Germany :)

2018-02-11 Thread ceph
On 9 February 2018 at 11:51:08 CET, Lenz Grimmer wrote: >Hi all, > >On 02/08/2018 11:23 AM, Martin Emrich wrote: > >> I just want to thank all organizers and speakers for the awesome Ceph >> Day at Darmstadt, Germany yesterday. >> >> I learned of some cool stuff I'm eager to

Re: [ceph-users] degraded PGs when adding OSDs

2018-02-11 Thread Brad Hubbard
On Mon, Feb 12, 2018 at 8:51 AM, Simon Ironside wrote: > On 09/02/18 09:05, Janne Johansson wrote: >> >> 2018-02-08 23:38 GMT+01:00 Simon Ironside: >> >> Hi Everyone, >> I recently added an OSD to an

Re: [ceph-users] degraded PGs when adding OSDs

2018-02-11 Thread Simon Ironside
On 09/02/18 09:05, Janne Johansson wrote: 2018-02-08 23:38 GMT+01:00 Simon Ironside: Hi Everyone, I recently added an OSD to an active+clean Jewel (10.2.3) cluster and was surprised to see a peak of 23% objects degraded.
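
For context, a commonly used way to keep data movement under control while adding OSDs is to set the rebalance/backfill flags around the change; this is a generic sketch, not the procedure discussed in the thread:

  # Hold off data movement while the new OSDs are brought in:
  ceph osd set norebalance
  ceph osd set nobackfill
  # ... add and start the new OSDs, wait for peering to settle ...
  ceph osd unset nobackfill
  ceph osd unset norebalance
  # Watch backfill/recovery progress:
  ceph -s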

Re: [ceph-users] ceph-disk vs. ceph-volume: both error prone

2018-02-11 Thread Willem Jan Withagen
On 09/02/2018 21:56, Alfredo Deza wrote: On Fri, Feb 9, 2018 at 10:48 AM, Nico Schottelius wrote: Dear list, for a few days we have been dissecting ceph-disk and ceph-volume to find out what the appropriate way of creating partitions for Ceph is. ceph-volume does
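
For reference, the ceph-volume LVM workflow sidesteps manual partitioning entirely; a minimal sketch, assuming a BlueStore OSD on a spare device (the device path is hypothetical):

  # One shot: prepare and activate an OSD on /dev/sdb
  ceph-volume lvm create --bluestore --data /dev/sdb
  # Or as two separate steps:
  ceph-volume lvm prepare --bluestore --data /dev/sdb
  ceph-volume lvm activate --all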

Re: [ceph-users] Is there a "set pool readonly" command?

2018-02-11 Thread David Turner
If you set min_size to 2 or more, it will disable reads and writes to the pool by blocking requests. min_size is the minimum number of copies of a PG that need to be online to allow IO to the data. If you only have 1 copy, then it will prevent IO. It's not a flag you can set on the pool, but it should work
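
A minimal sketch of the min_size approach described above, assuming a replicated pool named rbd (the pool name and values are illustrative):

  # Check the current replication settings:
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # Raise min_size above the number of surviving copies to block client IO:
  ceph osd pool set rbd min_size 2
  # Once recovery has restored enough copies, lower it again:
  ceph osd pool set rbd min_size 1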

[ceph-users] Is there a "set pool readonly" command?

2018-02-11 Thread Nico Schottelius
Hello, we have one pool in which about 10 disks failed last week (fortunately mostly sequentially), and which now has some PGs that are only left on one disk. Is there a command to set one pool into "read-only" mode or even "recovery io-only" mode so that the only thing it is doing is
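
Not taken from this thread, but for completeness: a per-pool read-only flag is not on offer here (as the reply above also notes), while the cluster-wide pause flag blocks client reads and writes and leaves recovery traffic running; a minimal sketch:

  # Pause all client IO cluster-wide (recovery and backfill continue):
  ceph osd set pause
  # Resume client IO when done:
  ceph osd unset pause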