[ceph-users] Ceph IRC channel linked to Slack

2019-06-10 Thread Alvaro Soto
Hello Cephers, for those who find it easier to stay connected to the community via Slack, the OpenStack community in Latam has set this up at [1], in the channel #ceph, and you can auto-invite at [2]. Feel free to use and share. [1] https://openstack-latam.slack.com [2] https://latam.openstackday.m

[ceph-users] Ceph Day Netherlands CFP Extended to June 14th

2019-06-10 Thread Mike Perez
Hey everyone, We have extended the CFP for Ceph Day Netherlands to June 14! The event itself will be taking place on July 2nd. You can find more information on how to register for the event and apply for the CFP here: https://ceph.com/cephdays/netherlands-2019/ We look forward to seeing you for

Re: [ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Ilya Dryomov
On Mon, Jun 10, 2019 at 8:03 PM Jason Dillaman wrote: > > On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote: > > > > When I run: > > > > rbd map --name client.lol poolname/somenamespace/imagename > > > > The image is mapped to /dev/rbd0 and > > > > /dev/rbd/poolname/imagename > > > > I would

Re: [ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Jason Dillaman
On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote: > > When I run: > > rbd map --name client.lol poolname/somenamespace/imagename > > The image is mapped to /dev/rbd0 and > > /dev/rbd/poolname/imagename > > I would expect the rbd to be mapped to (the rbdmap tool tries this name): > > /dev/r

[ceph-users] krbd namespace missing in /dev

2019-06-10 Thread Jonas Jelten
When I run: rbd map --name client.lol poolname/somenamespace/imagename The image is mapped to /dev/rbd0 and /dev/rbd/poolname/imagename I would expect the rbd to be mapped to (the rbdmap tool tries this name): /dev/rbd/poolname/somenamespace/imagename The current map point would not all
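
For reference, a minimal sketch of the workflow this thread is describing, using the pool, namespace, and client names from the report above; the namespace-creation syntax is the Nautilus-era one as best I recall it, so treat it as an assumption and check rbd help namespace create on your release:

  # create a namespace in an existing pool and an image inside it
  rbd namespace create poolname/somenamespace
  rbd create --size 1G poolname/somenamespace/imagename

  # map it with the CephX user from the example
  rbd map --name client.lol poolname/somenamespace/imagename

  # the kernel device shows up as /dev/rbd0; the udev symlink discussed here
  # lands at /dev/rbd/poolname/imagename instead of the expected
  # /dev/rbd/poolname/somenamespace/imagename
  ls -l /dev/rbd0 /dev/rbd/poolname/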

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread Robert LeBlanc
I'm glad it's working. To be clear, did you use wpq, or is it still the prio queue? Sent from a mobile device, please excuse any typos. On Mon, Jun 10, 2019, 4:45 AM BASSAGET Cédric wrote: > an update from 12.2.9 to 12.2.12 seems to have fixed the problem ! > > On Mon, 10 Jun 2019 at 12:25, BASS
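
For context, the queue Robert is asking about is selected per OSD with the osd_op_queue option; a minimal sketch of how it is typically set in ceph.conf (the cut_off line is an optional companion setting, and changing either requires an OSD restart):

  [osd]
  osd_op_queue = wpq
  osd_op_queue_cut_off = high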

Re: [ceph-users] balancer module makes OSD distribution worse

2019-06-10 Thread Josh Haft
PGs are not perfectly balanced per OSD, but I think that's expected/OK due to setting crush_compat_metrics to bytes? Though I realize as I type this that what I really want is equal percent-used, which may not be possible given the slight variation in disk size (see below) in my cluster? # ceph os
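
As a reference point, a hedged sketch of how that compat metric is usually configured on a Luminous-era balancer; the config-key path is the Luminous mechanism, and on Nautilus and later the equivalent is ceph config set mgr mgr/balancer/crush_compat_metrics bytes:

  ceph balancer mode crush-compat
  ceph config-key set mgr/balancer/crush_compat_metrics bytes
  ceph balancer on
  ceph osd df        # then watch the %USE / VAR spread per OSD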

[ceph-users] Luminous PG stuck peering after added nodes with noin

2019-06-10 Thread Aleksei Gutikov
Hi all! Last week we ran into a terrible situation after adding 4 new nodes to one of our clusters. Trying to reduce PG moves, we set the noin flag. We then deployed the 4 new nodes, adding 30% more OSDs with reweight=0. After that a huge number of PGs stalled in peering or activating state - about 20%
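
A short sketch of the expansion sequence described above, shown only to make the steps concrete (the stuck-PG check at the end is an assumption about how one might watch the symptom, not part of the original report):

  ceph osd set noin                 # new OSDs come up but are not marked in
  # ... deploy the 4 new nodes (about 30% more OSDs, reweight=0) ...
  ceph osd unset noin
  ceph osd in <osd-id>              # or mark the new OSDs in one by one
  ceph pg dump pgs_brief | grep -E 'peering|activating'   # count stuck PGs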

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-10 Thread Stefan Kooman
Quoting solarflow99 (solarflo...@gmail.com): > can the bitmap allocator be set in ceph-ansible? I wonder why is it not > default in 12.2.12 We don't use ceph-ansible. But if ceph-ansible allows you to set specific ([osd]) settings in ceph.conf, I guess you can do it. I don't know what the policy i
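
If ceph-ansible is in use, the usual way to inject arbitrary ceph.conf settings is its ceph_conf_overrides variable; a hedged sketch (the variable is the one ceph-ansible documents for conf overrides, but the exact file placement depends on your inventory layout):

  # group_vars/all.yml (ceph-ansible)
  ceph_conf_overrides:
    osd:
      bluestore_allocator: bitmap

  # which renders roughly this ceph.conf fragment on the OSD hosts
  [osd]
  bluestore_allocator = bitmap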

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
An update from 12.2.9 to 12.2.12 seems to have fixed the problem! On Mon, 10 Jun 2019 at 12:25, BASSAGET Cédric wrote: > Hi Robert, > Before doing anything on my prod env, I generate r/w load on the ceph cluster using > fio. > On my newest cluster, release 12.2.12, I did not manage to get > the (REQ

Re: [ceph-users] OSD caching on EC-pools (heavy cross OSD communication on cached reads)

2019-06-10 Thread Janne Johansson
On Sun, 9 Jun 2019 at 18:29, wrote: > makes sense - makes the case for ec pools smaller though. > > Sunday, 9 June 2019, 17.48 +0200 from paul.emmer...@croit.io < > paul.emmer...@croit.io>: > > Caching is handled in BlueStore itself, erasure coding happens on a higher > layer. > > > In your case,
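
For reference, the BlueStore cache mentioned here is sized per OSD, independently of whether the pool on top is replicated or erasure coded; a minimal sketch of the relevant ceph.conf options (the sizes are illustrative, not recommendations):

  [osd]
  bluestore_cache_size_hdd = 1073741824   # 1 GiB per HDD-backed OSD
  bluestore_cache_size_ssd = 3221225472   # 3 GiB per SSD-backed OSD
  # leaving bluestore_cache_size = 0 lets the hdd/ssd value apply automatically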

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
Hi Robert, Before doing anything on my prod env, I generate r/w load on the ceph cluster using fio. On my newest cluster, release 12.2.12, I did not manage to get the (REQUEST_SLOW) warning, even when my OSD disk usage goes above 95% (fio ran from 4 different hosts). On my prod cluster, release 12.2.9, as soo
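
The kind of read/write load mentioned above can be generated with a plain fio run; a sketch against a mapped RBD device (the device path and job parameters are illustrative, and writing to the device is destructive to whatever is stored on it):

  fio --name=mixed --filename=/dev/rbd0 --direct=1 --ioengine=libaio \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
      --time_based --runtime=300 --group_reporting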

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread Robert LeBlanc
On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <cedric.bassaget...@gmail.com> wrote: > Hello Robert, > My disks did not reach 100% on the last warning, they climb to 70-80% > usage. But I see rrqm / wrqm counters increasing... > > Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-10 Thread BASSAGET Cédric
Hello Robert, My disks did not reach 100% on the last warning, they climb to 70-80% usage. But I see rrqm / wrqm counters increasing... Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 4.00 0.00
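
The counters quoted above are the extended device statistics from iostat; for reference, they come from an invocation like the following (the one-second interval is arbitrary):

  iostat -x 1      # per-device rrqm/s, wrqm/s, await, %util, refreshed every second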