Re: [ceph-users] Should I use "rgw s3 auth order = local, external"

2019-07-29 Thread Abhishek Lekshmanan
Christian writes: > Hi, > > I found this (rgw s3 auth order = local, external) on the web: > https://opendev.org/openstack/charm-ceph-radosgw/commit/3e54b570b1124354704bd5c35c93dce6d260a479 > > Which is seemingly exactly what I need for circumventing higher > latency when switching on keystone au
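For readers landing on this thread: the option in question lives in the radosgw client section of ceph.conf. The sketch below is an assumption based only on the option name quoted above (the gateway section name is an example), not on Abhishek's cut-off reply; check that your radosgw version actually supports the option before relying on it.

    # ceph.conf on the RGW host; [client.rgw.gateway1] is an example section name
    [client.rgw.gateway1]
        rgw s3 auth order = local, external   # try locally stored S3 credentials first,
                                              # fall back to external auth (e.g. Keystone) on a miss

The point, per the linked charm commit and Christian's description, is to avoid a Keystone round-trip for requests that can be satisfied by locally stored S3 keys.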

Re: [ceph-users] Nautilus:14.2.2 Legacy BlueStore stats reporting detected

2019-07-29 Thread Robert Sander
On 24.07.19 09:18, nokia ceph wrote: > Please let us know if disabling bluestore warn on legacy statfs is the only > option for upgraded clusters. You can repair the OSD with: systemctl stop ceph-osd@$OSDID; ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$OSDID; systemctl start ceph-osd@$OSD
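Spelled out per OSD, the repair Robert describes looks like the sketch below (my expansion of the truncated commands, with $OSDID as a placeholder). The repair rewrites the OSD's statfs accounting to the new per-pool format and requires the OSD to be stopped, so do one OSD at a time and let the cluster settle in between.

    OSDID=12                                                          # example id, substitute your own
    systemctl stop ceph-osd@$OSDID                                    # take the OSD offline
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$OSDID   # converts the legacy statfs accounting
    systemctl start ceph-osd@$OSDID                                   # bring it back, wait for HEALTH_OK before the next one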

[ceph-users] Problems understanding 'ceph-features' output

2019-07-29 Thread Massimo Sgaravatto
I have a ceph cluster where mon, osd and mgr are running ceph luminous. If I try running ceph features [*], I see that clients are grouped in 2 sets: - the first one appears to use luminous with features 0x3ffddff8eea4fffb - the second one appears to use luminous too, but with features 0x3ffddff8eea
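For anyone who wants to reproduce the check, the bitmasks above come from the mon; a minimal sketch of the two commands involved, run from a node with an admin keyring:

    ceph features                                    # groups daemons and clients by release name and feature bitmask
    ceph osd dump | grep require_min_compat_client   # the minimum client release the cluster currently enforces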

Re: [ceph-users] loaded dup inode (but no mds crash)

2019-07-29 Thread Yan, Zheng
On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster wrote: > > Hi all, > > Last night we had 60 ERRs like this: > > 2019-07-26 00:56:44.479240 7efc6cca1700 0 mds.2.cache.dir(0x617) > _fetched badness: got (but i already had) [inode 0x > [...2,head] ~mds2/stray1/10006289992 auth v14438219972 dirtypa

Re: [ceph-users] loaded dup inode (but no mds crash)

2019-07-29 Thread Dan van der Ster
On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote: > > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster wrote: > > > > Hi all, > > > > Last night we had 60 ERRs like this: > > > > 2019-07-26 00:56:44.479240 7efc6cca1700 0 mds.2.cache.dir(0x617) > > _fetched badness: got (but i already had) [inod

Re: [ceph-users] loaded dup inode (but no mds crash)

2019-07-29 Thread Yan, Zheng
On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster wrote: > > On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote: > > > > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster > > wrote: > > > > > > Hi all, > > > > > > Last night we had 60 ERRs like this: > > > > > > 2019-07-26 00:56:44.479240 7efc6cca1

Re: [ceph-users] loaded dup inode (but no mds crash)

2019-07-29 Thread Dan van der Ster
On Mon, Jul 29, 2019 at 3:47 PM Yan, Zheng wrote: > > On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster wrote: > > > > On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote: > > > > > > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster > > > wrote: > > > > > > > > Hi all, > > > > > > > > Last night w

Re: [ceph-users] Returning to the performance in a small cluster topic

2019-07-29 Thread vitalif
Your results are okay-ish. The general rule is that it's hard to achieve read latencies below 0.5 ms and write latencies below 1 ms with Ceph, **no matter what drives or network you use**. 10000 iops with one thread means 0.1 ms per operation. That's just impossible with Ceph currently. I've heard that some people ma
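The arithmetic behind that claim: at queue depth 1 every operation must complete before the next one starts, so single-threaded IOPS is simply the inverse of per-operation latency. A quick worked example:

    IOPS (QD=1) = 1 / latency
    0.1 ms  -> 1 / 0.0001 s = 10000 IOPS   (the figure above, out of reach for Ceph today)
    0.5 ms  -> 1 / 0.0005 s =  2000 IOPS   (roughly the single-threaded read ceiling)
    1.0 ms  -> 1 / 0.0010 s =  1000 IOPS   (roughly the single-threaded write ceiling)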

Re: [ceph-users] Problems understanding 'ceph-features' output

2019-07-29 Thread Paul Emmerich
yes, that's good enough for "upmap". Mapping client features to versions is somewhat unreliable by design: not every new release adds a new feature, some features are backported to older releases, and kernel clients are a completely independent implementation not directly mappable to a Ceph release.
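If the end goal is the upmap balancer (an assumption, since the thread only discusses the feature check), the follow-up commands are roughly:

    ceph osd set-require-min-compat-client luminous   # refuses if pre-luminous clients are still connected
    ceph balancer mode upmap
    ceph balancer on

The first command is exactly why the feature check matters: it will not proceed while clients lacking luminous-level features are connected.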

Re: [ceph-users] Problems understanding 'ceph-features' output

2019-07-29 Thread Massimo Sgaravatto
Thanks ! On Mon, Jul 29, 2019 at 5:26 PM Paul Emmerich wrote: > yes, that's good enough for "upmap". > > Mapping client features to versions is somewhat unreliable by design: not > every new release adds a new feature, some features are backported to older > releases, kernel clients are a comple

Re: [ceph-users] Error in ceph rbd mirroring(rbd::mirror::InstanceWatcher: C_NotifyInstanceRequestfinish: resending after timeout)

2019-07-29 Thread Mykola Golub
On Sat, Jul 27, 2019 at 06:08:58PM +0530, Ajitha Robert wrote: > *1) Will there be any folder related to rbd-mirroring in /var/lib/ceph?* no > *2) Is ceph rbd-mirror authentication mandatory?* no. But why are you asking? > *3) whenever I create any cinder volume loaded with a glance image I ge
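Not part of Mykola's truncated reply, but a reasonable first step when chasing these C_NotifyInstanceRequest timeouts is to ask the rbd-mirror daemon what it thinks of the pool; a sketch, assuming the mirrored pool is named volumes:

    rbd mirror pool info volumes               # mirroring mode and which peer clusters are registered
    rbd mirror pool status --verbose volumes   # per-image state, replay lag, and any error strings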

Re: [ceph-users] Ceph Nautilus - can't balance due to degraded state

2019-07-29 Thread EDH - Manuel Rios Fernandez
Same here, Nautilus 14.2.2. We evacuated one host and joined another one at the same time, and everything is unbalanced. Best From: ceph-users On behalf of David Herselman Sent: Monday, 29 July 2019 11:31 To: ceph-users@lists.ceph.com Subject: [ceph-users] Ceph Nautilus - can't balan
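A first diagnostic here (my suggestion, not from the thread): the mgr balancer deliberately stays idle while PGs are degraded, so it helps to see what it reports and how much recovery is still outstanding:

    ceph balancer status   # mode, whether it is active, and any queued plans
    ceph -s                # degraded/misplaced counts that keep the balancer from optimizing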

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-29 Thread Marc Schöchlin
Hello Jason, I updated the ticket https://tracker.ceph.com/issues/40822 On 24.07.19 at 19:20, Jason Dillaman wrote: > On Wed, Jul 24, 2019 at 12:47 PM Marc Schöchlin wrote: >> >> Testing with a 10.2.5 librbd/rbd-nbd is currently not that easy for me, >> because the ceph apt source does not co

[ceph-users] Wrong ceph df result

2019-07-29 Thread Sylvain PORTIER
Hi, When I look at my ceph status, I do not understand the result: ceph df detail
RAW STORAGE:
    CLASS    SIZE       AVAIL      USED      RAW USED   %RAW USED
    hdd      131 TiB    102 TiB    29 TiB    29 TiB     21.98
    TOTAL    131 TiB    102 TiB    29 TiB    29 TiB     21.98
POO

[ceph-users] Ceph Health Check error ( havent seen before )

2019-07-29 Thread Brent Kennedy
;: "", "config/mgr/mgr/dashboard/ssl": "false", "config/mgr/mgr/devicehealth/enable_monitoring": "true", "mgr/dashboard/accessdb_v1": "{\"version\": 1, \"users\": {\"ceph\": {\"usernam

Re: [ceph-users] Ceph Health Check error ( havent seen before )

2019-07-29 Thread Brent Kennedy
<<< binary blob of length 12 >>>", "config-history/7/+mgr/mgr/dashboard/RGW_API_SECRET_KEY": "", "config/mgr/mgr/dashboard/RGW_API_ACCESS_KEY": "", "config/mgr/mgr/dashboard/RGW_API_SECRET_KEY": "",
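The keys quoted in both messages look like a dump of the mon key-value store used by the mgr modules. If you want to reproduce the view (a sketch, assuming an admin keyring), the dashboard- and mgr-related entries can be pulled out like this:

    ceph config-key dump | grep -E 'mgr/dashboard|config/mgr'   # only the mgr/dashboard-related keys
    ceph health detail                                          # the full text of the health check in question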

Re: [ceph-users] Fwd: [lca-announce] linux.conf.au 2020 - Call for Sessions and Miniconfs now open!

2019-07-29 Thread Tim Serong
Good news! The CFP deadline has been extended to August 11, in case anyone missed out. On 7/25/19 9:21 PM, Tim Serong wrote: > Hi All, > > Just a reminder, there's only a few days left to submit talks for this > most excellent conference; the CFP is open until Sunday 28 July Anywhere > on Earth.

Re: [ceph-users] loaded dup inode (but no mds crash)

2019-07-29 Thread Yan, Zheng
On Mon, Jul 29, 2019 at 9:54 PM Dan van der Ster wrote: > > On Mon, Jul 29, 2019 at 3:47 PM Yan, Zheng wrote: > > > > On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster > > wrote: > > > > > > On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote: > > > > > > > > On Fri, Jul 26, 2019 at 4:45 PM Dan va

Re: [ceph-users] Multiple OSD crashes

2019-07-29 Thread Daniel Aberger - Profihost AG
As you can see, we are running 12.2.12 luminous. I could not find out whether this fix has been backported to luminous. Which version of luminous fixes this issue? Or is it fixed at all for luminous? On 20.07.19 at 03:25, Alex Litvak wrote: > The issue should have been resolved by backport > ht