[ceph-users] strange osd beacon

2019-06-13 Thread Rafał Wądołowski
Hi, Is it normal for an osd beacon to carry no pgs, like below? The drive contains data, but I cannot get the OSD to run. Ceph v12.2.4  {     "description": "osd_beacon(pgs [] lec 857158 v869771)",     "initiated_at": "2019-06-14 06:39:37.972795",     "age":
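The JSON above looks like an entry from a monitor's admin-socket ops dump; a hedged way to inspect such pending operations and the OSD's own view of its state (mon id "a" and osd id 0 are placeholders):

  # dump in-flight operations on the monitor that received the beacon
  ceph daemon mon.a ops

  # ask the OSD itself what state it thinks it is in
  ceph daemon osd.0 status

  # confirm how the cluster map currently sees the OSD
  ceph osd find 0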

[ceph-users] mutable health warnings

2019-06-13 Thread Neha Ojha
Hi everyone, There has been some interest in a feature that lets users mute health warnings. There is a Trello card[1] associated with it, and we've had some discussion[2] about it in a past CDM. In general, we want to understand a few things: 1. what is the level of interest in this

Re: [ceph-users] Verifying current configuration values

2019-06-13 Thread Jorge Garcia
Thanks! That's the correct solution. I upgraded to 13.2.6 (latest mimic) and the option is now there... On 6/13/19 10:22 AM, Paul Emmerich wrote: I think this option was added in 13.2.4 (or 13.2.5?) Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at

[ceph-users] Octopus roadmap planning series is now available

2019-06-13 Thread Mike Perez
In case you missed these events on the community calendar, here are the recordings: https://www.youtube.com/playlist?list=PLrBUGiINAakPCrcdqjbBR_VlFa5buEW2J -- Mike Perez (thingee)

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Harald Staub
Looks fine (at least so far), thank you all! After having exported all 3 copies of the bad PG, we decided to try it in-place. We also set norebalance to make sure that no data is moved. When the PG was up, the resharding finished with a "success" message. The large omap warning is gone after
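A condensed sketch of the sequence described above; the bucket name and shard count are placeholders, only the flag names are real:

  # keep data from moving around while working on the PG
  ceph osd set norebalance

  # once the PG is active again, reshard the oversized bucket index
  radosgw-admin bucket reshard --bucket=BIGBUCKET --num-shards=128

  # confirm the large omap warning has cleared, then allow rebalancing again
  ceph health detail
  ceph osd unset norebalance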

[ceph-users] Ceph Day Netherlands Schedule Now Available!

2019-06-13 Thread Mike Perez
Hi everyone, The Ceph Day Netherlands schedule is now available! https://ceph.com/cephdays/netherlands-2019/ Registration is free and still open, so please come join us for some great content and discussion with members of the community of all levels!

Re: [ceph-users] Verifying current configuration values

2019-06-13 Thread Paul Emmerich
I think this option was added in 13.2.4 (or 13.2.5?) Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Thu, Jun 13, 2019 at 7:00 PM Jorge Garcia wrote: > I'm using

Re: [ceph-users] Verifying current configuration values

2019-06-13 Thread Jorge Garcia
I'm using mimic, which I thought was supported. Here's the full version: # ceph -v ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable) # ceph daemon osd.0 config show | grep memory     "debug_deliberately_leak_memory": "false",     "mds_cache_memory_limit":
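For context, the option being looked for here is presumably osd_memory_target, which is absent in 13.2.2 but present in later Mimic point releases; a quick check against a running daemon (osd.0 as a placeholder):

  # ask the daemon which options it actually knows about
  ceph daemon osd.0 config show | grep osd_memory_target

  # on releases that have it, read the current value from the daemon
  ceph daemon osd.0 config get osd_memory_target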

Re: [ceph-users] Any way to modify Bluestore label ?

2019-06-13 Thread Vincent Pharabot
Wow, ok, thanks a lot, I missed that in the doc... On Thu, 13 Jun 2019 at 16:49, Konstantin Shalygin wrote: > Hello, > > I would like to modify the BlueStore label of an OSD, is there a way to do this? > > I saw that we can display them with "ceph-bluestore-tool show-label" but I did not

Re: [ceph-users] Any way to modify Bluestore label ?

2019-06-13 Thread Konstantin Shalygin
Hello, I would like to modify the BlueStore label of an OSD, is there a way to do this? I saw that we can display them with "ceph-bluestore-tool show-label" but I did not find any way to modify them... Is it possible? I changed the LVM tags but that doesn't help with the BlueStore labels.. #
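At least in recent releases, ceph-bluestore-tool can rewrite individual label fields with set-label-key as well as display them; a minimal sketch with a placeholder device path (safest with the OSD stopped):

  # display the current label
  ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-0

  # rewrite a single key in the label, e.g. the whoami field
  ceph-bluestore-tool set-label-key --dev /dev/ceph-vg/osd-block-0 -k whoami -v 23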

[ceph-users] Any way to modify Bluestore label ?

2019-06-13 Thread Vincent Pharabot
Hello, I would like to modify the BlueStore label of an OSD, is there a way to do this? I saw that we can display them with "ceph-bluestore-tool show-label" but I did not find any way to modify them... Is it possible? I changed the LVM tags but that doesn't help with the BlueStore labels.. #

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Sage Weil
On Thu, 13 Jun 2019, Harald Staub wrote: > On 13.06.19 15:52, Sage Weil wrote: > > On Thu, 13 Jun 2019, Harald Staub wrote: > [...] > > I think that increasing the various suicide timeout options will allow > > it to stay up long enough to clean up the ginormous objects: > > > > ceph config set

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Harald Staub
On 13.06.19 15:52, Sage Weil wrote: On Thu, 13 Jun 2019, Harald Staub wrote: [...] I think that increasing the various suicide timeout options will allow it to stay up long enough to clean up the ginormous objects: ceph config set osd.NNN osd_op_thread_suicide_timeout 2h ok It looks

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Sage Weil
On Thu, 13 Jun 2019, Paul Emmerich wrote: > Something I had suggested off-list (repeated here if anyone else finds > themselves in a similar scenario): > > since only one PG is dead and the OSD now seems to be alive enough to > start/mount: consider taking a backup of the affected PG with > >

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Paul Emmerich
Something I had suggested off-list (repeated here if anyone else finds themselves in a similar scenario): since only one PG is dead and the OSD now seems to be alive enough to start/mount: consider taking a backup of the affected PG with ceph-objectstore-tool --op export --pgid X.YY (That might
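Spelled out with placeholder OSD ids, PG id and file paths (the OSD has to be stopped while ceph-objectstore-tool has its store open), the backup and a possible later restore look roughly like:

  systemctl stop ceph-osd@11

  # export the affected PG to a file
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-11 \
      --op export --pgid 6.2a --file /backup/pg-6.2a.osd-11.export

  systemctl start ceph-osd@11

  # if the PG later has to be rebuilt elsewhere, the dump can be imported
  # into another (stopped) OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --op import --file /backup/pg-6.2a.osd-11.export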

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Sage Weil
On Thu, 13 Jun 2019, Harald Staub wrote: > Idea received from Wido den Hollander: > bluestore rocksdb options = "compaction_readahead_size=0" > > With this option, I just tried to start 1 of the 3 crashing OSDs, and it came > up! I did with "ceph osd set noin" for now. Yay! > Later it aborted:

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-13 Thread Paul Emmerich
There's no (useful) internal ordering of these entries, so there isn't a more efficient way than getting everything and sorting it :( Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49
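In practice that means pulling the whole listing client-side and sorting it there; a hedged one-liner with a placeholder bucket name, assuming the per-entry JSON layout (name plus meta.mtime) and timestamp format of this radosgw version:

  radosgw-admin bucket list --bucket=mybucket --max-entries=1000000 \
    | jq -r '.[] | [.meta.mtime, .name] | @tsv' \
    | sort \
    | awk '$1 == "2019-06-01"'   # keep only objects modified on 01 Jun 2019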

Re: [ceph-users] Enable buffered write for bluestore

2019-06-13 Thread Tarek Zegar
http://docs.ceph.com/docs/master/rbd/rbd-config-ref/ From: Trilok Agarwal, Date: 06/12/2019 07:31 PM, Subject: [ceph-users] Enable buffered write for bluestore: Hi How can we enable
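That link covers the client-side RBD cache; on the OSD side the relevant setting is presumably bluestore_default_buffered_write, which is off by default. A hedged example of turning it on via the monitor configuration database (a ceph.conf entry works as well); restarting the OSDs afterwards is the safe assumption:

  # mon config database (Mimic and later)
  ceph config set osd bluestore_default_buffered_write true

  # ceph.conf equivalent, under [osd]:
  #   bluestore default buffered write = true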

[ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-13 Thread M Ranga Swami Reddy
Hello - Can we list the objects in rgw by last modified date? For example, I want to list all the objects that were modified on 01 Jun 2019. Thanks, Swami

[ceph-users] OSD: bind unable to bind on any port in range 6800-7300

2019-06-13 Thread Carlos Valiente
I'm running Ceph 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable) on a Kubernetes cluster using Rook (https://github.com/rook/rook), and my OSD daemons do not start. Each OSD process runs inside a Kubernetes pod, and each pod gets its own IP address. I spotted the following log
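The range the OSD tries is governed by ms_bind_port_min/ms_bind_port_max (6800 and 7300 by default), and each OSD needs several free ports within it; a hedged way to check both sides from inside the pod:

  # what range would the daemon use? (prints the compiled-in or configured value)
  ceph-conf --show-config-value ms_bind_port_min
  ceph-conf --show-config-value ms_bind_port_max

  # what is already listening in 6800-7300 inside the pod's network namespace?
  ss -tlnp | awk '$4 ~ /:(6[89][0-9][0-9]|7[0-2][0-9][0-9]|7300)$/'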

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Harald Staub
Idea received from Wido den Hollander: bluestore rocksdb options = "compaction_readahead_size=0" With this option, I just tried to start 1 of the 3 crashing OSDs, and it came up! I did with "ceph osd set noin" for now. Later it aborted: 2019-06-13 13:11:11.862 7f2a19f5f700 1 heartbeat_map
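For reference, the workaround mentioned is a ceph.conf override applied before starting the daemon (as far as the option semantics go, the string replaces the built-in rocksdb defaults rather than appending to them); a minimal sketch with a placeholder OSD id:

  # /etc/ceph/ceph.conf on the OSD host
  #   [osd]
  #   bluestore rocksdb options = "compaction_readahead_size=0"

  # keep the OSD from taking data while testing, as done above
  ceph osd set noin
  systemctl start ceph-osd@11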

[ceph-users] one pg blocked at active+undersized+degraded+remapped+backfilling

2019-06-13 Thread Brian Chang-Chien
We want to change the index pool (radosgw) rule from SATA to SSD. When we run ceph osd pool set default.rgw.buckets.index crush_ruleset x, all of the index PGs migrate to SSD, but one PG is still stuck on SATA and cannot be migrated; its status is active+undersized+degraded+remapped+backfilling
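A hedged checklist for finding out why that single PG cannot finish moving (the PG id below is a placeholder, the pool name is the default one from the question); the usual suspects are a CRUSH rule that cannot find enough SSD OSDs for that particular PG, or full/overloaded target OSDs:

  # which PGs are stuck, and where are they mapped now versus where they should be?
  ceph pg dump_stuck undersized
  ceph pg map 7.1f
  ceph pg 7.1f query        # look at recovery_state, up and acting

  # can the new rule actually satisfy the pool's replica count on SSDs?
  ceph osd pool get default.rgw.buckets.index size
  ceph osd crush rule dump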

Re: [ceph-users] num of objects degraded

2019-06-13 Thread Simon Ironside
Hi, 20067 objects of actual data with 3x replication = 60201 object copies. On 13/06/2019 08:36, zhanrzh...@teamsun.com.cn wrote: And total num of objects are 20067 [root@ceph-25 src]# ./rados -p rbd ls | wc -l 20013 [root@ceph-25 src]# ./rados -p cephfs_data ls | wc -l 0 [root@ceph-25 src]#

[ceph-users] num of objects degraded

2019-06-13 Thread zhanrzh...@teamsun.com.cn
Hi everyone, I am a bit confused about the number of degraded objects that ceph -s shows during recovery. The ceph -s output is as follows: [root@ceph-25 src]# ./ceph -s *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH *** cluster 3d52f70a-d82f-46e3-9f03-be03e5e68e33 health

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Harald Staub
On 13.06.19 00:33, Sage Weil wrote: [...] One other thing to try before taking any drastic steps (as described below): ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-NNN fsck This gives: fsck success and the large alloc warnings: tcmalloc: large alloc 2145263616 bytes == 0x562412e1

Re: [ceph-users] rocksdb corruption, stale pg, rebuild bucket index

2019-06-13 Thread Harald Staub
On 13.06.19 00:29, Sage Weil wrote: On Thu, 13 Jun 2019, Simon Leinen wrote: Sage Weil writes: 2019-06-12 23:40:43.555 7f724b27f0c0 1 rocksdb: do_open column families: [default] Unrecognized command: stats ceph-kvstore-tool: /build/ceph-14.2.1/src/rocksdb/db/version_set.cc:356: