Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
After reading a lot of documentation I started trying to recover my data with PG export and import to get my pool up, but every command I ran resulted in an error. I started by listing all PGs in order to build a simple import/export script, but I cannot even list the PGs with ceph-objectstore-tool
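
For reference, a minimal sketch of the ceph-objectstore-tool invocations this kind of export/import script would be built around (a sketch only: the data paths, OSD ids and PG id are placeholders, and the OSD must be stopped first):

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 --op list-pgs
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-6 --pgid 1.2a --op export --file /tmp/1.2a.export
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --op import --file /tmp/1.2a.export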

[ceph-users] Cannot delete bucket

2019-06-21 Thread Sergei Genchev
Hello, I am trying to delete a bucket using radosgw-admin, and failing. The bucket has 50K objects, but all of them are large. This is what I get: $ radosgw-admin bucket rm --bucket=di-omt-mapupdate --purge-objects --bypass-gc 2019-06-21 17:09:12.424 7f53f621f700 0 WARNING : aborted 1000 incomplete
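
For context, a minimal sketch of checking the bucket before retrying the removal, assuming the bucket name from the post (flags and timings may need tuning for large objects):

    radosgw-admin bucket stats --bucket=di-omt-mapupdate        # object count, size, marker
    radosgw-admin bucket rm --bucket=di-omt-mapupdate --purge-objects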

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor, here is the assert line and the 10 lines before and after it. -4> 2019-06-21 10:54:45.161493 7f689291ed00 20 bluefs _read left 0x8000 len 0x8000 -3> 2019-06-21 10:54:45.161497 7f689291ed00 20 bluefs _read got 32768 -2> 2019-06-21 10:54:45.161533 7f689291ed00 10 bluefs

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor, I am looking at the log and I am not sure exactly which line I should send. I tried tail -f /var/log/ceph/ceph-osd.6.log | grep -i assertion -A 5 but no valid result was returned. What would be the regex to get this specific line? I could also send the entire log. Best
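
Buffering usually hides matches when grep is chained after tail -f; a simpler sketch is to search the whole log for the assert plus surrounding context (log path taken from the post):

    grep -i -B 10 -A 10 'assert' /var/log/ceph/ceph-osd.6.log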

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Igor Fedotov
Saulo, please share a few log lines immediately before the assertion, not the starting ones. Thanks, Igor On 6/21/2019 5:37 PM, Saulo Silva wrote: Hi Igor, thanks for helping. Here is part of the log: head ceph-osd.6.log -n80 2019-06-21 10:50:56.090891 7f462db84d00 0 set uid:gid to

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor, thanks for helping. Here is part of the log: head ceph-osd.6.log -n80 2019-06-21 10:50:56.090891 7f462db84d00 0 set uid:gid to 167:167 (ceph:ceph) 2019-06-21 10:50:56.090910 7f462db84d00 0 ceph version 12.2.10-551-gbb089269ea (bb089269ea0c1272294c6b9777123ac81662b6d2) luminous

Re: [ceph-users] problems after upgrade to 14.2.1

2019-06-21 Thread Brent Kennedy
After installing the package on each mgr server and restarting the service, I disabled the module and then enabled it with the force option (it seems I cut that out of the output I pasted). It was essentially trial and error. After doing this, check and make sure you can see the module as
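
A minimal sketch of the disable/enable-with-force sequence described above, using the dashboard module as an assumed example (substitute the module that failed after the upgrade):

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard --force
    ceph mgr module ls        # confirm the module shows up as enabled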

Re: [ceph-users] out of date python-rtslib repo on https://shaman.ceph.com/

2019-06-21 Thread Matthias Leopold
On 20.06.19 at 07:19, Michael Christie wrote: On 06/17/2019 03:41 AM, Matthias Leopold wrote: Thank you very much for updating python-rtslib!! Could you maybe also do this for tcmu-runner (version 1.4.1)? I am just about to make a new 1.5 release. Give me a week. I am working on a last

[ceph-users] Binding library for ceph admin api in C#?

2019-06-21 Thread LuD j
Hello guys, We are working on rados gateway automation. We saw that there are already binding libraries in Golang, Python and Java for interacting with the Ceph admin API. Is there any existing binding in C# or PowerShell, or do we need to write one ourselves? Thank you in advance for your help.

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Konstantin Shalygin
Hi, folks. I have Luminous 12.2.12. Auto-resharding is enabled. In the stale instances list I have: # radosgw-admin reshard stale-instances list | grep clx "clx:default.422998.196", I have the same marker-id in the bucket stats of this bucket: # radosgw-admin bucket stats --bucket clx | grep
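
A minimal sketch of the comparison discussed in this thread, assuming the bucket name from the post: check the bucket's current id/marker against the stale-instances list before removing anything:

    radosgw-admin bucket stats --bucket clx | grep -E '"id"|"marker"'
    radosgw-admin reshard stale-instances list
    radosgw-admin reshard stale-instances rm     # only once the listed instances are confirmed stale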

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Igor Fedotov
Actually there are two issues here. The first one (addressed by https://github.com/ceph/ceph/pull/28688) is unloaded OSD compression settings when OSD compression mode = none and the pool's mode isn't none. The second: the OSD doesn't see pool settings after restart
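
For context, a minimal sketch of the two layers of compression settings being discussed, pool-level and OSD-level (the pool name is a placeholder):

    ceph osd pool set <pool> compression_mode aggressive
    ceph osd pool set <pool> compression_algorithm snappy
    ceph tell osd.* injectargs '--bluestore_compression_mode=passive'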

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Yan, Zheng
On Fri, Jun 21, 2019 at 6:10 PM Frank Schilder wrote: > Dear Yan, Zheng, does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a link to the issue tracker? No, https://tracker.ceph.com/issues/39987 > Thanks and best regards, > Frank Schilder

Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Frank Schilder
Dear Yan, Zheng, does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a link to the issue tracker? Thanks and best regards, Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Yan, Zheng Sent: 20 May 2019

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Igor Fedotov
On 6/20/2019 10:12 PM, Dan van der Ster wrote: I will try to reproduce with logs and create a tracker once I find the smoking gun... It's very strange -- I had the osd mode set to 'passive', and pool option set to 'force', and the osd was compressing objects for around 15 minutes. Then
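
One way to check whether a given OSD is actually compressing new writes (a sketch, assuming access to that OSD's admin socket) is to watch the BlueStore compression counters:

    ceph daemon osd.0 perf dump | grep -i bluestore_compressed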

Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Dan van der Ster
http://tracker.ceph.com/issues/40480 On Thu, Jun 20, 2019 at 9:12 PM Dan van der Ster wrote: > I will try to reproduce with logs and create a tracker once I find the smoking gun... > It's very strange -- I had the osd mode set to 'passive', and pool option set to 'force', and the osd was

[ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Rudenko Aleksandr
Hi, folks. I have Luminous 12.2.12. Auto-resharding is enabled. In the stale instances list I have: # radosgw-admin reshard stale-instances list | grep clx "clx:default.422998.196", I have the same marker-id in the bucket stats of this bucket: # radosgw-admin bucket stats --bucket clx | grep

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Igor Fedotov
Hi Saulo, this looks like a disk I/O error. Could you set debug_bluefs to 20 and collect the log, then share a few lines prior to the assertion? Checking the smartctl output might be a good idea too. Thanks, Igor On 6/21/2019 11:30 AM, Saulo Silva wrote: Hi, after a power failure all OSDs from a pool fail with the following error:
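
A minimal sketch of what is being asked for, assuming OSD 6 and /dev/sdX as placeholders (injectargs won't help here since the daemon dies at startup, so the debug level goes into ceph.conf before restarting):

    # /etc/ceph/ceph.conf, under [osd]:
    #     debug bluefs = 20
    systemctl restart ceph-osd@6
    smartctl -a /dev/sdX        # check the underlying disk health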

[ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi, after a power failure all OSDs from a pool fail with the following error: -5> 2019-06-20 13:32:58.886299 7f146bcb2d00 4 rocksdb: [/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/version_set.cc:2859] Recovered from manifest file:db/MANIFEST-003373
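
One common first check in this situation (a sketch, assuming OSD 6 and that the OSD daemon is stopped) is a BlueStore consistency check with ceph-bluestore-tool:

    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-6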