Re: [ceph-users] radosgw pegging down 5 CPU cores when no data is being transferred

2019-08-23 Thread Eric Ivancich
Good morning, Vladimir, Please create a tracker for this (https://tracker.ceph.com/projects/rgw/issues/new) and include the link to it in an email reply. And if you can include any more potentially relevant details, please do so. I’ll add my

Re: [ceph-users] BlueStore.cc: 11208: ceph_abort_msg("unexpected error")

2019-08-23 Thread Paul Emmerich
I've seen that before (but never on Nautilus), there's already an issue at tracker.ceph.com but I don't recall the id or title. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89

Re: [ceph-users] BlueStore.cc: 11208: ceph_abort_msg("unexpected error")

2019-08-23 Thread Lars Täuber
Hi Paul, a result of fgrep is attached. Can you do something with it? I can't read it. Maybe this is the relevant part: " bluestore(/var/lib/ceph/osd/first-16) _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)" Later I tried it again and

Re: [ceph-users] BlueStore.cc: 11208: ceph_abort_msg("unexpected error")

2019-08-23 Thread Paul Emmerich
Filter the log for "7f266bdc9700", which is the id of the crashed thread; it should contain more information on the transaction that caused the crash. Paul
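Pulling those lines out of the OSD log is a one-liner; a minimal sketch, assuming a conventional log path for OSD 16 (the path is a placeholder, not taken from the thread -- adjust it for your cluster):

```shell
# Extract every line written by the crashed thread (id 7f266bdc9700),
# with a little surrounding context for each match.
# LOG is a placeholder path; point it at your actual OSD log.
LOG=/var/log/ceph/ceph-osd.16.log
grep -n -B2 -A5 '7f266bdc9700' "$LOG"
```

The `-B2 -A5` context flags help capture the transaction dump that BlueStore prints around the abort message.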

[ceph-users] BlueStore.cc: 11208: ceph_abort_msg("unexpected error")

2019-08-23 Thread Lars Täuber
Hi there! In our test cluster there is an OSD that won't start anymore. Here is a short part of the log: -1> 2019-08-23 08:56:13.316 7f266bdc9700 -1 /tmp/release/Debian/WORKDIR/ceph-14.2.2/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*,

[ceph-users] Balancer doesn't work with pgs in state backfill_toofull

2019-08-23 Thread EDH - Manuel Rios Fernandez
The affected root has more than 70TB free. The only solution is to manually reweight the OSD. But in this situation the balancer in upmap mode should move data to get everything HEALTHY. Hope some fix comes in the next 14.2.X to fix that issue. Ceph 14.2.2, CentOS 7.6 cluster: id:
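The manual reweight and the upmap balancer described above are both driven from the Ceph CLI; a sketch of the usual commands, assuming a live cluster and that OSD id 12 and weight 0.85 are placeholders (these commands require a running cluster, so they are shown for illustration only):

```shell
# Temporarily lower the override weight of the too-full OSD so
# backfill can drain PGs off it (id and weight are placeholders).
ceph osd reweight 12 0.85

# Alternatively, let the balancer module redistribute PGs:
# switch it to upmap mode and enable it.
ceph balancer mode upmap
ceph balancer on

# Inspect what the balancer is doing (or would do).
ceph balancer status
```

Note that `ceph osd reweight` sets a temporary override weight (0..1), distinct from the permanent CRUSH weight set by `ceph osd crush reweight`.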