[ceph-users] Massive TCP connection on radosgw

2019-05-19 Thread Li Wang
Dear ceph community members, We have a ceph cluster (mimic 13.2.4) with 7 nodes and 130+ OSDs. However, we observed over 70 million active TCP connections on the radosgw host, which makes the radosgw very unstable. After further investigation, we found most of the TCP connections on the
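
For anyone hitting something similar, a quick way to see where the connections are piling up is to break them down by state and by owning process on the radosgw host. This is a generic diagnostic sketch, not taken from the thread itself; it assumes iproute2's ss is available and that the gateway process is named radosgw:

    # Overall socket summary for the host
    ss -s
    # Connections held by the radosgw process specifically
    ss -tanp | grep -c radosgw
    # Break TCP connections down by state to spot CLOSE_WAIT/TIME_WAIT build-up
    ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn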

Re: [ceph-users] Fixing a HEALTH_ERR situation

2019-05-19 Thread Jorge Garcia
I ended up taking Brett's recommendation and doing a "ceph osd set noscrub" and "ceph osd set nodeep-scrub", then waiting for the running scrubs to finish while doing a "ceph -w" to see what it was doing. Eventually, it reported the following: 2019-05-18 16:08:44.032780 mon.gi-cba-01 [ERR] Health
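
The workflow described above, with the flags removed again afterwards, looks roughly like the following. This is a sketch of the commands mentioned in the thread plus the usual follow-ups; the grep filter is just one way to spot affected PGs:

    # Pause new scrubs while investigating (as recommended earlier in the thread)
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # Watch cluster events until the in-flight scrubs drain
    ceph -w
    # See which PGs the health error refers to
    ceph health detail
    ceph pg dump pgs_brief | grep -i inconsistent
    # Re-enable scrubbing once the investigation is done
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub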

Re: [ceph-users] Default min_size value for EC pools

2019-05-19 Thread Marc Roos
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/ -Original Message- From: Florent B [mailto:flor...@coppint.com] Sent: Sunday, 19 May 2019 12:06 To: Paul Emmerich Cc: Ceph Users Subject: Re: [ceph-users] Default min_size value for EC pools Thank you Paul for your

Re: [ceph-users] Fixing a HEALTH_ERR situation

2019-05-19 Thread Paul Emmerich
Check out the log of the primary OSD in that PG to see what happened during scrubbing -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Sun, May 19, 2019 at 12:41 AM Jorge
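
A sketch of how to act on that advice, assuming a hypothetical PG id of 2.1f and a primary OSD id of 12 (substitute the values reported by ceph health detail):

    # Find the acting set and primary OSD for the PG
    ceph pg map 2.1f
    # On the node hosting the primary OSD, check its log around the scrub time
    grep -iE 'scrub|error' /var/log/ceph/ceph-osd.12.log
    # or, on systemd-based installs:
    journalctl -u ceph-osd@12 --since "2019-05-18"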

Re: [ceph-users] Default min_size value for EC pools

2019-05-19 Thread Paul Emmerich
Default is k+1, or k if m == 1. min_size = k is unsafe and should never be set. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Sun, May 19, 2019 at 11:31 AM
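
To check and apply that rule on an existing EC pool, something like the following should work; the pool name ecpool and profile name myprofile are placeholders:

    # Inspect the pool's erasure-code profile to get k and m
    ceph osd pool get ecpool erasure_code_profile
    ceph osd erasure-code-profile get myprofile
    # Check the current min_size
    ceph osd pool get ecpool min_size
    # Per the advice above, keep min_size at k+1 (e.g. 5 for k=4, m=2)
    ceph osd pool set ecpool min_size 5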

Re: [ceph-users] Lost OSD from PCIe error, recovered, HOW to restore OSD process

2019-05-19 Thread Alfredo Deza
On Thu, May 16, 2019 at 3:55 PM Mark Lehrer wrote: > > Steps 3-6 are to get the drive lvm volume back > > How much longer will we have to deal with LVM? If we can migrate non-LVM > drives from earlier versions, how about we give ceph-volume the ability to > create non-LVM OSDs directly? > We
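
Once the LVM volume for the OSD is visible again, the OSD process can usually be brought back with ceph-volume's activate subcommands. A minimal sketch, assuming the OSD id and fsid are taken from the listing output:

    # List the OSDs ceph-volume knows about and their logical volumes
    ceph-volume lvm list
    # Activate a specific OSD (id and fsid come from the listing above)
    ceph-volume lvm activate <osd-id> <osd-fsid>
    # or activate everything that is discoverable on the host
    ceph-volume lvm activate --all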