Dear ceph community members,
We have a ceph cluster (mimic 13.2.4) with 7 nodes and 130+ OSDs. However, we
observed over 70 million active TCP connections on the radosgw host, which
makes the radosgw very unstable.
After further investigation, we found most of the TCP connections on the
I ended up taking Brett's recommendation and doing a "ceph osd set noscrub"
and "ceph osd set nodeep-scrub", then waiting for the running scrubs to
finish while doing a "ceph -w" to see what it was doing. Eventually, it
reported the following:
2019-05-18 16:08:44.032780 mon.gi-cba-01 [ERR] Health
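(Editor's note: for anyone following along, the sequence described above would look roughly like the commands below. The unset lines, which re-enable scrubbing once the investigation is done, are an addition for completeness and not part of the original mail.)

    ceph osd set noscrub          # stop scheduling new shallow scrubs
    ceph osd set nodeep-scrub     # stop scheduling new deep scrubs
    ceph -w                       # watch the cluster log until in-flight scrubs drain
    # ... investigate / repair ...
    ceph osd unset noscrub        # re-enable scrubbing when finished
    ceph osd unset nodeep-scrub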
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
-----Original Message-----
From: Florent B [mailto:flor...@coppint.com]
Sent: Sunday, 19 May 2019 12:06
To: Paul Emmerich
Cc: Ceph Users
Subject: Re: [ceph-users] Default min_size value for EC pools
Thank you Paul for your
Check out the log of the primary OSD in that PG to see what happened during scrubbing.
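(Editor's note: a quick sketch of how to find that primary OSD and its log; the PG id 1.2ab and OSD id 42 below are placeholders, and the log path assumes the default /var/log/ceph location.)

    ceph pg map 1.2ab                              # the first OSD in the acting set is the primary
    grep -i scrub /var/log/ceph/ceph-osd.42.log    # run on the host that holds that primary OSD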
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sun, May 19, 2019 at 12:41 AM Jorge
The default is k + 1, or k if m == 1.
min_size = k is unsafe and should never be set.
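(Editor's note: to see what a given EC pool is actually using, something like the following should work; the pool name "ecpool" and profile name "myprofile" are placeholders, and with e.g. k=4, m=2 the recommended min_size per the advice above would be 5.)

    ceph osd pool get ecpool erasure_code_profile   # which EC profile the pool uses
    ceph osd erasure-code-profile get myprofile     # shows k and m for that profile
    ceph osd pool get ecpool min_size               # current value
    ceph osd pool set ecpool min_size 5             # k+1 for k=4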
Paul
On Sun, May 19, 2019 at 11:31 AM
On Thu, May 16, 2019 at 3:55 PM Mark Lehrer wrote:
> > Steps 3-6 are to get the drive lvm volume back
>
> How much longer will we have to deal with LVM? If we can migrate non-LVM
> drives from earlier versions, how about we give ceph-volume the ability to
> create non-LVM OSDs directly?
>
We
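(Editor's note: as context for the quoted question, migrating non-LVM OSDs created before ceph-volume is what the "simple" subcommand is for. Roughly, with the data partition path below as a placeholder:)

    ceph-volume simple scan /dev/sdb1     # records the OSD metadata as JSON under /etc/ceph/osd/
    ceph-volume simple activate --all     # activates the scanned OSDs without converting them to LVM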