[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-04 Thread Marc
I am still thinking about it; I guess I will have to switch at some point. Currently I manage with the default.

[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-04 Thread Teoman ONAY
On Wed, Mar 03, 2021 at 03:21:04PM -0800, Matt Wilder wrote: > On Wed, Mar 3, 2021 at 9:20 AM Teoman Onay wrote: > > > Just go for CentOS Stream; it will be at least as stable as CentOS, and > > probably even more so. > > > > CentOS Stream is just the next minor version of the current RHEL minor > > w

[ceph-users] Re: Metadata for LibRADOS

2021-03-04 Thread Clyso GmbH - Ceph Foundation Member
Hello Cary, take a look at the Deutsche Telekom project. It is also based on librados, and they also needed a solution for managing the metadata of over 30 million email accounts: https://github.com/ceph-dovecot/dovecot-ceph-plugin Also think about how such a system will behave during rec
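Not from the thread itself, but as a hedged sketch of what per-account metadata on librados can look like, using the python3-rados bindings; the pool name 'mail-metadata', the object name, and the keys below are made up for illustration:

import rados

# Hedged sketch (names are hypothetical): keep small fixed attributes in
# xattrs and larger enumerable key/value sets in omap on a per-account object.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('mail-metadata')   # hypothetical pool
    try:
        obj = 'account.12345'                     # hypothetical object name
        ioctx.write_full(obj, b'')                # make sure the object exists

        # Small, fixed-size attributes fit well in xattrs.
        ioctx.set_xattr(obj, 'quota_bytes', b'1073741824')

        # Per-mailbox counters go into omap, which scales to many keys.
        with rados.WriteOpCtx() as op:
            ioctx.set_omap(op, ('mailbox/INBOX', 'mailbox/Sent'), (b'42', b'7'))
            ioctx.operate_write_op(op, obj)

        # Read the omap entries back.
        with rados.ReadOpCtx() as op:
            it, rc = ioctx.get_omap_vals(op, '', '', 100)
            ioctx.operate_read_op(op, obj)
            for key, val in it:
                print(key, val)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

Note that concentrating many omap keys on a few objects also concentrates recovery and scrub work on a few PGs, which is exactly the recovery-behaviour question raised above.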

[ceph-users] Resolving LARGE_OMAP_OBJECTS

2021-03-04 Thread Drew Weaver
Howdy, the dashboard on our cluster keeps showing LARGE_OMAP_OBJECTS. I went through this document: https://www.suse.com/support/kb/doc/?id=19698 I've found that we have a total of 5 buckets, each one owned by a different user. From what I have read on this issue, it seems to flip flop b
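Not part of the original post, but a hedged way to see which buckets are driving the warning is radosgw-admin's bucket limit check; a small wrapper sketch follows (the JSON field names may vary slightly between Ceph releases):

import json
import subprocess

# Hedged sketch: flag RGW buckets whose index shards hold many keys.
# Field names below match recent radosgw-admin output but may differ by release.
PER_SHARD_WARN = 200000  # assumption; compare with your
                         # osd_deep_scrub_large_omap_object_key_threshold

out = subprocess.run(
    ['radosgw-admin', 'bucket', 'limit', 'check', '--format=json'],
    check=True, capture_output=True, text=True).stdout

for user in json.loads(out):
    for b in user.get('buckets', []):
        status = str(b.get('fill_status', ''))
        if b.get('objects_per_shard', 0) > PER_SHARD_WARN or not status.startswith('OK'):
            print(b.get('bucket'), b.get('num_objects'), 'objects,',
                  b.get('num_shards'), 'shards,',
                  b.get('objects_per_shard'), 'per shard,', status)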

[ceph-users] Re: Metadata for LibRADOS

2021-03-04 Thread David Caro
On 03/04 09:46, Clyso GmbH - Ceph Foundation Member wrote: > [quotes the Deutsche Telekom / dovecot-ceph-plugin suggestion above]

[ceph-users] Re: Best practices for OSD on bcache

2021-03-04 Thread Alexander E. Patrakov
On Tue, 2 Mar 2021 at 13:52, James Page wrote: (Disclaimer: I have never tried to run Ceph on bcache in production, and the test cluster was destroyed before reaching its first deep scrub) > b) turn off the sequential cutoff > > sequential_cutoff = 0 > > This means that sequential writes will also alwa
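For reference (not from the quoted message), this knob lives in sysfs; a minimal sketch that applies it to every bcache device, assuming root, and accepting that the setting does not survive a reboot without a udev rule or similar:

import glob

# Hedged sketch: set sequential_cutoff = 0 on all bcache devices so sequential
# writes are cached too. Requires root; not persistent across reboots.
paths = glob.glob('/sys/block/bcache*/bcache/sequential_cutoff')
for path in paths:
    with open(path, 'w') as f:
        f.write('0')
    print('set', path, '= 0')
if not paths:
    print('no bcache devices found')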

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-04 Thread Benoît Knecht
Hi Drew, On Thursday, March 4th, 2021 at 15:18, Drew Weaver wrote: > [quotes the LARGE_OMAP_OBJECTS question above]
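The reply itself is cut off in the archive; purely as a hedged illustration of the usual remedy for large omap objects in an RGW index pool, here is a resharding sketch (bucket name and per-shard target are hypothetical, and on pre-Nautilus clusters resharding is a manual, offline step rather than dynamic):

import json
import math
import subprocess

# Hedged sketch: pick a shard count that keeps each index shard under a target
# number of keys, then reshard. Names and targets are hypothetical.
BUCKET = 'my-bucket'            # hypothetical bucket name
KEYS_PER_SHARD_TARGET = 100000  # stay under the large-omap warning threshold

stats = json.loads(subprocess.run(
    ['radosgw-admin', 'bucket', 'stats', f'--bucket={BUCKET}', '--format=json'],
    check=True, capture_output=True, text=True).stdout)

num_objects = stats.get('usage', {}).get('rgw.main', {}).get('num_objects', 0)
num_shards = max(1, math.ceil(num_objects / KEYS_PER_SHARD_TARGET))

subprocess.run(
    ['radosgw-admin', 'bucket', 'reshard',
     f'--bucket={BUCKET}', f'--num-shards={num_shards}'], check=True)
print(f'resharded {BUCKET} ({num_objects} objects) into {num_shards} shards')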

[ceph-users] Bluestore OSD crash with tcmalloc::allocate_full_cpp_throw_oom in multisite setup with PG_DAMAGED cluster error

2021-03-04 Thread Szabo, Istvan (Agoda)
Hi, I have a 3-DC multisite setup. The replication is directional, HKG->SGP->US, so the bucket is replicated from HKG to SGP, and the same bucket is replicated further from SGP to US. The HKG->SGP connection is pretty fast: 12.5 million objects (600 GB) transferred in 6.5 hours. Once the OSD
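Not from the original post, but when replication stalls after an OSD crash in a setup like this, the sync state is usually the first thing to check; a small hedged sketch (the bucket name is hypothetical, and this only reports state, it does not address the tcmalloc OOM itself):

import subprocess

# Hedged sketch: report zone-level and per-bucket sync state via radosgw-admin.
BUCKET = 'my-replicated-bucket'  # hypothetical bucket name

# Overall metadata/data sync state for the zone this host belongs to.
subprocess.run(['radosgw-admin', 'sync', 'status'], check=True)

# Per-bucket view: which shards are behind and which source zone they sync from.
subprocess.run(['radosgw-admin', 'bucket', 'sync', 'status',
                f'--bucket={BUCKET}'], check=True)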