I am still thinking about it; I guess I will have to switch at some point. Currently
I manage with the default.
> -----Original Message-----
> From: Szabo, Istvan (Agoda)
> Sent: 04 March 2021 02:23
> To: Marc ; Alexander E. Patrakov
> ; Drew Weaver
> Cc: ceph-users@ceph.io
> Subject: RE: [ceph-use
On Wed, Mar 03, 2021 at 03:21:04PM -0800, Matt Wilder wrote:
> On Wed, Mar 3, 2021 at 9:20 AM Teoman Onay wrote:
>
> > Just go for CentOS Stream; it will be at least as stable as CentOS, and
> > probably even more so.
> >
> > CentOS Stream is just the next minor version of the current RHEL minor
> > w
Hello Cary,
take a look at the Deutsche Telekom project. It is also based on
librados and they also needed a solution for managing the metadata of
over 30 million email accounts.
https://github.com/ceph-dovecot/dovecot-ceph-plugin
but also think about how such a system will behave during recovery.
Howdy, the dashboard on our cluster keeps showing LARGE_OMAP_OBJECTS.
I went through this document
https://www.suse.com/support/kb/doc/?id=19698
I've found that we have a total of 5 buckets, each one is owned by a different
user.
From what I have read on this issue it seems to flip flop b
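For what it's worth, a minimal sketch of the commands that can be used to pin
down which index objects are triggering the warning (the pool, object and log
path below are assumptions; adjust them for your cluster):

  # Health detail names the pool(s) reporting large omap objects
  ceph health detail
  # Deep scrub logs the specific offending objects in the cluster log
  # (default cluster-log path on a mon host; yours may differ)
  grep -i "large omap object" /var/log/ceph/ceph.log
  # Count omap keys on a suspect bucket index object (index pool name is an assumption)
  rados -p default.rgw.buckets.index listomapkeys <object-name> | wc -l
  # Check whether any bucket exceeds the recommended objects-per-shard limit
  radosgw-admin bucket limit check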
On 03/04 09:46, Clyso GmbH - Ceph Foundation Member wrote:
> Hello Cary,
>
> take a look at the Deutsche Telekom project. It is also based on librados
> and they also needed a solution for managing the metadata of over 30 million
> email accounts.
>
> https://github.com/ceph-dovecot/dovecot-ceph-plugin
On Tue, 2 Mar 2021 at 13:52, James Page wrote:
(Disclaimer: I have never tried to run Ceph on bcache in production,
and the test cluster was destroyed before reaching its first deep
scrub)
> b) turn off the sequential cutoff
>
> sequential_cutoff = 0
>
> This means that sequential writes will also always go through the cache device.
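For reference, bcache exposes this knob per backing device in sysfs, so a
minimal sketch would be the following (the device name is an example, and as
per the disclaimer above this is untested in production):

  # Disable the sequential cutoff so sequential writes are also cached
  echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
  # Confirm the current value
  cat /sys/block/bcache0/bcache/sequential_cutoff

Note the value does not persist across reboots unless reapplied, e.g. via a
udev rule or a startup script.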
Hi Drew,
On Thursday, March 4th, 2021 at 15:18, Drew Weaver
wrote:
> Howdy, the dashboard on our cluster keeps showing LARGE_OMAP_OBJECTS.
>
> I went through this document
>
> https://www.suse.com/support/kb/doc/?id=19698
>
> I've found that we have a total of 5 buckets, each one is owned by a different user.
Hi,
I have a 3 DC multisite setup.
The replication is directional, like HKG->SGP->US, so the bucket is replicated
from HKG to SGP and the same bucket is replicated further from SGP to US.
The HKG->SGP connection is pretty fast: 12.5 million objects (600 GB)
transferred in 6.5 hours. Once the OSD
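As a minimal sketch, sync progress for each hop can be compared from the
respective sites (the bucket name below is a placeholder):

  # Overall metadata/data sync status for the local zone
  radosgw-admin sync status
  # Per-bucket sync detail for the bucket being replicated
  radosgw-admin bucket sync status --bucket=<bucket-name>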