[ceph-users] Re: Question on multi-site

2021-02-23 Thread Konstantin Shalygin
Replication works at the OSD layer; RGW is an HTTP frontend for objects. If you write an object via librados directly, RGW will not be aware of it. k Sent from my iPhone > On 22 Feb 2021, at 18:52, Cary FitzHugh wrote: > > Question is - do files which are written directly to an OSD get
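One quick way to see this split in practice (a sketch; the bucket name is hypothetical, and `default.rgw.buckets.data` is only the default name of the RGW data pool):

```shell
# Write an object directly at the RADOS layer, bypassing RGW entirely.
# RGW keeps its own bucket index, so this object never shows up in S3 listings.
rados -p default.rgw.buckets.data put my-raw-object ./payload.bin

# Listing through RGW shows only objects written via the S3/Swift API;
# my-raw-object is absent here:
s3cmd ls s3://my-bucket
```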

[ceph-users] Re: Storing 20 billions of immutable objects in Ceph, 75% <16KB

2021-02-23 Thread Konstantin Shalygin
OMAP with keys works like database-style replication: new keys/updates come to the acting set as a data stream, not as a full object. k Sent from my iPhone > On 22 Feb 2021, at 17:13, Benoît Knecht wrote: > > Is recovery faster for OMAP compared to the equivalent number of RADOS > objects?
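For context, OMAP key/value pairs can be written and inspected directly with the `rados` tool (a sketch; pool and object names are made up):

```shell
# Store key/value metadata in an object's OMAP rather than in its data payload.
rados -p mypool setomapval myobject mykey myvalue

# List all OMAP key/value pairs stored on the object.
rados -p mypool listomapvals myobject
```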

[ceph-users] Multisite sync shards cleanup

2021-02-23 Thread Szabo, Istvan (Agoda)
Hi, Is there a way to clean up the sync shards and start from scratch? Thank you
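For what it's worth, the usual way to restart multisite sync from scratch is the `sync init` family of commands (a sketch; verify against your Ceph version before running, and the zone name is a placeholder):

```shell
# Re-initialize metadata sync state on the secondary zone; a full resync follows.
radosgw-admin metadata sync init

# Re-initialize data sync state against the given source zone.
radosgw-admin data sync init --source-zone=us-east-1

# Restart the gateways so they pick up the reset sync state.
systemctl restart ceph-radosgw.target
```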

[ceph-users] Unable to delete bucket - endless multipart uploads?

2021-02-23 Thread David Monschein
Hi All, We've been dealing with what seems to be a pretty annoying bug for a while now. We are unable to delete a customer's bucket, which appears to have an extremely large number of aborted multipart uploads. I've had `radosgw-admin bucket rm --bucket=pusulax --purge-objects` running in a screen
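One workaround sometimes tried before re-running the bucket removal is to abort the stale multipart uploads through the S3 API itself (a sketch using the AWS CLI; the bucket name is from the thread, the rest is assumed, and your endpoint/credentials must be configured):

```shell
# List every outstanding multipart upload in the bucket, then abort each one.
aws s3api list-multipart-uploads --bucket pusulax \
  --query 'Uploads[].[Key,UploadId]' --output text |
while read -r key upload_id; do
  aws s3api abort-multipart-upload --bucket pusulax \
    --key "$key" --upload-id "$upload_id"
done
```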

[ceph-users] Re: ceph-radosgw: Initialization timeout, failed to initialize

2021-02-23 Thread Mathew Snyder
I increased the debug level to 20. There isn't anything additional being written: 2021-02-23 16:26:38.736642 7f2c45f3700 -1 Initialization timeout, failed to initialize 2021-02-23 16:26:38.931400 7f4d7bf4a000 0 deferred set uid:gid to 167:167 (ceph:ceph) 2021-02-23 16:26:38.931707 7f4d7bf4a000
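For reference, RGW debug verbosity can be raised at runtime through the admin socket (a sketch; the socket path and instance name are assumptions for a typical deployment):

```shell
# Raise RGW debug logging on a running gateway via its admin socket.
ceph daemon /var/run/ceph/ceph-client.rgw.gateway-node1.asok config set debug_rgw 20

# Or persistently, in ceph.conf under the gateway's section:
#   [client.rgw.gateway-node1]
#   debug rgw = 20
#   debug ms = 1
```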

[ceph-users] Re: ceph-radosgw: Initialization timeout, failed to initialize

2021-02-23 Thread Janne Johansson
On Tue 23 Feb 2021 at 16:53, Mathew Snyder wrote: > > We have a Red Hat installation of Luminous (full packages version: > 12.2.8-128.1). We're experiencing an issue where the ceph-radosgw service > will timeout during initialization and cycle through attempts every five > minutes until it

[ceph-users] Re: Network design issues

2021-02-23 Thread Stefan Kooman
On 2/21/21 9:51 AM, Frank Schilder wrote: Hi Stefan, thanks for the additional info. Dell will put me in touch with their deployment team soonish, and then I can ask about matching abilities. It turns out that the problem I observed might have a much more mundane explanation. I saw really long

[ceph-users] splitting Volume Group with odd number of PE in 2 logical volumes

2021-02-23 Thread Gheorghiță Butnaru
Hello, Recently I deployed a small Ceph cluster using cephadm. In this cluster, I have 3 OSD nodes with 8 Hitachi HDDs (9.1 TiB), 4 Micron 9300 NVMes (2.9 TiB), and 2 Intel Optane P4800X NVMes (375 GiB). I want to use the spinning disks for the data block, the 2.9 TiB NVMes for the block.DB, and the Intel
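On the odd-PE question in the subject: `lvcreate -l` accepts percentages, so the remainder extent can simply be absorbed by the second LV. A sketch of the arithmetic, with the corresponding LVM commands in comments (VG/LV names and the PE count are made up):

```shell
# A VG with an odd number of physical extents, e.g. 763 PEs:
PE=763
LV1=$((PE / 2))        # integer division: first LV gets 381 extents
LV2=$((PE - LV1))      # second LV takes the rest, 382, absorbing the odd extent
echo "$LV1 $LV2"

# In LVM terms (not run here; the second call claims whatever extents remain):
#   lvcreate -l 50%VG    -n db1 vg_nvme
#   lvcreate -l 100%FREE -n db2 vg_nvme
```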

[ceph-users] Re: Ceph nvme timeout and then aborting

2021-02-23 Thread Marc
I don't think there are people here advising the use of consumer-grade SSDs/NVMes. The enterprise SSDs often have a higher DWPD rating, and are simply stable under constant high load. My 1.5-year-old SM863a still has 099 wear level and 097 power-on hours; another SM863a of 3.8 years has 099 wear level and
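The figures quoted are normalized SMART attribute values (they start at 100 and count down). A sketch for reading them (the device path is a placeholder, and the attribute names assume a Samsung SATA SSD such as the SM863a):

```shell
# Show the normalized wear-leveling and power-on-hours SMART attributes.
smartctl -A /dev/sda | grep -Ei 'Wear_Leveling_Count|Power_On_Hours'
```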

[ceph-users] Re: multiple-domain for S3 on rgws with same ceph backend on one zone

2021-02-23 Thread Janne Johansson
>>> Hello, >>> We have a functional Ceph cluster with a pair of S3 RGWs in front that >>> use the A.B.C.D domain to be accessed. >>> >>> Now a new client asks to have access using the domain E.C.D, but to >>> already existing buckets. This is not a scenario discussed in the docs. >>> Apparently,
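The usual way to serve an additional S3 domain against the same zone is the zonegroup `hostnames` list (a sketch; the domains are from the thread, and whether a `period update --commit` applies depends on whether the realm uses periods):

```shell
# Export the zonegroup, add the new domain next to the existing one, re-import.
radosgw-admin zonegroup get > zg.json
# edit zg.json so it contains:  "hostnames": ["A.B.C.D", "E.C.D"]
radosgw-admin zonegroup set < zg.json
radosgw-admin period update --commit

# Restart the gateways so they serve buckets under both domains.
systemctl restart ceph-radosgw.target
```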