[ceph-users] Small RGW objects and RADOS 64KB minimum size

2021-02-14 Thread Loïc Dachary
Hello, Reading Karan's blog post about benchmarking the insertion of billions of objects into Ceph via S3 / RGW[0] from last year, it reads: > we decided to lower bluestore_min_alloc_size_hdd to 18KB and re-test. As represented in chart-5, the object creation rate was found to be notably reduced …
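For reference, a minimal sketch of how such a change is typically applied (assuming a release where OSD options live in the monitor config store; note that bluestore_min_alloc_size_hdd is baked in when an OSD is created, so existing OSDs have to be redeployed to pick it up):

    ceph config set osd bluestore_min_alloc_size_hdd 18432   # 18 KiB, the value from the blog post
    ceph config get osd bluestore_min_alloc_size_hdd         # confirm the stored value
    # then recreate each OSD, since the allocation size is fixed at mkfs time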

[ceph-users] Re: share haproxy config for radosgw [EXT]

2021-02-14 Thread Graham Allan
On Tue, Feb 9, 2021 at 11:00 AM Matthew Vernon wrote: > On 07/02/2021 22:19, Marc wrote: >> I was wondering if someone could post a config for haproxy. Is there something specific to configure? Like binding clients to a specific backend server, client timeouts, security specific to rgw e…
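For anyone finding this thread later, a minimal untested sketch of the kind of haproxy config being asked about; addresses, ports, certificate path and timeouts are placeholders, not recommendations:

    frontend rgw_frontend
        bind *:443 ssl crt /etc/haproxy/rgw.pem
        mode http
        timeout client 90s            # S3 clients may hold connections open for a while
        default_backend rgw_backend

    backend rgw_backend
        mode http
        balance roundrobin
        timeout server 90s            # leave headroom for large multipart uploads
        server rgw1 10.0.0.11:8080 check
        server rgw2 10.0.0.12:8080 check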

[ceph-users] Is replacing OSD whose data is on HDD and DB is on SSD supported?

2021-02-14 Thread Tony Liu
Hi, I've been trying with v15.2 and v15.2.8, no luck. Wondering if this is actually supported or has ever worked for anyone. Here is what I've done. 1) Create a cluster with 1 controller (mon and mgr) and 3 OSD nodes, each of which has 1 SSD for DB and 8 HDDs for data. 2) OSD service spec. se…
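A minimal sketch of the drivegroup-style OSD service spec referred to in step 2; the service id and host pattern are placeholders:

    service_type: osd
    service_id: hdd_data_ssd_db
    placement:
      host_pattern: 'osd-node-*'
    spec:
      data_devices:
        rotational: 1        # the 8 HDDs carry the data
      db_devices:
        rotational: 0        # the shared SSD carries the DB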

[ceph-users] Re: share haproxy config for radosgw [EXT]

2021-02-14 Thread Tony Liu
You can have BGP-ECMP to multiple HAProxy instances to support active-active mode, instead of using keepalived for active-backup mode, if the traffic volume actually requires multiple HAProxy instances. Tony
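For comparison, the keepalived active-backup setup being contrasted here is roughly the following; VIP, interface and priority are placeholders:

    vrrp_instance rgw_vip {
        state MASTER                # BACKUP on the standby HAProxy node
        interface eth0
        virtual_router_id 51
        priority 100                # set lower on the standby node
        virtual_ipaddress {
            192.0.2.10/24           # VIP that S3 clients resolve to
        }
    }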

[ceph-users] Re: reinstalling node with orchestrator/cephadm

2021-02-14 Thread Tony Liu
I followed https://tracker.ceph.com/issues/46691 to bring up the OSD. "ceph osd tree" shows it's up. "ceph pg dump" shows PGs are remapped. How can I make cephadm aware of it (so it shows up in "ceph orch ps")? Because "ceph status" complains "1 stray daemon(s) not managed by cephadm". Thanks! Tony
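One way to get a manually started daemon under cephadm management is a legacy-style adoption, run on the OSD host; this is a sketch and osd.12 is a placeholder id:

    cephadm adopt --style legacy --name osd.12
    ceph orch ps    # the adopted OSD should now be listed and no longer counted as stray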

[ceph-users] Re: Latency increase after upgrade 14.2.8 to 14.2.16

2021-02-14 Thread Björn Dolkemeier
Setting bluefs_buffered_io=true via restart of (all) OSDs didn’t change anything. But I made another observation: once a week a large number of objects (and space) is reclaimed because of fstrim running inside the VMs. After this the latency is fine for about 12 hours or so and is then gradual…
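For what it's worth, a quick way to confirm a restarted OSD actually picked up the setting is to query the running daemon through its admin socket (osd.0 is a placeholder id; run on the host where that daemon lives):

    ceph daemon osd.0 config get bluefs_buffered_io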