[ceph-users] Re: ceph-osd performance on ram disk

2020-09-14 Thread George Shuklin
On 11/09/2020 17:44, Mark Nelson wrote: On 9/11/20 4:15 AM, George Shuklin wrote: On 10/09/2020 19:37, Mark Nelson wrote: On 9/10/20 11:03 AM, George Shuklin wrote: ... Are there any knobs to tweak to see higher performance for ceph-osd? I'm pretty sure it's not any kind of leveling, GC or
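For anyone hunting for such knobs: the OSD's sharded op queue is the usual first stop. A minimal sketch of a test override, with purely illustrative values (these options exist in current Ceph releases; whether they actually help here is exactly what the thread is debating):

    # ceph.conf, [osd] section -- illustrative values, not a recommendation
    [osd]
    osd_op_num_shards = 8               # number of op-queue shards per OSD
    osd_op_num_threads_per_shard = 2    # worker threads per shard

    # or set centrally (takes effect after an OSD restart):
    # ceph config set osd osd_op_num_shards 8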

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-11 Thread Mark Nelson
On 9/11/20 4:15 AM, George Shuklin wrote: On 10/09/2020 19:37, Mark Nelson wrote: On 9/10/20 11:03 AM, George Shuklin wrote: ... Are there any knobs to tweak to see higher performance for ceph-osd? I'm pretty sure it's not any kind of leveling, GC or other 'iops-related' issues (brd has

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-11 Thread George Shuklin
On 10/09/2020 19:37, Mark Nelson wrote: On 9/10/20 11:03 AM, George Shuklin wrote: ... Are there any knobs to tweak to see higher performance for ceph-osd? I'm pretty sure it's not any kind of leveling, GC or other 'iops-related' issues (brd has performance of two orders of magnitude

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-11 Thread George Shuklin
On 10/09/2020 22:35, vita...@yourcmc.ru wrote: Hi George. Author of Ceph_performance here! :) I suspect you're running tests with 1 PG. Every PG's requests are always serialized, that's why the OSD doesn't utilize all threads with 1 PG. You need something like 8 PGs per OSD. More than 8 usually
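For reference, a minimal sketch of setting up a benchmark pool with more than one PG per OSD, so a single OSD is not serialized on one PG (pool name, image name and PG count are illustrative; aim for roughly 8 PGs landing on each OSD after replication):

    # e.g. 3 OSDs, replica size 3 -> pg_num = 8 gives ~8 PGs per OSD
    ceph osd pool create bench 8 8 replicated
    ceph osd pool application enable bench rbd
    rbd create bench/fio-test --size 10G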

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
Latency from the client side is not an issue. It just combines with the other latencies in the stack. The more the client lags, the easier it is for the cluster. The thing I'm talking about here is slightly different. When you want to establish baseline performance for the osd daemon (disregarding the block device and

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread vitalif
Yeah, of course... but RBD is primarily used for KVM VMs, so the results from a VM are the thing that real clients see. So they do mean something... :) I know. I tested fio before testing ceph with fio. On the null ioengine fio can handle up to 14M IOPS (on my dusty lab's R220). On blk_null it gets

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
I know. I tested fio before testing ceph with fio. On the null ioengine fio can handle up to 14M IOPS (on my dusty lab's R220). On blk_null it gets down to 2.4-2.8M IOPS. On brd it drops to a sad 700k IOPS. BTW, never run synthetic high-performance benchmarks on kvm. My old server with
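The fio-only baselines mentioned above can be reproduced with something like the following (block size, queue depth and runtimes are illustrative):

    # fio's built-in null engine: measures fio's own overhead, no real I/O happens
    fio --name=null-test --ioengine=null --size=10G --rw=randwrite --bs=4k --iodepth=128 --runtime=30 --time_based

    # kernel null block device
    modprobe null_blk
    fio --name=nullblk-test --filename=/dev/nullb0 --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=128 --runtime=30 --time_based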

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Виталий Филиппов
By the way, DON'T USE rados bench. It's an incorrect benchmark. ONLY use fio. On 10 September 2020 22:35:53 GMT+03:00, vita...@yourcmc.ru wrote: >Hi George > >Author of Ceph_performance here! :) > >I suspect you're running tests with 1 PG. Every PG's requests are >always serialized, that's why the OSD
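For the fio-instead-of-rados-bench suggestion, a hedged example using fio's rbd engine (requires fio built with rbd support; client name, pool and image names are placeholders):

    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin --pool=bench --rbdname=fio-test \
        --rw=randwrite --bs=4k --iodepth=128 --runtime=60 --time_based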

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread vitalif
Hi George. Author of Ceph_performance here! :) I suspect you're running tests with 1 PG. Every PG's requests are always serialized, that's why the OSD doesn't utilize all threads with 1 PG. You need something like 8 PGs per OSD. More than 8 usually doesn't improve results. Also note that read

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread George Shuklin
Thank you! I know that article, but it promises 6 cores used per OSD, and I got barely over three, and all this in a totally synthetic environment with no SSD to blame (brd is more than fast enough and has very consistent latency under any kind of load). On Thu, Sep 10, 2020, 19:39 Marc Roos wrote: >
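A quick way to check how many cores a ceph-osd process actually burns during a run (pidstat comes from the sysstat package; a %CPU of about 300 corresponds to roughly three cores):

    # per-second CPU usage of every ceph-osd process on the node
    pidstat -u -p $(pgrep -d, ceph-osd) 1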

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Marc Roos
Hi George, Very interesting and also a somewhat expected result. Some messages posted here already indicate that getting expensive top-of-the-line hardware does not really result in any performance increase above some level. Vitaliy has documented something similar [1] [1]

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Mark Nelson
On 9/10/20 11:03 AM, George Shuklin wrote: I'm creating a benchmark suite for Ceph. While benchmarking the benchmark itself, I've checked how fast ceph-osd works. I decided to skip the whole 'SSD mess' and use brd (block ram disk, modprobe brd) as the underlying storage. Brd itself can yield up to 2.7Mpps
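For anyone reproducing the setup, a minimal sketch of creating the ram disk and putting a throwaway OSD on it (the deployment command is an assumption; the thread does not say how the OSDs were actually created):

    # one 4 GiB ram-backed block device; rd_size is in KiB
    modprobe brd rd_nr=1 rd_size=4194304

    # raw baseline of /dev/ram0
    fio --name=brd-test --filename=/dev/ram0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=30 --time_based

    # one possible way to deploy a test OSD on it (assumes LVM accepts /dev/ram0)
    ceph-volume lvm create --data /dev/ram0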