[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-05 Thread Bradley Kite
Thanks Vitaliy. Posting here for the archives, so that if anyone else sees the same problem it might save them some work. After going through the code and logs (debug bluestore 20/5), it actually looks like the write-small-pre-read counter increases every time the WAL gets appended to (it reads the previ
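
For reference, the counter in question can be watched on a running OSD with something along these lines (osd.0 is just an example id, and exact counter names can vary a little between releases):

  # raise bluestore logging on a live OSD
  ceph daemon osd.0 config set debug_bluestore 20/5

  # snapshot the small-write counters before and after an fio run
  ceph daemon osd.0 perf dump | grep -E 'write_small|pre_read'

  # optionally reset the counters between runs so the deltas are obvious
  ceph daemon osd.0 perf reset all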

[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-05 Thread Bradley Kite
Hi Vitaliy, I completely destroyed the test cluster and re-deployed it after changing these settings, but it did not make a difference - there is still a high number of deferred writes. Regards -- Brad. On Wed, 5 Feb 2020 at 10:55, wrote: > min_alloc_size can't be changed after formatting an
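
(For the archives: min_alloc_size is baked in when the OSD is formatted, so changing it only takes effect on newly created OSDs. A rough sketch of what "changing these settings" looks like in ceph.conf before re-deploying; the values below are only examples and the defaults differ per release:

  [osd]
  # fixed at mkfs time; only affects OSDs created after the change
  bluestore_min_alloc_size_ssd = 4096
  # writes at or below this size may take the deferred (WAL) path
  bluestore_prefer_deferred_size_ssd = 32768

The configured value can be checked with "ceph daemon osd.N config get bluestore_min_alloc_size_ssd", though that shows the config option, not the value baked into an already-formatted OSD.)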

[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-04 Thread Bradley Kite
> 'perf dump' command in Nautilus. A bit different command for Luminous AFAIR. Then look for 'read' substring in the dump and try to find unexpectedly high read-related counter values, if any. And/or share it here for brief analysis. Thanks,
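
(In case it helps anyone following along, on Nautilus that boils down to something like the following, with osd.0 as an example id:

  ceph daemon osd.0 perf dump > osd0-perf.json
  grep -i read osd0-perf.json

and the resulting JSON is small enough to paste into a reply for analysis.)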

[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-04 Thread Bradley Kite
Hi Vitaliy, Yes - I tried this and I can still see a number of reads (~110 iops, 440 KB/sec) on the SSD, so it is significantly better, but the result is still puzzling - I'm trying to understand what is causing the reads. The problem is amplified with numjobs >= 2, but it looks like it is still ther
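
(The read traffic itself can also be confirmed from outside Ceph while the fio job is running, e.g. with iostat from sysstat, watching the row for the OSD's data device:

  iostat -xm 1

The read iops/throughput columns for that device should line up with the ~110 iops / ~440 KB/sec reported by the counters.)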

[ceph-users] Understanding Bluestore performance characteristics

2020-02-03 Thread Bradley Kite
Hi, We have a production cluster of 27 OSDs across 5 servers (all SSDs, running bluestore), and have started to notice a possible performance issue. In order to isolate the problem, we built a single server with a single OSD and ran a few FIO tests. The results are puzzling, not that we were ex
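
(The exact fio jobs aren't shown here, but a minimal job of the kind typically used for this sort of test, assuming an RBD image named fio-test in pool rbd (both just placeholders), looks roughly like:

  [bluestore-4k-randwrite]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  rw=randwrite
  bs=4k
  iodepth=32
  numjobs=2
  direct=1
  runtime=60
  time_based

Block sizes below min_alloc_size are what exercise the small-write path, which is where the deferred and pre-read counters discussed above come from.)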