Sorry for the noise.
I have found the cause in our setup and case: we gathered too many
logs in our RADOS IO path, and the latency looks reasonable
(about 0.026 ms) once we don't gather that many logs...
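Something along these lines is the usual way to cut that logging overhead
(a sketch; the exact subsystem list is our guess, adjust to whatever is
verbose in your setup):

    # ceph.conf: quiet the subsystems on the RADOS IO path
    [osd]
        debug osd = 0/0
        debug filestore = 0/0
        debug journal = 0/0
        debug ms = 0/0

    # or inject into running OSDs without a restart:
    ceph tell osd.* injectargs '--debug-osd 0/0 --debug-filestore 0/0 --debug-journal 0/0 --debug-ms 0/0'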
2015-08-05 20:29 GMT+08:00 Sage Weil s...@newdream.net:
On Wed, 5 Aug 2015, Ding Dinghua wrote:
2015-08-05 0:13 GMT+08:00 Somnath Roy somnath@sandisk.com:
Yes, it has to re-acquire pg_lock today..
But, between the journal write and initiating the ondisk ack, there is one
context switch in the code path. So, I guess the pg_lock is not the only
factor...
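To make that concrete: Ceph's Finisher is essentially a queue drained by a
worker thread. The journal completion is queued, the context switch hands it
to that thread, and the callback it runs re-takes pg_lock and sends the
ondisk ack. A minimal sketch of the pattern (illustrative only, not the
actual common/Finisher code):

    // Simplified finisher: one worker thread drains a queue of
    // completion callbacks, so every ondisk ack serializes behind
    // whatever the previous callback does (e.g. under pg_lock).
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    class Finisher {
      std::mutex m;
      std::condition_variable cv;
      std::deque<std::function<void()>> q;
      bool stopping = false;
      std::thread worker;

      void entry() {
        std::unique_lock<std::mutex> l(m);
        while (!stopping || !q.empty()) {
          if (q.empty()) { cv.wait(l); continue; }
          auto fn = std::move(q.front());
          q.pop_front();
          l.unlock();
          fn();        // e.g. take pg_lock, send the ondisk ack
          l.lock();
        }
      }

    public:
      Finisher() : worker([this] { entry(); }) {}
      ~Finisher() {
        { std::lock_guard<std::mutex> l(m); stopping = true; }
        cv.notify_all();
        worker.join();
      }
      void queue(std::function<void()> fn) {
        { std::lock_guard<std::mutex> l(m); q.push_back(std::move(fn)); }
        cv.notify_one();
      }
    };

With a single such thread, every in-flight op's commit notification funnels
through one queue, which is what the "More ondisk_finisher thread?" question
in the subject is about.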
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Ding Dinghua
Sent: Tuesday, August 04, 2015 3:00 AM
To: ceph-devel@vger.kernel.org
Subject: More ondisk_finisher thread?

Hi:
Now we are doing some ceph performance tuning work. Our setup has
ten ceph nodes, with SSD as journal and HDD for the filestore; the ceph
version is 0.80.9.
We run fio in a virtual machine with a random 4 KB write workload, and we
find that it takes about 1 ms on average for ondisk_finisher, while journal ...
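The workload, roughly, as a fio job (a reconstruction from the description
above; the queue depth and device name are assumptions, since the original
job file was not posted):

    ; 4 KB random writes, as described above
    [global]
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    iodepth=32          ; assumed; not stated in the original mail
    runtime=60
    time_based

    [vm-disk]
    filename=/dev/vdb   ; the guest's virtio disk (assumed)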