On Wed, Sep 23, 2015 at 1:51 PM, Deneau, Tom <tom.den...@amd.com> wrote:
>
>
>> -----Original Message-----
>> From: Gregory Farnum [mailto:gfar...@redhat.com]
>> So if you've got 20k objects and 5 OSDs, then each OSD is getting ~4k
>> reads during this test. If I'm reading these numbers properly, that means
>> OSD-side latency is something like 1.5 milliseconds for the single-client
>> case and... 144 milliseconds for the two-client case! You might try
>> dumping some of the historic ops out of the admin socket and seeing where
>> the time is getting spent (is it all on disk accesses?). You might also
>> try reproducing something like this workload on your disks without Ceph
>> involved.
>> -Greg
>
> Greg --
>
> Not sure how much it matters, but on looking at the pools more closely I
> realized I was mixing them up with an earlier experiment whose pools used
> just 5 OSDs. The pools in this example are actually distributed across
> 15 OSDs on 3 nodes.

Okay, so this is running on hard drives, not SSDs. In that case the
speed differential is much more plausibly coming from the drives, the
filesystem, or the overall layout...
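
If you want a quick way to check that outside of Ceph, something like
the fio run below should approximate the read pattern against the raw
device (a sketch only: it assumes the OSD data sits on /dev/sdb, so
substitute your own device, and note it only reads, never writes):

    # 4k random reads, two concurrent jobs, bypassing the page cache
    fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=1 --numjobs=2 \
        --runtime=60 --time_based --group_reporting

Comparing the latencies fio reports with one job versus two should tell
you whether the drives themselves fall off a cliff under concurrent
readers.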

>
> What is the recommended command for dumping historic ops out of the admin 
> socket?

"ceph daemon osd.<N> dump_historic_ops", I think. "ceph daemon osd.<N>
help" will include it in the list, though.