Hi Jeff,

I would be surprised as well - we initially tested on a 2-replica cluster
with 8 nodes having 12 OSDs each - and went to 3-replica when we rebuilt
the cluster.

The performance seems to be where I'd expect it (doing consistent writes in
an rbd VM @ ~400 MB/s on 10GbE, which I'd expect is a limit in either the
disks, the network, qemu/kvm, or the 3-replica setup kicking in).
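As a rough sanity check on that number (assumed figures, not measurements: 3 replicas, 10GbE at ~1250 MB/s line rate):

```python
# Back-of-envelope check on the ~400 MB/s client figure (assumed numbers,
# not measurements). With 3-replica writes, every client byte is written
# three times across the cluster, so the disks and the replication network
# see roughly 3x the client-visible throughput.
replicas = 3
client_mb_s = 400
cluster_write_mb_s = client_mb_s * replicas  # total bytes hitting disks
ten_gbe_mb_s = 10_000 / 8                    # 10 Gb/s line rate in MB/s
print(cluster_write_mb_s)  # 1200
print(ten_gbe_mb_s)        # 1250.0
```

So the cluster-wide write load is already close to a single 10GbE link's line rate, which is consistent with replication being one of the plausible bottlenecks.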

Just curious, anything in dmesg about the disk mounted as osd.4?
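A quick way to confirm where the slow requests concentrate is to count warnings per OSD in the cluster log. A sketch (the log lines in the here-doc are made-up samples; against a live cluster you would feed the pipeline /var/log/ceph/ceph.log instead):

```shell
# Count slow-request warnings per OSD. The here-doc below stands in for
# /var/log/ceph/ceph.log with fabricated sample lines, so the pipeline
# runs anywhere; the grep/uniq logic is the part that matters.
grep 'slow request' <<'EOF' | grep -o 'osd\.[0-9]*' | sort | uniq -c | sort -rn
2013-08-20 12:01:02 osd.4 [WRN] slow request 30.5 seconds old
2013-08-20 12:01:07 osd.4 [WRN] slow request 32.1 seconds old
2013-08-20 12:03:44 osd.7 [WRN] slow request 30.2 seconds old
EOF
# ->   2 osd.4
#      1 osd.7
```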

Cheers,
Martin


On Tue, Aug 20, 2013 at 4:02 PM, Mark Nelson <mark.nel...@inktank.com> wrote:

> On 08/20/2013 08:42 AM, Jeff Moskow wrote:
>
>> Hi,
>>
>> More information.  If I look in /var/log/ceph/ceph.log, I see 7893 slow
>> requests in the last 3 hours, of which 7890 are from osd.4. Should I
>> assume a bad drive? SMART says the drive is healthy, though. Bad osd?
>>
>
> Definitely sounds suspicious!  Might be worth taking that OSD out and
> doing some testing on the drive.
>
>
>> Thanks,
>>               Jeff
>>
>>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>