Okay, well, let's try and track some of these down. What's the content
of the "ceph.layout" xattr on the directory you're running this test
in? Can you verify that pool 0 is the data pool used by CephFS, and
that all reported slow ops are in that pool? Can you record the IO
patterns on an OSD while the slow requests are occurring?
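For reference, with the default CephFS layout the numbers in the report work out like this (a sketch assuming the defaults of stripe_unit = object_size = 4 MiB and stripe_count = 1 -- the actual values are exactly what the ceph.layout xattr would reveal):

```shell
# Sketch, assuming the default CephFS layout (stripe_unit = object_size
# = 4 MiB, stripe_count = 1): each 4 MiB dd block lands in its own RADOS
# object, so the 4.2 GB write should spread across 1000 objects (and
# their PGs/OSDs) rather than hammering a single OSD.
OBJECT_SIZE=$((4 * 1024 * 1024))
WRITE_BYTES=4194304000            # the 4.2 GB dd from the report
NUM_OBJECTS=$((WRITE_BYTES / OBJECT_SIZE))
echo "objects touched: $NUM_OBJECTS"
```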
It's really bizarre, since we can easily pump ~1GB/s into the cluster with
rados bench from a single 10Gig-E client. We only observe this with kernel
CephFS on that host -- which is why our original theory was something like this:
- client caches 4GB of writes
- client starts many IOs in parallel when flushing
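A rough sketch of the arithmetic behind this kind of flush-burst theory (the per-OSD rate below is an illustrative assumption, not a measurement from this cluster):

```shell
# Rough arithmetic for the flush-burst theory (all figures are
# assumptions): 4 GiB of dirty pages flushed at once, and a single
# spinning OSD absorbing ~100 MiB/s. If the writes funnel into one OSD,
# draining the burst takes longer than the 30 s slow-request threshold.
DIRTY=$((4 * 1024))        # MiB cached by the client
OSD_RATE=100               # MiB/s per OSD (assumed)
DRAIN=$((DIRTY / OSD_RATE))
echo "drain time: ${DRAIN}s"
```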
I'm with Zheng on this one. I'm a little confused though, because I
thought this was a pretty large cluster that should be able to absorb
that much data pretty easily. But if you're using a custom striping
strategy and pushing it all through one OSD, that could do it. Or
anything else with that sort of pattern.
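For illustration, the file-to-object mapping that a custom striping strategy changes can be sketched as follows (the layout values here are hypothetical, not the reporter's actual configuration):

```shell
# Sketch of the CephFS file->object mapping. Defaults would be
# stripe_unit = object_size = 4 MiB with stripe_count = 1; the values
# below are an illustrative custom layout.
SU=$((1024 * 1024))        # stripe_unit: 1 MiB
SC=4                       # stripe_count
OS=$((4 * 1024 * 1024))    # object_size: 4 MiB
OFFSET=$((5 * 1024 * 1024))

BLOCK=$((OFFSET / SU))                # which stripe unit the offset hits
STRIPEPOS=$((BLOCK % SC))             # column within the stripe
STRIPENO=$((BLOCK / SC))              # row of the stripe
SET=$((STRIPENO / (OS / SU)))         # object set
OBJ=$((SET * SC + STRIPEPOS))         # object index within the file
echo "offset $OFFSET -> object $OBJ"
```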
On Sat, Feb 22, 2014 at 12:04 AM, Dan van der Ster wrote:
> Hi Greg,
> Yes, this still happens after the updatedb fix.
>
> [root@xxx dan]# mount
> ...
> zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs)
>
> [root@xxx dan]# pwd
> /mnt/ceph/dan
>
> [root@xxx dan]# dd if=/dev/zero of=yyy bs=4M count=2000
On Fri, Jan 31, 2014 at 9:52 PM, Arne Wiebalck wrote:
> Hi,
>
> We observe that we can easily create slow requests with a simple dd on
> CephFS:
>
> -->
> [root@p05153026953834 dd]# dd if=/dev/zero of=xxx bs=4M count=1000
> 1000+0 records in
> 1000+0 records out
> 4194304000 bytes (4.2 GB) copied, 4.27824 s, 980 MB/s
Hi Greg,
Yes, this still happens after the updatedb fix.
[root@xxx dan]# mount
...
zzz:6789:/ on /mnt/ceph type ceph (name=cephfs,key=client.cephfs)
[root@xxx dan]# pwd
/mnt/ceph/dan
[root@xxx dan]# dd if=/dev/zero of=yyy bs=4M count=2000
2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB) copied,
Arne,
Sorry this got dropped -- I had it marked in my mail but didn't have
the chance to think about it seriously when you sent it. Does this
still happen after the updatedb config change you guys made recently?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Jan 31, 2014 at 9:52 PM, Arne Wiebalck wrote:
Hi,
We observe that we can easily create slow requests with a simple dd on CephFS:
-->
[root@p05153026953834 dd]# dd if=/dev/zero of=xxx bs=4M count=1000
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 4.27824 s, 980 MB/s
ceph -w:
2014-01-31 14:28:44.009543 osd.450 [WRN] 1
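As a sanity check on the figure above, the reported dd numbers do work out to roughly 10GbE line rate, which supports the idea that the client page cache absorbed the whole write before the burst hit the OSDs:

```shell
# Sanity check on the reported dd figure: 4194304000 bytes in 4.27824 s
# is ~980 MB/s -- essentially 10GbE line rate, so dd returned before the
# data was flushed to the cluster.
RATE=$(awk 'BEGIN { printf "%.0f", 4194304000 / 4.27824 / 1e6 }')
echo "${RATE} MB/s"
```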