Yeah, looks like it. If I disable the rbd cache:
$ tail /etc/ceph/ceph.conf
...
[client]
rbd cache = false
then the 2-4M reads work fine (no invalid reads in valgrind either).
I'll let the fio guys know.
Cheers
Mark
On 25/10/14 06:56, Gregory Farnum wrote:
FWIW the specific fio read problem appears to have started after 0.86
and before commit 42bcabf.
Mark
On 10/24/2014 12:56 PM, Gregory Farnum wrote:
There's an issue in master branch temporarily that makes rbd reads
greater than the cache size hang (if the cache was on). This might be
that. (Jason is working on it: http://tracker.ceph.com/issues/9854)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On 24/10/14 13:09, Mark Kirkwood wrote:
I'm doing some fio tests on Giant using the fio rbd driver to measure
performance on a new ceph cluster.
However, with block sizes > 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible (needs kill -9):
$ ceph -v
ceph version 0
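For reference, a minimal fio job file along these lines should exercise the failing path described above. The pool, image, and client names here are placeholders, not values from the original report, and the rbd image is assumed to exist already:

```ini
; hypothetical repro job for large-block rbd reads
[global]
ioengine=rbd        ; fio's librbd engine
clientname=admin    ; placeholder cephx client name
pool=rbd            ; placeholder pool
rbdname=testimage   ; placeholder, pre-created rbd image
rw=read
bs=4m               ; reads with bs > 1M hang while rbd cache is on
direct=1

[rbd-read-test]
iodepth=1
```

With `rbd cache = false` in the `[client]` section of ceph.conf, the same job is expected to complete normally, matching the behaviour reported above.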