On 06.07.2012 20:17, Gregory Farnum wrote:
On 06.07.2012 at 19:11, Gregory Farnum <g...@inktank.com> wrote:
I'm interested in figuring out why we aren't getting useful data out
of the admin socket, and for that I need the actual configuration
files. It wouldn't surprise me if there are several layers to this
issue but I'd like to start at the client's endpoint. :)

While I'm on holiday I can't send you my ceph.conf, but it doesn't contain 
anything other than the locations, journal dio = false for the tmpfs journal, 
and the admin socket path /var/run/ceph_$name.sock
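For reference, a minimal sketch of what such a ceph.conf might look like; the section layout and comments are my assumption, and only the socket path, the journal dio value, and the global placement (confirmed below) come from this thread:

```ini
; sketch only: layout assumed, values taken from this thread
[global]
    ; one admin socket per daemon/client instance ($name expands per instance)
    admin socket = /var/run/ceph_$name.sock

[osd]
    ; journal on tmpfs, so direct IO is disabled
    journal dio = false
```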

Is that socket in the global area?
Yes

> Does the KVM process have permission to access that directory?
Yes, and it is also created if I skip $name and set it to /var/run/ceph.sock
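As a quick sanity check that the socket is usable and not just created, something like this should work; the path assumes the $name-less variant mentioned above, and `perf dump` is the standard admin-socket command:

```shell
# Sketch: query the client admin socket that ceph.conf points at.
# Path assumes the $name-less variant from this thread.
SOCK=/var/run/ceph.sock
if [ -S "$SOCK" ]; then
    # dump the perf counters through the admin socket
    ceph --admin-daemon "$SOCK" perf dump
else
    echo "no admin socket at $SOCK"
fi
```

If this prints the fallback message while the VM is running, the socket exists somewhere else or was never created for the KVM client.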

Regarding the random IO, you shouldn't overestimate your storage.
Under plenty of scenarios your drives are lucky to do more than 2k
IO/s, which is about what you're seeing:
http://techreport.com/articles.x/22415/9
That holds if the Ceph workload is the same as the IOMeter file server 
workload, but I don't know whether it is. I've measured the raw random 4k 
workload. I've also tested adding another OSD and the speed still doesn't 
change, even though with a size of 200 GB I should be hitting several OSD 
servers.
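A back-of-the-envelope check on that expectation; the 4 MB default RBD object size and the ~2k IO/s per-drive figure are assumptions taken from RBD's defaults and the techreport link, not measurements from this thread:

```shell
# Rough estimate of how a 200 GB RBD image spreads over OSDs and
# what random-IO ceiling that implies. Sketch only: assumes RBD's
# default 4 MB objects and roughly uniform placement, not real CRUSH.
image_gb=200
object_mb=4                          # RBD default object size
objects=$(( image_gb * 1024 / object_mb ))
osds=2
per_drive_iops=2000                  # ballpark from the techreport article
aggregate=$(( osds * per_drive_iops ))
echo "$objects objects, ~$aggregate IO/s ceiling"
```

So on those assumptions the image does spread over both OSDs, but the aggregate disk-bound ceiling stays near a few thousand IO/s regardless of image size.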
Okay, just wanted to point it out.

Thanks. Also, with Sheepdog I can get 40,000 IOPS.

Stefan

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html