The write speed of the OSD is very slow. I tested osd1 according to
http://ceph.newdream.net/wiki/Troubleshooting#OSD_performance
# ceph osd tell 1 bench
10.09.23_21:33:06.774427 mon - [osd,tell,1,bench]
10.09.23_21:33:06.775481 mon0 - 'ok' (0)
log 10.09.23_21:39:09.949601 osd1 ???.???.248.176:6801/25594 1
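For reference, the built-in OSD benchmark reports its result asynchronously to the cluster log rather than on the command line, so the result has to be watched for separately. A minimal sketch of the procedure used above (osd id 1 as in the thread; the old-style `ceph osd tell` syntax from this era of Ceph):

```shell
# Ask osd1 to run its built-in write benchmark; the command only
# queues the request, so the 'ok' reply does not contain the result.
ceph osd tell 1 bench

# Watch the cluster log for the "bench: wrote ... in ... sec" line
# that osd1 emits when the benchmark finishes.
ceph -w
```

If the reported throughput is far below what the underlying disk can do, the wiki page above suggests benchmarking the raw device and the journal separately to isolate the bottleneck.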
Thanks for your reply, Sage. I think Ceph is a very good distributed
filesystem and we want to test it in a production environment. Your reply is
very important to us.
2010/9/18 Sage Weil s...@newdream.net
Sorry, I just realized this one slipped through the cracks!
On Sat, 4 Sep 2010, FWDF wrote:
We
I found the clients (1 local, 2 remote) can't access ceph today.
r...@ceph01:/ # ceph -s
10.09.22_20:05:48.344485 pg v24138: 1320 pgs: 1320 active+clean;
111 GB data, 286 GB used, 924 GB / 1210 GB avail
10.09.22_20:05:48.352327 mds e28: 1/1/1 up {0=up:active(laggy or crashed)}
2010/9/23 Sage Weil s...@newdream.net:
On Wed, 22 Sep 2010, cang lin wrote:
What confuses me is why the client can't access ceph. Even if the osd was
down, that shouldn't affect the client. What is the reason the client can't
access or unmount ceph?
It could be a number of things. The output
Hi, Greg, the mds log has been sent to you.
2010/9/23 Gregory Farnum gr...@hq.newdream.net:
On Wed, Sep 22, 2010 at 7:47 AM, cang lin fwdfl...@gmail.com wrote:
I found the clients (1 local, 2 remote) can't access ceph today.
The clients require an active MDS to connect, since that handles all
filesystem metadata operations.
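The "laggy or crashed" MDS state shown in the `ceph -s` output above is consistent with this: with no responsive MDS, every client metadata operation hangs, which looks like the whole filesystem being inaccessible even though all PGs are active+clean. A sketch of how that state can be checked and cleared (init-script and daemon names vary by install and Ceph version, so treat these as assumptions):

```shell
# Confirm the MDS map state; "laggy or crashed" here means clients
# cannot complete metadata operations (open, stat, readdir, unmount).
ceph mds stat

# If the cmds daemon has actually died on the MDS host, restart it.
# The init-script target name below is an assumption for this era of
# Ceph; adjust to match your installation.
/etc/init.d/ceph start mds
```

Once the MDS rejoins and the map shows `up:active` without the laggy flag, hung clients should recover on their own.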