Hi, I did some tests to reproduce this problem.
As you can see, only one drive (each drive in the same PG) is much more
utilized than the others, and there are some ops queued on this slow
OSD. This test fetches the heads of S3 objects, alphabetically
sorted. This is strange; why are these files going in
On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron szi...@gmail.com wrote:
Great, thanks. Now I understand everything.
Best Regards
SS
On 6 Mar 2013, at 15:04, Yehuda Sadeh yeh...@inktank.com wrote:
Ok, thanks for the response. But if I have a crush map like the one in the attachment,
all data should be balanced equally, not counting the hosts with 0.5 weight.
How can I make the data auto-balance when I know that some PGs have too
much data? I have 4800 PGs on RGW alone, with 78 OSDs, which should be quite
enough.
pool 3
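For rough context, the PG-per-OSD ratio implied by those numbers can be estimated with a quick shell sketch. The replication factor of 3 is an assumption, not something stated in the thread:

```shell
#!/bin/sh
# Rough estimate of placement group instances per OSD for the numbers
# in this thread: 4800 PGs on 78 OSDs. REPLICAS=3 is an assumption.
PGS=4800
OSDS=78
REPLICAS=3
# Total PG instances (primary + replicas) integer-divided across OSDs.
echo "PG instances per OSD: $(( PGS * REPLICAS / OSDS ))"
# → PG instances per OSD: 184
```

Under that assumption this lands well above the commonly cited ~100 PGs-per-OSD guideline, so the raw PG count itself is indeed "quite enough"; for evening out actual utilization, Ceph also provides `ceph osd reweight-by-utilization` (availability and behavior depend on the version).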
On Mon, 4 Mar 2013, Sławomir Skowron wrote:
Alone (one of the slow OSDs in the mentioned triple):
2013-03-04 18:39:27.683035 osd.23 [INF] bench: wrote 1024 MB in blocks
of 4096 KB in 15.241943 sec at 68795 KB/sec
In a for loop (some slow requests appear):
for x in `seq 0 25`; do ceph osd tell $x bench; done
2013-03-04 18:41:08.259454 osd.12
And some output from rest-bench:
2013-03-04 19:31:41.503865 min lat: 0.166207 max lat: 3.44611 avg lat: 0.911577
2013-03-04 19:31:41.503865   sec  Cur ops  started  finished  avg MB/s  cur MB/s  last lat  avg lat
2013-03-04 19:31:41.503865    40       16      715       699   69.7985        64