Hi, cephers,
I'm doing a performance test of Ceph during recovery. The scenario is simple:
1. run fio on 6 krbd devices
2. stop one OSD for 10 seconds
3. start that OSD again
However, when the OSD comes back up and starts recovering, fio performance
drops from 9k to 1k IOPS for about 20 seconds. At the same
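(Not stated in the thread, but the usual first lever for this symptom is throttling recovery so client I/O keeps priority. A minimal sketch, assuming the osd_max_backfills / osd_recovery_* options of this era of Ceph; the values are illustrative only, not a recommendation:)

```shell
# Illustrative values only: reduce recovery/backfill concurrency and priority
# at runtime so client I/O suffers less while the restarted OSD catches up.
ceph tell osd.\* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
```

The same settings can be persisted under [osd] in ceph.conf; raising them again later speeds recovery at the cost of client latency.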
-- Forwarded message --
From: Libin Wu <hzwuli...@gmail.com>
Date: 2015-12-08 9:12 GMT+08:00
Subject: Re: [ceph-users] poor performance when recovering
To: Oliver Dzombic <i...@ip-interactive.de>
Cc: ceph-users <ceph-us...@lists.ceph.com>
Yeah, we will upgrade
On Sun, Nov 4, 2012 at 7:13 AM, Aleksey Samarin nrg3...@gmail.com wrote:
What may be possible solutions?
Update centos to 6.3?
From what I've heard the RHEL libc doesn't support the syncfs syscall
(even though the kernel does have it). :( So you'd need to make sure
the kernel supports it and
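(As a side note, the glibc cutoff can be checked mechanically. A minimal sketch; the helper name is mine, not a real tool, and it assumes the syncfs() wrapper appeared in glibc 2.14 while RHEL 6 ships 2.12:)

```shell
# needs_syncfs_fallback GLIBC_VERSION — true (exit 0) when glibc predates the
# syncfs() wrapper (added in glibc 2.14), so Ceph must fall back to sync().
needs_syncfs_fallback() {
  [ "$(printf '%s\n2.14\n' "$1" | sort -V | head -n1)" != "2.14" ]
}

needs_syncfs_fallback 2.12 && echo "glibc 2.12: sync() fallback"   # RHEL 6
needs_syncfs_fallback 2.17 || echo "glibc 2.17: syncfs available"
```

Note the kernel side (>= 2.6.39) must also be checked separately, as Greg says above.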
Thanks for your reply!
It was easier for me to switch from RHEL to Ubuntu. Now everything is fast and
stable! :) If you're interested, I can attach logs.
All the best, Alex!
2012/11/16 Gregory Farnum g...@inktank.com:
On Sun, Nov 4, 2012 at 10:58 AM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi all,
I'm planning to use Ceph for cloud storage.
My test setup is 2 servers connected via 40Gb InfiniBand, with 6x2TB disks per
node.
CentOS 6.2
Ceph 0.52 from http://ceph.com/rpms/el6/x86_64
This is my config
On Sun, Nov 4, 2012 at 1:04 PM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi!
Is this the command? ceph tell osd \* bench
Output: tell target 'osd' not a valid entity name
Well, I created a pool with the command ceph osd pool create bench2 120
This is the output of rados -p bench2 bench 30 write --no-cleanup
rados
[Sorry for the blank email; I hit send by mistake!]
On Sun, Nov 4, 2012 at 1:04 PM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi!
Is this the command? ceph tell osd \* bench
Output: tell target 'osd' not a valid entity name
I guess it's ceph osd tell \* bench. Try that one. :)
Well, I created a pool with the command ceph
It's ok!
Output:
2012-11-04 16:19:23.195891 osd.0 [INF] bench: wrote 1024 MB in blocks
of 4096 KB in 11.441035 sec at 91650 KB/sec
2012-11-04 16:19:24.981631 osd.1 [INF] bench: wrote 1024 MB in blocks
of 4096 KB in 13.225048 sec at 79287 KB/sec
2012-11-04 16:19:25.672896 osd.2 [INF] bench: wrote
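(Editorial aside: those per-OSD figures are self-consistent; dividing 1024 MB = 1048576 KB by the reported elapsed seconds reproduces the reported rates. A quick check:)

```shell
# Cross-check the osd bench lines above: 1024 MB = 1048576 KB over the
# reported elapsed time should match the reported KB/sec.
awk 'BEGIN {
  printf "osd.0: %d KB/sec\n", 1048576 / 11.441035
  printf "osd.1: %d KB/sec\n", 1048576 / 13.225048
}'
```

Both match the log lines, so the individual disks were sustaining roughly 80-90 MB/s each here.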
On 11/04/2012 03:58 AM, Aleksey Samarin wrote:
Hi all,
I'm planning to use Ceph for cloud storage.
My test setup is 2 servers connected via 40Gb InfiniBand, with 6x2TB disks per node.
CentOS 6.2
Ceph 0.52 from http://ceph.com/rpms/el6/x86_64
This is my config http://pastebin.com/Pzxafnsm
One thing that
That's only nine — where are the other three? If you have three slow
disks that could definitely cause the troubles you're seeing.
Also, what Mark said about sync versus syncfs.
On Sun, Nov 4, 2012 at 1:26 PM, Aleksey Samarin nrg3...@gmail.com wrote:
It's ok!
Output:
2012-11-04
Ok!
Well, I'll run these tests and write about the results.
Btw, the disks are all the same; could some still be faster than others?
2012/11/4 Gregory Farnum g...@inktank.com:
That's only nine — where are the other three? If you have three slow
disks that could definitely cause the troubles you're seeing.
Well, I created a Ceph cluster with 2 OSDs (1 OSD per node), 2 mons, 2 MDSes.
Here is what I did:
ceph osd pool create bench
ceph osd tell \* bench
rados -p bench bench 30 write --no-cleanup
output:
Maintaining 16 concurrent writes of 4194304 bytes for at least 30 seconds.
Object prefix:
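(For readers unfamiliar with rados bench: 4194304 bytes is its default object size, i.e. 4 MiB, so with 16 writes in flight there are 64 MiB outstanding. A quick arithmetic check:)

```shell
# 4194304 bytes is rados bench's default object size; 16 of them in flight.
awk 'BEGIN {
  printf "object size: %d MiB\n", 4194304 / (1024 * 1024)
  printf "outstanding: %d MiB\n", 16 * 4194304 / (1024 * 1024)
}'
```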
On 11/04/2012 07:18 AM, Aleksey Samarin wrote:
Well, I created a Ceph cluster with 2 OSDs (1 OSD per node), 2 mons, 2 MDSes.
Here is what I did:
ceph osd pool create bench
ceph osd tell \* bench
rados -p bench bench 30 write --no-cleanup
output:
Maintaining 16 concurrent writes of 4194304
What may be possible solutions?
Update CentOS to 6.3?
About the issue with writes to lots of disks, I think parallel dd commands
would make a good test! :)
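(A hedged sketch of that parallel-dd idea. A real run would point each dd at a file on a different OSD data disk; temporary directories stand in here so the sketch runs anywhere:)

```shell
# Write 8 MB per "disk" concurrently and report completion; on real hardware,
# compare the per-disk rates dd prints to spot a slow spindle.
for i in 0 1 2; do
  dir=$(mktemp -d)
  ( dd if=/dev/zero of="$dir/ddtest" bs=1M count=8 conv=fsync 2>/dev/null \
      && echo "disk $i: ok" ) &
done
wait
```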
2012/11/4 Mark Nelson mark.nel...@inktank.com:
On 11/04/2012 07:18 AM, Aleksey Samarin wrote:
Well, I created a Ceph cluster with 2 OSDs (1 OSD per
bandwidth (MB/sec): 0
Average Latency: 0.278853
Stddev Latency:  0.441319
Max latency:     2.96921
Min latency:     0.066844
The biggest problem in my clusters is poor performance. Simple insert
to the database on the client often takes seconds. It is unacceptable
and I am trying to find the bottleneck. Could you please help me with
it?
Replica count is set to 2
What's this data showing us? Write requests to any disk on the node of any
size?
That's more than I'd expect to see, but Ceph is going to engage in
some background chatter, and I notice you have at least a little bit
of logging enabled.
The graphs show writes per second to the partition on which
Hi, I think you should really put your journal on an SSD, or on tmpfs for testing.
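(For reference, a hedged ceph.conf sketch of that suggestion; the device paths are examples, and tmpfs is only safe for throwaway benchmarks since the journal vanishes on reboot:)

```ini
[osd]
    ; point the journal at a raw SSD partition (example path)
    osd journal = /dev/sdb1
    osd journal size = 1024
    ; or, for disposable benchmarks only, a tmpfs-backed file:
    ; osd journal = /mnt/tmpfs/osd.$id.journal
```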
----- Original Message -----
From: Maciej Gałkiewicz maciejgalkiew...@ragnarson.com
To: ceph-devel@vger.kernel.org
Sent: Tuesday, 16 October 2012 16:27:08
Subject: Poor performance with rbd volumes
Hello
I have two ceph