On 06.07.2012 20:17, Gregory Farnum wrote:
On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote:
I'm interested in figuring out why we aren't getting useful data out
of the admin socket, and for that I need the actual configuration
files. It wouldn't surprise me if there are several
July 2012 23:33:18
Subject: Re: speedup ceph / scaling / find the bottleneck
Could you send over the ceph.conf on your KVM host, as well as how
you're configuring KVM to use rbd?
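(For reference: the thread doesn't show the actual invocation, but a KVM guest
is typically pointed at an rbd image with a drive string roughly like the one
below. The pool/image name and the id are made up here, and how the librbd
cache gets enabled (cache=writeback vs. an explicit rbd_cache=true in the rbd
string) depends on the qemu version.

  qemu -drive format=raw,if=virtio,cache=writeback,file=rbd:rbd/vm-disk-1:id=admin:conf=/etc/ceph/ceph.conf

This drive line, plus the [client] section of ceph.conf on the KVM host, is
the kind of detail Greg is asking for.)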
On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe s.pri...@profihost.ag wrote:
I'm sorry but this is the KVM
On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote:
On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
Stefan is on vacation at the moment; I don't know if he can reply to you.
But I can reply for him on the KVM part (as we run the same tests together
On Fri, Jul 6, 2012 at 11:09 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote:
On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
Stefan is on vacation at the moment; I don't know if
Could you send over the ceph.conf on your KVM host, as well as how
you're configuring KVM to use rbd?
On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe s.pri...@profihost.ag wrote:
I'm sorry, but this is the KVM host machine; there is no ceph running on this
machine.
If I change the admin socket
...@inktank.com
To: Stefan Priebe s.pri...@profihost.ag
Cc: ceph-devel@vger.kernel.org, Sage Weil s...@inktank.com
Sent: Thursday 5 July 2012 23:33:18
Subject: Re: speedup ceph / scaling / find the bottleneck
Could you send over the ceph.conf on your KVM host, as well as how
you're configuring KVM
On Tue, 3 Jul 2012, Stefan Priebe - Profihost AG wrote:
Hello,
On 02.07.2012 22:30, Josh Durgin wrote:
If you add admin_socket=/path/to/admin_socket for your client running
qemu (in that client's ceph.conf section or manually in the qemu
command line) you can check that caching is
I'm sorry, but this is the KVM host machine; there is no ceph running on
this machine.
If I change the admin socket to:
admin_socket=/var/run/ceph_$name.sock
I don't get any socket at all ;-(
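(For reference, a client admin socket for qemu is usually set up with a
[client] section roughly like this; the paths are only illustrative, and a
common reason no socket shows up is that the qemu process doesn't have
permission to create it in the chosen directory:

  [client]
      admin socket = /var/run/ceph/$name.$pid.asok
      log file = /var/log/ceph/$name.log

Once the socket exists it can be queried from the KVM host, e.g.

  ceph --admin-daemon /var/run/ceph/client.admin.<pid>.asok help
  ceph --admin-daemon /var/run/ceph/client.admin.<pid>.asok perf dump

where <pid> stands for the qemu process id; the exact set of admin-socket
commands depends on the ceph release, and help lists what your version
supports.)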
On 03.07.2012 17:31, Sage Weil wrote:
On Tue, 3 Jul 2012, Stefan Priebe - Profihost AG wrote:
On 03.07.2012 17:31, Sage Weil wrote:
~]# ceph -v
ceph version 0.48argonaut-2-gb576faa
(commit:b576faa6f24356f4d3ec7205e298d58659e29c68)
Out of curiosity, what patches are you applying on top of the release?
just wip-filestore-min
Stefan
On 02.07.2012 07:02, Alexandre DERUMIER wrote:
Hi,
my 2 cents:
maybe with a lower range (like 100MB) of random I/O,
you have a better chance of aggregating writes into the same 4MB blocks?
Yes, maybe. If you have just a range of 100MB, the chance that you'll hit the
same 4MB block again is very high.
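(Putting rough numbers on that: rbd stripes an image into 4MB RADOS objects
by default, so

  100 MB test range / 4 MB per object  =     25 objects
  200 GB test range / 4 MB per object  = 51,200 objects

With the 100MB range, consecutive 4k random writes keep landing in the same
handful of objects and can be merged or absorbed by caching; over 200GB almost
every write touches a different, cold object.)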
@sage / mark
How
Hello,
I just want to report back some test results.
Just some results from a sheepdog test using the same hardware.
Sheepdog:
1 VM:
write: io=12544MB, bw=142678KB/s, iops=35669, runt= 90025msec
read : io=14519MB, bw=165186KB/s, iops=41296, runt= 90003msec
write: io=16520MB,
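(Those lines look like fio output; the job file itself isn't shown in the
thread, but a 90-second 4k random-write job along these lines would produce
results in that format; the device path, iodepth and engine are guesses:

  fio --name=randwrite --filename=/dev/vdb --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=90 --time_based

As a sanity check, 35669 iops * 4 KB = 142,676 KB/s, which matches the
reported bw=142678KB/s to within rounding.)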
On 02.07.2012 18:51, Gregory Farnum wrote:
On Sun, Jul 1, 2012 at 11:12 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
@sage / mark
How does the aggregation work? Does it work per 4MB block or per target
node?
Aggregation is based on the 4MB blocks, and if you've got caching
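(The caching Greg refers to is presumably the librbd writeback cache; it is
controlled from the client's ceph.conf with settings like the ones below. The
values shown are only the usual defaults of this era, not Stefan's settings:

  [client]
      rbd cache = true
      rbd cache size = 33554432           # 32 MB
      rbd cache max dirty = 25165824      # 24 MB
      rbd cache target dirty = 16777216   # 16 MB

With writeback caching on, many small 4k writes to the same 4MB object can be
coalesced before being sent to the OSDs.)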
DERUMIER
aderum...@odiso.com, Sage Weil s...@inktank.com,
ceph-devel@vger.kernel.org, Mark Nelson mark.nel...@inktank.com
Sent: Monday 2 July 2012 22:30:19
Subject: Re: speedup ceph / scaling / find the bottleneck
On 07/02/2012 12:22 PM, Stefan Priebe wrote:
On 02.07.2012 18:51,
Hello list,
Hello sage,
I've made some further tests.
Sequential 4k writes over 200GB: 300% CPU usage of the kvm process, 34712 iops
Random 4k writes over 200GB: 170% CPU usage of the kvm process, 5500 iops
When I make random 4k writes over 100MB: 450% CPU usage of the kvm process
and !! 25059 iops !!
Hello list,
I've done some further testing and have the problem that ceph doesn't
scale for me. I added a 4th osd server to my existing 3-node osd
cluster. I also reformatted everything to be able to start with a clean system.
While doing random 4k writes from two VMs I see about 8% idle on the osd
On 29.06.2012 13:49, Mark Nelson wrote:
I'll try to replicate your findings in house. I've got some other
things I have to do today, but hopefully I can take a look next week. If
I recall correctly, in the other thread you said that sequential writes
are using much less CPU time on your
Another BIG hint.
While doing random 4k I/O from one VM I achieve 14k IOPS. This is
around 54MB/s. But EACH ceph-osd machine is writing between 500MB/s and
750MB/s. What are they writing?!
Just an idea: do they completely rewrite EACH 4MB block for every 4k write?
Stefan
On 29.06.2012
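(A rough sanity check on those numbers, assuming 2 replicas and the usual
filestore journal + data double write, since neither is shown in the thread:

  client writes:    14,000 iops * 4 KB                          =  ~55 MB/s
  raw disk writes:  ~55 MB/s * 2 replicas * 2 (journal + data)  = ~220 MB/s cluster-wide

Spread across 3-4 OSD nodes that would be well under 100 MB/s per machine, so
the observed 500-750 MB/s per node must come from something else; the next
message pins it on scrubbing.)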
Big sorry: ceph was scrubbing during my last test and I didn't notice it.
When I redo the test I see writes between 20MB/s and 100MB/s. That is
OK. Sorry.
Stefan
On 29.06.2012 15:11, Stefan Priebe - Profihost AG wrote:
Another BIG hint.
While doing random 4k I/O from one VM I achieve
iostat output via iostat -x -t 5 while doing 4k random writes:

06/29/2012 03:20:55 PM
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          31,63   0,00    52,64     0,78    0,00   14,95

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm