Re: speedup ceph / scaling / find the bottleneck

2012-07-09 Thread Stefan Priebe
On 06.07.2012 20:17, Gregory Farnum wrote: On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote: I'm interested in figuring out why we aren't getting useful data out of the admin socket, and for that I need the actual configuration files. It wouldn't surprise me if there are several

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Stefan Priebe
July 2012 23:33:18 Subject: Re: speedup ceph / scaling / find the bottleneck Could you send over the ceph.conf on your KVM host, as well as how you're configuring KVM to use rbd? On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe s.pri...@profihost.ag wrote: I'm sorry, but this is the KVM

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Stefan Priebe - Profihost AG
On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote: On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Stefan is on vacation for the moment, I don't know if he can reply to you. But I can reply for him for the KVM part (as we do the same tests together

Re: speedup ceph / scaling / find the bottleneck

2012-07-06 Thread Gregory Farnum
On Fri, Jul 6, 2012 at 11:09 AM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: On 06.07.2012 at 19:11, Gregory Farnum g...@inktank.com wrote: On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com wrote: Hi, Stefan is on vacation for the moment, I don't know if

Re: speedup ceph / scaling / find the bottleneck

2012-07-05 Thread Gregory Farnum
Could you send over the ceph.conf on your KVM host, as well as how you're configuring KVM to use rbd? On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe s.pri...@profihost.ag wrote: I'm sorry, but this is the KVM host machine; there is no ceph running on this machine. If I change the admin socket

Re: speedup ceph / scaling / find the bottleneck

2012-07-05 Thread Alexandre DERUMIER
...@inktank.com To: Stefan Priebe s.pri...@profihost.ag Cc: ceph-devel@vger.kernel.org, Sage Weil s...@inktank.com Sent: Thursday, 5 July 2012 23:33:18 Subject: Re: speedup ceph / scaling / find the bottleneck Could you send over the ceph.conf on your KVM host, as well as how you're configuring KVM

Re: speedup ceph / scaling / find the bottleneck

2012-07-03 Thread Sage Weil
On Tue, 3 Jul 2012, Stefan Priebe - Profihost AG wrote: Hello, on 02.07.2012 22:30, Josh Durgin wrote: If you add admin_socket=/path/to/admin_socket for your client running qemu (in that client's ceph.conf section or manually in the qemu command line) you can check that caching is

Re: speedup ceph / scaling / find the bottleneck

2012-07-03 Thread Stefan Priebe
I'm sorry, but this is the KVM host machine; there is no ceph running on this machine. If I change the admin socket to admin_socket=/var/run/ceph_$name.sock, I don't get any socket at all ;-( On 03.07.2012 17:31, Sage Weil wrote: On Tue, 3 Jul 2012, Stefan Priebe - Profihost AG wrote:
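A minimal sketch of the admin-socket configuration being debugged above. The section name and path are illustrative assumptions, not taken from Stefan's actual config:

```ini
; ceph.conf on the KVM host -- illustrative sketch, not the actual config
[client]
    ; $name expands to the client instance (e.g. client.admin); the socket
    ; only appears while a client (here: the qemu process) has the cluster open
    admin socket = /var/run/ceph/ceph-$name.$pid.asok
```

Note that the directory must exist and be writable by the qemu process; otherwise no socket file is created at all.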

Re: speedup ceph / scaling / find the bottleneck

2012-07-03 Thread Stefan Priebe
On 03.07.2012 17:31, Sage Weil wrote: ~]# ceph -v ceph version 0.48argonaut-2-gb576faa (commit:b576faa6f24356f4d3ec7205e298d58659e29c68) Out of curiosity, what patches are you applying on top of the release? Just wip-filestore-min. Stefan

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Stefan Priebe - Profihost AG
On 02.07.2012 07:02, Alexandre DERUMIER wrote: Hi, my 2 cents: maybe with a lower range (like 100MB) of random I/O, you have a better chance to aggregate them into 4MB blocks? Yes, maybe. If you have just a range of 100MB, the chance you'll hit the same 4MB block again is very high. @sage / mark How

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Stefan Priebe - Profihost AG
Hello, I just want to report back some test results. Just some results from a sheepdog test using the same hardware. Sheepdog: 1 VM: write: io=12544MB, bw=142678KB/s, iops=35669, runt= 90025msec read : io=14519MB, bw=165186KB/s, iops=41296, runt= 90003msec write: io=16520MB,
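The fio figures quoted above can be cross-checked for internal consistency: with a 4k block size, iops and bandwidth must agree, and bandwidth times runtime must equal total I/O. A small sketch (the helper function is hypothetical, not from the thread):

```python
def fio_consistent(io_mb, bw_kb_s, iops, runt_ms, bs_kb=4, tol=0.01):
    """Return True if the three fio figures agree within `tol` relative error."""
    bw_from_iops = iops * bs_kb                      # KB/s implied by iops * block size
    io_from_bw = bw_kb_s * (runt_ms / 1000) / 1024   # MB implied by bandwidth * runtime
    return (abs(bw_from_iops - bw_kb_s) / bw_kb_s < tol
            and abs(io_from_bw - io_mb) / io_mb < tol)

# Sheepdog write line from the post: io=12544MB, bw=142678KB/s, iops=35669, runt=90025msec
print(fio_consistent(12544, 142678, 35669, 90025))  # True
```

The same check applied to any of the ceph runs in this thread quickly shows whether a quoted number was mistyped.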

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Gregory Farnum
On Sun, Jul 1, 2012 at 11:12 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: On 02.07.2012 07:02, Alexandre DERUMIER wrote: Hi, my 2 cents: maybe with a lower range (like 100MB) of random I/O, you have a better chance to aggregate them into 4MB blocks? Yes, maybe. If you have just a

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Stefan Priebe
On 02.07.2012 18:51, Gregory Farnum wrote: On Sun, Jul 1, 2012 at 11:12 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: @sage / mark How does the aggregation work? Does it work 4MB blockwise or target-node based? Aggregation is based on the 4MB blocks, and if you've got caching
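A sketch of the per-block aggregation described above, assuming the default 4MB RBD object size: two small writes can only be merged or cached together when they fall into the same object.

```python
OBJ = 4 * 1024 * 1024  # default 4MB RBD object size (assumption)

def object_index(offset):
    """RBD object that a write at byte `offset` lands in."""
    return offset // OBJ

# Two 4k writes inside the same 4MB object can be aggregated; a write 6MB
# away lands in a different object (potentially on a different OSD) and cannot.
a, b, c = object_index(0), object_index(8192), object_index(6 * 1024 * 1024)
print(a == b, a == c)  # True False
```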

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Josh Durgin
On 07/02/2012 12:22 PM, Stefan Priebe wrote: On 02.07.2012 18:51, Gregory Farnum wrote: On Sun, Jul 1, 2012 at 11:12 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: @sage / mark How does the aggregation work? Does it work 4MB blockwise or target-node based? Aggregation is

Re: speedup ceph / scaling / find the bottleneck

2012-07-02 Thread Alexandre DERUMIER
DERUMIER aderum...@odiso.com, Sage Weil s...@inktank.com, ceph-devel@vger.kernel.org, Mark Nelson mark.nel...@inktank.com Sent: Monday, 2 July 2012 22:30:19 Subject: Re: speedup ceph / scaling / find the bottleneck On 07/02/2012 12:22 PM, Stefan Priebe wrote: On 02.07.2012 18:51,

Re: speedup ceph / scaling / find the bottleneck

2012-07-01 Thread Stefan Priebe
Hello list, hello Sage, I've made some further tests. Sequential 4k writes over 200GB: 300% CPU usage of the kvm process, 34712 iops. Random 4k writes over 200GB: 170% CPU usage of the kvm process, 5500 iops. When I make random 4k writes over 100MB: 450% CPU usage of the kvm process and !! 25059 iops !!
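A back-of-envelope look at the iops gap above, assuming the default 4MB RBD object size: a 100MB range spans only 25 objects, so random 4k writes keep revisiting the same hot objects, while 200GB spans 51200 objects and almost every write hits a cold one.

```python
OBJ_MB = 4  # assumed RBD object size in MB

def objects_in_range(range_mb):
    """Number of distinct 4MB objects a random-I/O range can touch."""
    return range_mb // OBJ_MB

print(objects_in_range(100))         # 25
print(objects_in_range(200 * 1024))  # 51200
```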

Re: speedup ceph / scaling / find the bottleneck

2012-07-01 Thread Mark Nelson
On 7/1/12 4:01 PM, Stefan Priebe wrote: Hello list, hello Sage, I've made some further tests. Sequential 4k writes over 200GB: 300% CPU usage of the kvm process, 34712 iops. Random 4k writes over 200GB: 170% CPU usage of the kvm process, 5500 iops. When I make random 4k writes over 100MB: 450% CPU usage

Re: speedup ceph / scaling / find the bottleneck

2012-07-01 Thread Stefan Priebe
On 01.07.2012 23:13, Mark Nelson wrote: On 7/1/12 4:01 PM, Stefan Priebe wrote: Hello list, hello Sage, I've made some further tests. Sequential 4k writes over 200GB: 300% CPU usage of the kvm process, 34712 iops. Random 4k writes over 200GB: 170% CPU usage of the kvm process, 5500 iops. When I make

Re: speedup ceph / scaling / find the bottleneck

2012-07-01 Thread Alexandre DERUMIER
...@inktank.com, ceph-devel@vger.kernel.org Sent: Sunday, 1 July 2012 23:27:30 Subject: Re: speedup ceph / scaling / find the bottleneck On 01.07.2012 23:13, Mark Nelson wrote: On 7/1/12 4:01 PM, Stefan Priebe wrote: Hello list, hello Sage, I've made some further tests. Sequential 4k

speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Stefan Priebe - Profihost AG
Hello list, I've done some further testing and have the problem that ceph doesn't scale for me. I added a 4th osd server to my existing 3-node osd cluster. I also reformatted everything to be able to start with a clean system. While doing random 4k writes from two VMs I see about 8% idle on the osd

Re: speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Alexandre DERUMIER
:46:42 Subject: speedup ceph / scaling / find the bottleneck Hello list, I've done some further testing and have the problem that ceph doesn't scale for me. I added a 4th osd server to my existing 3-node osd cluster. I also reformatted everything to be able to start with a clean system. While doing

Re: speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Stefan Priebe - Profihost AG
On 29.06.2012 13:49, Mark Nelson wrote: I'll try to replicate your findings in-house. I've got some other things I have to do today, but hopefully I can take a look next week. If I recall correctly, in the other thread you said that sequential writes are using much less CPU time on your

Re: speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Stefan Priebe - Profihost AG
Another BIG hint. While doing random 4k I/O from one VM I achieve 14k I/Os. This is around 54MB/s. But EACH ceph-osd machine is writing between 500MB/s and 750MB/s. What do they write?!?! Just an idea: do they completely rewrite EACH 4MB block for each 4k write? Stefan On 29.06.2012
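A quick worst-case estimate behind the question above, under the hypothetical assumption that every 4k write really rewrote a whole 4MB object:

```python
bs_kb, obj_kb, iops = 4, 4096, 14000   # figures from the post; obj size assumed 4MB

client_mb_s = iops * bs_kb / 1024      # logical client throughput
amplified_mb_s = iops * obj_kb / 1024  # if every 4k write rewrote its 4MB object

print(round(client_mb_s, 1))  # 54.7
print(round(amplified_mb_s))  # 56000
```

A 1024x amplification would mean ~56GB/s of backend writes, far above even the 500-750MB/s observed per OSD, so a full-object rewrite per 4k write would not match the numbers either. (The follow-up message attributes the extra writes to scrubbing.)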

Re: speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Stefan Priebe - Profihost AG
Big sorry, ceph was scrubbing during my last test. Didn't recognize this. When I redo the test I see writes between 20MB/s and 100MB/s. That is OK. Sorry. Stefan On 29.06.2012 15:11, Stefan Priebe - Profihost AG wrote: Another BIG hint. While doing random 4k I/O from one VM I achieve

Re: speedup ceph / scaling / find the bottleneck

2012-06-29 Thread Stefan Priebe - Profihost AG
iostat output via iostat -x -t 5 while doing 4k random writes:

06/29/2012 03:20:55 PM
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          31,63   0,00    52,64     0,78    0,00  14,95
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm