Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Stefan Priebe - Profihost AG
Adding this to ceph.conf on the KVM host adds another 2000 iops (20.000
iop/s with one VM). I'm sure most of these settings are useless on a client
KVM / rbd host, but I don't know which ones make sense ;-)


[global]
debug ms = 0/0
debug rbd = 0/0
debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0

[client]
debug ms = 0/0
debug rbd = 0/0
debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0

Stefan

On 12.11.2012 15:35, Alexandre DERUMIER wrote:

Another idea:

have you tried putting

  debug lockdep = 0/0
  debug context = 0/0
  debug crush = 0/0
  debug buffer = 0/0
  debug timer = 0/0
  debug journaler = 0/0
  debug osd = 0/0
  debug optracker = 0/0
  debug objclass = 0/0
  debug filestore = 0/0
  debug journal = 0/0
  debug ms = 0/0
  debug monc = 0/0
  debug tp = 0/0
  debug auth = 0/0
  debug finisher = 0/0
  debug heartbeatmap = 0/0
  debug perfcounter = 0/0
  debug asok = 0/0
  debug throttle = 0/0


in ceph.conf on your KVM host?


- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-de...@pve.proxmox.com
Sent: Monday, 12 November 2012 15:26:36
Subject: Re: [pve-devel] less cores more iops / speed

Maybe some tracing on the kvm process could give us clues about where the
CPU is used?

Also, another idea: can you try with auth supported=none? Maybe there is
some overhead from Ceph authentication?
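(For reference, a minimal sketch of what that suggestion would look like in
ceph.conf; the exact option name is an assumption for Ceph releases of this
era, and disabling authentication is only safe on trusted test setups:)

[global]
# disable cephx authentication entirely (test setups only)
auth supported = none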




- Original Message -

From: Alexandre DERUMIER aderum...@odiso.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: pve-de...@pve.proxmox.com
Sent: Monday, 12 November 2012 15:20:07
Subject: Re: [pve-devel] less cores more iops / speed

OK, thanks.

It seems to use a lot of CPU compared to NFS, iSCSI...

I hope the Ceph devs will work on this soon!


- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: eric e...@netwalk.com, pve-de...@pve.proxmox.com
Sent: Monday, 12 November 2012 15:05:08
Subject: Re: [pve-devel] less cores more iops / speed

On 12.11.2012 13:49, Alexandre DERUMIER wrote:

One VM on one Host: 18.000 IOP/s
Two VM on one Host: 2x11.000 IOP/s
Three VM on one Host: 3x7.000 IOP/s


And the host CPU is at 100%?


No. For three VMs yes, for one and two no. I think the librbd / rbd
implementation in KVM is the bottleneck here.

Stefan


- Original Message -

From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: eric e...@netwalk.com, pve-de...@pve.proxmox.com
Sent: Monday, 12 November 2012 12:58:35
Subject: Re: [pve-devel] less cores more iops / speed

On 12.11.2012 08:51, Alexandre DERUMIER wrote:

Right now RBD in KVM is limited by CPU speed.


Good to know; so it seems like a lack of threading, or maybe some locks
(so a faster CPU gives more iops).

If you launch parallel fio runs on the same host in different guests, do
you get more total iops? (For me it scales.)


One VM on one Host: 18.000 IOP/s
Two VM on one Host: 2x11.000 IOP/s
Three VM on one Host: 3x7.000 IOP/s


If you launch 2 parallel fio runs in the same guest (on different disks),
do you get more iops? (For me it doesn't scale, so raid0 in the guest
doesn't help.)

No it doesn't scale.

Stefan


- Original Message -

From: Stefan Priebe s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: eric e...@netwalk.com, pve-de...@pve.proxmox.com
Sent: Sunday, 11 November 2012 13:07:36
Subject: Re: [pve-devel] less cores more iops / speed

On 11.11.2012 12:12, Alexandre DERUMIER wrote:

If I remember correctly, Stefan can achieve 100.000 iops with iSCSI on the
same KVM host.


Correct, but that was always with scsi-generic and I/O multipathing on the
host. rbd does not support scsi-generic ;-(


I have checked the ceph mailing list; Stefan seems to have resolved his
dual-core problem with a BIOS update!

Correct. So speed on the dual Xeon is now 14.000 IOP/s, and 18.000 IOP/s on
the single Xeon. But the difference is a matter of CPU speed: 3.6 GHz
single Xeon vs

Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Josh Durgin

On 11/12/2012 07:33 AM, Stefan Priebe - Profihost AG wrote:

Adding this to ceph.conf on the KVM host adds another 2000 iops (20.000
iop/s with one VM). I'm sure most of these settings are useless on a client
KVM / rbd host, but I don't know which ones make sense ;-)

[global]
 [... debug settings as quoted above ...]

[client]


For the client side you'd want these settings to disable all debug logging:

[client]
debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug filer = 0/0
debug objecter = 0/0
debug rados = 0/0
debug rbd = 0/0
debug objectcacher = 0/0
debug client = 0/0
debug ms = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0

Josh


[... rest of quoted message trimmed; identical to the thread above ...]

Re: [pve-devel] less cores more iops / speed

2012-11-12 Thread Stefan Priebe

Hi Josh,


For the client side you'd want these settings to disable all debug logging:

...

Thanks!

Stefan


Re: less cores more iops / speed

2012-11-09 Thread Stefan Priebe - Profihost AG


On 08.11.2012 16:53, Alexandre DERUMIER wrote:

So it is a problem of KVM which lets the processes jump between cores a
lot.


Maybe numad from Red Hat can help?
http://fedoraproject.org/wiki/Features/numad

It tries to keep a process on the same NUMA node, and I think it also does
some dynamic pinning.


numad doesn't help, but libvirt seems to support pinning of KVM
instances. Maybe PVE should support pinning too?
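(For illustration, a hedged sketch of how such pinning looks with libvirt's
virsh; the domain name 'vm100' and the core numbers are made up:)

# pin virtual CPU 0 of guest 'vm100' to host cores 0-1
virsh vcpupin vm100 0 0-1

The same can be made persistent with a <cputune><vcpupin vcpu='0'
cpuset='0-1'/></cputune> block in the domain XML.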


Greets
Stefan


Re: less cores more iops / speed

2012-11-08 Thread Stefan Priebe - Profihost AG

On 08.11.2012 01:59, Mark Nelson wrote:

There's also the context switching overhead. It'd be interesting to
know how much the writer processes were shifting around on cores.

What do you mean by that? I'm talking about the KVM guest, not about the
ceph nodes.



Stefan, what tool were you using to do writes?

as always: fio ;-)

Stefan
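(The thread never shows the exact fio job; a minimal random 4k write run of
the kind benchmarked here might look like this, with the target device,
queue depth and runtime as assumptions:)

fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60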


Re: less cores more iops / speed

2012-11-08 Thread Alexandre DERUMIER
What do you mean by that? I'm talking about the KVM guest not about the 
ceph nodes. 

Have you tried comparing virtio-blk and virtio-scsi?

Have you tried directly from the host with the rbd kernel module?





Re: less cores more iops / speed

2012-11-08 Thread Stefan Priebe - Profihost AG

On 08.11.2012 09:58, Alexandre DERUMIER wrote:

What do you mean by that? I'm talking about the KVM guest not about the
ceph nodes.


Have you tried comparing virtio-blk and virtio-scsi?

How do I change that? Right now I'm using the PVE default = scsi-hd.


Have you tried directly from the host with the rbd kernel module?

No, I don't know how to use it ;-)

Stefan





Re: less cores more iops / speed

2012-11-08 Thread Alexandre DERUMIER
 Have you tried comparing virtio-blk and virtio-scsi?
How do I change that? Right now I'm using the PVE default = scsi-hd.

(virtio-blk is classic virtio ;)
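(For illustration, roughly how the two attachments differ on the qemu
command line; the rbd path and the ids are placeholders:)

# virtio-blk ('classic' virtio):
-drive file=rbd:pool/image,if=virtio,cache=none

# virtio-scsi with a scsi-hd disk (the PVE default mentioned above):
-device virtio-scsi-pci,id=scsi0
-drive file=rbd:pool/image,if=none,id=drive0,cache=none
-device scsi-hd,bus=scsi0.0,drive=drive0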

 Have you tried directly from the host with the rbd kernel module?
No, I don't know how to use it ;-)
http://ceph.com/docs/master/rbd/rbd-ko/
# modprobe rbd
# sudo rbd map {image-name} --pool {pool-name} --id {user-name}

(then you'll have a /dev/rbd1)
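(A hedged end-to-end sketch with placeholder image and user names,
benchmarking the mapped device directly and cleaning up afterwards; the
device node may also show up as /dev/rbd0:)

modprobe rbd
rbd map testimage --pool rbd --id admin
fio --name=test --filename=/dev/rbd1 --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60
rbd unmap /dev/rbd1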






Re: less cores more iops / speed

2012-11-08 Thread Stefan Priebe - Profihost AG

On 08.11.2012 10:05, Alexandre DERUMIER wrote:

Have you tried comparing virtio-blk and virtio-scsi?

How do I change that? Right now I'm using the PVE default = scsi-hd.


(virtio-blk is classic virtio ;)


Have you tried directly from the host with the rbd kernel module?
No, I don't know how to use it ;-)

http://ceph.com/docs/master/rbd/rbd-ko/
# modprobe rbd
# sudo rbd map {image-name} --pool {pool-name} --id {user-name}


This also gives me 8000 iops on the host at 3.6 GHz. So it is the
same as in KVM.


Stefan


Re: less cores more iops / speed

2012-11-08 Thread Mark Nelson

On 11/08/2012 02:45 AM, Stefan Priebe - Profihost AG wrote:

On 08.11.2012 01:59, Mark Nelson wrote:

There's also the context switching overhead.  It'd be interesting to
know how much the writer processes were shifting around on cores.

What do you mean by that? I'm talking about the KVM guest not about the
ceph nodes.


in this case, is fio bouncing around between cores?




Stefan, what tool were you using to do writes?

as always: fio ;-)


You could try using numactl to pin fio to a specific core.  Also, it may 
be interesting to try multiple concurrent fio processes, and then 
concurrent fio processes with each pinned.
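(A sketch of what such pinning could look like; the core and node numbers
and the job file name are arbitrary:)

# pin a single fio run to one core, with memory from node 0
numactl --physcpubind=0 --membind=0 fio jobfile.fio

# two concurrent fio processes, each pinned to its own core
numactl --physcpubind=0 fio jobfile.fio &
numactl --physcpubind=1 fio jobfile.fio &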




Stefan




Re: less cores more iops / speed

2012-11-08 Thread Stefan Priebe - Profihost AG

On 08.11.2012 14:19, Mark Nelson wrote:

On 11/08/2012 02:45 AM, Stefan Priebe - Profihost AG wrote:

On 08.11.2012 01:59, Mark Nelson wrote:

There's also the context switching overhead.  It'd be interesting to
know how much the writer processes were shifting around on cores.

What do you mean by that? I'm talking about the KVM guest not about the
ceph nodes.


in this case, is fio bouncing around between cores?


Thanks, you're correct. If I bind fio to two cores on an 8-core VM it runs
at 16.000 iops.

So it is a problem of KVM which lets the processes jump between cores a
lot.
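(Presumably something along these lines inside the guest; the exact core
list and job file name are assumptions:)

# bind fio to cores 0 and 1 of the 8-core VM
taskset -c 0,1 fio jobfile.fio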


Greets,
Stefan


Re: less cores more iops / speed

2012-11-08 Thread Alexandre DERUMIER
So it is a problem of KVM which lets the processes jump between cores a
lot.

Maybe numad from Red Hat can help?
http://fedoraproject.org/wiki/Features/numad

It tries to keep a process on the same NUMA node, and I think it also does
some dynamic pinning.



Re: less cores more iops / speed

2012-11-08 Thread Andrey Korolyov
On Thu, Nov 8, 2012 at 7:53 PM, Alexandre DERUMIER aderum...@odiso.com wrote:
So it is a problem of KVM which lets the processes jump between cores a
lot.

 maybe numad from Red Hat can help?
 http://fedoraproject.org/wiki/Features/numad

 It tries to keep a process on the same NUMA node, and I think it also
 does some dynamic pinning.

Numad only keeps memory chunks on the preferred node; CPU pinning, which
is a primary goal there, should be done separately via libvirt, or manually
for the qemu process via cpuset (libvirt does pinning via taskset, and that
seems to be broken at least in Debian wheezy: even when an affinity mask is
set for the qemu process, load spreads all over the NUMA node, including
CPUs outside the set).
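(A hedged sketch of both manual approaches for a running qemu process; the
PID 12345, the cgroup name and the core list are placeholders:)

# pin all threads of the qemu process to cores 0-3
taskset -apc 0-3 12345

# or via a dedicated cpuset cgroup for the VM
mkdir /sys/fs/cgroup/cpuset/vm100
echo 0-3   > /sys/fs/cgroup/cpuset/vm100/cpuset.cpus
echo 0     > /sys/fs/cgroup/cpuset/vm100/cpuset.mems
echo 12345 > /sys/fs/cgroup/cpuset/vm100/tasks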




less cores more iops / speed

2012-11-07 Thread Stefan Priebe

Hello again,

I've noticed something really interesting.

I get 5000 iops / VM for random 4k writes while assigning 4 cores on a
2.5 GHz Xeon.

When I move this VM to another KVM host with 3.6 GHz I get 8000 iops
(still 8 cores); when I then LOWER the assigned cores from 8 to 4 I get
14.500 iops. If I assign only 2 cores I get 16.000 iops...


Why do fewer KVM cores mean more speed?

Greets,
Stefan


Re: less cores more iops / speed

2012-11-07 Thread Joao Eduardo Luis
On 11/07/2012 10:02 PM, Stefan Priebe wrote:
 [...]
 Why does less kvm cores mean more speed?

Totally going out on a limb here, but might it be related to the cache?
When you have more cores your threads may bounce around the cores and
invalidate cache entries as they go by; with fewer cores you might end up
with some sort of twisted, forced cpu affinity that allows you to take
advantage of caching.

But I don't know, really. I would be amazed if what I just wrote had an
ounce of truth, and would be completely astonished if that was the cause
for such a sudden increase in iops.

  -Joao

 


Re: less cores more iops / speed

2012-11-07 Thread Mark Nelson

On 11/07/2012 06:00 PM, Joao Eduardo Luis wrote:

On 11/07/2012 10:02 PM, Stefan Priebe wrote:

[...]

Why does less kvm cores mean more speed?


Totally going out on a limb here, but might it be related to the cache?
When you have more cores your threads may bounce around the cores and
invalidate cache entries as they go by; with fewer cores you might end up
with some sort of twisted, forced cpu affinity that allows you to take
advantage of caching.


There's also the context switching overhead.  It'd be interesting to 
know how much the writer processes were shifting around on cores. 
Stefan, what tool were you using to do writes?




But I don't know, really. I would be amazed if what I just wrote had an
ounce of truth, and would be completely astonished if that was the cause
for such a sudden increase in iops.


Yeah, it seems pretty surprising that there would be any significant
effect at this level of performance.




   -Joao







RE: less cores more iops / speed

2012-11-07 Thread Dietmar Maurer
 [...]
  Why does less kvm cores mean more speed?
 
 There is a serious bug in the kvm vhost code. Do you use virtio-net with
 vhost?
 
 see: http://lists.nongnu.org/archive/html/qemu-devel/2012-11/msg00579.html
 
 Please test using the e1000 driver instead.

Or update the guest kernel (what guest kernel do you use?). AFAIK 3.x
kernels do not trigger the bug.
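(For completeness, a rough sketch of the two NIC configurations on the qemu
command line; the netdev ids are placeholders:)

# virtio-net with vhost (the setup affected by the bug):
-netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0

# e1000 for comparison (no vhost involved):
-netdev tap,id=net0 -device e1000,netdev=net0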



Re: less cores more iops / speed

2012-11-07 Thread Stefan Priebe - Profihost AG
On 08.11.2012 at 06:42, Dietmar Maurer diet...@proxmox.com wrote:

 [...]
 Why does less kvm cores mean more speed?
 
 There is a serious bug in the kvm vhost code. Do you use virtio-net with 
 vhost?
 
 see: http://lists.nongnu.org/archive/html/qemu-devel/2012-11/msg00579.html
 
 Please test using the e1000 driver instead.

Why is the vhost net driver involved here at all? The KVM guest only uses SSH here.

Stefan


RE: less cores more iops / speed

2012-11-07 Thread Dietmar Maurer
 Why is vhost net driver involved here at all? Kvm guest only uses ssh here.

I thought you were testing things (rbd) which depend on KVM network speed?



Re: less cores more iops / speed

2012-11-07 Thread Stefan Priebe - Profihost AG
On 08.11.2012 at 06:49, Dietmar Maurer diet...@proxmox.com wrote:

 [...]
 Why does less kvm cores mean more speed?
 
 There is a serious bug in the kvm vhost code. Do you use virtio-net with
 vhost?
 
 see: http://lists.nongnu.org/archive/html/qemu-devel/2012-11/msg00579.html
 
 Please test using the e1000 driver instead.
 
 Or update the guest kernel (what guest kernel do you use?). AFAIK 3.x
 kernels do not trigger the bug.

Guest and host have 3.6.6 installed.


 


Re: less cores more iops / speed

2012-11-07 Thread Stefan Priebe - Profihost AG

On 08.11.2012 at 06:54, Dietmar Maurer diet...@proxmox.com wrote:

 Why is the vhost net driver involved here at all? The KVM guest only uses SSH here.

 I thought you were testing things (rbd) which depend on KVM network speed?

The KVM process uses librbd, and both run on the host, not in the guest.

Stefan 

 