>> Yes. CPU load in Ceph for random 4k writes can be dropped by 50% when
>> disabling all debug settings (see my last post on the ceph mailing list).
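For reference, "disabling all debug settings" is normally done per subsystem in ceph.conf, using the `log level/memory level` syntax. A minimal sketch only; the subsystems shown are examples, not the poster's actual configuration, and the full list varies by Ceph version:

```ini
# Silence Ceph debug logging (0/0 = log level / in-memory level).
# Example subsystems only -- not an exhaustive list.
[global]
    debug ms = 0/0
    debug auth = 0/0

[osd]
    debug osd = 0/0
    debug filestore = 0/0
    debug journal = 0/0
```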
I just saw it, great, I'll test it on my side :) (how many SSDs / nodes?)

I hope that Inktank is working on random I/O performance. I don't understand why we can't get more IOPS if there is no CPU or disk bottleneck... (or there are locks somewhere...). I would really like to know whether IOPS scale with the number of nodes...

----- Original message -----
From: "Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>
To: "Alexandre DERUMIER" <aderum...@odiso.com>
Cc: pve-devel@pve.proxmox.com, "Dietmar Maurer" <diet...@proxmox.com>
Sent: Friday, 9 November 2012 11:24:56
Subject: Re: [pve-devel] less cores more iops / speed

On 09.11.2012 11:10, Alexandre DERUMIER wrote:
>>> Doesn't make any difference. But I'm now at 18.000 iops ;-)
>
> Reads and writes? Have you found your write CPU problem?

Yes. CPU load in Ceph for random 4k writes can be dropped by 50% when
disabling all debug settings (see my last post on the ceph mailing list).

Stefan

> ----- Original message -----
>
> From: "Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>
> To: "Alexandre DERUMIER" <aderum...@odiso.com>
> Cc: pve-devel@pve.proxmox.com, "Dietmar Maurer" <diet...@proxmox.com>
> Sent: Friday, 9 November 2012 11:08:18
> Subject: Re: [pve-devel] less cores more iops / speed
>
> On 09.11.2012 11:01, Alexandre DERUMIER wrote:
>> Have you tried fio with the --numjobs option? (I'd run more fio
>> threads; maybe set it to the number of CPUs you have.)
>
> Doesn't make any difference. But I'm now at 18.000 iops ;-)
>
>> ----- Original message -----
>>
>> From: "Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>
>> To: "Dietmar Maurer" <diet...@proxmox.com>
>> Cc: "Alexandre DERUMIER" <aderum...@odiso.com>, pve-devel@pve.proxmox.com
>> Sent: Friday, 9 November 2012 10:06:45
>> Subject: Re: [pve-devel] less cores more iops / speed
>>
>> I'm pinning the fio process inside the kvm process, so it shouldn't be
>> a problem of librbd alone.
>>
>> Stefan
>>
>> On 09.11.2012 09:59, Dietmar Maurer wrote:
>>>> On 09.11.2012 09:50, Dietmar Maurer wrote:
>>>>>>> It tries to keep processes on the same NUMA node, and I think it
>>>>>>> also does some dynamic pinning.
>>>>>>
>>>>>> numad doesn't help, but libvirt seems to support pinning of kvm
>>>>>> instances. Maybe pve should support pinning too?
>>>>>
>>>>> So far I never had such problems.
>>>>>
>>>>> IMHO, manual pinning is the wrong way.
>>>>
>>>> But doing heavy I/O with 8 cores is much slower than with 2 cores;
>>>> that can't be correct...
>>>
>>> What exactly is the suggestion? So far only the rbd client shows such
>>> behavior?

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
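For anyone trying to reproduce the experiments in this thread, here is a sketch combining the --numjobs suggestion with taskset core pinning. The device path, core list, and all values are assumptions for illustration, not the posters' actual commands; the script only prints the command so it is safe to run anywhere:

```shell
# One fio job per CPU, as suggested in the thread for --numjobs.
NUMJOBS=$(nproc)

# 4k random-write workload; /dev/vdb stands in for the rbd-backed disk
# inside the guest (placeholder -- substitute your own test device).
FIO_CMD="fio --name=randwrite-4k --filename=/dev/vdb \
  --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
  --numjobs=$NUMJOBS --group_reporting --runtime=60"

# taskset (util-linux) would pin the benchmark to cores 0-1, mirroring
# the 'fewer cores' experiment; printed rather than executed here.
echo "taskset -c 0-1 $FIO_CMD"
```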