>> Did your RAID setup improve anything?

I have tried launching 2 fio tests in parallel, on 2 disks in the same guest 
vm, and I get 2500 iops for each test ....

Running 2 fio tests, on 2 different guests, gives me 5000 iops for each test.
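For reference, the parallel random-read runs could be reproduced with a job file along these lines (a sketch only; the device path, block size, iodepth, and runtime are assumptions, since the original fio parameters aren't shown in this thread):

```ini
; randread.fio -- 4k random reads against one guest disk
; filename, iodepth, and runtime are assumed values, not the original ones
[randread]
filename=/dev/vdb
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based=1
```

Launching two copies of this job, each pointed at a different disk (e.g. /dev/vdb and /dev/vdc), inside one guest versus across two guests reproduces the comparison described above.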

I really don't understand... Maybe something doesn't use parallelism within a 
single kvm process? (monitor access, or something else...)

So raid doesn't help.
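For what it's worth, the "raid over multiple rbd devices" idea from the quoted mail below could be assembled along these lines (a sketch only; the image names, pool, and device numbers are assumptions, and this needs a live ceph cluster and root):

```shell
# Map several rbd images (image and pool names here are made up for illustration)
rbd map rbd/bench0
rbd map rbd/bench1
rbd map rbd/bench2
rbd map rbd/bench3

# Stripe across them with md RAID-0 so random reads fan out over all four images
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/rbd0 /dev/rbd1 /dev/rbd2 /dev/rbd3
```

The idea being tested is whether spreading one guest's I/O over several rbd devices lifts the per-guest iops ceiling; the result reported above suggests it did not.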

>> Have you tried scaling past 4 guests in parallel?

Not yet, I'll do more tests this week.


----- Mail original ----- 

From: "Gregory Farnum" <g...@inktank.com> 
To: "Dietmar Maurer" <diet...@proxmox.com> 
Cc: "Marcus Sorensen" <shadow...@gmail.com>, "Josh Durgin" 
<josh.dur...@inktank.com>, "Alexandre DERUMIER" <aderum...@odiso.com>, "Sage 
Weil" <s...@inktank.com>, "ceph-devel" <ceph-devel@vger.kernel.org> 
Sent: Saturday, 3 November 2012 18:09:11 
Subject: Re: slow fio random read benchmark, need help 

On Thu, Nov 1, 2012 at 6:00 PM, Dietmar Maurer <diet...@proxmox.com> wrote: 
> I always thought a distributed block storage could do such things 
> faster (or at least as fast) than a single centralized store? 

That rather depends on what makes up each of them. ;) 

On Thu, Nov 1, 2012 at 6:11 AM, Alexandre DERUMIER <aderum...@odiso.com> wrote: 
> I have some customers with some huge databases (too big to be held in the 
> buffer) that require a lot of ios (around 10K). 
> 
> I have redone tests with 4 guests in parallel, and I get 4 x 5000 iops, so it 
> seems to scale! (and cpu is very low on the ceph cluster). 
> 
> 
> So I'll try some tricks, like raid over multiple rbd devices, maybe it'll 
> help. 

Did your RAID setup improve anything? Have you tried scaling past 4 
guests in parallel? 
I still haven't come up with a good model for what could be causing 
these symptoms. :/ 
-Greg 