On 20 Jan 2012, at 18:16, Sanne Grinovero wrote:

> On 20 January 2012 12:40, Manik Surtani <ma...@jboss.org> wrote:
>> 
>> On 20 Jan 2012, at 17:57, Sanne Grinovero wrote:
>> 
>>> inline:
>>> 
>>> On 20 January 2012 08:43, Bela Ban <b...@redhat.com> wrote:
>>>> Hi Sanne,
>>>> 
>>>> (redirected back to infinispan-dev)
>>>> 
>>>>> Hello,
>>>>> I've run the same Infinispan benchmark mentioned today on the
>>>>> Infinispan mailing list, but having the goal to test NAKACK2
>>>>> development.
>>>>> 
>>>>> Infinispan 5.1.0 at 2d7c65e with JGroups 3.0.2.Final :
>>>>> 
>>>>> Done 844,952,883 transactional operations in 22.08 minutes using
>>>>> 5.1.0-SNAPSHOT
>>>>>  839,810,425 reads and 5,142,458 writes
>>>>>  Reads / second: 634,028
>>>>>  Writes / second: 3,882
>>>>> 
>>>>> Same Infinispan, with JGroups b294965 (and reconfigured for NAKACK2):
>>>>> 
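>>>>> For reference, the reconfiguration is just a protocol swap in the
>>>>> stack XML; a minimal sketch with illustrative attribute values, not
>>>>> necessarily the exact ones used here:
>>>>> 
>>>>>    <!-- before: classic negative-ack reliable multicast -->
>>>>>    <pbcast.NAKACK use_mcast_xmit="true"/>
>>>>> 
>>>>>    <!-- after: the Table-based rewrite -->
>>>>>    <pbcast.NAKACK2 xmit_interval="1000"
>>>>>                    use_mcast_xmit="true"/>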
>>>>> 
>>>>> Done 807,220,989 transactional operations in 18.15 minutes using
>>>>> 5.1.0-SNAPSHOT
>>>>>  804,162,247 reads and 3,058,742 writes
>>>>>  Reads / second: 738,454
>>>>>  Writes / second: 2,808
>>>>> 
>>>>> Same versions and configuration, run again as I was too surprised:
>>>>> 
>>>>> Done 490,928,700 transactional operations in 10.94 minutes using
>>>>> 5.1.0-SNAPSHOT
>>>>>  489,488,339 reads and 1,440,361 writes
>>>>>  Reads / second: 745,521
>>>>>  Writes / second: 2,193
>>>>> 
>>>>> So the figures aren't very stable; I might need to run longer tests.
>>>>> Still, there seems to be a trend of the new protocol speeding up read
>>>>> operations at the cost of writes.
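>>>>> (The per-second figures are simply the totals divided by elapsed
>>>>> time; e.g. for the first run:
>>>>> 839,810,425 reads / (22.08 min * 60 s/min) ~= 633,900 reads/s,
>>>>> which matches the reported 634,028 up to rounding of the duration.)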
>>>> 
>>>> 
>>>> 
>>>> This is really strange!
>>>> 
>>>> In my own tests with 2 members on the same box (using MPerf), I found that
>>>> blocking on Table.add() and Table.removeMany() was much smaller than in
>>>> the previous tests, and that the TP.TransferQueueBundler.send() method was
>>>> now the #1 culprit by far! Of course, it is still much smaller than the
>>>> previous highest blockings!
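>>>> To reproduce this kind of blocking data without a full profiler, JMX
>>>> thread contention monitoring works; a minimal sketch, to be run inside
>>>> the benchmark JVM (not necessarily how these numbers were obtained):
>>>> 
>>>>    import java.lang.management.ManagementFactory;
>>>>    import java.lang.management.ThreadInfo;
>>>>    import java.lang.management.ThreadMXBean;
>>>> 
>>>>    public class BlockingDump {
>>>>        public static void main(String[] args) throws Exception {
>>>>            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
>>>>            if (mx.isThreadContentionMonitoringSupported())
>>>>                mx.setThreadContentionMonitoringEnabled(true);
>>>>            Thread.sleep(60000); // let the workload run for a while
>>>>            for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
>>>>                // cumulative time spent blocked on monitors, in ms
>>>>                if (ti.getBlockedTime() > 0)
>>>>                    System.out.println(ti.getThreadName() + ": blocked "
>>>>                        + ti.getBlockedCount() + " times, "
>>>>                        + ti.getBlockedTime() + " ms total");
>>>>            }
>>>>        }
>>>>    }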
>>> 
>>> I totally believe you; I'm wondering whether the fact that JGroups is
>>> more efficient is making Infinispan writes slower. Consider as well that
>>> these read figures are stellar; it's never been that fast before (in
>>> this test on my laptop). That makes me think of some unfair lock
>>> acquired by readers, so that writers never get a chance to make any
>>> progress.
>>> Manik, Dan, any such lock around? If I profile monitors, these
>>> figures change dramatically...
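>>> What I mean, as a self-contained sketch (a made-up probe, not actual
>>> Infinispan code): hammer a ReentrantReadWriteLock with readers and
>>> measure how long a single writer has to wait; comparing the fair and
>>> non-fair constructor flag shows whether reader preference is in play:
>>> 
>>>    import java.util.concurrent.locks.ReentrantReadWriteLock;
>>> 
>>>    public class WriterWaitProbe {
>>>        public static void main(String[] args) throws Exception {
>>>            // false = non-fair (the default); pass true to compare
>>>            final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(false);
>>>            for (int i = 0; i < 8; i++) {
>>>                Thread reader = new Thread(new Runnable() {
>>>                    public void run() {
>>>                        while (true) {          // readers hammer the lock
>>>                            rwl.readLock().lock();
>>>                            rwl.readLock().unlock();
>>>                        }
>>>                    }
>>>                });
>>>                reader.setDaemon(true);
>>>                reader.start();
>>>            }
>>>            Thread.sleep(100);                  // let the readers spin up
>>>            long start = System.nanoTime();
>>>            rwl.writeLock().lock();             // how long does a writer wait?
>>>            long waited = System.nanoTime() - start;
>>>            rwl.writeLock().unlock();
>>>            System.out.println("writer waited " + (waited / 1000) + " us");
>>>        }
>>>    }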
>> 
>> Yes, our (transactional) reads are phenomenally fast now.  I think it has to
>> do with contention on the CHMs in the transaction table being optimised.  In
>> terms of JGroups, perhaps writer threads being faster reduces the contention
>> on these CHMs, so more reads can be squeezed through.  This is REPL mode,
>> though.  In DIST our reads are about the same as in 5.0.
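>> To illustrate the kind of tuning involved (a hypothetical stand-in, not
>> the actual Infinispan change): the pre-Java-8 ConcurrentHashMap is split
>> into lock-striped segments, and the concurrencyLevel constructor
>> argument controls how many, i.e. how many writers can proceed without
>> contending:
>> 
>>    import java.util.concurrent.ConcurrentHashMap;
>> 
>>    public class TxTableSketch {
>>        // 16 is the default concurrencyLevel; raising it spreads writers
>>        // over more segments at the cost of a larger footprint. Key and
>>        // value types here are placeholders.
>>        final ConcurrentHashMap<Object, Object> txTable =
>>            new ConcurrentHashMap<Object, Object>(1024, 0.75f, 64);
>>    }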
>> 
>>> 
>>> We could be in a situation in which the faster JGroups gets, the worse
>>> my write numbers get.
>> 
>> That's the fault of the test.  In a real-world scenario, faster reads will
>> always be good, since the reads (per timeslice) are finite.  Once they are
>> done, they are done, and the writes can proceed.  To model this in your
>> test, fix the number of reads and writes that will be performed, perhaps
>> even per timeslice (e.g. per minute), and then measure the average time
>> per read or write operation.
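>> A minimal sketch of that shape of test (names made up; Infinispan's
>> Cache implements ConcurrentMap, so the real cache could be passed in):
>> 
>>    import java.util.concurrent.ConcurrentMap;
>> 
>>    public class FixedWorkload {
>>        // Do a fixed number of writes and reads and report mean latency,
>>        // instead of counting ops in a fixed wall-clock window.
>>        static void measure(ConcurrentMap<String, String> cache) {
>>            final int WRITES = 10000, READS = 1000000;
>>            long writeNanos = 0, readNanos = 0;
>>            for (int i = 0; i < WRITES; i++) {
>>                long t0 = System.nanoTime();
>>                cache.put("key" + (i % 1000), "v" + i);
>>                writeNanos += System.nanoTime() - t0;
>>            }
>>            for (int i = 0; i < READS; i++) {
>>                long t0 = System.nanoTime();
>>                cache.get("key" + (i % 1000));
>>                readNanos += System.nanoTime() - t0;
>>            }
>>            System.out.println("avg write: " + (writeNanos / WRITES) + " ns");
>>            System.out.println("avg read:  " + (readNanos / READS) + " ns");
>>        }
>>    }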
> 
> +1000, the good effects of a cocktail @ seaside.

All manner of problems get solved this way.  ;)

> Stupid me, as I even thought about that yesterday; it's a common
> problem... But this test seemed to be designed from the beginning to
> stress contention as much as possible to identify bottlenecks; isn't
> RadarGun better suited for that kind of performance measurement?

Yes, but RadarGun isn't that easy to profile.


--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
