So you are testing embedded mode with RadarGun? Or remotely over HotRod?

On 20 Feb 2013, at 14:38, Radim Vansa wrote:
> Ouch, call me dumbass... I haven't checked the test results. Something
> revoked my cluster allocation and the test was prematurely stopped.
>
> I'll rerun it (and check!), and show performance numbers as well.
>
> Radim (the dumbass)
>
> ----- Original Message -----
> | From: "Dan Berindei" <[email protected]>
> | To: "infinispan -Dev List" <[email protected]>
> | Cc: "Manik Surtani" <[email protected]>
> | Sent: Wednesday, February 20, 2013 3:24:46 PM
> | Subject: Re: [infinispan-dev] Staggering remote GET calls
> |
> | Radim, just to be sure, you are testing embedded mode with RadarGun,
> | right? With HotRod most of the get operations should be initiated
> | from the main owner, so Manik's changes shouldn't make a big
> | difference in the number of active threads.
> |
> | How about throughput, has it also improved compared to 5.2.0.CR3, or
> | is it the same?
> |
> | On Wed, Feb 20, 2013 at 2:15 PM, Radim Vansa <[email protected]> wrote:
> |
> | Hi Manik,
> |
> | so I have tried to compile this branch and ran a 20 minute stress
> | test (preceded by a 10 minute warmup) on 128 nodes, where each node
> | has 10 stressor threads.
> | While in 5.2.0.CR3 the maximum OOB thread pool size was 553 with this
> | configuration, with t_825 it was 219. This looks good, but it's
> | actually better :). When I looked at the per-node maximum, in t_825
> | there was only one node with 219 threads (as the max); the others
> | were usually around 25, a few around 40. By contrast, in 5.2.0.CR3
> | all the nodes had a maximum around 500!
> |
> | Glad to bring good news :)
> |
> | Radim
> |
> | ----- Original Message -----
> | | From: "Manik Surtani" <[email protected]>
> | | To: "infinispan -Dev List" <[email protected]>,
> | | "Radim Vansa" <[email protected]>
> | | Sent: Tuesday, February 19, 2013 6:33:04 PM
> | | Subject: Staggering remote GET calls
> | |
> | | Guys,
> | |
> | | I have a topic branch with a fix for ISPN-825, to stagger remote
> | | GET calls. (See the JIRA for details on this patch.)
> | |
> | | This should have an interesting effect in greatly reducing the
> | | pressure on the OOB thread pool. This isn't a *real* fix for the
> | | problem that Radim reported (Pedro is working on that with Bela),
> | | but reducing pressure on the OOB thread pool is a side effect of
> | | this fix.
> | |
> | | It should generally make things faster too, with less traffic on
> | | the network. I'd be curious for you to give this branch a try,
> | | Radim - see how it impacts your tests.
> | |
> | | https://github.com/maniksurtani/infinispan/tree/t_825
> | |
> | | Cheers
> | | Manik
> | | --
> | | Manik Surtani
> | | [email protected]
> | | twitter.com/maniksurtani
> | |
> | | Platform Architect, JBoss Data Grid
> | | http://red.ht/data-grid

Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)
_______________________________________________
infinispan-dev mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
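[Editor's note] For readers unfamiliar with the technique discussed above: "staggering" remote GET calls means that instead of sending the read request to all owners of a key at once, the caller asks one owner first and only contacts the others after a short delay if no reply has arrived, so most GETs generate a single RPC and occupy a single remote (OOB) thread. The sketch below is an illustrative toy model of that idea in plain Java; all class and method names are invented here and are not the ISPN-825 implementation or any Infinispan API.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Toy model of a staggered remote GET: schedule one attempt per owner,
// spaced staggerMillis apart; the first successful response wins.
public class StaggeredGet {

    private static final ScheduledExecutorService TIMER =
            Executors.newScheduledThreadPool(2);

    static CompletableFuture<String> staggeredGet(
            List<Callable<String>> owners, long staggerMillis) {
        CompletableFuture<String> result = new CompletableFuture<>();
        for (int i = 0; i < owners.size(); i++) {
            Callable<String> owner = owners.get(i);
            TIMER.schedule(() -> {
                if (result.isDone()) {
                    return; // an earlier owner already answered; skip this RPC
                }
                try {
                    result.complete(owner.call()); // first response wins
                } catch (Exception e) {
                    // this owner failed; a staggered attempt may still succeed
                }
            }, i * staggerMillis, TimeUnit.MILLISECONDS);
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // The primary owner is slow, so the backup (staggered in after
        // 50 ms) supplies the value first.
        List<Callable<String>> owners = List.of(
                () -> { Thread.sleep(500); return "from-primary"; },
                () -> "from-backup");
        System.out.println(
                staggeredGet(owners, 50).get(2, TimeUnit.SECONDS));
        TIMER.shutdownNow();
    }
}
```

In the common case the first owner answers well inside the stagger delay and the later attempts never fire, which is what shrinks the OOB thread pool usage Radim measured; the trade-off is slightly higher latency when the first owner is slow or down.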
