Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Manik Surtani
ISPN-1786 is related to a potential bug in the config parser that didn't pick 
up numVirtualNodes settings.  It also contains a proposal for a different 
default value for numVirtualNodes - which really should be a separate JIRA.

On 27 Jan 2012, at 07:03, Bela Ban wrote:

 Regarding ISPN-1786, I'd like to work with Sanne/Mircea on trying out 
 the new UNICAST2. In my local tests, I got a 15% speedup, but this is 
 JGroups only, so I'm not sure how big the impact would be on Infinispan.
 
 If we see a big speedup, UNICAST2 and NAKACK2 could then be backported 
 to a 3.0.4, and added to 5.1.1.
 
 However, I want to spend some time on testing UNICAST2, so this won't be 
 available tomorrow...
 
 
 On 1/26/12 11:42 PM, Manik Surtani wrote:
 I really didn't want to do this, but it looks like a 5.1.1 will be 
 necessary.  The biggest (critical, IMO, for 5.1.1) issues I see are:
 
 1. https://issues.jboss.org/browse/ISPN-1786 - I presume this has to do with 
 a bug Mircea spotted: virtual nodes were not being enabled by the config 
 parser, which meant that even in the case of tests enabling virtual nodes, 
 we still saw uneven distribution and hence poor performance (well spotted, 
 Mircea).
 2. Related to 1 - I don't think there is a JIRA for this yet - to change the 
 default number of virtual nodes from 1 to 100 or so, after we profile and 
 analyse the impact of enabling this by default.  I'm particularly concerned 
 about (a) memory footprint and (b) effects on Hot Rod relaying topology 
 information back to clients.  Maybe 10 is a saner default as a result.
 3. https://issues.jboss.org/browse/ISPN-1788 - config parser out of sync 
 with XSD!
 4. https://issues.jboss.org/browse/ISPN-1798 - forceReturnValues parameter 
 in the RemoteCacheManager.getCache() method is ignored!
 
 In addition, we may as well include these nice-to-haves:
 
 https://issues.jboss.org/browse/ISPN-1787
 https://issues.jboss.org/browse/ISPN-1793
 https://issues.jboss.org/browse/ISPN-1795
 https://issues.jboss.org/browse/ISPN-1789
 https://issues.jboss.org/browse/ISPN-1784
 
 What do you think?  Anything else you feel is crucial for a 5.1.1?  I'd 
 like to do this sooner rather than later, so we can still focus on 5.2.0.  
 So please respond ASAP.
 
 Paul, I'd also like your thoughts on this from an AS7 perspective.
 
 Cheers
 Manik
 
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani
 
 Lead, Infinispan
 http://www.infinispan.org
 
 
 
 
 
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 -- 
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org






Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Manik Surtani
Good stuff!  Thanks for this.  Yes, I'm ok with numVirtualNodes=48 as a 
default.  Galder, your thoughts from a Hot Rod perspective?

On 27 Jan 2012, at 08:41, Dan Berindei wrote:

 Hi guys
 
 I've been working on a test to search for an optimal default value here:
 https://github.com/danberindei/infinispan/commit/983c0328dc40be9609fcabb767dd46f9b98af464
 
 I'm measuring both the number of keys for which a node is primary
 owner and the number of keys for which it is one of the owners
 compared to the ideal distribution (K/N keys on each node). The former
 tells us how much more work the node could be expected to do, the
 latter how much memory the node is likely to need.
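
A toy version of this measurement, for anyone who wants to play with the
numbers (illustrative Python only - not the actual test linked above; the
node counts, key counts and RNG seed are arbitrary):

```python
import random
from bisect import bisect_right

WHEEL = 2 ** 31  # size of the hash wheel

def primary_counts(num_nodes, num_vnodes, num_keys, rng):
    """Count how many keys each physical node primary-owns when every
    node claims num_vnodes random positions on the wheel."""
    positions = sorted(
        (rng.randrange(WHEEL), node)
        for node in range(num_nodes)
        for _ in range(num_vnodes)
    )
    points = [pos for pos, _ in positions]
    counts = [0] * num_nodes
    for _ in range(num_keys):
        key_hash = rng.randrange(WHEEL)
        # the primary owner is the next position clockwise from the key's hash
        owner = positions[bisect_right(points, key_hash) % len(points)][1]
        counts[owner] += 1
    return counts

rng = random.Random(0)
for vnodes in (1, 10, 48):
    counts = primary_counts(4, vnodes, 40_000, rng)
    ideal = 40_000 / 4
    # max/ideal shrinks toward 1.0 as the number of virtual nodes grows
    print(vnodes, round(max(counts) / ideal, 2))
```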
 
 I'm only running 1 loops, so the max figure is not the absolute
 maximum. But it's certainly bigger than the 0. percentile.
 
 The full results are here:
 http://fpaste.org/cI1r/
 
 The uniformity of the distribution goes up with the number of virtual
 nodes but down with the number of physical nodes. I think we should go
 with a default of 48 nodes (or 50 if you prefer decimal). With 32
 nodes, there's only a 0.1% chance that a node will hold more than 1.35
 * K/N keys, and a 0.1% chance that the node will be primary owner for
 more than 1.5 * K/N keys.
 
 We could go higher, but we run the risk of node addresses 
 colliding on the hash wheel. According to the formula on the Birthday
 Paradox page (http://en.wikipedia.org/wiki/Birthday_problem), we only
 need 2072 addresses on our 2^31 hash wheel to get a 0.1% chance of
 collision. That means 21 nodes * 96 virtual nodes, 32 nodes * 64
 virtual nodes or 43 nodes * 48 virtual nodes.
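
That estimate is easy to double-check with the usual birthday-problem
approximation (a quick Python sketch; the exact constant used above may
differ slightly from this rounding):

```python
import math

# Number of random draws n from d slots that first reaches collision
# probability p, per the birthday-problem approximation:
#   n ~= sqrt(2 * d * ln(1 / (1 - p)))
def draws_for_collision_prob(d, p):
    return math.sqrt(2 * d * math.log(1.0 / (1.0 - p)))

wheel = 2 ** 31
n = draws_for_collision_prob(wheel, 0.001)
print(round(n))  # within one of the ~2072 addresses quoted above
# 43 nodes * 48 virtual nodes = 2064 addresses stays just under that bound
```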
 
 Cheers
 Dan
 
 
 On Fri, Jan 27, 2012 at 12:37 AM, Sanne Grinovero sa...@infinispan.org 
 wrote:
 On 26 January 2012 22:29, Manik Surtani ma...@jboss.org wrote:
 
 On 26 Jan 2012, at 20:16, Sanne Grinovero wrote:
 
 +1
 Which default? 100? A prime?
 
 We should also make sure the CH function is optimized for this being on.
 
 
 Yes, we should profile a session with vnodes enabled.
 
 Manik, we're using VNodes in our performance tests. The proposal is to 
 provide a good default value, as the feature is currently 
 disabled by default.
 
 Cheers,
 Sanne
 
 
 
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani
 
 Lead, Infinispan
 http://www.infinispan.org
 
 
 
 

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org






Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Bela Ban
I assume the number of vnodes cannot be changed at runtime, dynamically 
adapting to a changing environment?

I understand everybody has to have the exact same number of vnodes for 
reads and writes to hit the correct node, right?


-- 
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat


Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Manik Surtani
On 27 Jan 2012, at 10:52, Bela Ban wrote:

 I assume the number of vnodes cannot be changed at runtime, dynamically 
 adapting to a changing environment?
 
 I understand everybody has to have the exact same number of vnodes for 
 reads and writes to hit the correct node, right?

Yes.

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org





Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus

On 26 Jan 2012, at 23:04, Sanne Grinovero wrote:

 Very nice!
 All my previous tests also confirm that there is a correlation between PUT 
 and GET performance, when one increases the other goes down.
 
 These PUT operations are doing a GET as well, correct? I'd love to see such 
 graphs using SKIP_REMOTE_LOOKUP.
it is configured with unsafe return values. With safe return, the values might 
get even better...
 How long are you warming up the VM? As mentioned in the other thread, I've 
 discovered that even under high load it will take more than 15 minutes before 
 all of Infinispan's code is running in compiled mode.
The warmup is 100k operations, doesn't seem too much.




Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Manik Surtani

On 27 Jan 2012, at 11:07, Mircea Markus wrote:

 
 These PUT operations are doing a GET as well, correct? I'd love to see such 
 graphs using SKIP_REMOTE_LOOKUP.
 it is configured with unsafe return values. With safe return, the values 
 might get even better…

Eh?  Why would safe return values be better?  You then need to do a GET before 
the PUT … 

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani

On 25 Jan 2012, at 08:51, Dan Berindei wrote:

 Slightly related, I wonder if Manik's comment is still true:
 
if at all possible, try not to use JGroups' ANYCAST for now.
 Multiple (parallel) UNICASTs are much faster.)
 
 Intuitively it shouldn't be true, unicasts+FutureCollator do basically
 the same thing as anycast+GroupRequest.

Yes, this is outdated and may not be an issue with JGroups 3.x anymore.  The 
problem, IIRC, was that ANYCAST would end up doing UNICASTs in sequence and not 
in parallel.

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani

On 25 Jan 2012, at 09:42, Bela Ban wrote:

 No, parallel unicasts will be faster, as an anycast to A,B,C sends the 
 unicasts sequentially

Is this still the case in JG 3.x?

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani

On 25 Jan 2012, at 17:09, Dan Berindei wrote:

 
 Keep in mind that we also want to introduce eventual consistency - I
 think that's going to eliminate our optimization opportunity here
 because we'll need to get the values from a majority of owners (if not
 all the owners).


Also keep in mind that an eventually consistent mode will be a (non-default) 
option.  I still see most people using Infinispan in a strongly consistent mode.
--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Sanne Grinovero
On 27 January 2012 11:07, Mircea Markus mircea.mar...@jboss.com wrote:

 The warmup is 100k operations, doesn't seem too much.

I'm now experimenting with -XX:CompileThreshold=10 , and it's fairly
warmed up only after 100k Write operations and a million read
operations. And that's all in the same VM!

Maybe you could try RadarGun making sure that each VM runs at least a
million operations in the warmup phase? Maybe it doesn't matter at
all, but I'd measure rather than guess it.
Also your test is different than mine; maybe a better strategy is to
figure out what's your correct warmup by looking at the output of
-XX:+PrintCompilation, and see how long it takes before it's
relatively quiet.


Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani

On 25 Jan 2012, at 17:09, Dan Berindei wrote:

 I think we already have a JIRA to make PutKeyValueCommands return the
 previous value, that would eliminate lots of GetKeyValueCommands and
 it would actually improve the performance of puts - we should probably
 make this a priority.

Yes, this is definitely important.

https://issues.jboss.org/browse/ISPN-317



--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Bela Ban
yes.

On 1/27/12 12:13 PM, Manik Surtani wrote:

 On 25 Jan 2012, at 09:42, Bela Ban wrote:

 No, parallel unicasts will be faster, as an anycast to A,B,C sends the
 unicasts sequentially

 Is this still the case in JG 3.x?


-- 
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat


Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani

On 25 Jan 2012, at 14:22, Mircea Markus wrote:

 Agreed that having a configurable remote get policy makes sense. 
 We already have a JIRA for this[1], I'll start working on it as the 
 performance results are haunting me.
 I'd like to have Dan's input on this as well first, as he has worked with 
 remote gets and I still don't know why null results are not considered valid 
 :)
 
 [1] https://issues.jboss.org/browse/ISPN-825

IMO we should work on a configurable scheme such as:

numInitialRemoteGets (default 1)
remoteGetTimeout (default 500ms?  What's our average remote GET time anyway?)

So with this scheme, we'd:

* Randomly select 'numInitialRemoteGets' owners from dataOwners
* Send them the remote get
* After remoteGetTimeout, send a remote get to the next data owner, 
until we run out of data owners
* Return the first valid response.
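
The scheme above could be sketched roughly as follows (hypothetical Python,
using threads to stand in for async RPCs; the names `staggered_get` and
`fetch`, and the timeouts, are made up for illustration, not real
Infinispan API):

```python
import concurrent.futures as cf

def staggered_get(owners, fetch, num_initial=1, stagger_timeout=0.5):
    """Query num_initial owners at once; after every stagger_timeout
    without a valid (non-None) response, add the next owner, until one
    answers or we run out of owners.  Note: this toy version can spin
    forever if a fetch never completes."""
    with cf.ThreadPoolExecutor(max_workers=max(1, len(owners))) as pool:
        pending = {pool.submit(fetch, o) for o in owners[:num_initial]}
        next_owner = num_initial
        while pending:
            done, pending = cf.wait(pending, timeout=stagger_timeout,
                                    return_when=cf.FIRST_COMPLETED)
            for future in done:
                value = future.result()
                if value is not None:  # first valid response wins
                    return value
            if next_owner < len(owners):  # stagger in the next data owner
                pending.add(pool.submit(fetch, owners[next_owner]))
                next_owner += 1
    return None

# n1 returns an invalid (None) response, so the next owner is consulted
print(staggered_get(["n1", "n2", "n3"],
                    lambda o: None if o == "n1" else f"value@{o}"))
```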

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Bela Ban
10 is an awfully small value; in my experience (I had a default of 100 
for my JGroups perf tests), this made the tests longer than with the 
default (which is 1 IIRC) !


On 1/27/12 12:20 PM, Sanne Grinovero wrote:

 I'm now experimenting with -XX:CompileThreshold=10 , and it's fairly
 warmed up only after 100k Write operations and a million read
 operations. And that's all in the same VM!

 Maybe you could try RadarGun making sure that each VM runs at least a
 million operations in the warmup phase? Maybe it doesn't matter at
 all, but I'd measure rather than guess it.
 Also your test is different than mine; maybe a better strategy is to
 figure out what's your correct warmup by looking at the output of
 -XX:+PrintCompilation, and see how long it takes before it's
 relatively quiet.

-- 
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus
 Are both reads and writes marked as OOB ? Then they share the same OOB
 thread pool !
 
 
 We do mark reads as OOB
 (DistributionManagerImpl.retrieveFromRemoteSource). So reads and
 writes share the same OOB pool.
I was looking at non-transactional puts, and these are not OOB. This benchmark 
uses optimistic transactions though, and those send prepares and commits async.
 
 I remember not long ago Galder extended RadarGun to monitor GC
 activity during the test. Mircea, would it be possible to use that to
 also monitor the number of active threads in the OOB and Incoming
 pools?
I'll take a look. 


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus

On 27 Jan 2012, at 10:59, Manik Surtani wrote:

 
 On 26 Jan 2012, at 23:04, Sanne Grinovero wrote:
 
 How long are you warming up the VM? As mentioned in the other thread, I've 
 discovered that even under high load it will take more than 15 minutes 
 before all of Infinispan's code is running in compiled mode.
 
 We should be able to tune this though?  IIRC the JVM takes options 
 controlling after how many invocations a method gets compiled.
radargun doesn't support time-based warmup, only operation-based warmup. 
Not hard to implement though.


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Sanne Grinovero
On 27 January 2012 11:37, Mircea Markus mircea.mar...@jboss.com wrote:

 On 27 Jan 2012, at 10:59, Manik Surtani wrote:


 On 26 Jan 2012, at 23:04, Sanne Grinovero wrote:

 How long are you warming up the VM? As mentioned in the other thread, I've
 discovered that even under high load it will take more than 15 minutes
 before all of Infinispan's code is running in compiled mode.


  We should be able to tune this though?  IIRC the JVM takes options
  controlling after how many invocations a method gets compiled.

  radargun doesn't support time-based warmup, only operation-based warmup.
  Not hard to implement though.

just have it log JIT activity, then you can figure out what number of
operations you need. After all you only need an approximation, and
maybe it's fine already.


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus

On 27 Jan 2012, at 11:20, Sanne Grinovero wrote:

 
 I'm now experimenting with -XX:CompileThreshold=10 , and it's fairly
 warmed up only after 100k Write operations and a million read
 operations. And that's all in the same VM! 

 Maybe you could try RadarGun making sure that each VM runs at least a
 million operations in the warmup phase? Maybe it doesn't matter at
 all, but I'd measure rather than guess it.
 Also your test is different than mine; maybe a better strategy is to
 figure out what's your correct warmup by looking at the output of
 -XX:+PrintCompilation, and see how long it takes before it's
 relatively quiet.

Thanks for the tips, I'll improve the warmup based on your suggestion: 
https://sourceforge.net/apps/trac/radargun/ticket/26


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus

On 27 Jan 2012, at 11:36, Bela Ban wrote:

 10 is an awfully small value; in my experience (I had a default of 100 
 for my JGroups perf tests), this made the tests longer than with the 
 default (which is 1 IIRC) !
default is 1500: 
http://java.sun.com/docs/books/performance/1st_edition/html/JPAppHotspot.fm.html


Re: [infinispan-dev] Looking into OProfile logs

2012-01-27 Thread Manik Surtani
Also, let's please move this discussion to infinispan-dev… 


On 25 Jan 2012, at 17:57, Dan Berindei wrote:

 On Wed, Jan 25, 2012 at 6:32 PM, Sanne Grinovero sa...@infinispan.org wrote:
 On 25 January 2012 15:56, Dan Berindei dan.berin...@gmail.com wrote:
 On Wed, Jan 25, 2012 at 3:16 PM, Sanne Grinovero sa...@infinispan.org 
 wrote:
 
  - ComponentRegistry.getComponent # can't we make sure this is not
 needed at runtime, or create direct accessors for the hottest ones,
 like Configuration.class ? I'll make a proposal and measure it.
 
 
 I had an idea of registering lots of CacheInboundInvocationHandlers in
 CommandAwareRpcDispatcher instead of a single global
 InboundInvocationHandler but I never implemented it. Are you thinking
 along the same lines?
 
 
 No I've been patching CacheRpcCommandExternalizer instead. But please
 change that one if you have an idea.
 
 
 Ok, I'll do that.
 
 
  - DefaultConsistentHash.isKeyLocalToAddress # Should be possible to
 speed up this one
 
 
 I didn't think of any optimization specific for isKeyLocalToAddress,
 but we could precompute the list of owners for each hash wheel
 segment and store that in the positionValues array instead of a
 single address. It would get kind of expensive with huge numbers of
 virtual nodes, so it would be nice if we could prevent the users from
 using thousands of virtual nodes.
 
 Address interning could help us somewhat, if we could eliminate the
 equals() calls with reference equality checks.
 
 Right, but it means that all Address instances should be created via the same
 service, including unmarshalled ones.
 It would be nice to do, but it sounds dangerous without an
 extensive refactoring.
 I'd try something like this by introducing a new type, mandating the
 type on the API, and doing this possibly after changing the Address
 collections to an ad-hoc Collection as suggested last week; not sure
 yet how it would look, but let's evaluate options after the
 custom collections are in place.
 
 
 I was actually thinking that knowing a1 != a2 implies !a1.equals(a2) would
 enable us to use even more efficient custom collections.
 But I agree that replacing all addresses with interned ones is not an easy 
 task.
 
 - boolean 
 org.infinispan.transaction.xa.GlobalTransaction.equals(java.lang.Object)
 # let's see if we can do something about this.
 
 
 Checking the address is more expensive than checking the id, we should
 check the id first.
 Other than that, the only thing we can do is call it less often :)
 
 Any idea on calling it less often?
 
 
 Nope, no idea I'm afraid.
 
 
 - jni_GetObjectField # would like to know where this is coming from
 
 
 It looks like it's PlainDatagramSocketImpl.send and receive:
 
 6184  0.2442  libnet.solibnet.so
 Java_java_net_PlainDatagramSocketImpl_send
  7483 34.3556  libjvm.solibnet.so
 jni_GetObjectField
 
 3849  0.1520  libnet.solibnet.so
 Java_java_net_PlainDatagramSocketImpl_receive0
  8221 34.7773  libjvm.solibnet.so
 jni_GetObjectField
 
 Right, that's likely. Would like to make sure.
 
 
 This is certainly a big part of where it's coming from - but perhaps
 there are other places as well.
 
 
 I also have a question, are you using virtual nodes? We should enable
 it in our perf tests (say with numVirtualNodes = 10), I suspect it
 will make DCH.locate and DCH.isKeyLocalToAddress even slower.
 
 We've discussed VNodes a lot on IRC, you should join us.
 [My tests where without, but have already applied the patch to enable it]
 
 
 I'm also going to update VNodesCHPerfTest to look closer at key distribution.
 
 Cheers
 Dan

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org






Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Mircea Markus
I've created a JIRA to track this: https://issues.jboss.org/browse/ISPN-1801

 I understand everybody has to have the exact same number of vnodes for 
 reads and writes to hit the correct node, right ?
 Yes.

That's true, but it is not a good thing: numVirtualNodes should be proportional 
to the node's capacity, i.e. more powerful machines in the cluster should 
be assigned more virtual nodes.
This way we can better control the load. A node would need to send its 
configured numVirtualNodes when joining in order to support this, but that's a 
thing we already do for TACH.
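
The capacity-proportional idea can be illustrated with a small simulation
(illustrative Python only; the vnode counts, key count and seed are
arbitrary): a node registering twice as many virtual nodes ends up
primary-owning roughly twice the key share.

```python
import random
from bisect import bisect_right

WHEEL = 2 ** 31  # size of the hash wheel

def key_share(vnodes_per_node, num_keys, rng):
    """Fraction of keys primary-owned by each node when node i registers
    vnodes_per_node[i] virtual nodes on the wheel."""
    positions = sorted(
        (rng.randrange(WHEEL), node)
        for node, vnodes in enumerate(vnodes_per_node)
        for _ in range(vnodes)
    )
    points = [pos for pos, _ in positions]
    counts = [0] * len(vnodes_per_node)
    for _ in range(num_keys):
        # owner is the next virtual-node position clockwise from the key
        i = bisect_right(points, rng.randrange(WHEEL)) % len(points)
        counts[positions[i][1]] += 1
    return [c / num_keys for c in counts]

# third node has double capacity, so double the virtual nodes
print(key_share([48, 48, 96], 30_000, random.Random(7)))
```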

Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Dan Berindei
On Fri, Jan 27, 2012 at 2:35 PM, Mircea Markus mircea.mar...@jboss.com wrote:
 I've created a JIRA to track this: https://issues.jboss.org/browse/ISPN-1801

 I understand everybody has to have the exact same number of vnodes for

 reads and writes to hit the correct node, right ?

 Yes.

 That's true, but it is not a good thing: numVirtNodes should be proportional
 with the node's capacity, i.e. more powerful machines in the cluster should
 have assigned more virtual nodes.
 This way we can better control the load. A node would need to send its
 configured numVirtualNodes when joining in order to support this, but that's
 a thing we already do for  TACH.


We should use a different mechanism than the TopologyAwareUUID we use
for TACH, because the address is sent with every command. The capacity
instead should be fairly static. We may want to make it changeable at
runtime, but it will take a state transfer to propagate that info to
all the members of the cluster (because the nodes' CHs need to stay in
sync).

In fact, I can imagine users wanting to balance key ownership between
machines/racks/sites with TACH, but without actually using RELAY - the
TopologyAwareUUID is just an overhead for them.

Cheers
Dan



Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Sanne Grinovero
My experiments were using the default JVM compile settings,
with these others:

-Xmx2G -Xms2G -XX:MaxPermSize=128M -XX:+HeapDumpOnOutOfMemoryError
-Xss512k -XX:HeapDumpPath=/tmp/java_heap
-Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
-Dlog4j.configuration=file:/opt/log4j.xml -XX:+PrintCompilation
-Xbatch -server -XX:+UseCompressedOops -XX:+UseLargePages
-XX:LargePageSizeInBytes=2m -XX:+AlwaysPreTouch

And it's with these options that after 30 minutes of full-stress it's
still not finished warming up all of Infinispan+JGroups code.
After that I stated I was going to *experiment* with
-XX:CompileThreshold=10, to see if I could get it to compile in a
shorter time, just to save me from waiting too long in performance
tests. It doesn't seem to matter much, so I'm reverting it back to the
above values (default for this parameter for my VM is 1).

-- Sanne


On 27 January 2012 11:54, Bela Ban b...@redhat.com wrote:
 Make sure you check the values for the server JVM, not the client JVM !
 1500 might be for the client JVM...

 On 1/27/12 12:50 PM, Mircea Markus wrote:

 On 27 Jan 2012, at 11:36, Bela Ban wrote:

 10 is an awfully small value; in my experience (I had a default of 100
 for my JGroups perf tests), this made the tests longer than with the
 default (which is 1 IIRC) !
 default is 1500: 
 http://java.sun.com/docs/books/performance/1st_edition/html/JPAppHotspot.fm.html







Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Mircea Markus
On 27 Jan 2012, at 13:31, Sanne Grinovero wrote:
 My experiments were using the default JVM compile settings, with these others:
 
 -Xmx2G -Xms2G -XX:MaxPermSize=128M -XX:+HeapDumpOnOutOfMemoryError
 -Xss512k -XX:HeapDumpPath=/tmp/java_heap
 -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
 -Dlog4j.configuration=file:/opt/log4j.xml -XX:+PrintCompilation
 -Xbatch -server -XX:+UseCompressedOops -XX:+UseLargePages
 -XX:LargePageSizeInBytes=2m -XX:+AlwaysPreTouch
 
 And it's with these options that after 30 minutes of full-stress it's
 still not finished warming up all of Infinispan+JGroups code.
 After that I stated I was going to *experiment* with
 -XX:CompileThreshold=10, to see if I could get it to compile in a
 shorter time, just to save me from waiting too long in performance
 tests. It doesn't seem to matter much, so I'm reverting it back to the
 above values (default for this parameter for my VM is 1).

That's surprising, I'd say that in 30 mins of invocations all the critical 
paths are invoked far more times than that. E.g. the number of reads/sec 
is in the thousands (20k on the cluster lab). Might it be that this param is 
ignored, or that it collides with another -XX flag?


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Mircea Markus

On 26 Jan 2012, at 22:42, Manik Surtani wrote:
 I really didn't want to do this, but it looks like a 5.1.1 will be necessary. 
  The biggest (critical, IMO, for 5.1.1) issues I see are:
 
 1. https://issues.jboss.org/browse/ISPN-1786 - I presume this has to do with 
 a bug Mircea spotted that virtual nodes were not being enabled by the config 
 parser.  Which meant that even in the case of tests enabling virtual nodes, 
 we still saw uneven distribution and hence poor performance (well spotted, 
 Mircea).  
 2. Related to 1, I don't think there is a JIRA for this yet, to change the 
 default number of virtual nodes from 1 to 100 or so.  After we profile and 
 analyse the impact of enabling this by default.  I'm particularly concerned 
 about (a) memory footprint and (b) effects on Hot Rod relaying topology 
 information back to clients.  Maybe 10 is a more sane default as a result.

There is one now:  https://issues.jboss.org/browse/ISPN-1801

 3. https://issues.jboss.org/browse/ISPN-1788 - config parser out of sync with 
 XSD!
 4. https://issues.jboss.org/browse/ISPN-1798 - forceReturnValues parameter in 
 the RemoteCacheManager.getCache() method is ignored!

I'm sure there will be some others as the community starts reporting! But 
that's good, as we can provide a quick release for the main issues.

 In addition, we may as well include these nice-to-haves:
 
 https://issues.jboss.org/browse/ISPN-1787
 https://issues.jboss.org/browse/ISPN-1793
 https://issues.jboss.org/browse/ISPN-1795

these ^^ are already in master so we can include them straight away.  
 https://issues.jboss.org/browse/ISPN-1789

this looks like a low prio, as it doesn't have an impact on the functionality

 https://issues.jboss.org/browse/ISPN-1784
a pull request has been sent, so IMO it makes sense to include it.
 
 What do you think?  Anything else you feel that is crucial for a 5.1.1?  I'd 
 like to do this sooner rather than later, so we can still focus on 5.2.0.  So 
 please respond asap.
As everybody is in the performance mindset, I think the following issues, in 
this order, would be a quick win:
https://issues.jboss.org/browse/ISPN-825
https://issues.jboss.org/browse/ISPN-317
https://issues.jboss.org/browse/ISPN-1748

I don't think implementing them would take more than a week, and their 
impact on performance would be massive. During this week we can also gather and 
fix the most critical issues raised against 5.1.Final.

Cheers,
Mircea



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] default value for virtualNodes

2012-01-27 Thread Dan Berindei
On Fri, Jan 27, 2012 at 2:53 PM, Mircea Markus mircea.mar...@jboss.com wrote:
 That's true, but it is not a good thing: numVirtualNodes should be proportional
 to the node's capacity, i.e. more powerful machines in the cluster should
 be assigned more virtual nodes.
 This way we can better control the load. A node would need to send its
 configured numVirtualNodes when joining in order to support this, but that's
 something we already do for TACH.


 We should use a different mechanism than the TopologyAwareUUID we use
 for TACH, because the address is sent with every command.
 so every command sends cluster, rack and machine info? That sounds a bit 
 redundant. Can't we just send them once with the JOIN request?

When RELAY is enabled, it actually needs the topology info in order to
relay messages between sites.
I agree that the topology info should never change, but RELAY requires
it for now so we can't avoid it (in the general case).

Cheers
Dan

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
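[Editor's note: to illustrate the numVirtualNodes discussion above, here is a minimal, hypothetical consistent-hash ring sketch. It is not Infinispan's actual hashing code; class and method names (VirtualNodeRing, addNode, ownerOf) are made up. Each node claims numVirtualNodes positions on the ring, and more positions per node gives a more even key distribution.]

```java
import java.util.*;

// Sketch only: a consistent-hash ring where each node occupies
// numVirtualNodes positions. A key is owned by the node at the first
// ring position at or after the key's hash (wrapping around).
public class VirtualNodeRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node, int numVirtualNodes) {
        for (int i = 0; i < numVirtualNodes; i++) {
            // derive a distinct non-negative ring position per virtual node
            ring.put((node + "#" + i).hashCode() & Integer.MAX_VALUE, node);
        }
    }

    String ownerOf(Object key) {
        int h = key.hashCode() & Integer.MAX_VALUE;
        Map.Entry<Integer, String> e = ring.ceilingEntry(h);
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }

    public static void main(String[] args) {
        VirtualNodeRing ring = new VirtualNodeRing();
        ring.addNode("nodeA", 100);
        ring.addNode("nodeB", 100);
        // count how many of 10000 keys each node owns; with many virtual
        // nodes the split is roughly even, with 1 it can be very skewed
        Map<String, Integer> counts = new HashMap<>();
        for (int k = 0; k < 10000; k++)
            counts.merge(ring.ownerOf("key" + k), 1, Integer::sum);
        System.out.println(counts);
    }
}
```

This also shows why capacity-proportional weighting is cheap to add: a bigger machine simply registers more virtual nodes.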


Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Bela Ban

The branch is JGRP-1417 and the config for UNICAST2 is:

<UNICAST2
  max_bytes="20M"
  xmit_table_num_rows="20"
  xmit_table_msgs_per_row="1"
  xmit_table_max_compaction_time="1"
  max_msg_batch_size="100"/>

I've attached the config I've used for UPerf.

Cheers,




On 1/27/12 3:12 PM, Mircea Markus wrote:

On 27 Jan 2012, at 07:03, Bela Ban wrote:

Regarding ISPN-1786, I'd like to work with Sanne/Mircea on trying out
the new UNICAST2. In my local tests, I got a 15% speedup, but this is
JGroups only, so I'm not sure how big the impact would be on Infinispan.


nice! I can trigger a radargun run very quickly, just let me know the branch.
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


--
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat

<!--
  Fast configuration for local mode, i.e. all members reside on the same host. Setting ip_ttl to 0 means that
  no multicast packet will make it outside the local host.
  Therefore, this configuration will NOT work for cluster members residing on different hosts!

  Author: Bela Ban
-->

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups
        http://www.jgroups.org/schema/JGroups-3.0.xsd">

    <!--
     mcast_addr="232.5.5.5" mcast_port="5"
     ucast_recv_buf_size="20M"
     ucast_send_buf_size="640K"
     mcast_recv_buf_size="25M"
     mcast_send_buf_size="640K"
     tos="8"
     ip_ttl="0"
     -->

    <UDP
         mcast_addr="232.5.5.5" mcast_port="5"
         ucast_recv_buf_size="20M"
         ucast_send_buf_size="640K"
         mcast_recv_buf_size="25M"
         mcast_send_buf_size="640K"
         tos="8"
         ip_ttl="0"

         loopback="false"

         discard_incompatible_packets="true"
         max_bundle_size="64000"
         max_bundle_timeout="30"
         bundler_type="old"
         enable_bundling="true"
         bundler_capacity="20"
         enable_unicast_bundling="true"
         enable_diagnostics="true"
         thread_naming_pattern="cl"

         timer_type="new"
         timer.min_threads="2"
         timer.max_threads="4"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="10"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>

    <!--<DISCARD down="0.1" up="0.1"/>-->

    <PING timeout="1000"
          num_initial_members="3"/>
    <MERGE2 max_interval="3"
            min_interval="1"/>
    <FD_SOCK/>

    <!--<pbcast.NAKACK exponential_backoff="300"
                   max_msg_batch_size="100"
                   xmit_stagger_timeout="200"
                   use_mcast_xmit="false"
                   discard_delivered_msgs="true"/>-->

    <pbcast.NAKACK2 xmit_interval="1000"
                    xmit_table_num_rows="100"
                    xmit_table_msgs_per_row="1"
                    xmit_table_max_compaction_time="1"
                    max_msg_batch_size="100"
                    use_mcast_xmit="false"
                    discard_delivered_msgs="true"/>

    <UNICAST2
        max_bytes="20M"
        xmit_table_num_rows="20"
        xmit_table_msgs_per_row="1"
        xmit_table_max_compaction_time="1"
        max_msg_batch_size="100"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="5"
                   max_bytes="8m"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <UFC max_credits="4M"
         min_threshold="0.2"/>
    <MFC max_credits="4M"
         min_threshold="0.2"/>
    <FRAG2 frag_size="6"/>
</config>
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Manik Surtani

On 27 Jan 2012, at 14:09, Mircea Markus wrote:

 
 On 26 Jan 2012, at 22:42, Manik Surtani wrote:
 I really didn't want to do this, but it looks like a 5.1.1 will be 
 necessary.  The biggest (critical, IMO, for 5.1.1) issues I see are:
 
 1. https://issues.jboss.org/browse/ISPN-1786 - I presume this has to do with 
 a bug Mircea spotted that virtual nodes were not being enabled by the config 
 parser.  Which meant that even in the case of tests enabling virtual nodes, 
 we still saw uneven distribution and hence poor performance (well spotted, 
 Mircea).  
 2. Related to 1, I don't think there is a JIRA for this yet, to change the 
 default number of virtual nodes from 1 to 100 or so.  After we profile and 
 analyse the impact of enabling this by default.  I'm particularly concerned 
 about (a) memory footprint and (b) effects on Hot Rod relaying topology 
 information back to clients.  Maybe 10 is a more sane default as a result.
 
 There is one now:  https://issues.jboss.org/browse/ISPN-1801
 
 3. https://issues.jboss.org/browse/ISPN-1788 - config parser out of sync 
 with XSD!
 4. https://issues.jboss.org/browse/ISPN-1798 - forceReturnValues parameter 
 in the RemoteCacheManager.getCache() method is ignored!
 
 I'm sure there will be some others as the community starts reporting! But 
 that's good, as we can provide a quick release for the main issues.
 
 In addition, we may as well include these nice-to-haves:
 
 https://issues.jboss.org/browse/ISPN-1787
 https://issues.jboss.org/browse/ISPN-1793
 https://issues.jboss.org/browse/ISPN-1795
 
 these ^^ are already in master so we can include them straight away.  
 https://issues.jboss.org/browse/ISPN-1789
 
 this looks like a low prio, as it doesn't have an impact on the functionality

Agreed, but it is such a trivial fix and it greatly affects usability (who 
wants to see such verbose and misleading log messages?)

 
 https://issues.jboss.org/browse/ISPN-1784
 a pull request has been sent, so IMO it makes sense to include it.
 
 What do you think?  Anything else you feel that is crucial for a 5.1.1?  I'd 
 like to do this sooner rather than later, so we can still focus on 5.2.0.  
 So please respond asap.
 As everybody is in the performance mindset, I think the following issues, in 
 this order, would be a quick win:
 https://issues.jboss.org/browse/ISPN-825
 https://issues.jboss.org/browse/ISPN-317
 https://issues.jboss.org/browse/ISPN-1748

-1 to all 3.  I think these are all non-trivial and shouldn't be in a point 
release - even if it is a week's worth of work.

Cheers
Manik
--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Mircea Markus
There's an initialisation error indicating a dependency on a test class [1].
Seems like JGroups is not JAR-less anymore :-)

[1] 
at 
org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:236)
at 
org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java
:875)
at 
org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:630)
at 
org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:619)
at 
org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:523)
at 
org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:200)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:234)
... 17 more
Caused by: java.lang.ExceptionInInitializerError
at 
org.jgroups.conf.ClassConfigurator.<clinit>(ClassConfigurator.java:57)
at org.jgroups.stack.Protocol.init(Protocol.java:57)
at org.jgroups.stack.ProtocolStack.init(ProtocolStack.java:144)
at org.jgroups.JChannel.init(JChannel.java:793)
at org.jgroups.JChannel.init(JChannel.java:167)
at org.jgroups.JChannel.init(JChannel.java:137)
at 
org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:327)
at 
org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:250)
at 
org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannelAndRPCDispatcher(JGroupsTransport.java:290)
at 
org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:168)
... 22 more
Caused by: java.lang.ClassNotFoundException: 
org.jgroups.tests.perf.MPerf$MPerfHeader
at org.jgroups.util.Util.loadClass(Util.java:2661)
at org.jgroups.conf.ClassConfigurator.init(ClassConfigurator.java:84)
at 
org.jgroups.conf.ClassConfigurator.<clinit>(ClassConfigurator.java:54)
... 31 more


On 27 Jan 2012, at 14:24, Bela Ban wrote:
 The branch is JGRP-1417 and the config for UNICAST2 is:
 
 <UNICAST2
  max_bytes="20M"
  xmit_table_num_rows="20"
  xmit_table_msgs_per_row="1"
  xmit_table_max_compaction_time="1"
  max_msg_batch_size="100"/>
 
 I've attached the config I've used for UPerf.
 
 Cheers,
 
 
 
 
 On 1/27/12 3:12 PM, Mircea Markus wrote:
 On 27 Jan 2012, at 07:03, Bela Ban wrote:
 Regarding ISPN-1786, I'd like to work with Sanne/Mircea on trying out
 the new UNICAST2. In my local tests, I got a 15% speedup, but this is
 JGroups only, so I'm not sure how big the impact would be on Infinispan.
 
 nice! I can trigger a radargun run very quickly, just let me know the branch.
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 -- 
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat
 fast.xml
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Mircea Markus

On 27 Jan 2012, at 15:08, Bela Ban wrote:

 Build the JGroups JAR with ./build.sh jar, *not* via maven !
 
 I attached the JAR for you.
Thanks!
 JGroups *is* and *will remain* JAR less ! :-)

Sorry for losing faith :)
Might make sense to have mvn install work as well though; I think people 
would expect it to behave correctly when they see a pom.xml. 
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Bela Ban


On 1/27/12 4:26 PM, Mircea Markus wrote:

 On 27 Jan 2012, at 15:08, Bela Ban wrote:

 Build the JGroups JAR with ./build.sh jar, *not* via maven !

 I attached the JAR for you.
 Thanks!
 JGroups *is* and *will remain* JAR less ! :-)

 Sorry for losing faith :)
 Might make sense to have mvn install work as well though; I think people 
 would expect it to behave correctly when they see a pom.xml.

Are you volunteering ? I'd be happy to integrate your changes ! :-)

-- 
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] very interesting performance results

2012-01-27 Thread Dan Berindei
On Fri, Jan 27, 2012 at 3:43 PM, Mircea Markus mircea.mar...@jboss.com wrote:
 On 27 Jan 2012, at 13:31, Sanne Grinovero wrote:
 My experiments were using the default JVM settings regarding compile
 settings, with these others:

 -Xmx2G -Xms2G -XX:MaxPermSize=128M -XX:+HeapDumpOnOutOfMemoryError
 -Xss512k -XX:HeapDumpPath=/tmp/java_heap
 -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
 -Dlog4j.configuration=file:/opt/log4j.xml -XX:+PrintCompilation
 -Xbatch -server -XX:+UseCompressedOops -XX:+UseLargePages
 -XX:LargePageSizeInBytes=2m -XX:+AlwaysPreTouch

 And it's with these options that after 30 minutes of full-stress it's
 still not finished warming up all of Infinispan+JGroups code.
 After that I stated I was going to *experiment* with
 -XX:CompileThreshold=10, to see if I could get it to compile in a
 shorter time, just to save me from waiting too long in performance
 tests. It doesn't seem to matter much, so I'm reverting it back to the
 above values (default for this parameter for my VM is 1).

 That's surprising, I'd say that in 30 mins of invocations all the critical 
 paths are invoked far more times than that. E.g. the number of 
 reads/sec is in the thousands (20k on the cluster lab). Might it be that this 
 param is ignored, or that it collides with another -XX flag?


PrintCompilation tells you which methods are compiled, but it doesn't
tell you which methods were inlined into them. So something like this
can (and probably does) happen:

1. ConcurrentSkipListMap.doPut is compiled.
2. UNICAST2.down() (assuming JGroups 3.0.3.Final) calls
AckSenderWindow.add() - ConcurrentSkipListMap.put - doPut. This gets
compiled, and everything is inlined in it.
3. The conditions under which ConcurrentSkipListMap.put is called change
dramatically, so the initial optimizations are no longer valid.
4. Eventually the timers and everything else that's using
ConcurrentSkipListMap call put() 1 times and
ConcurrentSkipListMap.doPut is compiled again.

AFAIK oprofile will not report any inlined methods, so if
ConcurrentSkipListMap appears in the report it means that it is still
called a fair amount of times. On the other hand, oprofile also
doesn't report interpreted methods - so if it appears as a separate
method in the oprofile report then it means it is compiled.

You should also try running the test without -Xbatch, apparently it's
only good for debugging the JVM:
http://stackoverflow.com/questions/3369791/java-vm-tuning-xbatch-and-xcomp

It shouldn't change what methods get compiled, but compilation should
be less expensive without it.

Cheers
Dan

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Manik Surtani
Branch created.

https://github.com/infinispan/infinispan/tree/5.1.x

Of the JIRAs I mentioned below, if they have been committed to master I'll 
cherry pick them onto 5.1.x as well.  If they haven't been completed, I'll 
change their target accordingly, please make sure you create pull reqs for both 
master and 5.1.x.

Thanks
Manik

On 27 Jan 2012, at 14:49, Manik Surtani wrote:

 
 On 27 Jan 2012, at 14:09, Mircea Markus wrote:
 
 
 On 26 Jan 2012, at 22:42, Manik Surtani wrote:
 I really didn't want to do this, but it looks like a 5.1.1 will be 
 necessary.  The biggest (critical, IMO, for 5.1.1) issues I see are:
 
 1. https://issues.jboss.org/browse/ISPN-1786 - I presume this has to do 
 with a bug Mircea spotted that virtual nodes were not being enabled by the 
 config parser.  Which meant that even in the case of tests enabling virtual 
 nodes, we still saw uneven distribution and hence poor performance (well 
 spotted, Mircea).  
 2. Related to 1, I don't think there is a JIRA for this yet, to change the 
 default number of virtual nodes from 1 to 100 or so.  After we profile and 
 analyse the impact of enabling this by default.  I'm particularly concerned 
 about (a) memory footprint and (b) effects on Hot Rod relaying topology 
 information back to clients.  Maybe 10 is a more sane default as a result.
 
 There is one now:  https://issues.jboss.org/browse/ISPN-1801
 
 3. https://issues.jboss.org/browse/ISPN-1788 - config parser out of sync 
 with XSD!
 4. https://issues.jboss.org/browse/ISPN-1798 - forceReturnValues parameter 
 in the RemoteCacheManager.getCache() method is ignored!
 
 I'm sure there will be some others as the community starts reporting! But 
 that's good, as we can provide a quick release for the main issues.
 
 In addition, we may as well include these nice-to-haves:
 
 https://issues.jboss.org/browse/ISPN-1787
 https://issues.jboss.org/browse/ISPN-1793
 https://issues.jboss.org/browse/ISPN-1795
 
 these ^^ are already in master so we can include them straight away.  
 https://issues.jboss.org/browse/ISPN-1789
 
 this looks like a low prio, as it doesn't have an impact on the functionality
 
 Agreed, but it is such a trivial fix and it greatly affects usability (who 
 wants to see such verbose and misleading log messages?)
 
 
 https://issues.jboss.org/browse/ISPN-1784
 a pull request has been sent, so IMO it makes sense to include it.
 
 What do you think?  Anything else you feel that is crucial for a 5.1.1?  
 I'd like to do this sooner rather than later, so we can still focus on 
 5.2.0.  So please respond asap.
 As everybody is in the performance mindset, I think the following issues, in 
 this order, would be a quick win:
 https://issues.jboss.org/browse/ISPN-825
 https://issues.jboss.org/browse/ISPN-317
 https://issues.jboss.org/browse/ISPN-1748
 
 -1 to all 3.  I think these are all non-trivial and shouldn't be in a point 
 release - even if it is a week's worth of work.
 
 Cheers
 Manik
 --
 Manik Surtani
 ma...@jboss.org
 twitter.com/maniksurtani
 
 Lead, Infinispan
 http://www.infinispan.org
 
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Dan Berindei
Manik, Bela, I think we send the requests sequentially as well. In
ReplicationTask.call:

   for (Address a : targets) {
      NotifyingFuture<Object> f = sendMessageWithFuture(constructMessage(buf, a), opts);
      futureCollator.watchFuture(f, a);
   }


In MessageDispatcher.sendMessageWithFuture:

UnicastRequest<T> req = new UnicastRequest<T>(msg, corr, dest, options);
req.setBlockForResults(false);
req.execute();


Did we use to send each request on a separate thread?


Cheers
Dan


On Fri, Jan 27, 2012 at 1:21 PM, Bela Ban b...@redhat.com wrote:
 yes.

 On 1/27/12 12:13 PM, Manik Surtani wrote:

 On 25 Jan 2012, at 09:42, Bela Ban wrote:

 No, parallel unicasts will be faster, as an anycast to A,B,C sends the
 unicasts sequentially

 Is this still the case in JG 3.x?


 --
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
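[Editor's note: the pattern Dan quotes above - fire a non-blocking request per target, then collate the futures - can be sketched generically as below. The names (sendNonBlocking, FutureCollatorSketch) are illustrative stand-ins, not the JGroups MessageDispatcher API.]

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch only: issue all requests without blocking between sends,
// then wait for the responses afterwards.
public class FutureCollatorSketch {
    static CompletableFuture<String> sendNonBlocking(String target) {
        // stand-in for a non-blocking sendMessageWithFuture(): returns immediately
        return CompletableFuture.supplyAsync(() -> "rsp-from-" + target);
    }

    public static void main(String[] args) {
        List<String> targets = Arrays.asList("A", "B", "C");
        Map<String, CompletableFuture<String>> pending = new LinkedHashMap<>();
        for (String t : targets) {
            pending.put(t, sendNonBlocking(t)); // no blocking between sends
        }
        // collate: only now wait for each response
        pending.forEach((t, f) -> System.out.println(t + " -> " + f.join()));
    }
}
```

The point of setBlockForResults(false) in the quoted code is exactly this shape: the loop itself never waits, so the sends are concurrent even though they are issued from one thread.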


Re: [infinispan-dev] DIST.retrieveFromRemoteSource

2012-01-27 Thread Manik Surtani
Doesn't setBlockForResults(false) mean that we're not waiting on a response, 
and can proceed to the next message to the next recipient?

On 27 Jan 2012, at 16:34, Dan Berindei wrote:

 Manik, Bela, I think we send the requests sequentially as well. In
 ReplicationTask.call:
 
    for (Address a : targets) {
       NotifyingFuture<Object> f = sendMessageWithFuture(constructMessage(buf, a), opts);
       futureCollator.watchFuture(f, a);
    }
 
 
 In MessageDispatcher.sendMessageWithFuture:
 
 UnicastRequest<T> req = new UnicastRequest<T>(msg, corr, dest, options);
req.setBlockForResults(false);
req.execute();
 
 
 Did we use to send each request on a separate thread?
 
 
 Cheers
 Dan
 
 
 On Fri, Jan 27, 2012 at 1:21 PM, Bela Ban b...@redhat.com wrote:
 yes.
 
 On 1/27/12 12:13 PM, Manik Surtani wrote:
 
 On 25 Jan 2012, at 09:42, Bela Ban wrote:
 
 No, parallel unicasts will be faster, as an anycast to A,B,C sends the
 unicasts sequentially
 
 Is this still the case in JG 3.x?
 
 
 --
 Bela Ban
 Lead JGroups (http://www.jgroups.org)
 JBoss / Red Hat
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org




___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Write Skew issue (versioning)

2012-01-27 Thread Mircea Markus
Looks like a bug, mind creating a JIRA for it?

On 24 Jan 2012, at 21:45, Pedro Ruivo wrote:
 Hi,
 
 yes I have the versioning enabled. Like you said, I've posted in the forum 
 too [1].
 
 btw, the ISPN config is here [2]
 
 [1] -- https://community.jboss.org/thread/177846
 [2] -- http://pastebin.com/UCxGXw3K
 
 Cheers,
 Pedro
 
 On 24-01-2012 19:15, Mircea Markus wrote:
 
 Hi Pedro and thanks for reporting this.
 Do you have versioning enabled? Otherwise the writeSkewCheck won't be 
 performed at commit time. 
 If you do have versioning enabled, may I suggest to take this on the user 
 forums[1] - this way it would be easier for other users that have the same 
 problem to find it.
 
 [1] https://community.jboss.org/community/infinispan?view=discussionsstart=0
 
 On 24 Jan 2012, at 18:42, Pedro Ruivo wrote:
 Hi,
 
 I think I have spotted a problem with the write skew check 
 implementation based on versioning.
 
 I've made this test to confirm:
 
 I have a global counter that is incremented concurrently by two 
 different nodes, running ISPN with Repeatable Read with write skew 
 enabled. I expected that each successful transaction would commit a 
 different value.
 
 In detail, each node do the following:
 
 beginTx
 Integer count = cache.get("counter");
 count = count + 1;
 cache.put("counter", count);
 commitTx
 
 To avoid errors, I've run this test on two ISPN versions: 5.1.0.CR4 and 
 5.0.1.Final. In 5.0.1.Final, it works as expected. However, on 5.1.0.CR4 
 I get a lot of repeated values. After a first look at the code, I have 
 the impression that the problem may be that the version numbers of the 
 keys on which the write skew check should run are not sent with the 
 prepare command.
 
 Cheers,
 Pedro Ruivo
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
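[Editor's note: the failure Pedro describes hinges on the commit-time version comparison. Below is a minimal sketch of that check using AtomicStampedReference as a stand-in for entry versioning - hypothetical, not Infinispan's implementation. If the versions read at transaction start are never shipped with the prepare, this comparison cannot fail, and two concurrent increments can both commit the same value.]

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch only: a commit succeeds iff the version seen at read time
// is still the current version (the write skew check).
public class WriteSkewSketch {
    // value plus a version "stamp"
    static final AtomicStampedReference<Integer> counter =
            new AtomicStampedReference<>(0, 0);

    // returns true if the "transaction" committed
    static boolean incrementTx() {
        int[] stamp = new int[1];
        Integer read = counter.get(stamp);   // read value and its version
        Integer written = read + 1;
        // commit-time write skew check: fails if the version moved since the read
        return counter.compareAndSet(read, written, stamp[0], stamp[0] + 1);
    }

    public static void main(String[] args) {
        boolean first = incrementTx();   // commits: counter becomes 1
        boolean second = incrementTx();  // re-reads, commits: counter becomes 2
        System.out.println(first + " " + second + " value=" + counter.getReference());
    }
}
```

A losing transaction would see compareAndSet return false and would have to retry, which is why each successful increment commits a distinct value.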

Re: [infinispan-dev] The need for a 5.1.1

2012-01-27 Thread Dan Berindei
On Fri, Jan 27, 2012 at 5:31 PM, Bela Ban b...@redhat.com wrote:


 On 1/27/12 4:26 PM, Mircea Markus wrote:

 On 27 Jan 2012, at 15:08, Bela Ban wrote:

 Build the JGroups JAR with ./build.sh jar, *not* via maven !

 I attached the JAR for you.
 Thanks!
 JGroups *is* and *will remain* JAR less ! :-)

 Sorry for losing faith :)
 Might make sense to have mvn install work as well though; I think people 
 would expect it to behave correctly when they see a pom.xml.

 Are you volunteering ? I'd be happy to integrate your changes ! :-)


Tests probably don't work, but it's enough to get it building:
https://github.com/belaban/JGroups/pull/25

Cheers
Dan
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Write Skew issue (versioning)

2012-01-27 Thread Manik Surtani
I'm taking a look - started a discussion on the forums.  :)

On 27 Jan 2012, at 16:44, Mircea Markus wrote:

 Looks like a bug, mind creating a JIRA for it?
 
 On 24 Jan 2012, at 21:45, Pedro Ruivo wrote:
 Hi,
 
 yes I have the versioning enabled. Like you said, I've posted in the forum 
 too [1].
 
 btw, the ISPN config is here [2]
 
 [1] -- https://community.jboss.org/thread/177846
 [2] -- http://pastebin.com/UCxGXw3K
 
 Cheers,
 Pedro
 
 On 24-01-2012 19:15, Mircea Markus wrote:
 
 Hi Pedro and thanks for reporting this.
 Do you have versioning enabled? Otherwise the writeSkewCheck won't be 
 performed at commit time. 
 If you do have versioning enabled, may I suggest to take this on the user 
 forums[1] - this way it would be easier for other users that have the same 
 problem to find it.
 
 [1] 
 https://community.jboss.org/community/infinispan?view=discussionsstart=0
 
 On 24 Jan 2012, at 18:42, Pedro Ruivo wrote:
 Hi,
 
 I think I have spotted a problem with the write skew check 
 implementation based on versioning.
 
 I've made this test to confirm:
 
 I have a global counter that is incremented concurrently by two 
 different nodes, running ISPN with Repeatable Read with write skew 
 enabled. I expected that each successful transaction would commit a 
 different value.
 
 In detail, each node do the following:
 
 beginTx
 Integer count = cache.get("counter");
 count = count + 1;
 cache.put("counter", count);
 commitTx
 
 To avoid errors, I've run this test on two ISPN versions: 5.1.0.CR4 and 
 5.0.1.Final. In 5.0.1.Final, it works as expected. However, on 5.1.0.CR4 
 I get a lot of repeated values. After a first look at the code, I have 
 the impression that the problem may be that the version numbers of the 
 keys on which the write skew check should run are not sent with the 
 prepare command.
 
 Cheers,
 Pedro Ruivo
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev
 
 ___
 infinispan-dev mailing list
 infinispan-dev@lists.jboss.org
 https://lists.jboss.org/mailman/listinfo/infinispan-dev

--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani

Lead, Infinispan
http://www.infinispan.org



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev