Re: [infinispan-dev] Store as binary

2014-01-20 Thread Mircea Markus
Hi Radim,

I think 4 nodes with numOwners=2 is too small a cluster. My calculation here [1]
points out that for numOwners=1 the performance benefit is only visible for
clusters having more than two nodes. Following a similar logic for numOwners=2,
the benefit would only be visible for clusters having more than four nodes. Would
it be possible to run the test on a larger cluster, 8+ nodes?

[1] http://lists.jboss.org/pipermail/infinispan-dev/2009-October/004299.html

On Jan 17, 2014, at 1:06 PM, Radim Vansa rva...@redhat.com wrote:

 Hi Mircea,
 
 I've run a simple stress test [1] in dist mode with store-as-binary (not
 enabled, enabled for keys only, enabled for values only, enabled for both).
 The difference is < 2 % (with storeAsBinary fully enabled being slower).
 
 Radim
 
 [1] 
 https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html
 
 -- 
 Radim Vansa rva...@redhat.com
 JBoss DataGrid QA
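
(For readers who want to reproduce this: the four configurations compared above map roughly onto the programmatic settings below. This is only a sketch against the Infinispan 6.x ConfigurationBuilder API; the class name and the idea of building all four variants side by side are illustrative, not taken from the actual RadarGun setup.)

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class StoreAsBinaryVariants {
   public static void main(String[] args) {
      // Baseline: storeAsBinary not enabled
      Configuration off = new ConfigurationBuilder().storeAsBinary().disable().build();
      // Keys only
      Configuration keysOnly = new ConfigurationBuilder().storeAsBinary().enable()
            .storeKeysAsBinary(true).storeValuesAsBinary(false).build();
      // Values only
      Configuration valuesOnly = new ConfigurationBuilder().storeAsBinary().enable()
            .storeKeysAsBinary(false).storeValuesAsBinary(true).build();
      // Both keys and values
      Configuration both = new ConfigurationBuilder().storeAsBinary().enable().build();
      System.out.println(off + "\n" + keysOnly + "\n" + valuesOnly + "\n" + both);
   }
}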
 

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Store as binary

2014-01-20 Thread Pedro Ruivo
Hi,

IMO, we should try the worst scenario: Local Mode + Single thread.

This will show us the highest impact on performance.
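
(Roughly what such a worst-case measurement could look like: local mode, a single thread, storeAsBinary fully enabled. A sketch only; the class, key counts and loop sizes are made up, and the real numbers should of course come from RadarGun.)

import org.infinispan.Cache;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class LocalSingleThreadStress {
   public static void main(String[] args) {
      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.storeAsBinary().enable();               // switch to disable() for the baseline run
      EmbeddedCacheManager cm = new DefaultCacheManager(cb.build());
      try {
         Cache<String, byte[]> cache = cm.getCache();   // default cache is LOCAL unless configured otherwise
         byte[] value = new byte[1024];
         int ops = 1000000;
         long start = System.nanoTime();
         for (int i = 0; i < ops; i++) {
            cache.put("key-" + (i % 10000), value);
            cache.get("key-" + ((i * 7) % 10000));
         }
         double seconds = (System.nanoTime() - start) / 1e9;
         System.out.printf("%.0f ops/s%n", (2.0 * ops) / seconds);
      } finally {
         cm.stop();
      }
   }
}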

Cheers,
Pedro

On 01/20/2014 09:41 AM, Mircea Markus wrote:
 Hi Radim,

 I think 4 nodes with numOwners=2 is too small a cluster. My calculation
 here [1] points out that for numOwners=1 the performance benefit is only
 visible for clusters having more than two nodes. Following a similar logic
 for numOwners=2, the benefit would only be visible for clusters having more
 than four nodes. Would it be possible to run the test on a larger cluster, 8+
 nodes?

 [1] http://lists.jboss.org/pipermail/infinispan-dev/2009-October/004299.html

 On Jan 17, 2014, at 1:06 PM, Radim Vansa rva...@redhat.com wrote:

 Hi Mircea,

 I've run a simple stress test [1] in dist mode with store-as-binary (not
 enabled, enabled for keys only, enabled for values only, enabled for both).
 The difference is < 2 % (with storeAsBinary fully enabled being slower).

 Radim

 [1]
 https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/1/artifact/report/All_report.html

 --
 Radim Vansa rva...@redhat.com
 JBoss DataGrid QA


 Cheers,

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Store as binary

2014-01-20 Thread Mircea Markus
Would be interesting to see as well, though the performance figures would not
include the network latency, hence they would not tell us much about the benefit
of using this on a real-life system.

On Jan 20, 2014, at 9:48 AM, Pedro Ruivo pe...@infinispan.org wrote:

 IMO, we should try the worst scenario: Local Mode + Single thread.
 
 This will show us the highest impact on performance.

Cheers,
-- 
Mircea Markus
Infinispan lead (www.infinispan.org)





___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Time stamps in infinispan cluster

2014-01-20 Thread Galder Zamarreño
Infinispan does nothing to synchronize the time in each of the nodes.
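
(To make the consequence concrete: the one place where wall-clock time surfaces in the user API is lifespan/maxIdle expiration, and to my understanding each node evaluates it against its own local clock, so timestamps are only as comparable as NTP keeps the nodes' clocks. A trivial sketch:)

import java.util.concurrent.TimeUnit;
import org.infinispan.Cache;

public class LocalClockNote {
   // Sketch: the entry's creation timestamp comes from the local node's wall
   // clock; another node holding a copy checks expiry against *its own* clock.
   // Infinispan does not synchronise clocks, that is left to NTP or similar.
   static void putWithLifespan(Cache<String, String> cache) {
      cache.put("key", "value", 10, TimeUnit.SECONDS);   // expires ~10s later by the local clock
   }
}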

On Jan 13, 2014, at 10:29 PM, Meena Rajani meenakraj...@gmail.com wrote:

 Hi,
 
 How does the distributed clock work in an Infinispan/JBoss cluster?
 Can someone please guide me? I have read a little bit about total order
 messaging and vector clocks.
 I have extended the Infinispan API for freshness-aware caching. I have
 assumed that time is synchronized at all times and that timestamps are
 comparable. But I want to know how timestamps work in Infinispan in a
 distributed environment, especially when the communication among the cluster
 nodes is in synchronous mode.
 
 Regards
 
 Meena


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Dropping AtomicMap/FineGrainedAtomicMap

2014-01-20 Thread Pedro Ruivo
Hi,

On 01/20/2014 11:28 AM, Galder Zamarreño wrote:
 Hi all,

 Dropping AtomicMap and FineGrainedAtomicMap was discussed last week in the 
 F2F meeting [1]. It's complex and buggy, and we'd recommend people to use the 
 Grouping API instead [2]. Grouping API would allow data to reside together, 
 while the standard map API would apply per-key locking.

+1. Are we going to drop the Delta stuff as well?

 We don't have a timeline for this yet, but we want to get as much feedback on 
 the topic as possible so that we can evaluate the options.

Before starting with it, I would recommend adding the following method
to the Cache API:

/**
 * Returns all the keys and values associated with the group name. The
 * Map<K, V> is immutable (i.e. read-only).
 **/
Map<K, V> getGroup(String groupName);
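
(For reference, a rough sketch of the Grouping API usage Galder points to in [2]: a key class annotated with @Group plus groups enabled in the configuration. The key class and group value below are made up for illustration; they are not part of any existing API.)

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.distribution.group.Group;

public class GroupingSketch {

   // Every key returning the same group string is mapped to the same owners,
   // much like the entries of a single AtomicMap used to be.
   // (equals/hashCode omitted for brevity.)
   public static class FieldKey {
      final String mapName;   // plays the role of the old AtomicMap name
      final String field;

      public FieldKey(String mapName, String field) {
         this.mapName = mapName;
         this.field = field;
      }

      @Group
      public String group() {
         return mapName;
      }
   }

   static Configuration withGroupingEnabled() {
      return new ConfigurationBuilder()
            .clustering().hash().groups().enabled()
            .build();
   }
}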

Cheers,
Pedro


 Cheers,

 [1] https://issues.jboss.org/browse/ISPN-3901
 [2] 
 http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_the_grouping_api
 --
 Galder Zamarreño
 gal...@redhat.com
 twitter.com/galderz

 Project Lead, Escalante
 http://escalante.io

 Engineer, Infinispan
 http://infinispan.org



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] infinispan build process - fresh mvn-repo

2014-01-20 Thread Galder Zamarreño
Did you look at 
http://infinispan.org/docs/6.0.x/contributing/contributing.html#_building_infinispan
 ?

On Jan 15, 2014, at 3:58 PM, Wolf-Dieter Fink wf...@redhat.com wrote:

 Hi,
 
 I built git@github.com:infinispan/infinispan.git from scratch and
 followed the documentation/README.
 
 I used the maven-settings.xml:
 mvn -s maven-settings.xml clean install
 With that setting the build failed, see error 1.Build.
 
 A build skipping the tests will not work due to dependency issues:
 mvn -s maven-settings.xml -Dmaven.test.skip=true clean install
 see 2.Build
 
 I found that -Dmaven.test.skip.exec=true builds correctly. After that
 the tests hung forever (or longer than my patience ;)
 
 Test suite progress: tests succeeded: 506, failed: 0, skipped: 7.
 [testng-BulkGetSimpleTest] Test 
 testBulkGetWithSize(org.infinispan.client.hotrod.BulkGetSimpleTest) 
 succeeded.
 Test suite progress: tests succeeded: 507, failed: 0, skipped: 7.
 [testng-ClientSocketReadTimeoutTest] Test 
 testPutTimeout(org.infinispan.client.hotrod.ClientSocketReadTimeoutTest) 
 succeeded.
 Test suite progress: tests succeeded: 508, failed: 0, skipped: 7.
 ==  this test hung a longer time
 
 [testng-DistributionRetryTest] Test 
 testRemoveIfUnmodified(org.infinispan.client.hotrod.retry.DistributionRetryTest)
  
 failed.
 Test suite progress: tests succeeded: 508, failed: 1, skipped: 7.
 === this test never came back
 
 
 
 The main problem is that the first build will have issues and you need
 to bypass them.
 Second, there is a dependency issue if the tests are skipped; a hint
 within the documentation or README might be helpful to avoid frustration ;)
 And last but not least, is there a reason why the
 testng-ClientSocketReadTimeoutTest hung? Would it be an idea to
 rename it if it takes long, e.g. ClientSocket10MinuteReadTimeoutTest,
 to show that this test takes a long time? And also add a time limit for
 the test.
 
 
 - Wolf
 
 
 
    1. Build 
 ---
 ~ ENVIRONMENT INFO ~~
 Tests run: 4044, Failures: 2, Errors: 0, Skipped: 0, Time elapsed:
 357.511 sec <<< FAILURE!
 testNoEntryInL1GetWithConcurrentReplace(org.infinispan.distribution.DistSyncL1FuncTest)
  
 Time elapsed: 0.005 sec  <<< FAILURE!
 java.lang.AssertionError: Entry for key [key-to-the-cache] should be in 
 L1 on cache at [DistSyncL1FuncTest-NodeA-21024]!
 at 
 org.infinispan.distribution.DistributionTestHelper.assertIsInL1(DistributionTestHelper.java:31)
 at 
 org.infinispan.distribution.BaseDistFunctionalTest.assertIsInL1(BaseDistFunctionalTest.java:183)
 at 
 org.infinispan.distribution.DistSyncL1FuncTest.testNoEntryInL1GetWithConcurrentReplace(DistSyncL1FuncTest.java:193)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
 at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
 at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
 at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
 at 
 org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
 at org.testng.TestRunner.privateRun(TestRunner.java:767)
 at org.testng.TestRunner.run(TestRunner.java:617)
 at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
 at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
 at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
 at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 
 testInvokeMapWithReduceExceptionPhaseInRemoteExecution(org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest)
  
 Time elapsed: 0.018 sec  <<< FAILURE!
 org.testng.TestException:
 Method 
 SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPhaseInRemoteExecution()[pri:0,
  
 instance:org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest@70bd631a]
  
 should have thrown an exception of class 
 org.infinispan.commons.CacheException
 at 
 org.testng.internal.Invoker.handleInvocationResults(Invoker.java:1512)
 at org.testng.internal.Invoker.invokeMethod(Invoker.java:754)
 at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
 at 

Re: [infinispan-dev] Performance of accessing off-heap buffers: NIO Unsafe

2014-01-20 Thread Tristan Tarrant
Hi Sanne,

ultimately I believe that it is not about the intrinsic (sorry for 
overloading the term) performance of the memory allocation invocations, 
but the advantage of using ByteBuffers as the de-facto standard for 
passing data around between Infinispan, JGroups and any I/O layers 
(network, disk). Removing various points of copying, marshalling, etc. is
the real win.

Tristan
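
(A plain-JDK illustration of the copy-avoidance being described: a file is read straight into a direct buffer and the very same buffer is handed to the socket layer, with no intermediate byte[] and no re-marshalling. Host, port and file name are placeholders; this is not Infinispan/JGroups code.)

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopyHandoff {
   public static void main(String[] args) throws IOException {
      ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);   // off-heap buffer
      try (FileChannel file = FileChannel.open(Paths.get("data.bin"), StandardOpenOption.READ);
           SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 7777))) {
         while (file.read(buf) != -1) {          // disk -> direct buffer
            buf.flip();
            while (buf.hasRemaining()) {
               socket.write(buf);                // same buffer -> network, no intermediate byte[]
            }
            buf.clear();
         }
      }
   }
}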

On 01/20/2014 03:01 PM, Sanne Grinovero wrote:
 At our meeting last week, there was a debate about the fact that the
 (various) off-heap buffer usage proposals, including NIO2 reads, would
 potentially be slower because of it potentially needing more native
 invocations.

 At the following link you can see the full list of methods which will
 actually be optimised using intrinsics, i.e. being replaced by the
 compiler as if they were macros with highly optimized ad-hoc code which
 might be platform dependent (or in other words, which will be able to
 take best advantage of the capabilities of the executing platform):

 http://hg.openjdk.java.net/jdk8/awt/hotspot/file/d61761bf3050/src/share/vm/classfile/vmSymbols.hpp

 In particular, note the do_intrinsic qualifier marking all uses of
 Unsafe and the NIO Buffer.

 Hope you'll all agree now that further arguing about any of this will
 be dismissed unless we want to talk about measurements :-)

 Kudos to all sceptics (always good); still, let's not dismiss the
 large work needed for this yet, nor let us revert from the rightful
 path until we know we've tried it to the end: I do not expect to see
 incremental performance improvements while we make progress; it might
 even slow down until we get to the larger rewards.

 Cheers,
 Sanne



___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Performance of accessing off-heap buffers: NIO Unsafe

2014-01-20 Thread Sanne Grinovero
On 20 January 2014 14:23, Tristan Tarrant ttarr...@redhat.com wrote:
 Hi Sanne,

 ultimately I believe that it is not about the intrinsic (sorry for
 overloading the term) performance of the memory allocation invocations,
 but the advantage of using ByteBuffers as the de-facto standard for
 passing data around between Infinispan, JGroups and any I/O layers
 (network, disk). Removing various points of copying, marshalling, etc. is
 the real win.

Absolutely. Still, there was some skepticism from others based on
the number of times we'd need to do random access on these
buffers; my point is that it's probably an unfounded concern, and I
wouldn't like such theories to prevent evolution in this
direction.

Sanne
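
(For the record, the random access in question is absolute getLong/putLong on a direct ByteBuffer, which bottoms out in the Unsafe accessors that the vmSymbols list above marks as intrinsics, i.e. compiled inline rather than as real native calls. A trivial sketch of that access pattern:)

import java.nio.ByteBuffer;

public class BufferRandomAccess {
   public static void main(String[] args) {
      ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);
      long sum = 0;
      for (int i = 0; i < 1000000; i++) {
         // 8-byte-aligned pseudo-random offsets within the buffer
         int offset = ((i * 31) % (1 << 17)) * 8;
         buf.putLong(offset, i);                 // absolute put: no position bookkeeping
         sum += buf.getLong(offset);             // absolute get
      }
      System.out.println(sum);
   }
}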


 Tristan

 On 01/20/2014 03:01 PM, Sanne Grinovero wrote:
 At our meeting last week, there was a debate about the fact that the
 (various) off-heap buffer usage proposals, including NIO2 reads, would
 potentially be slower because of it potentially needing more native
 invocations.

 At the following link you can see the full list of methods which will
 actually be optimised using intrinsics, i.e. being replaced by the
 compiler as if they were macros with highly optimized ad-hoc code which
 might be platform dependent (or in other words, which will be able to
 take best advantage of the capabilities of the executing platform):

 http://hg.openjdk.java.net/jdk8/awt/hotspot/file/d61761bf3050/src/share/vm/classfile/vmSymbols.hpp

 In particular, note the do_intrinsic qualifier marking all uses of
 Unsafe and the NIO Buffer.

 Hope you'll all agree now that further arguing about any of this will
 be dismissed unless we want to talk about measurements :-)

 Kudos to all sceptics (always good); still, let's not dismiss the
 large work needed for this yet, nor let us revert from the rightful
 path until we know we've tried it to the end: I do not expect to see
 incremental performance improvements while we make progress; it might
 even slow down until we get to the larger rewards.

 Cheers,
 Sanne
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Store as binary

2014-01-20 Thread Radim Vansa
OK, I have results for the dist-udp-no-tx and local-no-tx modes on 8 nodes
(in local mode the nodes don't communicate, naturally):
Dist mode: 3 % down for reads, 1 % down for writes
Local mode: 19 % down for reads, 16 % down for writes

Details in [1]; the figures above are for both keys and values stored as binary.

Radim

[1] 
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/jdg-radargun-perf-store-as-binary/4/artifact/report/All_report.html

On 01/20/2014 11:14 AM, Pedro Ruivo wrote:

 On 01/20/2014 10:07 AM, Mircea Markus wrote:
 Would be interesting to see as well, though the performance figures would not
 include the network latency, hence they would not tell us much about the benefit
 of using this on a real-life system.
 That's my point. I'm interested in seeing the worst scenario, since all
 other cluster modes will have a lower (or no) impact on performance.

 Of course, the best scenario would be one where each node only accesses
 remote keys...

 Pedro

 On Jan 20, 2014, at 9:48 AM, Pedro Ruivo pe...@infinispan.org wrote:

 IMO, we should try the worst scenario: Local Mode + Single thread.

 This will show us the highest impact on performance.
 Cheers,



-- 
Radim Vansa rva...@redhat.com
JBoss DataGrid QA

___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev