Re: snapshot transaction isolation
Hm… how about allowing locks to be acquired within a transaction and just releasing them on transaction commit? It won't break current compatibility because we simply do not allow acquiring locks within transactions right now. Am I wrong? D. On Mon, Feb 8, 2016 at 4:10 AM, Alexey Goncharuk wrote: > Currently lock-only functionality is exposed via j.u.c.Lock interface on > IgniteCache. We have two choices here: > * Release such locks on transaction commit, which would break the contract > of j.u.c.Lock > * Do not release such locks on transaction commit, which, in my opinion, > conflicts with the expectation of transaction locks. > > Either way looks dirty to me, so I would vote for adding a new > properly-named method on IgniteCache specifically for this case. >
[GitHub] ignite pull request: Ignite 2195 "Accessing from IGFS to HDFS that...
GitHub user iveselovskiy opened a pull request: https://github.com/apache/ignite/pull/464 Ignite 2195 "Accessing from IGFS to HDFS that is in kerberised environment" You can merge this pull request into a Git repository by running: $ git pull https://github.com/iveselovskiy/ignite ignite-2195b Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/464.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #464 commit 5487cd3f4badb7a2d7223f53856a74ee81ecd089 Author: iveselovskiy Date: 2016-02-05T11:19:24Z IGNITE-2195: initial workable version with the factory from customer. commit c08ce4899d62ea3d7ecf706e4421c19d2c942e52 Author: iveselovskiy Date: 2016-02-05T13:42:21Z IGNITE-2195: more-or-less cleaned up version with spawned renewer thread yet. commit 3639cb0e2ba8a6f61380310003f254c0c4370ece Author: iveselovskiy Date: 2016-02-08T16:16:46Z Merge branch 'master' of https://github.com/apache/ignite into ignite-2195b commit a3a8afa16bbf851701dcaf14bb6c82246dc0fcb1 Author: iveselovskiy Date: 2016-02-08T16:36:40Z IGNITE-2195: Accessing from IGFS to HDFS that is in kerberised environment. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Created] (IGNITE-2588) [Failed test] GridAffinityNoCacheSelfTes.testAffinityImplCacheDeleted with assertion
Andrey Gura created IGNITE-2588: --- Summary: [Failed test] GridAffinityNoCacheSelfTes.testAffinityImplCacheDeleted with assertion Key: IGNITE-2588 URL: https://issues.apache.org/jira/browse/IGNITE-2588 Project: Ignite Issue Type: Test Affects Versions: 1.5.0.final Reporter: Andrey Gura Assignee: Andrey Gura Fix For: 1.6 Test fails due to a race during dynamic cache creation and destroy.
{noformat}
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertTrue(Assert.java:31)
at junit.framework.TestCase.assertTrue(TestCase.java:201)
at org.apache.ignite.internal.GridAffinityNoCacheSelfTest.checkAffinityImplCacheDeleted(GridAffinityNoCacheSelfTest.java:113)
at org.apache.ignite.internal.GridAffinityNoCacheSelfTest.testAffinityImplCacheDeleted(GridAffinityNoCacheSelfTest.java:91)
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] ignite pull request: ignite-2588 Test fixed
GitHub user agura opened a pull request: https://github.com/apache/ignite/pull/463 ignite-2588 Test fixed https://issues.apache.org/jira/browse/IGNITE-2588 You can merge this pull request into a Git repository by running: $ git pull https://github.com/agura/incubator-ignite ignite-2588 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/463.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #463 commit 90c939f55fd6ae87c3e521e65962cb9d8d712b86 Author: agura Date: 2016-02-08T16:22:51Z ignite-2588 Test fixed
[jira] [Created] (IGNITE-2581) .NET: AtomicReference binary mode
Pavel Tupitsyn created IGNITE-2581: -- Summary: .NET: AtomicReference binary mode Key: IGNITE-2581 URL: https://issues.apache.org/jira/browse/IGNITE-2581 Project: Ignite Issue Type: Task Components: platforms Affects Versions: 1.6 Reporter: Pavel Tupitsyn Fix For: 1.7 Work with AtomicReference in binary mode, see comments in IGNITE-1563
[jira] [Created] (IGNITE-2578) .NET: Native object comparison
Pavel Tupitsyn created IGNITE-2578: -- Summary: .NET: Native object comparison Key: IGNITE-2578 URL: https://issues.apache.org/jira/browse/IGNITE-2578 Project: Ignite Issue Type: Task Components: platforms Affects Versions: 1.1.4 Reporter: Pavel Tupitsyn Fix For: 1.6 Currently all comparisons (cache key comparisons, atomic operations, etc.) are performed in binary form on the Java side. This may not work as intended when the user has overridden Equals/GetHashCode. Need to investigate whether we can or should do anything about this. * Is it really an issue? * Is there a workaround? * Are there any user requests about this?
[jira] [Created] (IGNITE-2586) JVM crash under load test
Sergey Kozlov created IGNITE-2586: - Summary: JVM crash under load test Key: IGNITE-2586 URL: https://issues.apache.org/jira/browse/IGNITE-2586 Project: Ignite Issue Type: Bug Affects Versions: 1.5.0.final Environment: Windows 10, Oracle JDK 1.7.0_80-b15 Reporter: Sergey Kozlov Fix For: 1.6 1. Start 4 servers 2. Compile and start 4 clients:
{noformat}
C:\Java\jdk1.7.0_80\bin\java -classpath "C:\work\apache-ignite-fabric-1.5.0.final\libs\*;C:\work\apache-ignite-fabric-1.5.0.final\libs\ignite-spring\*;C:\work\apache-ignite-fabric-1.5.0.final\libs\ignite-indexing\*;C:\gg-qa\testtools\target\ignite-test-tools-1.0.0-SNAPSHOT.jar" -Xmx256m -Xms256m -DIGNITE_QUIET=false org.apache.ignite.testtools.Iron -config=c:\work\iron_client.xml -prefix=cache_ -keys=10 -load-keys=30 -duration-per-cache=30 -operation-weights=put:30,get:30,remove:5,putall:2,getall:5,removeall:10,replace:10,scanquery:5
{noformat}
3. Wait until one server node crashes. Logs are attached
[jira] [Created] (IGNITE-2576) .NET: Update readme.io with more info on how Ignite.NET is related to Java part
Pavel Tupitsyn created IGNITE-2576: -- Summary: .NET: Update readme.io with more info on how Ignite.NET is related to Java part Key: IGNITE-2576 URL: https://issues.apache.org/jira/browse/IGNITE-2576 Project: Ignite Issue Type: Task Components: platforms Affects Versions: 1.1.4 Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn Fix For: 1.6 There were multiple questions in the user list and gitter about how the .NET part works with Java: is it a standalone app, do the two interconnect, etc.
[jira] [Created] (IGNITE-2580) Investigate HashMap.Node[] allocations from GridDhtTxPrepareFuture.readyLocks(Iterable)
Vladimir Ozerov created IGNITE-2580: --- Summary: Investigate HashMap.Node[] allocations from GridDhtTxPrepareFuture.readyLocks(Iterable) Key: IGNITE-2580 URL: https://issues.apache.org/jira/browse/IGNITE-2580 Project: Ignite Issue Type: Sub-task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* GridDhtTxPrepareFuture is initialized with an empty HashSet by default. When a single lock is ready, HashSet.add() is called, causing immediate expansion of the underlying HashMap table. *Solution* Do we really need a fully-fledged hash set immediately? We can probably optimize for the single-lock case so that the HashSet is not needed at all.
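The single-lock optimization proposed above could look roughly like the following. This is only a sketch; the class and field names are hypothetical and not taken from the Ignite codebase (it also assumes lock objects are never themselves HashSets):

```java
import java.util.HashSet;

/** Hypothetical sketch: defer HashSet allocation until a second lock is added. */
public class LazyLockSet {
    private Object locks; // null, a single lock, or a HashSet of locks

    @SuppressWarnings("unchecked")
    public void add(Object lock) {
        if (locks == null)
            locks = lock;                        // common single-lock case: no collection at all
        else if (locks instanceof HashSet)
            ((HashSet<Object>)locks).add(lock);  // already expanded
        else {
            HashSet<Object> set = new HashSet<>();

            set.add(locks);
            set.add(lock);

            locks = set;                         // expand only when a second lock appears
        }
    }

    public int size() {
        if (locks == null)
            return 0;

        return locks instanceof HashSet ? ((HashSet<?>)locks).size() : 1;
    }
}
```

In a prepare future that usually readies exactly one lock, this avoids both the HashSet object and the HashMap table it would eagerly allocate.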
[jira] [Created] (IGNITE-2582) j.u.Collections.singletonIterator() allocations during TX cache puts.
Vladimir Ozerov created IGNITE-2582: --- Summary: j.u.Collections.singletonIterator() allocations during TX cache puts. Key: IGNITE-2582 URL: https://issues.apache.org/jira/browse/IGNITE-2582 Project: Ignite Issue Type: Sub-task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* Allocations came from several sources: 1) IgniteTxManager.lockMultiple 2) IgniteTxManager.notifyEvictions 3) IgniteTxManager.removeObsolete 4) IgniteTxManager.unlockMultiple 5) GridDhtTxLocalAdapter.mapExplicitLocks In all these code pieces we have the same pattern:
{code}
for (T t : collection)
    logic(t);
{code}
*Solution* Perform a simple refactoring:
{code}
if (collection instanceof List) {
    List<T> list = (List<T>)collection;

    for (int i = 0; i < list.size(); i++)
        logic(list.get(i));
}
else {
    for (T t : collection)
        logic(t);
}
{code}
We should be careful with LinkedList here, though - such refactoring would slow down its processing.
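A variant of the proposed refactoring that also addresses the LinkedList concern is to gate the counted loop on java.util.RandomAccess, so linked lists keep the iterator form and never hit O(n) get(i) calls. A minimal, self-contained sketch (IterationUtil and forEach are hypothetical names, not Ignite API):

```java
import java.util.List;
import java.util.RandomAccess;
import java.util.function.Consumer;

public class IterationUtil {
    /** Iterate without allocating an Iterator when the collection supports fast indexed access. */
    @SuppressWarnings("unchecked")
    public static <T> void forEach(Iterable<T> collection, Consumer<T> logic) {
        if (collection instanceof List && collection instanceof RandomAccess) {
            List<T> list = (List<T>)collection;

            for (int i = 0; i < list.size(); i++) // ArrayList and friends: no iterator allocation
                logic.accept(list.get(i));
        }
        else {
            for (T t : collection)                // LinkedList, sets, etc.: keep the iterator form
                logic.accept(t);
        }
    }
}
```

ArrayList implements RandomAccess while LinkedList does not, so the check selects the cheap path automatically.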
[jira] [Created] (IGNITE-2583) Investigate CHM8 allocations in GridDhtTxLocalAdapter.
Vladimir Ozerov created IGNITE-2583: --- Summary: Investigate CHM8 allocations in GridDhtTxLocalAdapter. Key: IGNITE-2583 URL: https://issues.apache.org/jira/browse/IGNITE-2583 Project: Ignite Issue Type: Sub-task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* Whenever GridDhtTxLocalAdapter is allocated, two CHM8-s are created immediately: "nearMap" and "dhtMap". This is visible as a memory hotspot, which is responsible for >2% of overall allocations during a single PUT. *Proposed solution* 1) Investigate whether we really need concurrent semantics here. Can these maps be replaced with HashMap-s or (HashMap + synchronized)? 2) Investigate whether we can optimize for the single-mapping scenario.
Re: snapshot transaction isolation
Currently lock-only functionality is exposed via j.u.c.Lock interface on IgniteCache. We have two choices here: * Release such locks on transaction commit, which would break the contract of j.u.c.Lock * Do not release such locks on transaction commit, which, in my opinion, conflicts with the expectation of transaction locks. Either way looks dirty to me, so I would vote for adding a new properly-named method on IgniteCache specifically for this case.
[jira] [Created] (IGNITE-2585) Force Unit tests to fail in case of NPE in logs.
Vladimir Ershov created IGNITE-2585: --- Summary: Force Unit tests to fail in case of NPE in logs. Key: IGNITE-2585 URL: https://issues.apache.org/jira/browse/IGNITE-2585 Project: Ignite Issue Type: Improvement Components: build Reporter: Vladimir Ershov As of now, an NPE can be thrown from a separate thread without affecting the test result, while still appearing in the logs. Since it is not possible to review thousands of logs after a commit, we can inject into our GridAbstractTest a special FailOnMessageLogger that will fail the test if an NPE appears in the log. This change could affect tests that expect an NPE to be thrown; FailOnMessageLogger should be switched off for those tests. Expected number of such tests: 16. Should not take a lot of time.
[jira] [Created] (IGNITE-2577) Investigate HashMap.Node[] allocations from GridDistributedTxMapping.add()
Vladimir Ozerov created IGNITE-2577: --- Summary: Investigate HashMap.Node[] allocations from GridDistributedTxMapping.add() Key: IGNITE-2577 URL: https://issues.apache.org/jira/browse/IGNITE-2577 Project: Ignite Issue Type: Task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* GridDistributedTxMapping is initialized with an empty HashSet() for TX entries by default. When the very first element is added, the underlying HashMap expands, causing memory traffic. *Proposed solutions* 1) Use LinkedList instead. One can notice that when GridDistributedTxMapping is deserialized using the direct reader, "entries" are read as a list. Furthermore, both "reads" and "writes" projections are returned as wrapped views, so they do not benefit from fast lookups. If we neither perform lookups on entries nor require "unique" Set semantics, "entries" could be changed to a LinkedList, thus decreasing memory traffic. 2) Use a special singleton collection. This way we will have to evaluate all "entries" usages very carefully.
Re: Apache Flink integration
Hi Saikat, Probably [1] can give you some pointers on where to start. [1] http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Flink-and-Spark-Integration-and-Acceleration-td82.html -Roman On Monday, February 8, 2016 2:25 AM, Saikat Maitra wrote: Hi, I am looking forward to taking up this ticket https://issues.apache.org/jira/browse/IGNITE-813 (Apache Flink Integration). I wanted to understand which module will be suitable for integration. Any reference to similar integration or pointers to design discussion will be helpful. Regards Saikat
[jira] [Created] (IGNITE-2584) Investigate whether GridDhtPartitionTopologyImpl.part2Node could have List as value.
Vladimir Ozerov created IGNITE-2584: --- Summary: Investigate whether GridDhtPartitionTopologyImpl.part2Node could have List as value. Key: IGNITE-2584 URL: https://issues.apache.org/jira/browse/IGNITE-2584 Project: Ignite Issue Type: Sub-task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* "GridDhtPartitionTopologyImpl.part2Node" has a value of type Set. However, set semantics is almost never used except for node leave events, which are pretty rare. Iterations over this Set require instantiation of iterators. This could be avoided if we replace HashSet with ArrayList. *Proposed solution* 1) Investigate whether the Set "unique" semantics is exploited anywhere. 2) Investigate whether the HashSet's O(1) lookup is exploited on hot paths. 3) If neither p.1 nor p.2 holds, replace HashSet with ArrayList and change the corresponding foreach-loops to counted for-loops.
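As a rough illustration of the proposed replacement (all names here are hypothetical, not the actual GridDhtPartitionTopologyImpl code): the rare add/remove paths keep the uniqueness check explicitly, while the hot read path uses a counted loop that allocates no iterator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

/** Hypothetical sketch: partition-to-nodes mapping backed by ArrayList instead of HashSet. */
public class PartitionNodes {
    private final List<UUID> nodes = new ArrayList<>();

    /** Preserve the Set "unique" semantics explicitly; O(n) is fine if adds are infrequent. */
    public void add(UUID nodeId) {
        if (!nodes.contains(nodeId))
            nodes.add(nodeId);
    }

    /** Rare path: a node left the topology. */
    public void remove(UUID nodeId) {
        nodes.remove(nodeId);
    }

    /** Hot path: counted for-loop avoids allocating an Iterator per call. */
    public int countLocal(UUID localId) {
        int cnt = 0;

        for (int i = 0; i < nodes.size(); i++)
            if (nodes.get(i).equals(localId))
                cnt++;

        return cnt;
    }
}
```

The trade-off is exactly the one the ticket asks to verify: uniqueness and removal become O(n), which is acceptable only if those paths are genuinely rare.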
[GitHub] ignite pull request: IGNITE-2562 fixed bs-affix behavior
GitHub user Dmitriyff opened a pull request: https://github.com/apache/ignite/pull/461 IGNITE-2562 fixed bs-affix behavior You can merge this pull request into a Git repository by running: $ git pull https://github.com/Dmitriyff/ignite ignite-2562 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/461.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #461 commit ebcc79c304a5526f5ca6a6d61b7796d8be05428c Author: Dmitriyff Date: 2016-02-08T09:05:48Z IGNITE-2562 fixed bs-affix behavior commit fe9e385384dc38d7dda001ea3a07ef127ab67d75 Author: Dmitriyff Date: 2016-02-08T09:08:43Z Merge branch 'ignite-843-rc2' into ignite-2562
[jira] [Created] (IGNITE-2579) Investigate HashMap.Node[] allocations from GridCacheMvccManager$3
Vladimir Ozerov created IGNITE-2579: --- Summary: Investigate HashMap.Node[] allocations from GridCacheMvccManager$3 Key: IGNITE-2579 URL: https://issues.apache.org/jira/browse/IGNITE-2579 Project: Ignite Issue Type: Task Components: cache Affects Versions: 1.5.0.final Reporter: Vladimir Ozerov Fix For: 1.6 *Problem* See the GridCacheMvccManager.addFuture() method. We create a weird HashSet there with an internal table size == 5. Can we have something more efficient here? *Proposed solution* Need to run single get-put benchmarks and check the usual size of this collection. If it is often equal to 1, then instead of allocating the whole collection, we'd better have a singleton first and expand to a collection only if there are more elements. Please pay attention that the collection is usually used as a monitor in some synchronized blocks.
[jira] [Created] (IGNITE-2592) Aggregation query with subquery returns incorrect result
Valentin Kulichenko created IGNITE-2592: --- Summary: Aggregation query with subquery returns incorrect result Key: IGNITE-2592 URL: https://issues.apache.org/jira/browse/IGNITE-2592 Project: Ignite Issue Type: Bug Components: cache Reporter: Valentin Kulichenko Priority: Critical Fix For: 1.6 Issue is discussed here: http://apache-ignite-users.70518.x6.nabble.com/SQL-query-result-variation-td2889.html Here is the code that reproduces it: https://gist.github.com/anonymous/8e2af218598e46577b2a
[jira] [Created] (IGNITE-2596) The downloaded project zip-file has no compression
Pavel Konstantinov created IGNITE-2596: -- Summary: The downloaded project zip-file has no compression Key: IGNITE-2596 URL: https://issues.apache.org/jira/browse/IGNITE-2596 Project: Ignite Issue Type: Sub-task Reporter: Pavel Konstantinov Assignee: Andrey Novikov I've noticed that our downloaded zip-file has no compression
[jira] [Created] (IGNITE-2594) Cached web session requires setAttribute() to be called on each update
Valentin Kulichenko created IGNITE-2594: --- Summary: Cached web session requires setAttribute() to be called on each update Key: IGNITE-2594 URL: https://issues.apache.org/jira/browse/IGNITE-2594 Project: Ignite Issue Type: Bug Components: general Reporter: Valentin Kulichenko Assignee: Valentin Kulichenko Fix For: 1.6 Issue is described here: http://stackoverflow.com/questions/35268184/updating-apache-ignite-websession-attributes
[jira] [Created] (IGNITE-2591) web agent improvement
Pavel Konstantinov created IGNITE-2591: -- Summary: web agent improvement Key: IGNITE-2591 URL: https://issues.apache.org/jira/browse/IGNITE-2591 Project: Ignite Issue Type: Sub-task Reporter: Pavel Konstantinov 1) The agent must always read 'rel-date' from the default.properties file, even if the user specifies their own properties file on the command line 2) Add the comment "Please do not delete this file" to the default.properties file
[jira] [Created] (IGNITE-2593) In JCloude variant of discovery Zones do not save.
Vasiliy Sisko created IGNITE-2593: - Summary: In JCloude variant of discovery Zones do not save. Key: IGNITE-2593 URL: https://issues.apache.org/jira/browse/IGNITE-2593 Project: Ignite Issue Type: Bug Components: wizards Affects Versions: 1.6 Reporter: Vasiliy Sisko Assignee: Vasiliy Sisko
[jira] [Created] (IGNITE-2595) Allow to save cache with store settings and no one model linked to cache
Pavel Konstantinov created IGNITE-2595: -- Summary: Allow to save cache with store settings and no one model linked to cache Key: IGNITE-2595 URL: https://issues.apache.org/jira/browse/IGNITE-2595 Project: Ignite Issue Type: Sub-task Reporter: Pavel Konstantinov Currently we do not allow the user to save a cache if it has store settings but no model assigned to it. This case is valid, so we need to allow saving such a cache.
Re: Full API coverage enhancement
Sergey, I think we should start more caches, like 1000 at one time. But we have to have enough memory on our TC agents. As I know, an empty cache requires about 50 MB (without indexing), am I right? You are right, I keep in mind that *backups* and *REPLICATED* mode make no sense together, but we still have to test it in one-node and multi-node cases. Any other *no sense* combinations? I forgot about custom BinaryConfiguration at IgniteConfiguration for BinaryMarshaller. So, at least 6 IgniteConfigurations. -- Artem -- On Mon, Feb 8, 2016 at 5:17 PM, Sergey Kozlov wrote: > Hi Artem > > It's a good idea to create 20-30 cache configurations at once and then to > iterate tests over those caches in parallel (but make sure that cache names > are unique). > Another point is that some combinations make no sense, like *backups *and > *REPLICATED > *cache > > > On Mon, Feb 8, 2016 at 5:07 PM, Artem Shutak > wrote: > > > Hi all, > > > > I have an update. > > > > I've started from *CacheConfiguration* permutations. I wrote out a list > > with all CacheConfiguration setters and filtered it with Alexey G. > > > > Finally we have: > > > >1. CacheMode - 3 variants > >2. CacheAtomicityMode - 2 variants > >3. CacheMemoryMode - 3 variants > >4. setLoadPreviousValue - 2 variants > >5. setReadFromBackup - 2 variants > >6. setStoreKeepBinary - 2 variants > >7. setRebalanceMode - SYNC and ASYNC (2 variants) > >8. setSwapEnabled - 2 variants > >9. setCopyOnRead - 2 variants > >10. NearConfiguration disabled / default NearConfiguration / custom > >NearConfiguration - 3 variants > >11. With and without a complex parameter. The complex parameter > defines > >not-default Eviction policy and filter, cache store configuration > >(storeFactory and storeSessionListenerFactory), rebalancing > > configuration, > >affinity function, offHeapMaxMemory, interceptor, topology validator > and > >CacheEntryListener. 
> > > > I've run 123 Cache Full Api test cases for all permutations of parameters > > 1-9 and got 256896 test cases (1152 configuration variants * 123 test > > cases). All these tests take 4 hours 40 minutes. Not all tests pass, so > MAY > > BE when all tests will pass it will take less time (3,5 hours for > example). > > > > As we can see the tests take a lot of time. > > > > The following permutation should be supported too: > > > >1. Nodes count and Bakups count - 1 node and 0 backup, 3 nodes and 1 > >backups, 4 nodes and 2 backups - 3 variants > >2. Client and Server nodes - 2 variants > >3. Indexing enabled and disabled for cache - 2 variants > >4. IgniteConfiguration permutations - how many variants? I see at > least > >4 (2 Marshallers, P2P). > > > > Plus we need add new test cases to test different key and value types and > > etc. > > > > So, we need multiply more then 3,5-4,5 hours on ~250. If we will split > all > > tests on 250 suites and run on all 30 TC agents it will take about 30-40 > > hours. Ok, we can do it during weekends. > > > > I think it will take too much time. > > > > As an option we can start a cache for each configuration and run tests > > concurrently. But we need to implement this opportunity in our test > > framework. > > > > Any other thoughts how we can decrease time for tests? > > > > Thanks, > > -- Artem -- > > > > On Thu, Feb 4, 2016 at 8:43 AM, Semyon Boikov > > wrote: > > > > > Artem, > > > > > > One more thing for new tests: I think test should start both server and > > > client nodes and use Ignite API from all nodes. > > > > > > On Wed, Feb 3, 2016 at 6:40 PM, Artem Shutak > > wrote: > > > > > > > Dmitriy, > > > > > > > > Actually, I don't have a list with all the permutations. > > > > > > > > At first, we need to split in our discussion test cases and Ignite > > > > configuration which should be covered. > > > > > > > > For example, new Full Api test cases for cache are based on old Full > > Api > > > > test cases. 
So, it need to think what the test cases was not covered > > > > before. > > > > > > > > About Ignite configurations, I'm going to add permutation for each > > > > IgniteConfiguration and CacheConfiguration property. > > > > > > > > By the way, the jira contains the following list of permutation (feel > > > free > > > > to add something): > > > > > > > > The following tests should be added (for functional blocks): > > > > > > > >1. Interceptor > > > >2. Queries: continuous, scan, SQL, fields and text queries. > > > >3. cache events > > > >4. We should also test with Serializable, Externalizable, and > plain > > > >Pojos for keys and values. > > > >5. The Pojo in the above test should contain an enum value > > > >6. We should also test Enums as keys and Enums as values > > > >7. All operations should have single-key and multi-key operations > > > > > > > > New tests should
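The 1152 figure quoted in this thread is simply the product of the option counts for parameters 1-9 (3 x 2 x 3 x 2^6). A small sketch (all names hypothetical) that enumerates the variants as mixed-radix index tuples, which is one way to drive such a permutation suite:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: enumerate CacheConfiguration variants as index tuples over the option counts. */
public class ConfigPermutations {
    // Option counts for parameters 1-9 from the list in the thread:
    // CacheMode=3, CacheAtomicityMode=2, CacheMemoryMode=3, then six two-valued flags.
    static final int[] OPTIONS = {3, 2, 3, 2, 2, 2, 2, 2, 2};

    public static List<int[]> generate() {
        int total = 1;

        for (int n : OPTIONS)
            total *= n; // 3 * 2 * 3 * 2^6 = 1152

        List<int[]> variants = new ArrayList<>(total);

        for (int v = 0; v < total; v++) {
            int[] tuple = new int[OPTIONS.length];
            int rem = v;

            // Mixed-radix decoding: each digit selects one option for one parameter.
            for (int p = 0; p < OPTIONS.length; p++) {
                tuple[p] = rem % OPTIONS[p];
                rem /= OPTIONS[p];
            }

            variants.add(tuple);
        }

        return variants;
    }
}
```

Each tuple can then be mapped onto a concrete CacheConfiguration, and slicing the list gives natural suite boundaries for parallel agents.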
Re: Full API coverage enhancement
1000 caches x 50MB = 50GB heap. Do we really have >50GB RAM on each agent? On Mon, Feb 8, 2016 at 5:45 PM, Artem Shutak wrote: > Sergey, > > I think we should start more caches, like 1000 at one time. But we have to > have enough memory on our TC agents. As I know, an empty cache requires > about 50 MB (without indexing), am I right? > > You are right, I keep in mind that *backups* and *REPLICATED* mode make no > sense together, but we still have to test it in one-node and multi-node > cases. > > Any other *no sense* combinations? > > I forgot about custom BinaryConfiguration at IgniteConfiguration for > BinaryMarshaller. So, at least 6 IgniteConfigurations. > > -- Artem --
[jira] [Created] (IGNITE-2587) Unexpected exception during cache update
Sergey Kozlov created IGNITE-2587: - Summary: Unexpected exception during cache update Key: IGNITE-2587 URL: https://issues.apache.org/jira/browse/IGNITE-2587 Project: Ignite Issue Type: Bug Affects Versions: 1.5.0.final Environment: Windows 10, Oracle JDK 1.7.0_80-b15 Reporter: Sergey Kozlov Priority: Critical Attachments: client_output.txt, ignite-33cb560a.0.log, ignite-5e9294d1.0.log, server_output.txt 1. Start server node 2. Start client node (see the code in IGNITE-2586) 3. Server node failed:
{noformat}
[16:05:03,866][SEVERE][sys-#17%null%][GridDhtAtomicCache] Unexpected exception during cache update
java.lang.UnsupportedOperationException
at org.apache.ignite.internal.binary.BinaryObjectOffheapImpl.finishUnmarshal(BinaryObjectOffheapImpl.java:363)
at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:243)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2070)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1407)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1282)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:2692)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$600(GridDhtAtomicCache.java:128)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:257)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:255)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:582)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:280)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:204)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:80)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:163)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:821)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:784)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}
Re: Full API coverage enhancement
How about this approach:

Generate all permutation descriptors and save them to a database.
On each agent, start a process that grabs a batch of descriptors from the DB (let's say 100), marks them as "in progress", and executes them.
Other agents will grab the remaining "not in progress" descriptors and do the same.
That way we could run all permutations in parallel, even if we have 1M of them.

What do you think?

On Mon, Feb 8, 2016 at 9:52 PM, Sergey Kozlov wrote: > 1000 caches x 50MB = 50GB heap. Do we really have >50GB RAM on each agents? > > On Mon, Feb 8, 2016 at 5:45 PM, Artem Shutak wrote: > > > Sergey, > > > > I think we should start more caches, like 1000 in one time. But we have > to > > have enough memory on our TC agents. As I know, empty cache is require > > about 50 mb (without indexing), am I right? > > > > You are right, I keep in mind that *backups* and *REPLICATED* mode make > no > > sense together, but we still have to test it in one node and multi node > > cases. > > > > Any other *no sense* combinations? > > > > I forgot about custom BinaryConfiguration at IgniteConfiguration for > > BinaryMarshaller. So, at least 6 IgniteConfigurations. > > > > -- Artem -- > > > > On Mon, Feb 8, 2016 at 5:17 PM, Sergey Kozlov > > wrote: > > > > > Hi Artem > > > > > > It's good idea to create 20-30 cache configurations at once and then to > > > iterate tests over those caches in parallel (but make sure that cache > > names > > > are unique). > > > Another point that some combinations make no sense like *backups *and > > > *REPLICATED > > > *cache > > > > > > > > > On Mon, Feb 8, 2016 at 5:07 PM, Artem Shutak > > wrote: > > > > Hi all, > > > > > > > > I have an update. > > > > > > > > I've started from *CacheConfiguration* permutations. I wrote out a list > > > > with all CacheConfiguration setters and filtered it with Alexey G. > > > > > > > > Finally we have: > > > > > > > >1. CacheMode - 3 variants > > > >2. CacheAtomicityMode - 2 variants > > > >3. 
CacheMemoryMode - 3 variants > > > >4. setLoadPreviousValue - 2 variants > > > >5. setReadFromBackup - 2 variants > > > >6. setStoreKeepBinary - 2 variants > > > >7. setRebalanceMode - SYNC and ASYNC (2 variants) > > > >8. setSwapEnabled - 2 variants > > > >9. setCopyOnRead - 2 variants > > > >10. NearConfiguration disabled / default NearConfiguration / > custom > > > >NearConfiguration - 3 variants > > > >11. With and without a complex parameter. The complex parameter > > > defines > > > >not-default Eviction policy and filter, cache store configuration > > > >(storeFactory and storeSessionListenerFactory), rebalancing > > > > configuration, > > > >affinity function, offHeapMaxMemory, interceptor, topology > validator > > > and > > > >CacheEntryListener. > > > > > > > > I've run 123 Cache Full Api test cases for all permutations of > > parameters > > > > 1-9 and got 256896 test cases (1152 configuration variants * 123 test > > > > cases). All these tests take 4 hours 40 minutes. Not all tests pass, > so > > > MAY > > > > BE when all tests will pass it will take less time (3,5 hours for > > > example). > > > > > > > > As we can see the tests take a lot of time. > > > > > > > > The following permutation should be supported too: > > > > > > > >1. Nodes count and Bakups count - 1 node and 0 backup, 3 nodes > and 1 > > > >backups, 4 nodes and 2 backups - 3 variants > > > >2. Client and Server nodes - 2 variants > > > >3. Indexing enabled and disabled for cache - 2 variants > > > >4. IgniteConfiguration permutations - how many variants? I see at > > > least > > > >4 (2 Marshallers, P2P). > > > > > > > > Plus we need add new test cases to test different key and value types > > and > > > > etc. > > > > > > > > So, we need multiply more then 3,5-4,5 hours on ~250. If we will > split > > > all > > > > tests on 250 suites and run on all 30 TC agents it will take about > > 30-40 > > > > hours. Ok, we can do it during weekends. 
> > > > > > > > I think it will take too much time. > > > > > > > > As an option we can start a cache for each configuration and run > tests > > > > concurrently. But we need to implement this opportunity in our test > > > > framework. > > > > > > > > Any other thoughts how we can decrease time for tests? > > > > > > > > Thanks, > > > > -- Artem -- > > > > > > > > On Thu, Feb 4, 2016 at 8:43 AM, Semyon Boikov > > > > wrote: > > > > > > > > > Artem, > > > > > > > > > > One more thing for new tests: I think test should start both server > > and > > > > > client nodes and use Ignite API from all nodes. > > > > > > > > > > On Wed, Feb 3, 2016 at 6:40 PM, Artem Shutak > > > > > wrote: > > > > > > > > > > > Dmitriy, > > > > > > > > > > > > Actually, I don't have a list with all the permutations. > > > > > > > > > > > > At first, we
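The descriptor-claiming scheme proposed above can be sketched as a small work queue. This is a minimal illustration with hypothetical names (DescriptorQueue, claimBatch), not Ignite code; a real setup would back the store with a database and claim rows via an atomic status update so that no two agents ever execute the same descriptor.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the proposed scheme: a shared store holds configuration
 * permutation descriptors; each agent atomically claims a batch, marks
 * it IN_PROGRESS and executes it, while other agents claim the rest.
 */
public class DescriptorQueue {
    public enum Status { PENDING, IN_PROGRESS, DONE }

    private final Map<String, Status> descriptors = new LinkedHashMap<>();

    public synchronized void add(String descriptorId) {
        descriptors.put(descriptorId, Status.PENDING);
    }

    /** Atomically claim up to {@code batchSize} PENDING descriptors. */
    public synchronized List<String> claimBatch(int batchSize) {
        List<String> claimed = new ArrayList<>();
        for (Map.Entry<String, Status> e : descriptors.entrySet()) {
            if (claimed.size() == batchSize)
                break;
            if (e.getValue() == Status.PENDING) {
                e.setValue(Status.IN_PROGRESS); // no other agent can claim it now
                claimed.add(e.getKey());
            }
        }
        return claimed;
    }

    public synchronized void markDone(String descriptorId) {
        descriptors.put(descriptorId, Status.DONE);
    }

    public synchronized long pendingCount() {
        return descriptors.values().stream().filter(s -> s == Status.PENDING).count();
    }
}
```

With a database behind it, the synchronized claim would become something like a single UPDATE that flips status from PENDING to IN_PROGRESS for a limited set of rows.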
Re: Full API coverage enhancement
Hi Artem It's good idea to create 20-30 cache configurations at once and then to iterate tests over those caches in parallel (but make sure that cache names are unique). Another point that some combinations make no sense like *backups *and *REPLICATED *cache On Mon, Feb 8, 2016 at 5:07 PM, Artem Shutakwrote: > Hi all, > > I have an update. > > I've started from *CacheConfiguration* permutations. I wrote out a list > with all CacheConfiguration setters and filtered it with Alexey G. > > Finally we have: > >1. CacheMode - 3 variants >2. CacheAtomicityMode - 2 variants >3. CacheMemoryMode - 3 variants >4. setLoadPreviousValue - 2 variants >5. setReadFromBackup - 2 variants >6. setStoreKeepBinary - 2 variants >7. setRebalanceMode - SYNC and ASYNC (2 variants) >8. setSwapEnabled - 2 variants >9. setCopyOnRead - 2 variants >10. NearConfiguration disabled / default NearConfiguration / custom >NearConfiguration - 3 variants >11. With and without a complex parameter. The complex parameter defines >not-default Eviction policy and filter, cache store configuration >(storeFactory and storeSessionListenerFactory), rebalancing > configuration, >affinity function, offHeapMaxMemory, interceptor, topology validator and >CacheEntryListener. > > I've run 123 Cache Full Api test cases for all permutations of parameters > 1-9 and got 256896 test cases (1152 configuration variants * 123 test > cases). All these tests take 4 hours 40 minutes. Not all tests pass, so MAY > BE when all tests will pass it will take less time (3,5 hours for example). > > As we can see the tests take a lot of time. > > The following permutation should be supported too: > >1. Nodes count and Bakups count - 1 node and 0 backup, 3 nodes and 1 >backups, 4 nodes and 2 backups - 3 variants >2. Client and Server nodes - 2 variants >3. Indexing enabled and disabled for cache - 2 variants >4. IgniteConfiguration permutations - how many variants? I see at least >4 (2 Marshallers, P2P). 
> > Plus we need add new test cases to test different key and value types and > etc. > > So, we need multiply more then 3,5-4,5 hours on ~250. If we will split all > tests on 250 suites and run on all 30 TC agents it will take about 30-40 > hours. Ok, we can do it during weekends. > > I think it will take too much time. > > As an option we can start a cache for each configuration and run tests > concurrently. But we need to implement this opportunity in our test > framework. > > Any other thoughts how we can decrease time for tests? > > Thanks, > -- Artem -- > > On Thu, Feb 4, 2016 at 8:43 AM, Semyon Boikov > wrote: > > > Artem, > > > > One more thing for new tests: I think test should start both server and > > client nodes and use Ignite API from all nodes. > > > > On Wed, Feb 3, 2016 at 6:40 PM, Artem Shutak > wrote: > > > > > Dmitriy, > > > > > > Actually, I don't have a list with all the permutations. > > > > > > At first, we need to split in our discussion test cases and Ignite > > > configuration which should be covered. > > > > > > For example, new Full Api test cases for cache are based on old Full > Api > > > test cases. So, it need to think what the test cases was not covered > > > before. > > > > > > About Ignite configurations, I'm going to add permutation for each > > > IgniteConfiguration and CacheConfiguration property. > > > > > > By the way, the jira contains the following list of permutation (feel > > free > > > to add something): > > > > > > The following tests should be added (for functional blocks): > > > > > >1. Interceptor > > >2. Queries: continuous, scan, SQL, fields and text queries. > > >3. cache events > > >4. We should also test with Serializable, Externalizable, and plain > > >Pojos for keys and values. > > >5. The Pojo in the above test should contain an enum value > > >6. We should also test Enums as keys and Enums as values > > >7. 
All operations should have single-key and multi-key operations > > > > > > New tests should cover all combinations for following properties: > > > > > >1. cache modes > > >2. operation from client nodes and server nodes > > >3. store enabled/disabled > > >4. evicts sycn/non-sync > > >5. eviction policies > > >6. near on/off > > >7. marshallers (+ Binary marshaller with different mappers) > > >8. keys and values - externalizable, serializable, binaryzable, > "none > > of > > >previous" > > >9. classes available on servers: true/false > > >10. Peer loading on/off > > >11. Affinity functions > > >12. expiry policies > > > > > > > > > Thanks, > > > -- Artem -- > > > > > > On Wed, Feb 3, 2016 at 6:14 PM, Dmitriy Setrakyan < > dsetrak...@apache.org > > > > > > wrote: > > > > > > > Artem, I think it is best to specify all the permutations here, so > > others > > > > can make
[GitHub] ignite pull request: IGNITE-2333
GitHub user ilantukh opened a pull request:

    https://github.com/apache/ignite/pull/462

    IGNITE-2333

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ilantukh/ignite ignite-2333

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/ignite/pull/462.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #462

commit 3a15b9514209d2f334985393e38bcadea733205b
Author: Ilya Lantukh
Date: 2016-02-08T12:52:42Z

    ignite-2333 : StripedCompositeReadWriteLock.

commit b96539de3a1db9fb77bbdab968ef30d92ef04594
Author: Ilya Lantukh
Date: 2016-02-08T13:01:56Z

    ignite-2333 : Minors.

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
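For context on the idea behind a striped composite read-write lock (the class name added by this PR): the read side is spread over several internal locks so that concurrent readers do not all contend on one lock word, while a writer must acquire every stripe, staying mutually exclusive with all readers. The sketch below is an illustrative simplification under that assumption, not the actual implementation from the PR.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustrative striped read-write lock: readers hash to one stripe,
 * writers acquire all stripes (always in the same order, to avoid
 * deadlock between concurrent writers).
 */
public class StripedRwLock {
    private final ReentrantReadWriteLock[] stripes;

    public StripedRwLock(int stripeCnt) {
        stripes = new ReentrantReadWriteLock[stripeCnt];
        for (int i = 0; i < stripeCnt; i++)
            stripes[i] = new ReentrantReadWriteLock();
    }

    /** Read lock for the current thread's stripe; spreads reader contention. */
    public Lock readLock() {
        int idx = (int)(Thread.currentThread().getId() % stripes.length);
        return stripes[idx].readLock();
    }

    /** Acquire the write side of every stripe, excluding all readers. */
    public void lockWrite() {
        for (ReentrantReadWriteLock s : stripes)
            s.writeLock().lock();
    }

    /** Release stripes in reverse acquisition order. */
    public void unlockWrite() {
        for (int i = stripes.length - 1; i >= 0; i--)
            stripes[i].writeLock().unlock();
    }
}
```

The trade-off is that writes become more expensive (N lock acquisitions), which suits read-dominated structures such as cache internals.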
Re: Full API coverage enhancement
Hi all,

I have an update.

I've started from *CacheConfiguration* permutations. I wrote out a list with all CacheConfiguration setters and filtered it with Alexey G.

Finally we have:

   1. CacheMode - 3 variants
   2. CacheAtomicityMode - 2 variants
   3. CacheMemoryMode - 3 variants
   4. setLoadPreviousValue - 2 variants
   5. setReadFromBackup - 2 variants
   6. setStoreKeepBinary - 2 variants
   7. setRebalanceMode - SYNC and ASYNC (2 variants)
   8. setSwapEnabled - 2 variants
   9. setCopyOnRead - 2 variants
   10. NearConfiguration disabled / default NearConfiguration / custom NearConfiguration - 3 variants
   11. With and without a complex parameter. The complex parameter defines a non-default eviction policy and filter, cache store configuration (storeFactory and storeSessionListenerFactory), rebalancing configuration, affinity function, offHeapMaxMemory, interceptor, topology validator and CacheEntryListener.

I've run 123 Cache Full API test cases for all permutations of parameters 1-9 and got 256896 test cases (1152 configuration variants * 123 test cases). All these tests take 4 hours 40 minutes. Not all tests pass yet, so maybe when all tests pass it will take less time (3.5 hours, for example).

As we can see, the tests take a lot of time.

The following permutations should be supported too:

   1. Node count and backup count - 1 node and 0 backups, 3 nodes and 1 backup, 4 nodes and 2 backups - 3 variants
   2. Client and server nodes - 2 variants
   3. Indexing enabled and disabled for the cache - 2 variants
   4. IgniteConfiguration permutations - how many variants? I see at least 4 (2 marshallers, P2P).

Plus we need to add new test cases to cover different key and value types, etc.

So, we need to multiply the 3.5-4.5 hours by ~250. If we split all tests into 250 suites and run them on all 30 TC agents, it will take about 30-40 hours. Ok, we could do that during weekends. 
But we need to implement this opportunity in our test framework. Any other thoughts how we can decrease time for tests? Thanks, -- Artem -- On Thu, Feb 4, 2016 at 8:43 AM, Semyon Boikovwrote: > Artem, > > One more thing for new tests: I think test should start both server and > client nodes and use Ignite API from all nodes. > > On Wed, Feb 3, 2016 at 6:40 PM, Artem Shutak wrote: > > > Dmitriy, > > > > Actually, I don't have a list with all the permutations. > > > > At first, we need to split in our discussion test cases and Ignite > > configuration which should be covered. > > > > For example, new Full Api test cases for cache are based on old Full Api > > test cases. So, it need to think what the test cases was not covered > > before. > > > > About Ignite configurations, I'm going to add permutation for each > > IgniteConfiguration and CacheConfiguration property. > > > > By the way, the jira contains the following list of permutation (feel > free > > to add something): > > > > The following tests should be added (for functional blocks): > > > >1. Interceptor > >2. Queries: continuous, scan, SQL, fields and text queries. > >3. cache events > >4. We should also test with Serializable, Externalizable, and plain > >Pojos for keys and values. > >5. The Pojo in the above test should contain an enum value > >6. We should also test Enums as keys and Enums as values > >7. All operations should have single-key and multi-key operations > > > > New tests should cover all combinations for following properties: > > > >1. cache modes > >2. operation from client nodes and server nodes > >3. store enabled/disabled > >4. evicts sycn/non-sync > >5. eviction policies > >6. near on/off > >7. marshallers (+ Binary marshaller with different mappers) > >8. keys and values - externalizable, serializable, binaryzable, "none > of > >previous" > >9. classes available on servers: true/false > >10. Peer loading on/off > >11. Affinity functions > >12. 
expiry policies > > > > > > Thanks, > > -- Artem -- > > > > On Wed, Feb 3, 2016 at 6:14 PM, Dmitriy Setrakyan > > > wrote: > > > > > Artem, I think it is best to specify all the permutations here, so > others > > > can make additional suggestions. Otherwise, we cannot get a full > picture. > > > > > > Thanks, > > > D. > > > > > > On Wed, Feb 3, 2016 at 2:02 AM, Artem Shutak > > wrote: > > > > > > > Igniters, > > > > > > > > I thought a little bit more and think we need to add a support for > the > > > > following permutations too (I've added these to the jira > description): > > > > - We should also test with Serializable, Externalizable, and plain > > Pojos > > > > for keys and values. > > > > - The Pojo in the above test should contain an enum value > > > > - We should also test Enums
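The variant counts discussed in this thread multiply out as a cartesian product. As a quick, runnable check of the arithmetic: parameters 1-9 contribute 3, 2, 3, 2, 2, 2, 2, 2 and 2 variants, whose product is the 1152 configuration variants mentioned above, and the follow-up dimensions (nodes/backups, client vs. server, indexing, IgniteConfiguration) multiply the suite further.

```java
/**
 * Cartesian-product arithmetic behind the permutation counts in the thread.
 */
public class PermutationCount {
    /** Product of per-parameter variant counts = number of configurations. */
    public static long product(int... variantCounts) {
        long total = 1;
        for (int c : variantCounts)
            total *= c;
        return total;
    }

    public static void main(String[] args) {
        // CacheConfiguration parameters 1-9 from the list above.
        long cacheCfgVariants = product(3, 2, 3, 2, 2, 2, 2, 2, 2);
        System.out.println(cacheCfgVariants); // 1152

        // Extra dimensions: nodes/backups (3), client vs. server (2),
        // indexing on/off (2), IgniteConfiguration (at least 4).
        System.out.println(product(3, 2, 2, 4)); // 48
    }
}
```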
Re: Full API coverage enhancement
Alexey,

First of all, it's strange to have disk-dependent tests for an in-memory data fabric ;) I'm sure we can solve this issue without a bottleneck. I see many technical issues with that approach. Another bad thing about the approach is that we don't know how to find a test result by configuration.

I think that if we split the tests, it should be done manually: by marshaller, indexing enabled/disabled, near configuration enabled/disabled, etc. It will be really simple once I finish the test framework.

In my view, it's not really hard to run tests concurrently. My question is about the approach. Maybe we have other options, or we cannot do it at all for some reason.

-- Artem --

On Mon, Feb 8, 2016 at 6:38 PM, Alexey Kuznetsov wrote: > How about such approach: > > Generate all permutation descriptors and save them to database. > On each agent start some process that will grab from DB some descriptors > (lets say 100) mark them as "in progress" and execute them. > Other agents will grab remaining "not in progress" descriptors and do the > same. > So we could run in parallel all permutations even if we have 1M of them. > > What do you think? > > > On Mon, Feb 8, 2016 at 9:52 PM, Sergey Kozlov > wrote: > > > 1000 caches x 50MB = 50GB heap. Do we really have >50GB RAM on each > agents? > > > > On Mon, Feb 8, 2016 at 5:45 PM, Artem Shutak > wrote: > > > > > Sergey, > > > > > > I think we should start more caches, like 1000 in one time. But we have > > to > > > have enough memory on our TC agents. As I know, empty cache is require > > > about 50 mb (without indexing), am I right? > > > > > > You are right, I keep in mind that *backups* and *REPLICATED* mode make > > no > > > sense together, but we still have to test it in one node and multi node > > > cases. > > > > > > Any other *no sense* combinations? > > > > > > I forgot about custom BinaryConfiguration at IgniteConfiguration for > > > BinaryMarshaller. So, at least 6 IgniteConfigurations. 
> > > > > > -- Artem -- > > > > > > On Mon, Feb 8, 2016 at 5:17 PM, Sergey Kozlov > > > wrote: > > > > > > > Hi Artem > > > > > > > > It's good idea to create 20-30 cache configurations at once and then > to > > > > iterate tests over those caches in parallel (but make sure that cache > > > names > > > > are unique). > > > > Another point that some combinations make no sense like *backups *and > > > > *REPLICATED > > > > *cache > > > > > > > > > > > > On Mon, Feb 8, 2016 at 5:07 PM, Artem Shutak > > > wrote: > > > > > > > > > Hi all, > > > > > > > > > > I have an update. > > > > > > > > > > I've started from *CacheConfiguration* permutations. I wrote out a > > list > > > > > with all CacheConfiguration setters and filtered it with Alexey G. > > > > > > > > > > Finally we have: > > > > > > > > > >1. CacheMode - 3 variants > > > > >2. CacheAtomicityMode - 2 variants > > > > >3. CacheMemoryMode - 3 variants > > > > >4. setLoadPreviousValue - 2 variants > > > > >5. setReadFromBackup - 2 variants > > > > >6. setStoreKeepBinary - 2 variants > > > > >7. setRebalanceMode - SYNC and ASYNC (2 variants) > > > > >8. setSwapEnabled - 2 variants > > > > >9. setCopyOnRead - 2 variants > > > > >10. NearConfiguration disabled / default NearConfiguration / > > custom > > > > >NearConfiguration - 3 variants > > > > >11. With and without a complex parameter. The complex parameter > > > > defines > > > > >not-default Eviction policy and filter, cache store > configuration > > > > >(storeFactory and storeSessionListenerFactory), rebalancing > > > > > configuration, > > > > >affinity function, offHeapMaxMemory, interceptor, topology > > validator > > > > and > > > > >CacheEntryListener. > > > > > > > > > > I've run 123 Cache Full Api test cases for all permutations of > > > parameters > > > > > 1-9 and got 256896 test cases (1152 configuration variants * 123 > test > > > > > cases). All these tests take 4 hours 40 minutes. 
Not all tests > pass, > > so > > > > MAY > > > > > BE when all tests will pass it will take less time (3,5 hours for > > > > example). > > > > > > > > > > As we can see the tests take a lot of time. > > > > > > > > > > The following permutation should be supported too: > > > > > > > > > >1. Nodes count and Bakups count - 1 node and 0 backup, 3 nodes > > and 1 > > > > >backups, 4 nodes and 2 backups - 3 variants > > > > >2. Client and Server nodes - 2 variants > > > > >3. Indexing enabled and disabled for cache - 2 variants > > > > >4. IgniteConfiguration permutations - how many variants? I see > at > > > > least > > > > >4 (2 Marshallers, P2P). > > > > > > > > > > Plus we need add new test cases to test different key and value > types > > > and > > > > > etc. > > > > > > > > > > So, we need multiply more then 3,5-4,5 hours on ~250. If we will > > split > > > > all > > > > > tests
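The "no sense" combinations raised in this thread (backups together with REPLICATED mode) can be filtered out while enumerating the permutations, which also shrinks the generated suite. Below is a minimal sketch with illustrative names, not the actual Ignite test-framework code, showing only three of the dimensions discussed.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: enumerate cache configuration permutations as a cartesian
 * product, skipping combinations that make no sense.
 */
public class CfgPermutations {
    public static class Cfg {
        public final String cacheMode;  // PARTITIONED, REPLICATED, LOCAL
        public final int backups;       // 0, 1, 2
        public final String atomicity;  // ATOMIC, TRANSACTIONAL

        Cfg(String cacheMode, int backups, String atomicity) {
            this.cacheMode = cacheMode;
            this.backups = backups;
            this.atomicity = atomicity;
        }
    }

    public static List<Cfg> generate() {
        List<Cfg> res = new ArrayList<>();

        for (String mode : new String[] {"PARTITIONED", "REPLICATED", "LOCAL"})
            for (int backups : new int[] {0, 1, 2})
                for (String atomicity : new String[] {"ATOMIC", "TRANSACTIONAL"}) {
                    // Backups are meaningless outside PARTITIONED mode,
                    // so those permutations are skipped entirely.
                    if (!"PARTITIONED".equals(mode) && backups > 0)
                        continue;

                    res.add(new Cfg(mode, backups, atomicity));
                }

        return res;
    }
}
```

Each remaining Cfg could then be serialized as one of the "descriptors" discussed earlier and distributed across TC agents.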
Re: Apache Flink integration
Thank you, Roman!

On Mon, Feb 8, 2016 at 3:20 PM, Roman Shtykh wrote:

> Hi Saikat,
> Probably [1] can give you some pointers where to start.
> [1]
> http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Flink-and-Spark-Integration-and-Acceleration-td82.html
> -Roman
>
> On Monday, February 8, 2016 2:25 AM, Saikat Maitra <saikat.mai...@gmail.com> wrote:
>
> Hi,
>
> I am looking forward to taking up this ticket
> https://issues.apache.org/jira/browse/IGNITE-813 (Apache Flink
> Integration).
>
> I wanted to understand which module will be suitable for integration. Any
> reference to similar integration or pointers to design discussion will be
> helpful.
>
> Regards
> Saikat
[jira] [Created] (IGNITE-2589) Value is not loaded from store in pessimistic transaction when near cache is enabled
Alexey Goncharuk created IGNITE-2589:

Summary: Value is not loaded from store in pessimistic transaction when near cache is enabled
Key: IGNITE-2589
URL: https://issues.apache.org/jira/browse/IGNITE-2589
Project: Ignite
Issue Type: Bug
Components: cache
Affects Versions: 1.5.0.final
Reporter: Alexey Goncharuk

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)