[jira] [Commented] (IGNITE-9228) Spark SQL Table Schema Specification

2019-09-05 Thread Manoj G T (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923107#comment-16923107
 ] 

Manoj G T commented on IGNITE-9228:
---

[~NIzhikov] [~stuartmacd] I have read the code review comments in 
[https://github.com/apache/ignite/pull/4551]. If my understanding is correct, 
this feature was implemented in the Ignite 2.6 timeframe, when Ignite did not 
allow creating a table in any schema other than the PUBLIC schema, which is why 
"OPTION_SCHEMA" is not supported in Overwrite mode. Now that Ignite supports 
creating tables in any given schema, it would be great to incorporate the 
changes needed to support "OPTION_SCHEMA" in Overwrite mode and make them 
available as part of the next Ignite release. Kindly share your thoughts on this.
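
For reference, a minimal Java sketch of the usage being requested. The option 
constants come from IgniteDataFrameSettings; the config file, table, schema and 
key field names are made-up examples, and OPTION_SCHEMA is currently honored on 
read but not in Overwrite mode, as discussed above:

{code:java}
import org.apache.ignite.spark.IgniteDataFrameSettings;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class OverwriteWithSchemaSketch {
    /** Writes a DataFrame into an Ignite SQL table that lives in a non-PUBLIC schema. */
    public static void writeToSchema(Dataset<Row> df) {
        df.write()
            .format(IgniteDataFrameSettings.FORMAT_IGNITE())
            .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "ignite-config.xml")
            .option(IgniteDataFrameSettings.OPTION_TABLE(), "person")
            .option(IgniteDataFrameSettings.OPTION_SCHEMA(), "MY_SCHEMA")
            .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id")
            .mode(SaveMode.Overwrite)
            .save();
    }
}
{code}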

> Spark SQL Table Schema Specification
> 
>
> Key: IGNITE-9228
> URL: https://issues.apache.org/jira/browse/IGNITE-9228
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Stuart Macdonald
>Priority: Major
> Fix For: 2.8
>
>
> The Ignite Spark SQL interface currently takes just a “table name” parameter,
> which it uses to supply a Spark dataset with data from the underlying Ignite
> SQL table of that name.
> To do this it loops through each cache and finds the first one with the given
> table name [1]. This causes issues if there are multiple tables registered in
> different schemas with the same table name, as only one of them can be
> accessed from Spark. We could either:
> 1. Pass an extra parameter through the Ignite Spark data source which
> optionally specifies the schema name.
> 2. Support namespacing in the existing table name parameter, i.e.
> “schemaName.tableName”
> [1] 
> https://github.com/apache/ignite/blob/ca973ad99c6112160a305df05be9458e29f88307/modules/spark/src/main/scala/org/apache/ignite/spark/impl/package.scala#L119



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (IGNITE-11686) MVCC: Create separate test for vacuum checks.

2019-09-05 Thread Diana Iakovleva (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diana Iakovleva reassigned IGNITE-11686:


Assignee: Diana Iakovleva

> MVCC: Create separate test for vacuum checks.
> -
>
> Key: IGNITE-11686
> URL: https://issues.apache.org/jira/browse/IGNITE-11686
> Project: Ignite
>  Issue Type: Test
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Diana Iakovleva
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, newbie
>
> Most tests (inherited from CacheMvccAbstractTest) run vacuum synchronously 
> in the afterTest() method and check that vacuum succeeded. This hurts 
> performance, can cause false-negative results, and
> vacuum issues can be hidden as soon as the afterTest method is overridden.
> For now we have CacheMvccVacuumTest, which only checks the vacuum workers' 
> state, but there is no check that vacuum really cleans all old versions 
> correctly; I'd expect to find such a check in this class.
> So, let's move the vacuum verification from the afterTest method into 
> CacheMvccVacuumTest as a new separate test.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (IGNITE-11825) Test GridCommandHandlerTest#testCacheIdleVerifyNodeFilter fails with "Duplicate row in index"

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev resolved IGNITE-11825.
--
Release Note: This issue is no longer seen in recent TC runs.
  Resolution: Cannot Reproduce

> Test GridCommandHandlerTest#testCacheIdleVerifyNodeFilter fails with 
> "Duplicate row in index"
> -
>
> Key: IGNITE-11825
> URL: https://issues.apache.org/jira/browse/IGNITE-11825
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> A freshly contributed test fails in around half of its runs with exceptions like:
> {code}
> [2019-04-30 
> 14:15:14,355][ERROR][data-streamer-stripe-0-#20402%gridCommandHandlerTest0%][IgniteTestResources]
>  Failed to set initial value for cache entry: DataStreamerEntry 
> [key=UserKeyCach
> eObjectImpl [part=25, val=25, hasValBytes=true], val=UserCacheObjectImpl 
> [val=25, hasValBytes=true]]
> class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on search row: SearchRow [key=KeyCacheObjectImpl [part=25, 
> val=25, hasValBytes=tru
> e], hash=25, cacheId=0]
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1817)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1619)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1602)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2160)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:433)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4282)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3430)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheEntryEx.initialValue(GridCacheEntryEx.java:772)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2280)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6845)
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:550)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalStateException: Duplicate row in index.
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:437)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:423)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5643)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5629)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:359)
> at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:285)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$11400(BPlusTree.java:92)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.tryInsert(BPlusTree.java:3622)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$7100(BPlusTree.java:3302)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.onNotFound(BPlusTree.java:3860)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5800(BPlusTree.java:3652)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1902)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1784)
> {code}
> 

[jira] [Updated] (IGNITE-11825) Test GridCommandHandlerTest#testCacheIdleVerifyNodeFilter fails with "Duplicate row in index"

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-11825:
-
Labels: MakeTeamcityGreenAgain  (was: )

> Test GridCommandHandlerTest#testCacheIdleVerifyNodeFilter fails with 
> "Duplicate row in index"
> -
>
> Key: IGNITE-11825
> URL: https://issues.apache.org/jira/browse/IGNITE-11825
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> A freshly contributed test fails in around half of its runs with exceptions like:
> {code}
> [2019-04-30 
> 14:15:14,355][ERROR][data-streamer-stripe-0-#20402%gridCommandHandlerTest0%][IgniteTestResources]
>  Failed to set initial value for cache entry: DataStreamerEntry 
> [key=UserKeyCach
> eObjectImpl [part=25, val=25, hasValBytes=true], val=UserCacheObjectImpl 
> [val=25, hasValBytes=true]]
> class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on search row: SearchRow [key=KeyCacheObjectImpl [part=25, 
> val=25, hasValBytes=tru
> e], hash=25, cacheId=0]
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1817)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1619)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1602)
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2160)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:433)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4282)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3430)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheEntryEx.initialValue(GridCacheEntryEx.java:772)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2280)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6845)
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:550)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalStateException: Duplicate row in index.
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:437)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:423)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5643)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5629)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:359)
> at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:285)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$11400(BPlusTree.java:92)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.tryInsert(BPlusTree.java:3622)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$7100(BPlusTree.java:3302)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.onNotFound(BPlusTree.java:3860)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5800(BPlusTree.java:3652)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1902)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1784)
> {code}
> which wil

[jira] [Commented] (IGNITE-9228) Spark SQL Table Schema Specification

2019-09-05 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923217#comment-16923217
 ] 

Nikolay Izhikov commented on IGNITE-9228:
-

Hello, [~gtmanoj235]

Thanks for your feedback!

Feel free to raise a ticket to support specifying the schema on table write. 
Please post the ticket number here.

Do you want to contribute the changes to implement your idea?
I will review them, for sure.

> Spark SQL Table Schema Specification
> 
>
> Key: IGNITE-9228
> URL: https://issues.apache.org/jira/browse/IGNITE-9228
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Stuart Macdonald
>Priority: Major
> Fix For: 2.8
>
>
> The Ignite Spark SQL interface currently takes just a “table name” parameter,
> which it uses to supply a Spark dataset with data from the underlying Ignite
> SQL table of that name.
> To do this it loops through each cache and finds the first one with the given
> table name [1]. This causes issues if there are multiple tables registered in
> different schemas with the same table name, as only one of them can be
> accessed from Spark. We could either:
> 1. Pass an extra parameter through the Ignite Spark data source which
> optionally specifies the schema name.
> 2. Support namespacing in the existing table name parameter, i.e.
> “schemaName.tableName”
> [1] 
> https://github.com/apache/ignite/blob/ca973ad99c6112160a305df05be9458e29f88307/modules/spark/src/main/scala/org/apache/ignite/spark/impl/package.scala#L119



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-11815) Get rid of GridTestUtils.retryAssert method.

2019-09-05 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923232#comment-16923232
 ] 

Ignite TC Bot commented on IGNITE-11815:


{panel:title=Branch: [pull/6839/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4567597&buildTypeId=IgniteTests24Java8_RunAll]

> Get rid of GridTestUtils.retryAssert method.
> 
>
> Key: IGNITE-11815
> URL: https://issues.apache.org/jira/browse/IGNITE-11815
> Project: Ignite
>  Issue Type: Test
>Reporter: Andrew Mashenkov
>Assignee: Diana Iakovleva
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For now we have the GridTestUtils.retryAssert() method, which runs a closure 'n' 
> times to check whether some invariants eventually become true.
> This method catches assertion errors (which looks like a very bad idea) and can 
> print them to the log many times, even if the assertion failure is acceptable 
> for the moment.
>  Also, it is possible to miss an assertion that is not related to the ones the 
> closure checks (e.g. an assertion error thrown from Ignite internals).
> Let's replace retryAssert with GridTestUtils.waitForCondition() to make the 
> logs clearer and to avoid possible false-positive results.
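
A minimal sketch of the suggested replacement (the cache-based invariant and the 
timeout are just examples; the point is to poll with GridTestUtils.waitForCondition() 
and assert once at the end):

{code:java}
import org.apache.ignite.IgniteCache;
import org.apache.ignite.testframework.GridTestUtils;

class WaitForConditionSketch {
    /** Waits up to 10 seconds for the cache to become empty, then asserts once. */
    static void assertCacheEventuallyEmpty(IgniteCache<?, ?> cache) throws Exception {
        boolean emptied = GridTestUtils.waitForCondition(() -> cache.size() == 0, 10_000L);

        // A single assertion after the wait: no intermediate assertion errors end up in the log.
        assert emptied : "Cache was not emptied in time, size=" + cache.size();
    }
}
{code}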



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (IGNITE-12139) RPM for 2.7.5 release built from incorrect version

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-12139:


Assignee: Dmitriy Pavlov

> RPM for 2.7.5 release built from incorrect version
> --
>
> Key: IGNITE-12139
> URL: https://issues.apache.org/jira/browse/IGNITE-12139
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.5
>Reporter: Jan Kupec
>Assignee: Dmitriy Pavlov
>Priority: Minor
>
> The {{apache-ignite-2.7.5}} RPM found in the [official RPM 
> repository|https://ignite.apache.org/download.cgi#rpm-package] has been built 
> from commit *{{c9521338}}*, which is several weeks of development away from 
> the head of the {{ignite-2.7.5}} release branch (*{{be4f2a15}}*) and 
> apparently contains incompatible changes.
> Is this a result of a human error or an error in the automated build system? 
> Can this easily be fixed?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (IGNITE-12142) Ignite ignores that on-heap store is disabled when putting values through near cache

2019-09-05 Thread Jira
Bartłomiej Stefański created IGNITE-12142:
-

 Summary: Ignite ignores that on-heap store is disabled when 
putting values through near cache
 Key: IGNITE-12142
 URL: https://issues.apache.org/jira/browse/IGNITE-12142
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.7.5, 2.7
Reporter: Bartłomiej Stefański


I have an Ignite cluster that consists of two nodes:

* @n0 - server node
* @n1 - client node

A PARTITIONED cache {{myCache}} is installed on both of them. The cache has a near 
cache on the client node and on-heap caching disabled on the server.

Server configuration:

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstructed sketch of the stripped server configuration described above:
     a PARTITIONED cache "myCache" with on-heap caching disabled. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="myCache"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="onheapCacheEnabled" value="false"/>
            </bean>
        </property>
    </bean>
</beans>
{code}

Client configuration:

{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstructed sketch of the stripped client configuration described above:
     client mode enabled and a near cache configured for "myCache". -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true"/>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="myCache"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="nearConfiguration">
                    <bean class="org.apache.ignite.configuration.NearCacheConfiguration"/>
                </property>
            </bean>
        </property>
    </bean>
</beans>
{code}
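
The entries were loaded through the client node roughly like this (a sketch; the 
config file name, key/value types and the loop bound are examples):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class NearCachePutExample {
    public static void main(String[] args) {
        // Start the client node (@n1) with the configuration shown above.
        try (Ignite client = Ignition.start("client-config.xml")) {
            IgniteCache<Integer, Integer> cache = client.cache("myCache");

            for (int i = 0; i < 20_000; i++)
                cache.put(i, i);
        }
    }
}
{code}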

I have noticed (using visor) a strange cache state after putting 20k entries into 
that cache through the client node (@n1):

{code}
+==+
|  Node ID8(@), IP  | CPUs | Heap Used | CPU Load |   Up Time|
Size (Primary / Backup)| Hi/Mi/Rd/Wr |
+==+
| 99E64885(@n0), 172.17.0.1 | 4| 2.56 %| 0.50 %   | 00:02:47.615 | 
Total: 4 (4 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 2 (2 / ) | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 2 (2 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory:   | Wr: 0   |
+---+--+---+--+--+---+-+
| FE7BEE4F(@n1), 172.17.0.1 | 4| 3.89 %| 3.10 %   | 00:02:37.269 | 
Total: 1000 (1000 / 0)| Hi: 0   |
|   |  |   |  |  |   
Heap: 1000 (1000 / )   | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 0 (0 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory: 0  | Wr: 0   |
+--+
{code}

Why does Ignite store entries in heap space on the server node? If I put 20k 
entries through the server node, the server on-heap space is not used:

{code}
+==+
|  Node ID8(@), IP  | CPUs | Heap Used | CPU Load |   Up Time|
Size (Primary / Backup)| Hi/Mi/Rd/Wr |
+==+
| 9C1D895B(@n0), 172.17.0.1 | 4| 1.68 %| 0.43 %   | 00:15:44.149 | 
Total: 2 (2 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 0 (0 / ) | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 2 (2 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory:   | Wr: 0   |
+---+--+---+--+--+---+-+
| 5059A9F2(@n1), 172.17.0.1 | 4| 2.05 %| 0.00 %   | 00:15:37.410 | 
Total: 0 (0 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 0 (0 / ) | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 0 (

[jira] [Commented] (IGNITE-12135) Rework GridCommandHandlerTest

2019-09-05 Thread Sergey Antonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923272#comment-16923272
 ] 

Sergey Antonov commented on IGNITE-12135:
-

[~ktkale...@gridgain.com] I left minor comments in the PR. The other changes look 
good to me.

> Rework GridCommandHandlerTest
> -
>
> Key: IGNITE-12135
> URL: https://issues.apache.org/jira/browse/IGNITE-12135
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are 50+ tests. In each test we start and stop nodes. I think we 
> could split the tests into at least two groups:
>  # Tests of normal behaviour. We could start nodes before all tests and stop 
> them after all tests.
>  # Tests that require starting a new cluster before each test.
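
A rough JUnit 4 style sketch of group 1 from the description above; the class name, 
the single started node and the test body are illustrative, not the actual refactoring:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class CommandHandlerSharedClusterTest {
    /** Cluster shared by all tests of the "normal behaviour" group. */
    private static Ignite ignite;

    @BeforeClass
    public static void startCluster() {
        ignite = Ignition.start();
    }

    @AfterClass
    public static void stopCluster() {
        Ignition.stopAll(true);
    }

    @Test
    public void testSomethingOnRunningCluster() {
        // Run control.sh / CommandHandler commands against the already running cluster here.
    }
}
{code}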



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12142) Ignite ignores that on-heap store is disabled when putting values through near cache

2019-09-05 Thread Jira


[ 
https://issues.apache.org/jira/browse/IGNITE-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923296#comment-16923296
 ] 

Bartłomiej Stefański commented on IGNITE-12142:
---

I also tested a scenario where the on-heap store and LRU eviction are enabled on 
the server and client nodes:
{code:xml}
<!-- Reconstructed sketch of the stripped cache-configuration fragment: on-heap store
     enabled with an LRU eviction policy limited to 2000 entries (referenced below). -->
...
<property name="onheapCacheEnabled" value="true"/>
<property name="evictionPolicyFactory">
    <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
        <property name="maxSize" value="2000"/>
    </bean>
</property>
...
{code}
After putting 20k entries into the cache through the client node:
{code:java}
Nodes for: myCache(@c0)
+==+
|  Node ID8(@), IP  | CPUs | Heap Used | CPU Load |   Up Time|
Size (Primary / Backup)| Hi/Mi/Rd/Wr |
+==+
| F8BCB4F9(@n0), 172.17.0.1 | 4| 1.61 %| 0.10 %   | 00:09:26.228 | 
Total: 4 (4 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 2 (2 / ) | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 2 (2 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory:   | Wr: 0   |
+---+--+---+--+--+---+-+
| 33B76295(@n1), 172.17.0.1 | 4| 2.11 %| 0.10 %   | 00:09:13.108 | 
Total: 1000 (1000 / 0)| Hi: 0   |
|   |  |   |  |  |   
Heap: 1000 (1000 / )   | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 0 (0 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory: 0  | Wr: 0   |
+--+
{code}
The heap of the server node (@n0) contains 20k entries, but it should contain only 
2k (the max size configured in the eviction policy). If I put values directly on 
the server node, eviction works fine:
{code:java}
Nodes for: myCache(@c0)
+==+
|  Node ID8(@), IP  | CPUs | Heap Used | CPU Load |   Up Time|
Size (Primary / Backup)| Hi/Mi/Rd/Wr |
+==+
| F8BCB4F9(@n0), 172.17.0.1 | 4| 2.38 %| 0.33 %   | 00:14:20.920 | 
Total: 22000 (22000 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 2000 (2000 / )   | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 2 (2 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory:   | Wr: 0   |
+---+--+---+--+--+---+-+
| 33B76295(@n1), 172.17.0.1 | 4| 3.30 %| 0.10 %   | 00:14:09.125 | 
Total: 0 (0 / 0)  | Hi: 0   |
|   |  |   |  |  |   
Heap: 0 (0 / ) | Mi: 0   |
|   |  |   |  |  |   
Off-Heap: 0 (0 / 0) | Rd: 0   |
|   |  |   |  |  |   
Off-Heap Memory: 0  | Wr: 0   |
+--+
{code}

> Ignite ignores that on-heap store is disabled when putting values through 
> near cache
> 
>
> Key: IGNITE-12142
> URL: https://issues.apache.org/jira/browse/IGNITE-12142
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7, 2.7.5
>Reporter: Bartłomiej Stefański
>Priority: Major
>
> I have an Ignite cluster that consists of two nodes:
> * @n0 - server node
> * @n1 - client node
> A PARTITIONED cache {{myCache}} is installed on both of them. The cache has a near 
> cache on the client node and on-heap caching disabled on the server.
> Server configuration:
> {code:xml}
> 
> http://www.springframework.org/schema/beans";
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd";>
> 
> 
> 
> 
>  class="org.apache.ignite.configuratio

[jira] [Commented] (IGNITE-12089) JVM is halted after this error during rolling restart of a cluster

2019-09-05 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923303#comment-16923303
 ] 

Stanilovsky Evgeny commented on IGNITE-12089:
-

[~temp2] I think this is all caused by an inappropriate cluster configuration, in 
the communication settings or close to them.
I suggest rewriting your test as follows:
start 2 server nodes from: private static Ignite initIgnite(String[] args, 
*isClient* = false)
...
cfg.setClientMode(isClient);

and only after that start the client node.

See org.apache.ignite.examples.cluster.ClusterGroupExample or something similar.

Thanks!
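
A rough sketch of the suggested shape of the test (the original IgniteTest2.java is 
attached to the ticket and not shown in this thread, so the method body and argument 
handling below are assumptions):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgniteTestSketch {
    private static Ignite initIgnite(String[] args, boolean isClient) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // ... discovery/communication settings from the original test go here ...

        cfg.setClientMode(isClient);

        return Ignition.start(cfg);
    }

    public static void main(String[] args) {
        // Run two JVMs with isClient = false (the servers) first, then one with isClient = true.
        boolean isClient = args.length > 0 && "client".equals(args[0]);

        Ignite ignite = initIgnite(args, isClient);
    }
}
{code}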

> JVM is halted after this error during rolling restart of a cluster
> --
>
> Key: IGNITE-12089
> URL: https://issues.apache.org/jira/browse/IGNITE-12089
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: temp2
>Priority: Critical
> Attachments: IgniteTest2.java, default-config.xml, ignite27.log, 
> ignite42.log
>
>
> JVM is halted after this error during rolling restart of a cluster:
> The exception is: 528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailure528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Runtime 
> failure on bounds: [lower=PendingRow [], upper=PendingRow 
> [org.apache.ignite.IgniteException: Runtime failure on bounds: 
> [lower=PendingRow [], upper=PendingRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1022)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
>  [ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.6.0.jar:2.6.0] at java.lang.Thread.run(Thread.java:745) 
> [?:1.8.0_101]Caused by: java.lang.IllegalStateException: Failed to get page 
> IO instance (page content is corrupted) at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:83)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:95)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingRow.initKey(PendingRow.java:72)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:118)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:31)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer(BPlusTree.java:4660)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.init(BPlusTree.java:4562)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.access$5300(BPlusTree.java:4501)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetCursor.notFound(BPlusTree.java:2633)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:293)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusT

[jira] [Commented] (IGNITE-10245) o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid SSL Cipher suite name specified

2019-09-05 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923354#comment-16923354
 ] 

Ignite TC Bot commented on IGNITE-10245:


{panel:title=Branch: [pull/6843/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4569579&buildTypeId=IgniteTests24Java8_RunAll]

> o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid 
> SSL Cipher suite name specified
> -
>
> Key: IGNITE-10245
> URL: https://issues.apache.org/jira/browse/IGNITE-10245
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Kuznetsov
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is related to IGNITE-10189.
> If an invalid cipher suite name is specified, GridNioSslFilter fails with an assertion 
> in the org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter#sslHandler method.
> This needs to be investigated and fixed.
>  
> See test: ClientSslParametersTest.testNonExistentCipherSuite()
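
For context, a minimal sketch of the misconfiguration that triggers the assertion; 
the keystore path, password and suite name are made up, only setCipherSuites() with 
an unknown name matters here:

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class InvalidCipherSuiteSketch {
    public static IgniteConfiguration configuration() {
        SslContextFactory ssl = new SslContextFactory();

        ssl.setKeyStoreFilePath("keystore/server.jks");
        ssl.setKeyStorePassword("123456".toCharArray());
        // A cipher suite name the JVM does not know: expected to produce a clear error,
        // not an AssertionError inside GridNioSslFilter#sslHandler.
        ssl.setCipherSuites("NON_EXISTENT_CIPHER_SUITE");

        return new IgniteConfiguration().setSslContextFactory(ssl);
    }
}
{code}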



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (IGNITE-10245) o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid SSL Cipher suite name specified

2019-09-05 Thread Ryabov Dmitrii (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryabov Dmitrii reassigned IGNITE-10245:
---

Assignee: Ryabov Dmitrii

> o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid 
> SSL Cipher suite name specified
> -
>
> Key: IGNITE-10245
> URL: https://issues.apache.org/jira/browse/IGNITE-10245
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Kuznetsov
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is related to IGNITE-10189.
> If an invalid cipher suite name is specified, GridNioSslFilter fails with an assertion 
> in the org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter#sslHandler method.
> This needs to be investigated and fixed.
>  
> See test: ClientSslParametersTest.testNonExistentCipherSuite()



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12127) WAL writer may close file IO with unflushed changes when MMAP is disabled

2019-09-05 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923392#comment-16923392
 ] 

Ignite TC Bot commented on IGNITE-12127:


{panel:title=Branch: [pull/6840/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4567088&buildTypeId=IgniteTests24Java8_RunAll]

> WAL writer may close file IO with unflushed changes when MMAP is disabled
> -
>
> Key: IGNITE-12127
> URL: https://issues.apache.org/jira/browse/IGNITE-12127
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Critical
> Fix For: 2.7.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most likely the issue manifests itself as the following critical error:
> {code}
> 2019-08-27 14:52:31.286 ERROR 26835 --- [wal-write-worker%null-#447] ROOT : 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.processors.cache.persistence.StorageException: Failed to write 
> buffer.]]
> org.apache.ignite.internal.processors.cache.persistence.StorageException: 
> Failed to write buffer.
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3444)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3249)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.5.7.jar!/:2.5.7]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]
> Caused by: java.nio.channels.ClosedChannelException: null
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) 
> ~[na:1.8.0_201]
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253) 
> ~[na:1.8.0_201]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3437)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> ... 3 common frames omitted
> {code}
> It appears that the following sequence is possible:
>  * Thread A attempts to log a large record which does not fit the segment, 
> {{addRecord}} fails and thread A starts a segment rollover. It successfully 
> runs {{flushOrWait(null)}} and gets de-scheduled before adding the switch 
> segment record
>  * Thread B attempts to log another record, which fits exactly to the end 
> of the current segment. The record is added to the buffer
>  * Thread A resumes and fails to add the switch segment record. No flush is 
> performed and the thread immediately proceeds to close the wal-writer
>  * The WAL writer thread wakes up, sees that there is a CLOSE request, closes the 
> file IO and immediately proceeds to write the unflushed changes, causing the 
> exception.
> An unconditional flush after the switch segment record is written should fix the issue.
> Besides the bug itself, I suggest the following changes to 
> {{FileWriteHandleImpl}} ({{FileWriteAheadLogManager}} in earlier versions):
>  * There is an {{fsync(filePtr)}} call inside {{close()}}; however, 
> {{fsync()}} checks the {{stop}} flag (which is set inside {{close}}) and 
> returns immediately after {{flushOrWait()}} if the flag is set - this is very 
> confusing. After all, {{close()}} itself explicitly calls {{force}} after the 
> flush
>  * There is an ignored IO exception in mmap mode - it should be propagated 
> to the failure handler
>  * In the WAL writer, we check for file CLOSE and then attempt to write to 
> (possibly) the same write handle - the write should always happen before the close
>  * In the WAL writer, there are racy reads of the current handle - it would be 
> better to read the current handle once and then operate on it during the whole 
> loop iteration



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12127) WAL writer may close file IO with unflushed changes when MMAP is disabled

2019-09-05 Thread Andrey Gura (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923405#comment-16923405
 ] 

Andrey Gura commented on IGNITE-12127:
--

[~DmitriyGovorukhin] LGTM. Thanks for the contribution! Please merge it to the 
master branch.

> WAL writer may close file IO with unflushed changes when MMAP is disabled
> -
>
> Key: IGNITE-12127
> URL: https://issues.apache.org/jira/browse/IGNITE-12127
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Critical
> Fix For: 2.7.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most likely the issue manifests itself as the following critical error:
> {code}
> 2019-08-27 14:52:31.286 ERROR 26835 --- [wal-write-worker%null-#447] ROOT : 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.processors.cache.persistence.StorageException: Failed to write 
> buffer.]]
> org.apache.ignite.internal.processors.cache.persistence.StorageException: 
> Failed to write buffer.
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3444)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3249)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.5.7.jar!/:2.5.7]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]
> Caused by: java.nio.channels.ClosedChannelException: null
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) 
> ~[na:1.8.0_201]
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253) 
> ~[na:1.8.0_201]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3437)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> ... 3 common frames omitted
> {code}
> It appears that the following sequence is possible:
>  * Thread A attempts to log a large record which does not fit the segment, 
> {{addRecord}} fails and thread A starts a segment rollover. It successfully 
> runs {{flushOrWait(null)}} and gets de-scheduled before adding the switch 
> segment record
>  * Thread B attempts to log another record, which fits exactly to the end 
> of the current segment. The record is added to the buffer
>  * Thread A resumes and fails to add the switch segment record. No flush is 
> performed and the thread immediately proceeds to close the wal-writer
>  * The WAL writer thread wakes up, sees that there is a CLOSE request, closes the 
> file IO and immediately proceeds to write the unflushed changes, causing the 
> exception.
> An unconditional flush after the switch segment record is written should fix the issue.
> Besides the bug itself, I suggest the following changes to 
> {{FileWriteHandleImpl}} ({{FileWriteAheadLogManager}} in earlier versions):
>  * There is an {{fsync(filePtr)}} call inside {{close()}}; however, 
> {{fsync()}} checks the {{stop}} flag (which is set inside {{close}}) and 
> returns immediately after {{flushOrWait()}} if the flag is set - this is very 
> confusing. After all, {{close()}} itself explicitly calls {{force}} after the 
> flush
>  * There is an ignored IO exception in mmap mode - it should be propagated 
> to the failure handler
>  * In the WAL writer, we check for file CLOSE and then attempt to write to 
> (possibly) the same write handle - the write should always happen before the close
>  * In the WAL writer, there are racy reads of the current handle - it would be 
> better to read the current handle once and then operate on it during the whole 
> loop iteration



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (IGNITE-10245) o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid SSL Cipher suite name specified

2019-09-05 Thread Ryabov Dmitrii (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryabov Dmitrii updated IGNITE-10245:

Fix Version/s: 2.8

> o.a.i.internal.util.nio.ssl.GridNioSslFilter failed with Assertion if invalid 
> SSL Cipher suite name specified
> -
>
> Key: IGNITE-10245
> URL: https://issues.apache.org/jira/browse/IGNITE-10245
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Kuznetsov
>Assignee: Ryabov Dmitrii
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is related to IGNITE-10189.
> If an invalid cipher suite name is specified, GridNioSslFilter fails with an assertion 
> in the org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter#sslHandler method.
> This needs to be investigated and fixed.
>  
> See test: ClientSslParametersTest.testNonExistentCipherSuite()



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12127) WAL writer may close file IO with unflushed changes when MMAP is disabled

2019-09-05 Thread Dmitriy Govorukhin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923412#comment-16923412
 ] 

Dmitriy Govorukhin commented on IGNITE-12127:
-

Merged to master a13337d94755d7e1cc097c6f00311552fea25ae6

> WAL writer may close file IO with unflushed changes when MMAP is disabled
> -
>
> Key: IGNITE-12127
> URL: https://issues.apache.org/jira/browse/IGNITE-12127
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Critical
> Fix For: 2.7.6
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Most likely the issue manifests itself as the following critical error:
> {code}
> 2019-08-27 14:52:31.286 ERROR 26835 --- [wal-write-worker%null-#447] ROOT : 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.processors.cache.persistence.StorageException: Failed to write 
> buffer.]]
> org.apache.ignite.internal.processors.cache.persistence.StorageException: 
> Failed to write buffer.
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3444)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3249)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.5.7.jar!/:2.5.7]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]
> Caused by: java.nio.channels.ClosedChannelException: null
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) 
> ~[na:1.8.0_201]
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253) 
> ~[na:1.8.0_201]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3437)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> ... 3 common frames omitted
> {code}
> It appears that the following sequence is possible:
>  * Thread A attempts to log a large record which does not fit the segment, 
> {{addRecord}} fails and thread A starts a segment rollover. It successfully 
> runs {{flushOrWait(null)}} and gets de-scheduled before adding the switch 
> segment record
>  * Thread B attempts to log another record, which fits exactly to the end 
> of the current segment. The record is added to the buffer
>  * Thread A resumes and fails to add the switch segment record. No flush is 
> performed and the thread immediately proceeds to close the wal-writer
>  * The WAL writer thread wakes up, sees that there is a CLOSE request, closes the 
> file IO and immediately proceeds to write the unflushed changes, causing the 
> exception.
> An unconditional flush after the switch segment record is written should fix the issue.
> Besides the bug itself, I suggest the following changes to 
> {{FileWriteHandleImpl}} ({{FileWriteAheadLogManager}} in earlier versions):
>  * There is an {{fsync(filePtr)}} call inside {{close()}}; however, 
> {{fsync()}} checks the {{stop}} flag (which is set inside {{close}}) and 
> returns immediately after {{flushOrWait()}} if the flag is set - this is very 
> confusing. After all, {{close()}} itself explicitly calls {{force}} after the 
> flush
>  * There is an ignored IO exception in mmap mode - it should be propagated 
> to the failure handler
>  * In the WAL writer, we check for file CLOSE and then attempt to write to 
> (possibly) the same write handle - the write should always happen before the close
>  * In the WAL writer, there are racy reads of the current handle - it would be 
> better to read the current handle once and then operate on it during the whole 
> loop iteration



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12127) WAL writer may close file IO with unflushed changes when MMAP is disabled

2019-09-05 Thread Dmitriy Govorukhin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923423#comment-16923423
 ] 

Dmitriy Govorukhin commented on IGNITE-12127:
-

Cherry-picked to ignite-2.7.6 402c9450dafbb201708f66d8bdab0ade0b87bd4f

> WAL writer may close file IO with unflushed changes when MMAP is disabled
> -
>
> Key: IGNITE-12127
> URL: https://issues.apache.org/jira/browse/IGNITE-12127
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Critical
> Fix For: 2.7.6
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Most likely the issue manifests itself as the following critical error:
> {code}
> 2019-08-27 14:52:31.286 ERROR 26835 --- [wal-write-worker%null-#447] ROOT : 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, 
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.processors.cache.persistence.StorageException: Failed to write 
> buffer.]]
> org.apache.ignite.internal.processors.cache.persistence.StorageException: 
> Failed to write buffer.
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3444)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3249)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.5.7.jar!/:2.5.7]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]
> Caused by: java.nio.channels.ClosedChannelException: null
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110) 
> ~[na:1.8.0_201]
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253) 
> ~[na:1.8.0_201]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
>  ~[ignite-core-2.5.7.jar!/:2.5.7]
> at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3437)
>  [ignite-core-2.5.7.jar!/:2.5.7]
> ... 3 common frames omitted
> {code}
> It appears that the following sequence is possible:
>  * Thread A attempts to log a large record which does not fit the segment, 
> {{addRecord}} fails and thread A starts a segment rollover. It successfully 
> runs {{flushOrWait(null)}} and gets de-scheduled before adding the switch 
> segment record
>  * Thread B attempts to log another record, which fits exactly to the end 
> of the current segment. The record is added to the buffer
>  * Thread A resumes and fails to add the switch segment record. No flush is 
> performed and the thread immediately proceeds to close the wal-writer
>  * The WAL writer thread wakes up, sees that there is a CLOSE request, closes the 
> file IO and immediately proceeds to write the unflushed changes, causing the 
> exception.
> An unconditional flush after the switch segment record is written should fix the issue.
> Besides the bug itself, I suggest the following changes to 
> {{FileWriteHandleImpl}} ({{FileWriteAheadLogManager}} in earlier versions):
>  * There is an {{fsync(filePtr)}} call inside {{close()}}; however, 
> {{fsync()}} checks the {{stop}} flag (which is set inside {{close}}) and 
> returns immediately after {{flushOrWait()}} if the flag is set - this is very 
> confusing. After all, {{close()}} itself explicitly calls {{force}} after the 
> flush
>  * There is an ignored IO exception in mmap mode - it should be propagated 
> to the failure handler
>  * In the WAL writer, we check for file CLOSE and then attempt to write to 
> (possibly) the same write handle - the write should always happen before the close
>  * In the WAL writer, there are racy reads of the current handle - it would be 
> better to read the current handle once and then operate on it during the whole 
> loop iteration



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (IGNITE-11558) Developer warning when HashMap is passed to putAll()

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev resolved IGNITE-11558.
--
Resolution: Duplicate

> Developer warning when HashMap is passed to putAll()
> 
>
> Key: IGNITE-11558
> URL: https://issues.apache.org/jira/browse/IGNITE-11558
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.7
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> Currently, when a HashMap is passed to putAll() it is very easy to cause a deadlock 
> since the order of keys is not stable.
> This is a pity because users will use HashMap by default and not expect any 
> trouble.
> We should issue a warning when a user passes a HashMap (but not a LinkedHashMap) to 
> putAll(). On .NET we should probably check for Dictionary. The warning would be 
> similar to the one issued when an index cannot be efficiently inlined.
> Another approach is to turn the keys into binary form and then sort them if the map 
> is not a SortedMap.
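
For illustration, the workaround the ticket implies on the user side (key/value 
types are examples): pass the entries in a deterministic key order, e.g. via a 
TreeMap, so concurrent bulk updates lock keys in the same order and cannot deadlock.

{code:java}
import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.IgniteCache;

public class PutAllOrderingSketch {
    /** Copies the entries into a TreeMap so that putAll() locks keys in natural key order. */
    public static void putAllSorted(IgniteCache<Integer, String> cache, Map<Integer, String> entries) {
        cache.putAll(new TreeMap<>(entries));
    }
}
{code}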



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (IGNITE-6804) Print a warning if HashMap is passed into bulk update operations

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-6804:
---

Assignee: Ilya Kasnacheev

> Print a warning if HashMap is passed into bulk update operations
> 
>
> Key: IGNITE-6804
> URL: https://issues.apache.org/jira/browse/IGNITE-6804
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Denis Magda
>Assignee: Ilya Kasnacheev
>Priority: Critical
>  Labels: usability
>
> Ignite newcomers tend to stumble on deadlocks simply because the keys are 
> passed in an unordered HashMap. I propose to do the following:
> * update the bulk operations Javadocs.
> * print out a warning if a non-sorted map (e.g. HashMap, 
> Weak/Identity/Concurrent/Linked HashMap, etc.) is passed into
> a bulk method and contains more than 1 element. 
> However, we should make sure that we only print that warning once and not 
> every time the API is called.
> * do not produce the warning for explicit optimistic transactions.
> More details are here:
> http://apache-ignite-developers.2346864.n4.nabble.com/Re-Ignite-2-0-0-GridUnsafe-unmonitor-td23706.html



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (IGNITE-11558) Developer warning when HashMap is passed to putAll()

2019-09-05 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-11558:


Assignee: Ilya Kasnacheev

> Developer warning when HashMap is passed to putAll()
> 
>
> Key: IGNITE-11558
> URL: https://issues.apache.org/jira/browse/IGNITE-11558
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.7
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> Currently, when a HashMap is passed to putAll() it is very easy to cause a deadlock 
> since the order of keys is not stable.
> This is a pity because users will use HashMap by default and not expect any 
> trouble.
> We should issue a warning when a user passes a HashMap (but not a LinkedHashMap) to 
> putAll(). On .NET we should probably check for Dictionary. The warning would be 
> similar to the one issued when an index cannot be efficiently inlined.
> Another approach is to turn the keys into binary form and then sort them if the map 
> is not a SortedMap.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (IGNITE-12135) Rework GridCommandHandlerTest

2019-09-05 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-12135:
-
Reviewer: Dmitriy Govorukhin  (was: Sergey Antonov)

> Rework GridCommandHandlerTest
> -
>
> Key: IGNITE-12135
> URL: https://issues.apache.org/jira/browse/IGNITE-12135
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are 50+ tests. In each test we start and stop nodes. I think we 
> could split the tests into at least two groups:
>  # Tests of normal behaviour. We could start nodes before all tests and stop 
> them after all tests.
>  # Tests that require starting a new cluster before each test.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12135) Rework GridCommandHandlerTest

2019-09-05 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923429#comment-16923429
 ] 

Kirill Tkalenko commented on IGNITE-12135:
--

[~antonovsergey93] I have addressed the review comments in the PR.
[~DmitriyGovorukhin], please review the code.

> Rework GridCommandHandlerTest
> -
>
> Key: IGNITE-12135
> URL: https://issues.apache.org/jira/browse/IGNITE-12135
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are 50+ tests. In each test we start and stop nodes. I think we 
> could split the tests into at least two groups:
>  # Tests of normal behaviour. We could start nodes before all tests and stop 
> them after all tests.
>  # Tests that require starting a new cluster before each test.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (IGNITE-12069) Implement file rebalancing management

2019-09-05 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-12069:
--
Description: 
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # asynchronously run index eviction for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # asynchronously run index rebuild over the received partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** indexes are not updated
 ** update counter is updated
 ** eviction works fine
 ** tx/atomic updates on this partition works fine in cluster

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.

  was:
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # run async the eviction indexes for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # run rebuild indexes async over the receiving partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** indexes are not updated
 ** update counter is valid
 ** tx/atomic updates on this partition works fine in cluster.

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.


> Implement file rebalancing management
> -
>
> Key: IGNITE-12069
>  

[jira] [Updated] (IGNITE-12069) Implement file rebalancing management

2019-09-05 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-12069:
--
Description: 
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # asynchronously run index eviction for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # asynchronously run index rebuild over the received partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** H2 indexes are not updated
 ** update counter is updated
 ** cache entries eviction works fine
 ** tx/atomic updates on this partition works fine in cluster

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.

  was:
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # run async the eviction indexes for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # run rebuild indexes async over the receiving partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** indexes are not updated
 ** update counter is updated
 ** cache entries eviction works fine
 ** tx/atomic updates on this partition works fine in cluster

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.


> Implement file rebalancing management
> ---

[jira] [Updated] (IGNITE-12069) Implement file rebalancing management

2019-09-05 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-12069:
--
Description: 
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # asynchronously run index eviction for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # asynchronously run index rebuild over the received partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** indexes are not updated
 ** update counter is updated
 ** cache entries eviction works fine
 ** tx/atomic updates on this partition works fine in cluster

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.

  was:
{{Preloader}} should be able to do the following:
 # build the map of partitions and corresponding supplier nodes from which 
partitions will be loaded;
 # switch cache data storage to {{no-op}} and back to original (HWM must be 
fixed here for the needs of historical rebalance) under the checkpoint and keep 
the partition update counter for each partition;
 # run async the eviction indexes for the list of collected partitions;
 # send a request message to each node one by one with the list of partitions 
to load;
 # wait for files received (listening for the transmission handler);
 # run rebuild indexes async over the receiving partitions;
 # run historical rebalance from LWM to HWM collected above (LWM can be read 
from the received file meta page);

h5. Stage 1. implement "read-only" mode for cache data store. Implement data 
store reinitialization on the updated persistence file.
h6. Tests:
 - Switching under load.
 - Check re-initialization of partition on new file.
 - Check that in read-only mode
 ** indexes are not updated
 ** update counter is updated
 ** eviction works fine
 ** tx/atomic updates on this partition works fine in cluster

h5. Stage 2. Build Map for request partitions by node, add message that will be 
sent to the supplier. Send a demand request, handle the response, switch 
datastore when file received.
h6. Tests:
 - Check partition consistency after receiving a file.
 - File transmission under load.
 - Failover - some of the partitions have been switched, the node has been 
restarted, rebalancing is expected to continue only for fully loaded large 
partitions through the historical rebalance, for the rest of partitions it 
should restart from the beginning. 

h5. Stage 3. Add WAL history reservation on supplier. Add historical rebalance 
triggering (LWM (partition) - HWM (read-only)).
h6. Tests:
 - File rebalancing under load and without on atomic/tx caches. (check existing 
PDS-enabled rebalancing tests).
 - Ensure that MVCC groups use regular rebalancing.
 - The rebalancing on the unstable topology and failures of the 
supplier/demander nodes at different stages.
 - (compatibility) The old nodes should use regular rebalancing.

h5. Stage 4 Eviction and rebuild of indexes.
h6. Tests:
 - File rebalancing of caches with H2 indexes.
 - Check consistency of H2 indexes.


> Implement file rebalancing management
> -
>
>   

[jira] [Assigned] (IGNITE-8622) Zookeeper and TCP discovery SPI' getSpiState method inconsistent

2019-09-05 Thread Ryabov Dmitrii (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryabov Dmitrii reassigned IGNITE-8622:
--

Assignee: Ryabov Dmitrii

> Zookeeper and TCP discovery SPI' getSpiState method inconsistent
> 
>
> Key: IGNITE-8622
> URL: https://issues.apache.org/jira/browse/IGNITE-8622
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Max Shonichev
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Labels: jmx
> Fix For: 2.8
>
>
> getSpiState of the TcpDiscoverySpi MBean returns an uppercased human-readable 
> state, e.g. 'CONNECTED', while
> getSpiState of the ZookeeperDiscoverySpi MBean returns a camel-cased one, e.g. 
> 'Connected'.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (IGNITE-10557) Control.sh validate index work long and broke down

2019-09-05 Thread Alexey Kuznetsov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-10557:
--
Component/s: (was: visor)
 sql

> Control.sh validate index work long and broke down
> --
>
> Key: IGNITE-10557
> URL: https://issues.apache.org/jira/browse/IGNITE-10557
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.6
>Reporter: Alexand Polyakov
>Priority: Major
>
> The cluster holds about 27 GB of data.
> Running validate_indexes took more than 1 hour,
> and the execution failed.
> {code}
> control.sh --cache validate_indexes
> Control utility [ver. 2.6]
> 2018 Copyright(C) Apache Software Foundation
> User: pprbusr
> 
> Connection to cluster failed.
> Error: Failed to perform request (connection failed): /10.117.102.207:11211
> You have mail in /var/spool/mail/busr
> {code}
> Analysis of the thread dumps taken over 40 minutes showed that 
> ValidateIndexesClosure threads were running on only 3 of the 32 nodes.
> At the same time, some threads were blocked:
> {code}
> "pool-55-thread-53" #9255 prio=5 os_prio=0 tid=0x7eb5a0073800 nid=0xb408 
> waiting for monitor entry [0x7eb6554f3000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.ignite.internal.pagemem.PageUtils.getBytes(PageUtils.java:63)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readFullRow(CacheDataRowAdapter.java:296)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:159)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:152)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.io.H2InnerIO.getLookupRow(H2InnerIO.java:60)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.io.H2InnerIO.getLookupRow(H2InnerIO.java:33)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:170)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:47)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4524)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:212)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:47)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4511)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4431)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1300(BPlusTree.java:90)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:291)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4858)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run(BPlusTree.java:271)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4843)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:161)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.read(DataStructure.java:332)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown(BPlusTree.java:1157)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind(BPlusTree.java:1124)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne(BPlusTree.java:1091)
>   at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find(H2TreeIndex.java:201)
>   at 
> org.apache.ignite.internal.visor.verify.ValidateIndexesClosure.processPartition(ValidateIndexesClosure.java:524)
>   at 
> org.apache.ignite.internal.visor.verify.ValidateIndexesClosure.access$100(ValidateIndexesClosure.java:86)
>   at 
> org.apache.ignite.internal.visor.verify.ValidateIndexesClosure$2.call(ValidateIndexesClosure.java:394)
>   at 
> org.apache.ignite.internal.visor.verify.ValidateIndexesClosure$2.call(ValidateIndexesClosure.java:392)
>   at java.util.co

[jira] [Commented] (IGNITE-11829) Distribute joins fail if number of tables > 7

2019-09-05 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923444#comment-16923444
 ] 

Ignite TC Bot commented on IGNITE-11829:


{panel:title=Branch: [pull/6842/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4569463&buildTypeId=IgniteTests24Java8_RunAll]

> Distribute joins fail if number of tables > 7
> -
>
> Key: IGNITE-11829
> URL: https://issues.apache.org/jira/browse/IGNITE-11829
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Diana Iakovleva
>Priority: Major
>  Labels: newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Distributed joins fail with ArrayIndexOutOfBounds when the total number of 
> tables is > 7.
> Example:
> {code}
> try (Ignite ignite = 
> Ignition.start("examples/config/example-ignite.xml");) {
> IgniteCache cache = ignite.createCache("foo");
> cache.query(new SqlFieldsQuery("CREATE TABLE Person(ID INTEGER 
> PRIMARY KEY, NAME VARCHAR(100));"));
> cache.query(new SqlFieldsQuery("INSERT INTO Person(ID, NAME) 
> VALUES (1, 'Ed'), (2, 'Ann'), (3, 'Emma');"));
> cache.query(new SqlFieldsQuery("SELECT *\n" +
> "FROM PERSON P1\n" +
> "JOIN PERSON P2 ON P1.ID = P2.ID\n" +
> "JOIN PERSON P3 ON P1.ID = P3.ID\n" +
> "JOIN PERSON P4 ON P1.ID = P4.ID\n" +
> "JOIN PERSON P5 ON P1.ID = P5.ID\n" +
> "JOIN PERSON P6 ON P1.ID = P6.ID\n" +
> "JOIN PERSON P7 ON P1.ID = P7.ID\n" +
> "JOIN PERSON P8 ON P1.ID = 
> P8.ID").setDistributedJoins(true).setEnforceJoinOrder(false));
> }
> {code}
> throws
> {code}
> Exception in thread "main" javax.cache.CacheException: General error: 
> "java.lang.ArrayIndexOutOfBoundsException" [5-197]
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:832)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:765)
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:403)
>   at 
> org.apache.ignite.examples.ExampleNodeStartup.main(ExampleNodeStartup.java:60)
> Caused by: class 
> org.apache.ignite.internal.processors.query.IgniteSQLException: General 
> error: "java.lang.ArrayIndexOutOfBoundsException" [5-197]
>   at 
> org.apache.ignite.internal.processors.query.h2.QueryParser.parseH2(QueryParser.java:454)
>   at 
> org.apache.ignite.internal.processors.query.h2.QueryParser.parse0(QueryParser.java:156)
>   at 
> org.apache.ignite.internal.processors.query.h2.QueryParser.parse(QueryParser.java:121)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1191)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2261)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2257)
>   at 
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:53)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2767)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2277)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2297)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2250)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2177)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:817)
>   ... 3 more
> Caused by: org.h2.jdbc.JdbcSQLException: General error: 
> "java.lang.ArrayIndexOutOfBoundsException" [5-197]
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
>   at org.h2.message.DbException.get(DbException.java:168)
>   at org.h2.message.DbException.convert(DbException.java:307)
>   at org.h2.message.DbException.toSQLException(DbException.java:280)
>   at org.h2.message.TraceObject.logAndConvert(TraceObject.java:357)
>   at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:308)
> 

[jira] [Commented] (IGNITE-11905) [IEP-35] Monitoring&Profiling. Phase 2

2019-09-05 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923537#comment-16923537
 ] 

Nikolay Izhikov commented on IGNITE-11905:
--

Hello [~alex_pl], can you take a look at this PR?

https://github.com/apache/ignite/pull/6790

> [IEP-35] Monitoring&Profiling. Phase 2
> --
>
> Key: IGNITE-11905
> URL: https://issues.apache.org/jira/browse/IGNITE-11905
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35
>
> Phase 2 should introduce:
> The ability to collect lists of some of the internal objects Ignite manages.
> Examples of such objects:
> * Caches
> * Queries (including continuous queries)
> * Services
> * Compute tasks
> * Distributed Data Structures
> * etc...
> 1. Fields for each list should be discussed in separate tickets
> 2. Metric Exporters (optionally) can support list export.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-11905) [IEP-35] Monitoring&Profiling. Phase 2

2019-09-05 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923538#comment-16923538
 ] 

Nikolay Izhikov commented on IGNITE-11905:
--

Hello, [~daradurvs].

Can you take a look, please?

> [IEP-35] Monitoring&Profiling. Phase 2
> --
>
> Key: IGNITE-11905
> URL: https://issues.apache.org/jira/browse/IGNITE-11905
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35
>
> Phase 2 should introduce:
> The ability to collect lists of some of the internal objects Ignite manages.
> Examples of such objects:
> * Caches
> * Queries (including continuous queries)
> * Services
> * Compute tasks
> * Distributed Data Structures
> * etc...
> 1. Fields for each list should be discussed in separate tickets
> 2. Metric Exporters (optionally) can support list export.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-11894) Add fetchSize to JDBC cache stores

2019-09-05 Thread Amit Chavan (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923855#comment-16923855
 ] 

Amit Chavan commented on IGNITE-11894:
--

[~slukyanov]

[~ilyak] I can change the state of the ticket to In Progress. Also, if I have 
some questions regarding the changes, what is the appropriate form of 
communication? Is there a Slack channel, or can I email the core developers directly?

> Add fetchSize to JDBC cache stores
> --
>
> Key: IGNITE-11894
> URL: https://issues.apache.org/jira/browse/IGNITE-11894
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Stanislav Lukyanov
>Assignee: Amit  Chavan
>Priority: Minor
>  Labels: newbie
>
> JDBC's PreparedStatement accepts a fetchSize parameter which defines how many 
> rows will be loaded from the DB at a time. Currently the only way to change 
> that is by specifying it in a custom implementation of the 
> JdbcDialect::fetchSize method (and even then it seems not to be used in some 
> cases).
> It would be good to have a fetchSize property in all of the JDBC-based cache 
> stores.
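For reference, a plain-JDBC sketch of the knob this ticket wants to expose on the
JDBC-based cache stores; the connection URL, driver and query are placeholders, not
part of Ignite's API:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and table; any JDBC driver handles fetchSize the same way.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://db-host/app");
             PreparedStatement stmt = conn.prepareStatement("SELECT id, name FROM person")) {

            // Hint to the driver to stream rows in batches of 1000 instead of
            // materializing the whole result set at once.
            stmt.setFetchSize(1000);

            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next())
                    System.out.println(rs.getInt("id") + " -> " + rs.getString("name"));
            }
        }
    }
}
{code}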



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12089) JVM is halted after this error during rolling restart of a cluster

2019-09-05 Thread temp2 (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923860#comment-16923860
 ] 

temp2 commented on IGNITE-12089:


Hi Stanilovsky Evgeny,
I don't understand what cluster configuration inconsistency means.

I will elaborate on the whole test environment and steps:
1. Prepare 4 physical machines, 3 servers and 1 client. Install Oracle 
jdk1.8.0_101 on all 4 machines and create all necessary directories on each 
machine.

2. On one of the three server-side machines, decompress 
"apache-ignite-2.7.5-bin.zip", modify the "bin\ignite.sh" file in the 
decompressed directory to increase the -Xmx value to 10g, and modify 
"config\default-config.xml"; see the attachments above.

3. Copy the entire directory of the decompressed and modified Apache Ignite to 
the two other server machines.

4. On two of the server-side machines, run "bin\ignite.sh 
config\default-config.xml" to start two server-side nodes.

5. On one of the server-side machines, run "bin\control.sh --activate" to 
activate the cluster. This step can also be skipped.

6. Put the IgniteTest2 test code and dependency jars (apache-ignite-2.7.5-bin.zip 
is not required) onto the fourth machine and run it directly with the java 
command. The running parameters are "com.test.IgniteTest2 5 0 test 2 
192.168.20.12:49500 192.168.20.13:49500 192.168.20.14:49500".

7. 20 minutes later, run "bin\ignite.sh config\default-config.xml" on the third 
server-side machine.

8. A few minutes after the third server starts, one or two of the first two 
servers will report "Partitions cache sizes are inconsistent", and a moment 
later the JVM on one or two of the first two servers will exit with the errors.


> JVM is halted after this error during rolling restart of a cluster
> --
>
> Key: IGNITE-12089
> URL: https://issues.apache.org/jira/browse/IGNITE-12089
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: temp2
>Priority: Critical
> Attachments: IgniteTest2.java, default-config.xml, ignite27.log, 
> ignite42.log
>
>
> JVM is halted after this error during rolling restart of a cluster:
> excepition is :528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailure528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Runtime 
> failure on bounds: [lower=PendingRow [], upper=PendingRow 
> [org.apache.ignite.IgniteException: Runtime failure on bounds: 
> [lower=PendingRow [], upper=PendingRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1022)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
>  [ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.6.0.jar:2.6.0] at java.lang.Thread.run(Thread.java:745) 
> [?:1.8.0_101]Caused by: java.lang.IllegalStateException: Failed to get page 
> IO instance (page content is corrupted) at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:83)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:95)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingRow.initKey(PendingRow.java:72)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:118)
>  ~[ignite-co

[jira] [Comment Edited] (IGNITE-12089) JVM is halted after this error during rolling restart of a cluster

2019-09-05 Thread temp2 (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923860#comment-16923860
 ] 

temp2 edited comment on IGNITE-12089 at 9/6/19 1:56 AM:


Hi Stanilovsky Evgeny,
I don't understand what cluster configuration inconsistency means.

I will elaborate on the whole test environment and steps:
1. Prepare 4 physical machines, 3 servers and 1 client. Install Oracle 
jdk1.8.0_101 on all 4 machines and create all necessary directories on each 
machine.

2. On one of the three server-side machines, decompress 
"apache-ignite-2.7.5-bin.zip", modify the "bin\ignite.sh" file in the 
decompressed directory to increase the -Xmx value to 10g, and modify 
"config\default-config.xml"; see the attachments above.

3. Copy the entire directory of the decompressed and modified Apache Ignite to 
the two other server machines.

4. On two of the server-side machines, run "bin\ignite.sh 
config\default-config.xml" to start two server-side nodes.

5. On one of the server-side machines, run "bin\control.sh --activate" to 
activate the cluster. This step can also be skipped.

6. Put the IgniteTest2 test code and dependency jars (apache-ignite-2.7.5-bin.zip 
is not required) onto the fourth machine and run it directly with the java 
command. The running parameters are "com.test.IgniteTest2 5 0 test 20 
192.168.20.12:49500 192.168.20.13:49500 192.168.20.14:49500".

7. 20 minutes later, run "bin\ignite.sh config\default-config.xml" on the third 
server-side machine.

8. A few minutes after the third server starts, one or two of the first two 
servers will report "Partitions cache sizes are inconsistent", and a moment 
later the JVM on one or two of the first two servers will exit with the errors.



was (Author: temp2):
hi Stanilovsky Evgeny,
I don't understand what cluster configuration inconsistency means.

The whole test environment and step I will elaborate on:
1. Prepare 4 physical machines, 3 servers and 1 client. Install Oracle 
jdk1.8.0_101 in all 4 machines. make all kinds of necessary directories in each 
machine.

2. In one of the three server-side machines, decompress 
"apache-ignite-2.7.5-bin.zip", modify "bin\ignite.sh" file in decompression 
directory, increase -Xmx value to 10g, modify "config\default-config.xml", see 
the attachments above.

3. Copy the entire directory of the decompressed and modified apache-ignite to 
two other server machines.

4. Select two of the server-side machines to run "bin\ignite. sh 
config\default-config. xml",  start two server-side nodes

5.  Select one of the server-side machines to run "bin\control.sh --activate", 
activate the cluster. This step can also be avoided.

6. Put IgniteTest2 test code and dependency jars (apache-ignite-2.7.5-bin.zip 
is not required) into the fourth machine and run directly with Java command. 
The running parameters are "com.test.IgniteTest2 5 0 test 2 
192.168.20.12:49500 192.168.20.13:49500 192.168.20.14:49500".

7. 20 minutes later, the third server-side machines to run "bin\ignite. sh 
config\default-config. xml"

8. A few minutes later after the start of the third server, one or two of the 
first two serveres will appear "Partitions cache sizes are inconsistent", and 
then a moment later  one or two of the first two serveres the JVM will exit 
with the errores.


> JVM is halted after this error during rolling restart of a cluster
> --
>
> Key: IGNITE-12089
> URL: https://issues.apache.org/jira/browse/IGNITE-12089
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: temp2
>Priority: Critical
> Attachments: IgniteTest2.java, default-config.xml, ignite27.log, 
> ignite42.log
>
>
> JVM is halted after this error during rolling restart of a cluster:
> excepition is :528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailure528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Runtime 
> failure on bounds: [lower=PendingRow [], upper=PendingRow 
> [org.apache.ignite.IgniteException: Runtime failure on bounds: 
> [lower=PendingRow [], upper=PendingRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>  ~[ignite-core

[jira] [Comment Edited] (IGNITE-12089) JVM is halted after this error during rolling restart of a cluster

2019-09-05 Thread temp2 (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923860#comment-16923860
 ] 

temp2 edited comment on IGNITE-12089 at 9/6/19 1:58 AM:


Hi Stanilovsky Evgeny,
I don't understand what cluster configuration inconsistency means, nor why 
isClient=false is needed.

I will elaborate on the whole test environment and steps:
1. Prepare 4 physical machines, 3 servers and 1 client. Install Oracle 
jdk1.8.0_101 on all 4 machines and create all necessary directories on each 
machine.

2. On one of the three server-side machines, decompress 
"apache-ignite-2.7.5-bin.zip", modify the "bin\ignite.sh" file in the 
decompressed directory to increase the -Xmx value to 10g, and modify 
"config\default-config.xml"; see the attachments above.

3. Copy the entire directory of the decompressed and modified Apache Ignite to 
the two other server machines.

4. On two of the server-side machines, run "bin\ignite.sh 
config\default-config.xml" to start two server-side nodes.

5. On one of the server-side machines, run "bin\control.sh --activate" to 
activate the cluster. This step can also be skipped.

6. Put the IgniteTest2 test code and dependency jars (apache-ignite-2.7.5-bin.zip 
is not required) onto the fourth machine and run it directly with the java 
command. The running parameters are "com.test.IgniteTest2 5 0 test 20 
192.168.20.12:49500 192.168.20.13:49500 192.168.20.14:49500".

7. 20 minutes later, run "bin\ignite.sh config\default-config.xml" on the third 
server-side machine.

8. A few minutes after the third server starts, one or two of the first two 
servers will report "Partitions cache sizes are inconsistent", and a moment 
later the JVM on one or two of the first two servers will exit with the errors.



was (Author: temp2):
hi Stanilovsky Evgeny,
I don't understand what cluster configuration inconsistency means.

The whole test environment and step I will elaborate on:
1. Prepare 4 physical machines, 3 servers and 1 client. Install Oracle 
jdk1.8.0_101 in all 4 machines. make all kinds of necessary directories in each 
machine.

2. In one of the three server-side machines, decompress 
"apache-ignite-2.7.5-bin.zip", modify "bin\ignite.sh" file in decompression 
directory, increase -Xmx value to 10g, modify "config\default-config.xml", see 
the attachments above.

3. Copy the entire directory of the decompressed and modified apache-ignite to 
two other server machines.

4. Select two of the server-side machines to run "bin\ignite. sh 
config\default-config. xml",  start two server-side nodes

5.  Select one of the server-side machines to run "bin\control.sh --activate", 
activate the cluster. This step can also be avoided.

6. Put IgniteTest2 test code and dependency jars (apache-ignite-2.7.5-bin.zip 
is not required) into the fourth machine and run directly with Java command. 
The running parameters are "com.test.IgniteTest2 5 0 test 20 
192.168.20.12:49500 192.168.20.13:49500 192.168.20.14:49500".

7. 20 minutes later, the third server-side machines to run "bin\ignite. sh 
config\default-config. xml"

8. A few minutes later after the start of the third server, one or two of the 
first two serveres will appear "Partitions cache sizes are inconsistent", and 
then a moment later  one or two of the first two serveres the JVM will exit 
with the errores.


> JVM is halted after this error during rolling restart of a cluster
> --
>
> Key: IGNITE-12089
> URL: https://issues.apache.org/jira/browse/IGNITE-12089
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: temp2
>Priority: Critical
> Attachments: IgniteTest2.java, default-config.xml, ignite27.log, 
> ignite42.log
>
>
> JVM is halted after this error during rolling restart of a cluster:
> excepition is :528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailure528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Runtime 
> failure on bounds: [lower=PendingRow [], upper=PendingRow 
> [org.apache.ignite.IgniteException: Runtime failure on bounds: 
> [lower=PendingRow [], upper=PendingRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlu

[jira] [Comment Edited] (IGNITE-11894) Add fetchSize to JDBC cache stores

2019-09-05 Thread Amit Chavan (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923855#comment-16923855
 ] 

Amit Chavan edited comment on IGNITE-11894 at 9/6/19 2:17 AM:
--

[~slukyanov]

[~ilyak] I cannot change the state of the ticket to In Progress. Also, if I 
have some questions regarding the changes, what is the appropriate form of 
communication? Is there a Slack channel, or can I email the core developers directly?


was (Author: achav...@gmail.com):
[~slukyanov]

[~ilyak] I can change the state of the ticket to In progress. Also if I have 
some questions regarding the changes what is appropriate form of communication 
? Is there a slack channel or I can email the core developers directly?

> Add fetchSize to JDBC cache stores
> --
>
> Key: IGNITE-11894
> URL: https://issues.apache.org/jira/browse/IGNITE-11894
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Stanislav Lukyanov
>Assignee: Amit  Chavan
>Priority: Minor
>  Labels: newbie
>
> JDBC's PreparedStatement accepts a fetchSize parameter which defines how many 
> rows will be loaded from the DB at a time. Currently the only way to change 
> that is by specifying it in a custom implementation of the 
> JdbcDialect::fetchSize method (and even then it seems not to be used in some 
> cases).
> It would be good to have a fetchSize property in all of the JDBC-based cache stores.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-09-05 Thread Manoj G T (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj G T updated IGNITE-12141:
---
 Fix Version/s: (was: 2.7.6)
2.8
  Reviewer: Nikolay Izhikov
   Description: 
Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
schema, and this is the reason "OPTION_SCHEMA" is not supported during Overwrite 
mode. Now that Ignite supports creating a table in any given schema, it would 
be great if we could incorporate the changes to support "OPTION_SCHEMA" during 
Overwrite mode and make it available as part of the next Ignite release.

 

+Related Issue:+

[https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]

  was:
Users can't able to specify schema when trying to persist Spark DF to Ignite.

 

[https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]

Issue Type: Improvement  (was: Bug)
   Summary: Ignite Spark Integration Support Schema on Table Write  
(was: Ignite Spark Integration not working with Schema Name)
Remaining Estimate: 4h
 Original Estimate: 4h

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.7.5
>Reporter: Manoj G T
>Priority: Critical
> Fix For: 2.8
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, and this is the reason "OPTION_SCHEMA" is not supported during 
> Overwrite mode. Now that Ignite supports creating a table in any given 
> schema, it would be great if we could incorporate the changes to support 
> "OPTION_SCHEMA" during Overwrite mode and make it available as part of the 
> next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]
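A hedged Java/Spark sketch of what the requested behaviour would look like from the
caller's side. The option keys are written as literal strings and may differ from
the constants in IgniteDataFrameSettings; the "schema" write option is exactly the
missing piece this ticket asks for and does not work in 2.7.x; the input path is a
placeholder:

{code}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class IgniteSchemaWriteSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ignite-schema-write")
            .master("local[*]")
            .getOrCreate();

        Dataset<Row> df = spark.read().json("people.json"); // Placeholder input.

        // Desired behaviour: honour a schema option in Overwrite mode so the
        // table is created in MY_SCHEMA instead of PUBLIC.
        df.write()
            .format("ignite")
            .option("config", "config/default-config.xml")
            .option("table", "person")
            .option("schema", "MY_SCHEMA") // The requested option; not honoured in 2.7.x.
            .mode(SaveMode.Overwrite)
            .save();

        spark.stop();
    }
}
{code}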



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-9228) Spark SQL Table Schema Specification

2019-09-05 Thread Manoj G T (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923920#comment-16923920
 ] 

Manoj G T commented on IGNITE-9228:
---

Thanks, [~NIzhikov]. I have converted my ticket about the same issue from a bug 
to an improvement: https://issues.apache.org/jira/browse/IGNITE-12141

and added you as the reviewer. My colleague or I will take this ticket early 
next week.

> Spark SQL Table Schema Specification
> 
>
> Key: IGNITE-9228
> URL: https://issues.apache.org/jira/browse/IGNITE-9228
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Stuart Macdonald
>Priority: Major
> Fix For: 2.8
>
>
> The Ignite Spark SQL interface currently takes just “table name” as a
> parameter which it uses to supply a Spark dataset with data from the
> underlying Ignite SQL table with that name.
> To do this it loops through each cache and finds the first one with the
> given table name [1]. This causes issues if there are multiple tables
> registered in different schema with the same table name as you can only
> access one of those from Spark. We could either:
> 1. Pass an extra parameter through the Ignite Spark data source which
> optionally specifies the schema name.
> 2. Support namespacing in the existing table name parameter, ie
> “schemaName.tableName”
> [1 
> ]https://github.com/apache/ignite/blob/ca973ad99c6112160a305df05be9458e29f88307/modules/spark/src/main/scala/org/apache/ignite/spark/impl/package.scala#L119



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (IGNITE-12089) JVM is halted after this error during rolling restart of a cluster

2019-09-05 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923938#comment-16923938
 ] 

Stanilovsky Evgeny commented on IGNITE-12089:
-

[~temp2] Thanks for the comments. I have no such servers; I'll try to reproduce 
your problem locally.

> JVM is halted after this error during rolling restart of a cluster
> --
>
> Key: IGNITE-12089
> URL: https://issues.apache.org/jira/browse/IGNITE-12089
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: temp2
>Priority: Critical
> Attachments: IgniteTest2.java, default-config.xml, ignite27.log, 
> ignite42.log
>
>
> JVM is halted after this error during rolling restart of a cluster:
> excepition is :528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailure528-a852-c65782e337f0][2019-08-20 
> 17:22:10,901][ERROR][ttl-cleanup-worker-#155][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Runtime 
> failure on bounds: [lower=PendingRow [], upper=PendingRow 
> [org.apache.ignite.IgniteException: Runtime failure on bounds: 
> [lower=PendingRow [], upper=PendingRow []] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1022)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
>  [ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.6.0.jar:2.6.0] at java.lang.Thread.run(Thread.java:745) 
> [?:1.8.0_101]Caused by: java.lang.IllegalStateException: Failed to get page 
> IO instance (page content is corrupted) at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:83)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:95)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingRow.initKey(PendingRow.java:72)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:118)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:31)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer(BPlusTree.java:4660)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.init(BPlusTree.java:4562)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.access$5300(BPlusTree.java:4501)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetCursor.notFound(BPlusTree.java:2633)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:293)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4801)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158)
>  ~[ignite-core-2.6.0.jar:2.6.0] at 
> org.apache.ignite.internal.processors.cache.persiste

[jira] [Created] (IGNITE-12143) Vacuum error. class org.apache.ignite.internal.transactions.IgniteTxMvccVersionCheckedException

2019-09-05 Thread Aditya Gupta (Jira)
Aditya Gupta created IGNITE-12143:
-

 Summary: Vacuum error.  class 
org.apache.ignite.internal.transactions.IgniteTxMvccVersionCheckedException
 Key: IGNITE-12143
 URL: https://issues.apache.org/jira/browse/IGNITE-12143
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Affects Versions: 2.7.5
Reporter: Aditya Gupta
 Fix For: 2.7.5


2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :  [17:54:46] (err) Failed to execute compound future reducer: GridCompoundFuture [rdc=null, initFlag=0, lsnrCalls=0, done=false, cancelled=false, err=null, futs=[false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true]]class org.apache.ignite.IgniteCheckedException: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7429)
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:975)
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at java.lang.Thread.run(Thread.java:745)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :  Caused by: javax.cache.CacheException: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6817)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     ... 4 more
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :  Caused by: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:920)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:918)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     ... 12 more
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :  Caused by: class org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4248)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2468)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2426)
2019-09-05 17:54:46.579 INFO STDERR [Thr

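The trace above propagates out of DataStreamerCacheUpdaters$Individual.receive() into a plain cache.put(), which is the path an IgniteDataStreamer takes when allowOverwrite(true) is set; the mvcc component tag suggests the target cache runs in TRANSACTIONAL_SNAPSHOT mode. A minimal, self-contained sketch of such a setup follows; the cache name, key range and configuration are assumptions for illustration only, not details taken from the report:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class MvccStreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumed MVCC cache; the report does not show the actual cache configuration.
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<>("mvccCache");
            ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            try (IgniteDataStreamer<Integer, Integer> streamer = ignite.dataStreamer(cache.getName())) {
                // allowOverwrite(true) makes the streamer apply entries via cache.put(),
                // i.e. through DataStreamerCacheUpdaters$Individual as seen in the trace.
                streamer.allowOverwrite(true);

                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, i);
            }
        }
    }
}

With a TRANSACTIONAL_SNAPSHOT cache each such put runs in an implicit MVCC transaction, so a vacuum or version-check failure can surface to the streamer as the TransactionRollbackException shown in the log.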
[jira] [Created] (IGNITE-12144) Vacuum error. class org.apache.ignite.internal.transactions.IgniteTxMvccVersionCheckedException

2019-09-05 Thread Aditya Gupta (Jira)
Aditya Gupta created IGNITE-12144:
-

 Summary: Vacuum error. class org.apache.ignite.internal.transactions.IgniteTxMvccVersionCheckedException
 Key: IGNITE-12144
 URL: https://issues.apache.org/jira/browse/IGNITE-12144
 Project: Ignite
  Issue Type: Bug
  Components: mvcc
Affects Versions: 2.7.5
Reporter: Aditya Gupta
 Fix For: 2.7.5


2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :  [17:54:46] (err) Failed to execute compound future reducer: GridCompoundFuture [rdc=null, initFlag=0, lsnrCalls=0, done=false, cancelled=false, err=null, futs=[false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true]]class org.apache.ignite.IgniteCheckedException: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7429)
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:975)
2019-09-05 17:54:46.576 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:505)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :   at java.lang.Thread.run(Thread.java:745)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :  Caused by: javax.cache.CacheException: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1108)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:820)
2019-09-05 17:54:46.577 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6817)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     ... 4 more
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :  Caused by: class org.apache.ignite.transactions.TransactionRollbackException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:920)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:918)
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :     ... 12 more
2019-09-05 17:54:46.578 INFO STDERR [Thread-18] :  Caused by: class org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException: Transaction has been rolled back: 108bb6a2d61--0aad-7c5d--0001
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4248)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2468)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
2019-09-05 17:54:46.579 INFO STDERR [Thread-18] :     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2426)
2019-09-05 17:54:46.579 INFO STDERR [Thr