[jira] [Comment Edited] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Andrey Novikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506811#comment-16506811
 ] 

Andrey Novikov edited comment on IGNITE-8758 at 6/9/18 4:26 AM:


Fixed UI for the profile menu and user list. [~pkonstantinov], please test.


was (Author: anovikov):
Fixed ui for profile menu, user list.

> Web console: Broken UI under Firefox in case of long user name
> --
>
> Key: IGNITE-8758
> URL: https://issues.apache.org/jira/browse/IGNITE-8758
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Minor
> Fix For: 2.6
>
>
> Just change a user name in the profile to 1 
>  and check how it looks under Firefox





[jira] [Assigned] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Andrey Novikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov reassigned IGNITE-8758:
--

Assignee: Pavel Konstantinov  (was: Andrey Novikov)

> Web console: Broken UI under Firefox in case of long user name
> --
>
> Key: IGNITE-8758
> URL: https://issues.apache.org/jira/browse/IGNITE-8758
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Minor
> Fix For: 2.6
>
>
> Just change a user name in the profile to 1 
>  and check how it looks under Firefox





[jira] [Updated] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Andrey Novikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-8758:
---
Component/s: wizards

> Web console: Broken UI under Firefox in case of long user name
> --
>
> Key: IGNITE-8758
> URL: https://issues.apache.org/jira/browse/IGNITE-8758
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.5
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Minor
> Fix For: 2.6
>
>
> Just change a user name in the profile to 1 
>  and check how it looks under Firefox





[jira] [Assigned] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Andrey Novikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov reassigned IGNITE-8758:
--

Assignee: Andrey Novikov

> Web console: Broken UI under Firefox in case of long user name
> --
>
> Key: IGNITE-8758
> URL: https://issues.apache.org/jira/browse/IGNITE-8758
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Minor
>
> Just change a user name in the profile to 1 
>  and check how it looks under Firefox





[jira] [Updated] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Pavel Konstantinov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8758:
---
Description: Just change a user name in the profile to 
1  and check how it looks under 
Firefox  (was: Jaust change a user name in the profile to 
1  and check how it looks under 
Firefox)

> Web console: Broken UI under Firefox in case of long user name
> --
>
> Key: IGNITE-8758
> URL: https://issues.apache.org/jira/browse/IGNITE-8758
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Priority: Minor
>
> Just change a user name in the profile to 1 
>  and check how it looks under Firefox





[jira] [Created] (IGNITE-8758) Web console: Broken UI under Firefox in case of long user name

2018-06-08 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-8758:
--

 Summary: Web console: Broken UI under Firefox in case of long user 
name
 Key: IGNITE-8758
 URL: https://issues.apache.org/jira/browse/IGNITE-8758
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov


Jaust change a user name in the profile to 1 
 and check how it looks under Firefox





[jira] [Assigned] (IGNITE-8744) Web console: Incorrect behavior of cluster activation control

2018-06-08 Thread Andrey Novikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov reassigned IGNITE-8744:
--

Assignee: (was: Andrey Novikov)

> Web console: Incorrect behavior of cluster activation control
> -
>
> Key: IGNITE-8744
> URL: https://issues.apache.org/jira/browse/IGNITE-8744
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Priority: Minor
>
> # start node 
> # activate
> # go to Queries history tab, click Refresh
> # deactivate the cluster using the component - after several seconds the component gets 
> switched to the 'Activating...' state and hangs there for about a minute





[jira] [Commented] (IGNITE-8757) idle_verify utility doesn't show both update counter and hash conflicts

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506670#comment-16506670
 ] 

ASF GitHub Bot commented on IGNITE-8757:


GitHub user glukos opened a pull request:

https://github.com/apache/ignite/pull/4162

IGNITE-8757 idle_verify utility doesn't show both update counter and …

…hash conflict

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8757

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4162.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4162


commit af11c6925ef5c42ebf5273e80638e162fec9f35e
Author: Ivan Rakov 
Date:   2018-06-08T23:02:13Z

IGNITE-8757 idle_verify utility doesn't show both update counter and hash 
conflict




> idle_verify utility doesn't show both update counter and hash conflicts
> ---
>
> Key: IGNITE-8757
> URL: https://issues.apache.org/jira/browse/IGNITE-8757
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>
> If there are two partitions in the cluster, one with different update counters 
> and one with different data, idle_verify will show only the partition with broken 
> counters. We should show both for better visibility. 
> We should also notify the user about rebalancing partitions that were 
> excluded from analysis.





[jira] [Updated] (IGNITE-8756) SQL: CREATE/ALTER USER documentation should contain information about case sensitivity of username

2018-06-08 Thread Denis Magda (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-8756:

Component/s: documentation

> SQL: CREATE/ALTER USER documentation should contain information about case 
> sensitivity of username
> --
>
> Key: IGNITE-8756
> URL: https://issues.apache.org/jira/browse/IGNITE-8756
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation, sql
>Affects Versions: 2.5
>Reporter: Andrey Aleksandrov
>Priority: Major
>  Labels: doc
> Fix For: 2.6
>
>
> The documentation currently says the following:
> https://apacheignite-sql.readme.io/docs/create-user#section-description
> For instance, if {{test}} was set as a username then:
>  * You can use {{Test}}, {{TEst}}, {{TEST}} and other combinations from JDBC 
> and ODBC.
>  * You have to use {{TEST}} as the username from Ignite's native SQL APIs 
> designed for Java, .NET and other programming languages.
> But the actual behavior is the following:
> If you create the user with a quoted name ("test") using SQL: 
> CREATE USER "test" WITH PASSWORD 'test' 
> it will be created exactly as written (in this case, test). 
> If you create the user with an unquoted name (test) using SQL: 
> CREATE USER test WITH PASSWORD 'test' 
> then the username will be stored in uppercase (TEST). 
> The same applies to ALTER USER.
> The documentation should be updated to clarify that SQL supports case-sensitive 
> usernames as well (using quotes).
>  
>  





[jira] [Assigned] (IGNITE-8179) ZookeeperDiscoverySpiTest#testCommunicationFailureResolve_KillRandom always fails on TC

2018-06-08 Thread Vitaliy Biryukov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Biryukov reassigned IGNITE-8179:


Assignee: Vitaliy Biryukov

> ZookeeperDiscoverySpiTest#testCommunicationFailureResolve_KillRandom always 
> fails on TC
> ---
>
> Key: IGNITE-8179
> URL: https://issues.apache.org/jira/browse/IGNITE-8179
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Vitaliy Biryukov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Test fails on TC with the following stack trace:
> {noformat}
> class org.apache.ignite.IgniteCheckedException: Failed to start manager: 
> GridManagerAdapter [enabled=true, 
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1698)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1007)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1977)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1720)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1148)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:646)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrids(GridAbstractTest.java:683)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGridsMultiThreaded(GridAbstractTest.java:710)
> at 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.startGridsMultiThreaded(GridCommonAbstractTest.java:507)
> at 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.startGridsMultiThreaded(GridCommonAbstractTest.java:497)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testCommunicationFailureResolve_KillRandom(ZookeeperDiscoverySpiTest.java:2742)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> SPI: ZookeeperDiscoverySpi [zkRootPath=/apacheIgnite, 
> zkConnectionString=127.0.0.1:40921,127.0.0.1:35014,127.0.0.1:38754, 
> joinTimeout=0, sesTimeout=2000, clientReconnectDisabled=false, 
> internalLsnr=null]
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:905)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1693)
> ... 23 more
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to 
> initialize Zookeeper nodes
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:827)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoin(ZookeeperDiscoveryImpl.java:957)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:775)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:693)
> at 
> org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:471)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> ... 25 more
> Caused by: 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClientFailedException: 
> org.apache.zookeeper.KeeperException$SessionExpiredException: 

[jira] [Updated] (IGNITE-8757) idle_verify utility doesn't show both update counter and hash conflicts

2018-06-08 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8757:
---
Description: 
If there are two partitions in the cluster, one with different update counters and 
one with different data, idle_verify will show only the partition with broken 
counters. We should show both for better visibility. 
We should also notify the user about rebalancing partitions that were excluded 
from analysis.

  was:
If there are two partitions in cluster, one with different update counters and 
one with different data, idle_verify will show only partition with broken 
counters.
We should show both for better visibility. 


> idle_verify utility doesn't show both update counter and hash conflicts
> ---
>
> Key: IGNITE-8757
> URL: https://issues.apache.org/jira/browse/IGNITE-8757
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>
> If there are two partitions in the cluster, one with different update counters 
> and one with different data, idle_verify will show only the partition with broken 
> counters. We should show both for better visibility. 
> We should also notify the user about rebalancing partitions that were 
> excluded from analysis.





[jira] [Created] (IGNITE-8757) idle_verify utility doesn't show both update counter and hash conflicts

2018-06-08 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-8757:
--

 Summary: idle_verify utility doesn't show both update counter and 
hash conflicts
 Key: IGNITE-8757
 URL: https://issues.apache.org/jira/browse/IGNITE-8757
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Rakov
Assignee: Ivan Rakov


If there are two partitions in the cluster, one with different update counters and 
one with different data, idle_verify will show only the partition with broken 
counters.
We should show both for better visibility. 





[jira] [Commented] (IGNITE-8183) ZookeeperDiscoverySpiTest#testSegmentation3 fails on TC and locally

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506414#comment-16506414
 ] 

ASF GitHub Bot commented on IGNITE-8183:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4049


> ZookeeperDiscoverySpiTest#testSegmentation3 fails on TC and locally
> ---
>
> Key: IGNITE-8183
> URL: https://issues.apache.org/jira/browse/IGNITE-8183
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails with assertion on awaits on latch:
> {noformat}
> junit.framework.AssertionFailedError
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testSegmentation3(ZookeeperDiscoverySpiTest.java:1060)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> For some reason SEGMENTATION event is never fired, so assertion on latch 
> fails. Investigation is needed.





[jira] [Commented] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-08 Thread Andrew Mashenkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506262#comment-16506262
 ] 

Andrew Mashenkov commented on IGNITE-8503:
--

Tests look fine.

I've checked this fix against IGNITE-8681 and observed that it fixes
org.apache.ignite.internal.processors.cache.persistence.db.IgnitePdsWithTtlTest#testTtlIsAppliedAfterRestart

> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck
>
> GridCacheMapEntry initializes startVersion in a wrong way.
> This leads to the IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure; the 
> reason is "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> This test restarts the grid and checks that no entries are present in the grid.
> But with high probability one of the 7000 entries that should expire is resurrected 
> instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5798755758125626876=testDetails_IgniteTests24Java8=%3Cdefault%3E]





[jira] [Updated] (IGNITE-8183) ZookeeperDiscoverySpiTest#testSegmentation3 fails on TC and locally

2018-06-08 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8183:
---
Fix Version/s: 2.6

> ZookeeperDiscoverySpiTest#testSegmentation3 fails on TC and locally
> ---
>
> Key: IGNITE-8183
> URL: https://issues.apache.org/jira/browse/IGNITE-8183
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails with assertion on awaits on latch:
> {noformat}
> junit.framework.AssertionFailedError
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testSegmentation3(ZookeeperDiscoverySpiTest.java:1060)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> For some reason SEGMENTATION event is never fired, so assertion on latch 
> fails. Investigation is needed.





[jira] [Updated] (IGNITE-8756) SQL: CREATE/ALTER USER documentation should contain information about case sensitivity of username

2018-06-08 Thread Andrey Aleksandrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-8756:
---
Labels: doc  (was: docuentation)

> SQL: CREATE/ALTER USER documentation should contain information about case 
> sensitivity of username
> --
>
> Key: IGNITE-8756
> URL: https://issues.apache.org/jira/browse/IGNITE-8756
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.5
>Reporter: Andrey Aleksandrov
>Priority: Major
>  Labels: doc
> Fix For: 2.6
>
>
> The documentation currently says the following:
> https://apacheignite-sql.readme.io/docs/create-user#section-description
> For instance, if {{test}} was set as a username then:
>  * You can use {{Test}}, {{TEst}}, {{TEST}} and other combinations from JDBC 
> and ODBC.
>  * You have to use {{TEST}} as the username from Ignite's native SQL APIs 
> designed for Java, .NET and other programming languages.
> But the actual behavior is the following:
> If you create the user with a quoted name ("test") using SQL: 
> CREATE USER "test" WITH PASSWORD 'test' 
> it will be created exactly as written (in this case, test). 
> If you create the user with an unquoted name (test) using SQL: 
> CREATE USER test WITH PASSWORD 'test' 
> then the username will be stored in uppercase (TEST). 
> The same applies to ALTER USER.
> The documentation should be updated to clarify that SQL supports case-sensitive 
> usernames as well (using quotes).
>  
>  





[jira] [Updated] (IGNITE-8756) SQL: CREATE/ALTER USER documentation should contain information about case sensitivity of username

2018-06-08 Thread Andrey Aleksandrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-8756:
---
Labels: docuentation  (was: )

> SQL: CREATE/ALTER USER documentation should contain information about case 
> sensitivity of username
> --
>
> Key: IGNITE-8756
> URL: https://issues.apache.org/jira/browse/IGNITE-8756
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.5
>Reporter: Andrey Aleksandrov
>Priority: Major
>  Labels: docuentation
> Fix For: 2.6
>
>
> The documentation currently says the following:
> https://apacheignite-sql.readme.io/docs/create-user#section-description
> For instance, if {{test}} was set as a username then:
>  * You can use {{Test}}, {{TEst}}, {{TEST}} and other combinations from JDBC 
> and ODBC.
>  * You have to use {{TEST}} as the username from Ignite's native SQL APIs 
> designed for Java, .NET and other programming languages.
> But the actual behavior is the following:
> If you create the user with a quoted name ("test") using SQL: 
> CREATE USER "test" WITH PASSWORD 'test' 
> it will be created exactly as written (in this case, test). 
> If you create the user with an unquoted name (test) using SQL: 
> CREATE USER test WITH PASSWORD 'test' 
> then the username will be stored in uppercase (TEST). 
> The same applies to ALTER USER.
> The documentation should be updated to clarify that SQL supports case-sensitive 
> usernames as well (using quotes).
>  
>  





[jira] [Created] (IGNITE-8756) SQL: CREATE/ALTER USER documentation should contain information about case sensitivity of username

2018-06-08 Thread Andrey Aleksandrov (JIRA)
Andrey Aleksandrov created IGNITE-8756:
--

 Summary: SQL: CREATE/ALTER USER documentation should contain 
information about case sensitivity of username
 Key: IGNITE-8756
 URL: https://issues.apache.org/jira/browse/IGNITE-8756
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.5
Reporter: Andrey Aleksandrov
 Fix For: 2.6


The documentation currently says the following:

https://apacheignite-sql.readme.io/docs/create-user#section-description

For instance, if {{test}} was set as a username then:
 * You can use {{Test}}, {{TEst}}, {{TEST}} and other combinations from JDBC 
and ODBC.
 * You have to use {{TEST}} as the username from Ignite's native SQL APIs 
designed for Java, .NET and other programming languages.

But the actual behavior is the following:

If you create the user with a quoted name ("test") using SQL:

CREATE USER "test" WITH PASSWORD 'test'

it will be created exactly as written (in this case, test).

If you create the user with an unquoted name (test) using SQL:

CREATE USER test WITH PASSWORD 'test'

then the username will be stored in uppercase (TEST).

The same applies to ALTER USER.

The documentation should be updated to clarify that SQL supports case-sensitive 
usernames as well (using quotes).
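For illustration, a minimal sketch of the behavior above through the thin JDBC driver (the connection URL, the default ignite/ignite superuser credentials, and the class name are assumptions for this example, not part of this ticket):

{code:java}
// Illustrative sketch only: assumes a local node with authentication enabled,
// the default superuser credentials ignite/ignite, and the Ignite thin JDBC
// driver on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUserCaseSensitivityExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1", "ignite", "ignite");
             Statement stmt = conn.createStatement()) {
            // Unquoted identifier: the username is stored upper-cased, i.e. as TEST.
            stmt.executeUpdate("CREATE USER test WITH PASSWORD 'test'");

            // Quoted identifier: the username is stored exactly as written, i.e. as test.
            stmt.executeUpdate("CREATE USER \"test\" WITH PASSWORD 'test'");
        }
    }
}
{code}

The quoted form is the one the documentation should mention as the way to get a case-sensitive username.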

 

 





[jira] [Assigned] (IGNITE-8748) All FileIO#write methods should return number of written bytes

2018-06-08 Thread Alexey Stelmak (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Stelmak reassigned IGNITE-8748:
--

Assignee: Alexey Stelmak

> All FileIO#write methods should return number of written bytes
> --
>
> Key: IGNITE-8748
> URL: https://issues.apache.org/jira/browse/IGNITE-8748
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Assignee: Alexey Stelmak
>Priority: Major
> Fix For: 2.6
>
>
> FileIO#write(byte[], int, int) doesn't return the number of written bytes, which 
> makes it impossible for callers to detect a situation of no space left on 
> device.
> The API should be changed to return the number of written bytes, and all callers of 
> this method should adopt the change so they can detect the "no space left" situation.
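A minimal sketch of the calling pattern this change would enable (the WritableIo interface and writeFully() below are hypothetical stand-ins for the proposed int-returning FileIO#write, not the current API):

{code:java}
// Illustrative sketch only: assumes the proposed signature change, i.e. that
// write(byte[], int, int) is made to return the number of bytes actually written.
public final class ShortWriteCheckSketch {
    /** Hypothetical shape of the proposed write method. */
    interface WritableIo {
        int write(byte[] buf, int off, int len) throws java.io.IOException;
    }

    static void writeFully(WritableIo fileIo, byte[] buf) throws java.io.IOException {
        int written = fileIo.write(buf, 0, buf.length);

        // With a returned byte count the caller can detect "no space left on device".
        if (written < buf.length)
            throw new java.io.IOException("Short write: " + written + " of "
                + buf.length + " bytes (no space left on device?)");
    }
}
{code}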





[jira] [Commented] (IGNITE-7818) Incorrect assertion in PDS page eviction method

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506249#comment-16506249
 ] 

ASF GitHub Bot commented on IGNITE-7818:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4024


> Incorrect assertion in PDS page eviction method
> ---
>
> Key: IGNITE-7818
> URL: https://issues.apache.org/jira/browse/IGNITE-7818
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Fedotov
>Priority: Major
> Fix For: 2.6
>
> Attachments: PageMemoryPdsAssertTest.java
>
>
> There is an assertion in the method 
> org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.Segment#removePageForReplacement:
>  
> {code:java}
> assert relRmvAddr != INVALID_REL_PTR;{code}
> This seems potentially dangerous. In some rare cases, when the iteration count 
> exceeds 40% of the allocated pages and all processed pages are 
> acquired, the {{relRmvAddr}} variable will remain uninitialized and 
> an {{AssertionError}} will be thrown. But it's a correct case, and a page to 
> evict can be found later in the method {{tryToFindSequentially}}.
>  





[jira] [Comment Edited] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506233#comment-16506233
 ] 

Sergey Chugunov edited comment on IGNITE-8657 at 6/8/18 4:31 PM:
-

[~agoncharuk],

Good catch, thanks for spotting this!

I reviewed the code and found out that the assertion was caused by the quite unusual 
property *forceServerMode*.
The problem with it was that *ClusterNode#isClient* for such a client returns 
false while *SinglePartitionMessage#client* sent from this client returns true.

I'm not sure if we need to force such clients to reconnect, so I changed the 
implementation in such a way that we don't force them to reconnect. After that the 
test started passing on TC, so I think this logic works.

What do you think?


was (Author: sergey-chugunov):
[~agoncharuk],

Good catch, thanks for spotting this!

I reviewed the code and found out that assertion was caused by quite unusual 
property *forceServerMode*.
The problem with it was that ClusterNode#isClient for such client returns false 
when SinglePartitionMessage sent from this client returns true.

I'm not sure if we need to force reconnecting of such clients so I changed 
implementation in such was that we don't force them to reconnect. After that 
test started passing on TC so I think this logic works.

What do you think?

> Simultaneous start of bunch of client nodes may lead to some clients hangs
> --
>
> Key: IGNITE-8657
> URL: https://issues.apache.org/jira/browse/IGNITE-8657
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> h3. Description
> PartitionExchangeManager uses the system property 
> *IGNITE_EXCHANGE_HISTORY_SIZE* to manage the maximum number of exchange objects and 
> optimize memory consumption.
> The default value of the property is 1000, but in scenarios with many caches and 
> partitions it is reasonable to set the exchange history size to a smaller value of 
> around a few dozen.
> Then, if the user starts at once more client nodes than the history size, some 
> clients may hang because their exchange information was preempted and is no 
> longer available.
> h3. Workarounds
> Two workarounds are possible: 
> * Do not start at once more clients than the history size.
> * Restart the hanging client node.
> h3. Solution
> Forcing a client node to reconnect when the server detects the loss of its exchange 
> information prevents client nodes from hanging.
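A minimal sketch of the first workaround listed above (illustration only: the client count, batch size, and class name are arbitrary assumptions; IGNITE_EXCHANGE_HISTORY_SIZE is the system property named in the description and is applied on the server nodes):

{code:java}
// Start clients in batches no larger than the configured exchange history size,
// so their exchange information is not preempted before they finish joining.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientStartupSketch {
    /** Assumed value set on the servers, e.g. -DIGNITE_EXCHANGE_HISTORY_SIZE=40. */
    private static final int EXCHANGE_HISTORY_SIZE = 40;

    public static void main(String[] args) throws InterruptedException {
        int totalClients = 100;

        for (int i = 0; i < totalClients; i++) {
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setIgniteInstanceName("client-" + i)
                .setClientMode(true);

            Ignite client = Ignition.start(cfg);

            // Pause between batches so no more clients than the history size join at once.
            if ((i + 1) % EXCHANGE_HISTORY_SIZE == 0)
                Thread.sleep(5_000);
        }
    }
}
{code}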





[jira] [Comment Edited] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506233#comment-16506233
 ] 

Sergey Chugunov edited comment on IGNITE-8657 at 6/8/18 4:31 PM:
-

[~agoncharuk],

Good catch, thanks for spotting this!

I reviewed the code and found out that the assertion was caused by the quite unusual 
property *forceServerMode*.
The problem with it was that ClusterNode#isClient for such a client returns false 
while SinglePartitionMessage sent from this client returns true.

I'm not sure if we need to force such clients to reconnect, so I changed the 
implementation in such a way that we don't force them to reconnect. After that the 
test started passing on TC, so I think this logic works.

What do you think?


was (Author: sergey-chugunov):
[~agoncharuk],

Good catch, thanks for spotting this!

I reviewed the code and found out that assertion was caused by quite unusual 
property forceServerMode.
The problem with it was that ClusterNode#isClient for such client returns false 
when SinglePartitionMessage sent from this client returns true.

I'm not sure if we need to force reconnecting of such clients so I changed 
implementation in such was that we don't force them to reconnect. After that 
test started passing on TC so I think this logic works.

What do you think?

> Simultaneous start of bunch of client nodes may lead to some clients hangs
> --
>
> Key: IGNITE-8657
> URL: https://issues.apache.org/jira/browse/IGNITE-8657
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> h3. Description
> PartitionExchangeManager uses the system property 
> *IGNITE_EXCHANGE_HISTORY_SIZE* to manage the maximum number of exchange objects and 
> optimize memory consumption.
> The default value of the property is 1000, but in scenarios with many caches and 
> partitions it is reasonable to set the exchange history size to a smaller value of 
> around a few dozen.
> Then, if the user starts at once more client nodes than the history size, some 
> clients may hang because their exchange information was preempted and is no 
> longer available.
> h3. Workarounds
> Two workarounds are possible: 
> * Do not start at once more clients than the history size.
> * Restart the hanging client node.
> h3. Solution
> Forcing a client node to reconnect when the server detects the loss of its exchange 
> information prevents client nodes from hanging.





[jira] [Commented] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506233#comment-16506233
 ] 

Sergey Chugunov commented on IGNITE-8657:
-

[~agoncharuk],

Good catch, thanks for spotting this!

I reviewed the code and found out that the assertion was caused by the quite unusual 
property forceServerMode.
The problem with it was that ClusterNode#isClient for such a client returns false 
while SinglePartitionMessage sent from this client returns true.

I'm not sure if we need to force such clients to reconnect, so I changed the 
implementation in such a way that we don't force them to reconnect. After that the 
test started passing on TC, so I think this logic works.

What do you think?

> Simultaneous start of bunch of client nodes may lead to some clients hangs
> --
>
> Key: IGNITE-8657
> URL: https://issues.apache.org/jira/browse/IGNITE-8657
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> h3. Description
> PartitionExchangeManager uses the system property 
> *IGNITE_EXCHANGE_HISTORY_SIZE* to manage the maximum number of exchange objects and 
> optimize memory consumption.
> The default value of the property is 1000, but in scenarios with many caches and 
> partitions it is reasonable to set the exchange history size to a smaller value of 
> around a few dozen.
> Then, if the user starts at once more client nodes than the history size, some 
> clients may hang because their exchange information was preempted and is no 
> longer available.
> h3. Workarounds
> Two workarounds are possible: 
> * Do not start at once more clients than the history size.
> * Restart the hanging client node.
> h3. Solution
> Forcing a client node to reconnect when the server detects the loss of its exchange 
> information prevents client nodes from hanging.





[jira] [Commented] (IGNITE-7319) Memory leak during creating/destroying local cache

2018-06-08 Thread Andrey Aleksandrov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506193#comment-16506193
 ] 

Andrey Aleksandrov commented on IGNITE-7319:


[~agura] updated. Please check.

 

> Memory leak during creating/destroying local cache
> --
>
> Key: IGNITE-7319
> URL: https://issues.apache.org/jira/browse/IGNITE-7319
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.6
>
> Attachments: Demo.java
>
>
> The following code creates local caches:
> {code:java}
> private IgniteCache createLocalCache(String name) { 
> CacheConfiguration cCfg = new 
> CacheConfiguration<>(); 
> cCfg.setName(name); 
> cCfg.setGroupName("localCaches"); // without group leak is much 
> bigger! 
> cCfg.setStoreKeepBinary(true); 
> cCfg.setCacheMode(CacheMode.LOCAL); 
> cCfg.setOnheapCacheEnabled(false); 
> cCfg.setCopyOnRead(false); 
> cCfg.setBackups(0); 
> cCfg.setWriteBehindEnabled(false); 
> cCfg.setReadThrough(false); 
> cCfg.setReadFromBackup(false); 
> cCfg.setQueryEntities(); 
> return ignite.createCache(cCfg).withKeepBinary(); 
> } 
> {code}
> The caches are placed in a queue and are picked up by a worker thread 
> which simply destroys them after removing them from the queue. 
> This setup seems to generate a memory leak of about 1GB per day. 
> When looking at a heap dump, I see all space is occupied by instances of 
> java.util.concurrent.ConcurrentSkipListMap$Node.
> User list: 
> [http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html]
> Update:
> When a local cache is created, a new CONTINUOUS_QUERY task is created too. 
> This task should run until it is canceled, but the Ignite code does not store the 
> CancelableTask anywhere. After the cache is destroyed, this task continues its 
> work.
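A minimal sketch of the create/destroy cycle from the report above (illustration only: it reuses the createLocalCache() helper quoted in the report, belongs in the same class, and omits the queue and the separate worker thread; the method name and iteration count are arbitrary):

{code:java}
// Each created LOCAL cache registers a CONTINUOUS_QUERY task that is never
// cancelled, which is the suspected source of the leak described above.
private void churnLocalCaches(int iterations) {
    for (int i = 0; i < iterations; i++) {
        IgniteCache cache = createLocalCache("tmp-cache-" + i);

        cache.destroy();
    }
}
{code}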





[jira] [Commented] (IGNITE-8753) Improve error message when requested topology version was preempted from Discovery Cache

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506138#comment-16506138
 ] 

Sergey Chugunov commented on IGNITE-8753:
-

[~yzhdanov],

Current message looks like this:
{noformat}
Failed to resolve nodes topology [cacheGrp=, topVer=, history=, snap=, 
locNode=]
{noformat}

> Improve error message when requested topology version was preempted from 
> Discovery Cache
> 
>
> Key: IGNITE-8753
> URL: https://issues.apache.org/jira/browse/IGNITE-8753
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Priority: Major
>
> When a lot of nodes try to join the cluster at the same time (which is common 
> when ZookeeperDiscoverySpi is used), the size of the Discovery Cache may be exhausted, 
> so the next node won't find the topology version it needs to proceed with joining.
> For now an exception is thrown in this situation; we need to improve its message 
> with a suggestion to check the DISCOVERY_HISTORY setting.





[jira] [Commented] (IGNITE-8753) Improve error message when requested topology version was preempted from Discovery Cache

2018-06-08 Thread Yakov Zhdanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506134#comment-16506134
 ] 

Yakov Zhdanov commented on IGNITE-8753:
---

[~sergey-chugunov] can you please provide the current message? I want us to agree 
on the final message.

> Improve error message when requested topology version was preempted from 
> Discovery Cache
> 
>
> Key: IGNITE-8753
> URL: https://issues.apache.org/jira/browse/IGNITE-8753
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Priority: Major
>
> When a lot of nodes try to join the cluster at the same time (which is common 
> when ZookeeperDiscoverySpi is used), the size of the Discovery Cache may be exhausted, 
> so the next node won't find the topology version it needs to proceed with joining.
> For now an exception is thrown in this situation; we need to improve its message 
> with a suggestion to check the DISCOVERY_HISTORY setting.





[jira] [Created] (IGNITE-8755) NegativeArraySizeException when trying to serialize in GridClientOptimizedMarshaller humongous object

2018-06-08 Thread Ivan Daschinskiy (JIRA)
Ivan Daschinskiy created IGNITE-8755:


 Summary: NegativeArraySizeException when trying to serialize in 
GridClientOptimizedMarshaller humongous object
 Key: IGNITE-8755
 URL: https://issues.apache.org/jira/browse/IGNITE-8755
 Project: Ignite
  Issue Type: Bug
  Components: binary
Affects Versions: 2.5
Reporter: Ivan Daschinskiy
 Fix For: 2.6


When trying to serialize a humongous object in GridClientOptimizedMarshaller, 
a NegativeArraySizeException is thrown. See below.



{code:java}
java.io.IOException: class org.apache.ignite.IgniteCheckedException: Failed to 
serialize object: GridClientResponse [clientId=null, reqId=0, destId=null, 
status=0, errMsg=null, 
result=org.apache.ignite.internal.processors.rest.protocols.tcp.TcpRestParserSelfTest$HugeObject@60a582c1]

at 
org.apache.ignite.internal.client.marshaller.optimized.GridClientOptimizedMarshaller.marshal(GridClientOptimizedMarshaller.java:101)
at 
org.apache.ignite.internal.processors.rest.protocols.tcp.TcpRestParserSelfTest.testHugeObject(TcpRestParserSelfTest.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2086)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2001)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to serialize 
object: GridClientResponse [clientId=null, reqId=0, destId=null, status=0, 
errMsg=null, 
result=org.apache.ignite.internal.processors.rest.protocols.tcp.TcpRestParserSelfTest$HugeObject@60a582c1]
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.marshal0(OptimizedMarshaller.java:206)
at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.marshal(AbstractNodeNameAwareMarshaller.java:58)
at 
org.apache.ignite.internal.util.IgniteUtils.marshal(IgniteUtils.java:10059)
at 
org.apache.ignite.internal.client.marshaller.optimized.GridClientOptimizedMarshaller.marshal(GridClientOptimizedMarshaller.java:88)
... 10 more
Caused by: java.lang.NegativeArraySizeException
at 
org.apache.ignite.internal.util.io.GridUnsafeDataOutput.requestFreeSize(GridUnsafeDataOutput.java:131)
at 
org.apache.ignite.internal.util.io.GridUnsafeDataOutput.write(GridUnsafeDataOutput.java:166)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.write(OptimizedObjectOutputStream.java:142)
at 
org.apache.ignite.internal.processors.rest.protocols.tcp.TcpRestParserSelfTest$HugeObject.writeExternal(TcpRestParserSelfTest.java:122)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeExternalizable(OptimizedObjectOutputStream.java:319)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:814)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:242)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeObjectOverride(OptimizedObjectOutputStream.java:159)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at 
org.apache.ignite.internal.processors.rest.client.message.GridClientResponse.writeExternal(GridClientResponse.java:103)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeExternalizable(OptimizedObjectOutputStream.java:319)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:814)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:242)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectOutputStream.writeObjectOverride(OptimizedObjectOutputStream.java:159)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at 
org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.marshal0(OptimizedMarshaller.java:201)
{code}

The main cause of this is that GridClientOptimizedMarshaller marshals the object 
through OptimizedMarshaller without a backing OutputStream, so an arithmetic 
overflow occurs in GridUnsafeDataOutput#requestFreeSize (see the stack trace above).
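A minimal, self-contained sketch of how such an overflow produces a NegativeArraySizeException (an illustrative assumption about the failure mode, not the actual GridUnsafeDataOutput code):

{code:java}
// Growing an int-sized buffer capacity for a "humongous" payload overflows,
// and the resulting negative array size triggers the exception.
public class CapacityOverflowSketch {
    public static void main(String[] args) {
        int curCap = 1 << 30;    // Current buffer capacity: 1 GiB.
        int reqBytes = 1 << 30;  // writeExternal() asks for another ~1 GiB.

        int newCap = curCap + reqBytes; // Overflows to Integer.MIN_VALUE (negative).

        System.out.println("newCap = " + newCap);

        byte[] grown = new byte[newCap]; // Throws NegativeArraySizeException.
    }
}
{code}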

[jira] [Commented] (IGNITE-8739) Implement WA for TCP communication related to hanging on descriptor reservation

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506125#comment-16506125
 ] 

ASF GitHub Bot commented on IGNITE-8739:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4148


> Implement WA for TCP communication related to hanging on descriptor 
> reservation
> ---
>
> Key: IGNITE-8739
> URL: https://issues.apache.org/jira/browse/IGNITE-8739
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.6
>
>
> We have observed several times a situation in a production environment where a thread 
> establishing an outgoing connection hangs infinitely on the recovery 
> descriptor. While the root cause is not known yet, we need to implement a 
> workaround for this case which will close the connection and log additional 
> information.





[jira] [Updated] (IGNITE-8752) Deadlock when registering binary metadata while holding topology read lock

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8752:
-
Affects Version/s: 2.1

> Deadlock when registering binary metadata while holding topology read lock
> --
>
> Key: IGNITE-8752
> URL: https://issues.apache.org/jira/browse/IGNITE-8752
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Priority: Critical
> Fix For: 2.6
>
>
> The following deadlock was reproduced on ignite-2.4 version:
> {code}
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:284)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassName(BinaryContext.java:1191)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:773)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:751)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:622)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:164)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:396)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:875)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:825)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1783)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.runEntryProcessor(GridCacheMapEntry.java:5264)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4667)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4484)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3083)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2977)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1732)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1610)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1270)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1769)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2420)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
>   at 
> 

[jira] [Updated] (IGNITE-8739) Implement WA for TCP communication related to hanging on descriptor reservation

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8739:
-
Description: We have observed several times a situation in a production 
environment where a thread establishing an outgoing connection hangs 
infinitely on the recovery descriptor. While the root cause is not known yet, we 
need to implement a workaround for this case which will close the connection 
and log additional information.

> Implement WA for TCP communication related to hanging on descriptor 
> reservation
> ---
>
> Key: IGNITE-8739
> URL: https://issues.apache.org/jira/browse/IGNITE-8739
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.6
>
>
> We have observed several times a situation in a production environment where a thread 
> establishing an outgoing connection hangs infinitely on the recovery 
> descriptor. While the root cause is not known yet, we need to implement a 
> workaround for this case which will close the connection and log additional 
> information.





[jira] [Updated] (IGNITE-8739) Implement WA for TCP communication related to hanging on descriptor reservation

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8739:
-
Affects Version/s: 2.4

> Implement WA for TCP communication related to hanging on descriptor 
> reservation
> ---
>
> Key: IGNITE-8739
> URL: https://issues.apache.org/jira/browse/IGNITE-8739
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
> Fix For: 2.6
>
>






[jira] [Updated] (IGNITE-8754) Node outside of baseline does not start when service configured

2018-06-08 Thread Vladislav Pyatkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-8754:
--
Description: 
It is enough to configure a service in {{ServiceConfiguration}} and the node does not 
start if it is outside of the baseline.
{noformat}
"async-runnable-runner-1" #287 prio=5 os_prio=0 tid=0x24e0c800 
nid=0x4e6c waiting on condition [0xe87fe000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:287)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart(GridServiceProcessor.java:228)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1105)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
- locked <0x00076c142400> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
at 
org.gridgain.internal.ServiceOnNodeOutOfBaselineTest.lambda$test$0(ServiceOnNodeOutOfBaselineTest.java:107)
at 
org.gridgain.internal.ServiceOnNodeOutOfBaselineTest$$Lambda$22/781127963.run(Unknown
 Source)
at 
org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$1(GridTestUtils.java:898)
at 
org.apache.ignite.testframework.GridTestUtils$$Lambda$23/1655470614.call(Unknown
 Source)
at 
org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$2(GridTestUtils.java:956)
at 
org.apache.ignite.testframework.GridTestUtils$$Lambda$24/1782331932.run(Unknown 
Source)
at 
org.apache.ignite.testframework.GridTestUtils$6.call(GridTestUtils.java:1254)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
{noformat}
Test [^ServiceOnNodeOutOfBaselineTest.java] hangs with an assertion because the node 
can not be started or stopped.

  was:
Enough to configure service in {{ServiceConfiguration}} and the node does not 
started if the node outside of baseline.
{noformat}
"async-runnable-runner-1" #287 prio=5 os_prio=0 tid=0x24e0c800 
nid=0x4e6c waiting on condition [0xe87fe000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:287)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart(GridServiceProcessor.java:228)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1105)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
- locked <0x00076c142400> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
at 
org.gridgain.internal.ServiceOnNodeOutOfBaselineTest.lambda$test$0(ServiceOnNodeOutOfBaselineTest.java:107)
at 

[jira] [Created] (IGNITE-8754) Node outside of baseline does not start when service configured

2018-06-08 Thread Vladislav Pyatkov (JIRA)
Vladislav Pyatkov created IGNITE-8754:
-

 Summary: Node outside of baseline does not start when service 
configured
 Key: IGNITE-8754
 URL: https://issues.apache.org/jira/browse/IGNITE-8754
 Project: Ignite
  Issue Type: Bug
Reporter: Vladislav Pyatkov
 Attachments: ServiceOnNodeOutOfBaselineTest.java

It is enough to configure a service in {{ServiceConfiguration}}: the node does not 
start if it is outside of the baseline.
{noformat}
"async-runnable-runner-1" #287 prio=5 os_prio=0 tid=0x24e0c800 
nid=0x4e6c waiting on condition [0xe87fe000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart0(GridServiceProcessor.java:287)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.onKernalStart(GridServiceProcessor.java:228)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1105)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
- locked <0x00076c142400> (a 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
at 
org.gridgain.internal.ServiceOnNodeOutOfBaselineTest.lambda$test$0(ServiceOnNodeOutOfBaselineTest.java:107)
at 
org.gridgain.internal.ServiceOnNodeOutOfBaselineTest$$Lambda$22/781127963.run(Unknown
 Source)
at 
org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$1(GridTestUtils.java:898)
at 
org.apache.ignite.testframework.GridTestUtils$$Lambda$23/1655470614.call(Unknown
 Source)
at 
org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$2(GridTestUtils.java:956)
at 
org.apache.ignite.testframework.GridTestUtils$$Lambda$24/1782331932.run(Unknown 
Source)
at 
org.apache.ignite.testframework.GridTestUtils$6.call(GridTestUtils.java:1254)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
{noformat}
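
For reference, the scenario can be set up with a configuration along the lines of the following minimal sketch. The instance name, service name and the no-op service body are illustrative assumptions; persistence and baseline topology setup are assumed to be configured separately.

{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceConfiguration;
import org.apache.ignite.services.ServiceContext;

public class ServiceOutsideBaselineExample {
    public static void main(String[] args) {
        ServiceConfiguration svcCfg = new ServiceConfiguration();

        svcCfg.setName("dummy-service"); // illustrative service name
        svcCfg.setTotalCount(1);
        svcCfg.setService(new Service() {
            @Override public void cancel(ServiceContext ctx) { /* no-op */ }
            @Override public void init(ServiceContext ctx) { /* no-op */ }
            @Override public void execute(ServiceContext ctx) { /* no-op */ }
        });

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("out-of-baseline-node") // illustrative instance name
            .setServiceConfiguration(svcCfg);

        // With persistence enabled and this node excluded from the baseline topology,
        // the reported issue is that this call blocks in GridServiceProcessor.onKernalStart0().
        Ignition.start(cfg);
    }
}
{code}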



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8751) Possible race on node segmentation.

2018-06-08 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506083#comment-16506083
 ] 

Andrey Gura edited comment on IGNITE-8751 at 6/8/18 2:43 PM:
-

It isn't a race. {{tcp-disco-srvr}} and {{tcp-disco-msg-worker}} are interrupted 
before the segmentation policy handles segmentation. See 
{{org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.DiscoveryWorker#onSegmentation}}
 where we first disconnect the SPI and then handle segmentation.

It seems this could be fixed by adding a check of the SPI state in the exception handlers of 
{{tcp-disco-srvr}} and {{tcp-disco-msg-worker}}.


was (Author: agura):
It isn't race. {{tcp-disco-srvr}} is interrupted earlier than segmentation 
policy handles segmentation. See 
{{org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.DiscoveryWorker#onSegmentation}}
 where we first disconnect SPI and then handle segmentation.

It seems could be fixed by adding check on SPI state in exception handler of 
{{tcp-disco-srvr}}.

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> Segmentation policy may be ignored, probably, due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from segmented node.
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8727) Provide way to test MMap WAL modes failures

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506091#comment-16506091
 ] 

ASF GitHub Bot commented on IGNITE-8727:


GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/4160

IGNITE-8727 Fixed WalFlush with MMap tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8727

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4160.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4160


commit ceeec8591ad50b8475f0aa2d609b1d95372f6bd9
Author: Pavel Kovalenko 
Date:   2018-06-08T14:39:48Z

IGNITE-8727 Fixed WalFlush with MMap tests.




> Provide way to test MMap WAL modes failures
> ---
>
> Key: IGNITE-8727
> URL: https://issues.apache.org/jira/browse/IGNITE-8727
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrey Gura
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Currently 4 tests fail in the PDS 2 suite with a timeout:
>   IgnitePdsTestSuite2: 
> IgniteWalFlushBackgroundWithMmapBufferSelfTest.testFailAfterStart (fail rate 
> 100,0%) 
>   IgnitePdsTestSuite2: 
> IgniteWalFlushBackgroundWithMmapBufferSelfTest.testFailWhileStart (fail rate 
> 100,0%) 
>   IgnitePdsTestSuite2: 
> IgniteWalFlushLogOnlyWithMmapBufferSelfTest.testFailAfterStart (fail rate 
> 100,0%) 
>   IgnitePdsTestSuite2: 
> IgniteWalFlushLogOnlyWithMmapBufferSelfTest.testFailWhileStart (fail rate 
> 100,0%) 
> Tests were introduced in ticket [IGNITE-7809]. 
> The tests try to emulate failures using exceptions in file IO, but memory-mapped (MMap) 
> modes currently cannot be tested with this approach.
> It is suggested to create new testing opportunities for MMap modes.
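
For illustration, the failure-emulation approach the existing tests rely on can be sketched as an I/O decorator that starts throwing after a fixed number of writes. The real tests decorate Ignite's file I/O abstraction; the sketch below uses a plain java.nio channel only to show the pattern, and all names are assumptions.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.atomic.AtomicInteger;

/** Channel decorator that emulates a disk failure after a given number of writes. */
public class FailingChannel implements WritableByteChannel {
    private final WritableByteChannel delegate;
    private final AtomicInteger writesLeft;

    public FailingChannel(WritableByteChannel delegate, int writesBeforeFailure) {
        this.delegate = delegate;
        this.writesLeft = new AtomicInteger(writesBeforeFailure);
    }

    @Override public int write(ByteBuffer src) throws IOException {
        if (writesLeft.getAndDecrement() <= 0)
            throw new IOException("Emulated I/O failure"); // emulate the disk failure

        return delegate.write(src);
    }

    @Override public boolean isOpen() { return delegate.isOpen(); }

    @Override public void close() throws IOException { delegate.close(); }
}
{code}

With memory-mapped WAL the writes bypass such a decorator, which is why a different testing hook is needed for MMap modes.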



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8529) Implement testing framework for checking WAL delta records consistency

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506087#comment-16506087
 ] 

ASF GitHub Bot commented on IGNITE-8529:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/4159

IGNITE-8529 Implement testing framework for checking WAL delta records 
consistency



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-8529

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4159


commit a5c142daf7c46a354d5417dac7cf7c3c79a9488b
Author: Aleksey Plekhanov 
Date:   2018-06-07T10:39:52Z

IGNITE-8529 Draft 3 WIP

commit 0ddd4d82c3625e45f21650267685bd2020997cb1
Author: Aleksey Plekhanov 
Date:   2018-06-07T12:25:42Z

IGNITE-8529 Draft 3 WIP

commit ada909a74d5b000ac741c07421da7f5bcc955023
Author: Aleksey Plekhanov 
Date:   2018-06-07T16:46:33Z

IGNITE-8529 Draft 3 WIP

commit 3f570c578b4946c6d599e9efbabf6260a45bce50
Author: Aleksey Plekhanov 
Date:   2018-06-07T16:51:08Z

IGNITE-8529 Draft 3 WIP

commit 883acf9447c2619799f6078523504082ada4dc21
Author: Aleksey Plekhanov 
Date:   2018-06-07T21:36:02Z

IGNITE-8529 Draft 2 WIP

commit 7cb3d90ff758e42ef7d876d17cb4d597fb0ee240
Author: Aleksey Plekhanov 
Date:   2018-06-08T07:46:42Z

IGNITE-8529 Draft 3 WIP

commit 41d2dc6a44c3a3775254f9d68595e04ba4198e98
Author: Aleksey Plekhanov 
Date:   2018-06-08T10:43:18Z

IGNITE-8529 Implement testing framework for checking WAL delta records 
consistency

commit 4678f6a6b4c7a5922063f2118bb4810f5e2b6d12
Author: Aleksey Plekhanov 
Date:   2018-06-08T12:52:01Z

IGNITE-8529 Made page memory reusable after cache destroy.

commit c64719bf6be1562b0ad8f660eecf780cafca4334
Author: Aleksey Plekhanov 
Date:   2018-06-08T14:23:14Z

IGNITE-8529 Made page memory reusable after cache destroy (fix).

commit 755cae5c68ef472a56871a891095721aebe60ff0
Author: Aleksey Plekhanov 
Date:   2018-06-08T14:32:47Z

IGNITE-8529 Cleanup




> Implement testing framework for checking WAL delta records consistency
> --
>
> Key: IGNITE-8529
> URL: https://issues.apache.org/jira/browse/IGNITE-8529
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Ivan Rakov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Fix For: 2.6
>
>
> We use sharp checkpointing of page memory in persistent mode. That implies 
> that we write two types of records to the write-ahead log: logical (e.g. data 
> records) and physical (page snapshots + binary delta records). Physical 
> records are applied only when a node crashes/stops during an ongoing checkpoint. 
> We have the following invariant: checkpoint #(n-1) + all physical records = 
> checkpoint #n.
> If the correctness of physical records is broken, an Ignite node may recover with 
> an incorrect page memory state, which in turn can cause unexpected delayed 
> errors. However, the consistency of physical records is poorly tested: only a small 
> part of our autotests perform node restarts, and even fewer of them 
> stop a node while a checkpoint is ongoing.
> We should implement an abstract test that:
> 1. Enforces a checkpoint and freezes the memory state at the moment of the checkpoint.
> 2. Performs the necessary test load.
> 3. Enforces a checkpoint again, replays the WAL and checks that the page store at the 
> moment of the previous checkpoint, with all physical records applied, exactly 
> equals the current checkpoint state.
> Besides checking correctness, the test framework should do the following:
> 1. Gather statistics (like a histogram) of the types of written physical records. 
> That will help us know which types of physical records are covered by the test.
> 2. Visualize the expected and actual page state (with all physical records applied) 
> if an incorrect page state is detected.
> Regarding implementation, I suppose we can use the checkpoint listener mechanism 
> to freeze the page memory state at the moment of a checkpoint.
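
A rough skeleton of such an abstract test is sketched below. All helper methods are placeholders, not existing Ignite test API; a real implementation would hook into the checkpoint listener and the WAL iterator.

{code:java}
import java.util.Arrays;
import java.util.Map;

/** Hedged sketch of the proposed checkpoint/WAL consistency check. */
public abstract class AbstractWalDeltaConsistencyTest {
    public final void checkDeltaRecordsConsistency() throws Exception {
        forceCheckpoint();                                          // 1. enforce a checkpoint
        Map<Long, byte[]> frozen = snapshotPageStore();             //    and freeze the page store state

        loadData();                                                 // 2. perform the test load

        forceCheckpoint();                                          // 3. enforce a checkpoint again
        Map<Long, byte[]> expected = applyPhysicalRecords(frozen);  //    replay physical WAL records
        Map<Long, byte[]> actual = snapshotPageStore();

        for (Map.Entry<Long, byte[]> e : expected.entrySet()) {
            if (!Arrays.equals(e.getValue(), actual.get(e.getKey())))
                throw new AssertionError("Page " + e.getKey() + " diverged after applying physical records");
        }
    }

    /** Placeholder: trigger a checkpoint and wait for it to finish. */
    protected abstract void forceCheckpoint() throws Exception;

    /** Placeholder: copy of page contents keyed by page id. */
    protected abstract Map<Long, byte[]> snapshotPageStore() throws Exception;

    /** Placeholder: apply WAL physical records (page snapshots + deltas) on top of a snapshot. */
    protected abstract Map<Long, byte[]> applyPhysicalRecords(Map<Long, byte[]> base) throws Exception;

    /** Placeholder: cache load specific to a concrete test. */
    protected abstract void loadData() throws Exception;
}
{code}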



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8753) Improve error message when requested topology version was preempted from Discovery Cache

2018-06-08 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8753:
---

 Summary: Improve error message when requested topology version was 
preempted from Discovery Cache
 Key: IGNITE-8753
 URL: https://issues.apache.org/jira/browse/IGNITE-8753
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Chugunov


When a lot of nodes try to join the cluster at the same time (which is common when 
ZookeeperDiscoverySpi is used), the size of the Discovery Cache may be exhausted, so the 
next node won't find the topology version it needs to proceed with joining.

For now an exception is thrown in this situation; we need to improve its message 
with a suggestion to check the DISCOVERY_HISTORY setting.
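
The kind of message improvement meant here could look like the following sketch. The exception type, wording and topology version are illustrative; the ticket only asks that the message point the user at the DISCOVERY_HISTORY setting.

{code:java}
public class DiscoveryHistoryExample {
    static RuntimeException historyExhausted(long requestedTopVer) {
        return new IllegalStateException("Requested topology version was evicted from the " +
            "discovery history [topVer=" + requestedTopVer + "]. This can happen when many " +
            "nodes join the cluster simultaneously; consider increasing the DISCOVERY_HISTORY setting.");
    }

    public static void main(String[] args) {
        throw historyExhausted(42); // illustrative topology version
    }
}
{code}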



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8751) Possible race on node segmentation.

2018-06-08 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506083#comment-16506083
 ] 

Andrey Gura commented on IGNITE-8751:
-

It isn't a race. {{tcp-disco-srvr}} is interrupted before the segmentation 
policy handles segmentation. See 
{{org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.DiscoveryWorker#onSegmentation}}
 where we first disconnect the SPI and then handle segmentation.

It seems this could be fixed by adding a check of the SPI state in the exception handler of 
{{tcp-disco-srvr}}.

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> Segmentation policy may be ignored, probably, due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from segmented node.
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8751) Possible race on node segmentation.

2018-06-08 Thread Andrey Gura (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura updated IGNITE-8751:

Fix Version/s: 2.6

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> Segmentation policy may be ignored, probably, due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from segmented node.
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8728) Nodes down after other nodes reboot in the cluster

2018-06-08 Thread David Harvey (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506080#comment-16506080
 ] 

David Harvey commented on IGNITE-8728:
--

Have you validated that you are configured to write through to the underlying 
storage? If writes Ignite believes are committed are simply cached by the 
underlying filesystem, you could get this class of symptom.

> Nodes down after other nodes reboot in the cluster
> --
>
> Key: IGNITE-8728
> URL: https://issues.apache.org/jira/browse/IGNITE-8728
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Mahesh Renduchintala
>Priority: Major
>
> I have two nodes on which we have 3 tables which are partitioned. Indexes are 
> also built on these tables. 
> For 24 hours the caches work fine. The tables are definitely distributed across 
> both nodes.
> Node 2 reboots, the Ignite service gets started on Node 2, and on Node 1 we see 
> the crash below. 
>  
> [10:38:35,437][INFO][tcp-disco-srvr-#2|#2][TcpDiscoverySpi] TCP discovery 
> accepted incoming connection [rmtAddr=/192.168.1.7, rmtPort=45102]
>  [10:38:35,437][INFO][tcp-disco-srvr-#2|#2][TcpDiscoverySpi] TCP discovery 
> spawning a new thread for connection [rmtAddr=/192.168.1.7, rmtPort=45102]
>  [10:38:35,437][INFO][tcp-disco-sock-reader-#12|#12][TcpDiscoverySpi] Started 
> serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102]
>  [10:38:35,451][INFO][tcp-disco-sock-reader-#12|#12][TcpDiscoverySpi] 
> Finished serving remote node connection [rmtAddr=/192.168.1.7:45102, 
> rmtPort=45102
>  [10:38:35,457][SEVERE][tcp-disco-msg-worker-#3|#3][TcpDiscoverySpi] 
> TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node 
> in order to prevent cluster wide instability.
>  java.lang.IllegalStateException: Duplicate key
>  at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223)
>  at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174)
>  at 
> org.apache.ignite.internal.processors.query.QuerySchema.makePatch(QuerySchema.java:114)
>  at 
> org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor.makeSchemaPatch(DynamicCacheDescriptor.java:360)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.validateNode(GridCacheProcessor.java:2536)
>  at 
> org.apache.ignite.internal.managers.GridManagerAdapter$1.validateNode(GridManagerAdapter.java:566)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processJoinRequestMessage(ServerImpl.java:3629)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2736)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
>  at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>  [10:38:35,459][SEVERE][tcp-disco-msg-worker-#3|#3][] Critical system error 
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: 
> Duplicate key]]
>  java.lang.IllegalStateException: Duplicate key
>  at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223)
>  at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174)
>  at 
> org.apache.ignite.internal.processors.query.QuerySchema.makePatch(QuerySchema.java:114)
>  at 
> org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor.makeSchemaPatch(DynamicCacheDescriptor.java:360)
>  at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.validateNode(GridCacheProcessor.java:2536)
>  at 
> org.apache.ignite.internal.managers.GridManagerAdapter$1.validateNode(GridManagerAdapter.java:566)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processJoinRequestMessage(ServerImpl.java:3629)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2736)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
>  at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>  [10:38:35,460][SEVERE][tcp-disco-msg-worker-#3|#3][] JVM will be halted 
> immediately due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, 

[jira] [Assigned] (IGNITE-8749) Exception for "no space left" situation should be propagated to FailureHandler

2018-06-08 Thread Andrey Gura (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura reassigned IGNITE-8749:
---

Assignee: Andrey Gura

> Exception for "no space left" situation should be propagated to FailureHandler
> --
>
> Key: IGNITE-8749
> URL: https://issues.apache.org/jira/browse/IGNITE-8749
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Assignee: Andrey Gura
>Priority: Major
> Fix For: 2.6
>
>
> For now, if a "no space left" situation is detected in the 
> FileWriteAheadLogManager#formatFile method and the corresponding exception is 
> thrown, the exception doesn't get propagated to the FailureHandler and the node 
> continues working.
> As "no space left" is a critical situation, the corresponding exception should be 
> propagated to the handler so that the necessary actions can be taken.
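
A minimal sketch of the requested behaviour is below: catch the I/O error and hand it to a failure handler instead of swallowing it. The handler interface here is a simplified stand-in, not Ignite's actual FailureHandler API, and the file name and size are arbitrary.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

public class FormatFilePropagationExample {
    /** Simplified stand-in for Ignite's critical failure hook (illustrative assumption). */
    interface CriticalFailureHandler {
        void onCriticalError(Throwable err);
    }

    static void formatFile(String path, long len, CriticalFailureHandler hnd) {
        try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
            file.setLength(len); // fails with IOException when there is no space left on the device
        }
        catch (IOException e) {
            // Instead of only logging and continuing, propagate the critical error
            // so the configured handler can stop or halt the node.
            hnd.onCriticalError(e);
        }
    }

    public static void main(String[] args) {
        formatFile("wal-segment-0.wal", 64 * 1024 * 1024,
            err -> { throw new Error("Critical I/O failure, stopping node", err); });
    }
}
{code}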



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8751) Possible race on node segmentation.

2018-06-08 Thread Andrey Gura (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura reassigned IGNITE-8751:
---

Assignee: Andrey Gura

> Possible race on node segmentation.
> ---
>
> Key: IGNITE-8751
> URL: https://issues.apache.org/jira/browse/IGNITE-8751
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Andrew Mashenkov
>Assignee: Andrey Gura
>Priority: Major
>
> Segmentation policy may be ignored, probably, due to a race.
> See [1] for details.
>  [1] 
> [http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]
> Logs from segmented node.
> [08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
> serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
> [08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
> node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
> addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
> /127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
> lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
> isClient=false] 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
> Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 
> java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
> unexpectedly. 
>         at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
>  
>         at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
> [08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
> due to the failure: [failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
> tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8509) A lot of "Execution timeout" result for Cache 6 suite

2018-06-08 Thread Alexei Scherbakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506070#comment-16506070
 ] 

Alexei Scherbakov commented on IGNITE-8509:
---

So far I've found a race condition in the TxRollbackAsyncTest suite leading to test 
assertion failures.

Affected tests: testSynchronousRollback, testMixedAsyncRollbackTypes.

Fixed, multiple TC runs in progress.

> A lot of "Execution timeout" result for Cache 6 suite
> -
>
> Key: IGNITE-8509
> URL: https://issues.apache.org/jira/browse/IGNITE-8509
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Alexei Scherbakov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> *Summary*
> Suite Cache 6 fails with an execution timeout:
> {code:java}
> [org.apache.ignite:ignite-core] [2018-05-15 02:35:14,143][WARN 
> ][grid-timeout-worker-#71656%transactions.TxRollbackOnTimeoutNearCacheTest0%][diagnostic]
>  Found long running transaction [startTime=02:32:57.989, 
> curTime=02:35:14.136, tx=GridDhtTxRemote
> {code}
> *Please refer to the following link for more details* 
> [https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache6=1=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E]
> *Statistics Cache 6 Suite*
>  Recent fails : 42,0% [21 fails / 50 runs]; 
>  Critical recent fails: 10,0% [5 fails / 50 runs];
> Last month (15.04 – 15.05)
> Execution timeout: 21,0% [84 fails / 400 runs];



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8752) Deadlock when registering binary metadata while holding topology read lock

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8752:
-
Description: 
The following deadlock was reproduced on ignite-2.4 version:
{code}
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:284)
at 
org.apache.ignite.internal.binary.BinaryContext.registerUserClassName(BinaryContext.java:1191)
at 
org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:773)
at 
org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:751)
at 
org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:622)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:164)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:396)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:381)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:875)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:825)
at 
org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1783)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.runEntryProcessor(GridCacheMapEntry.java:5264)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4667)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4484)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3083)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2977)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1732)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1610)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1270)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1769)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2420)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1117)
 

[jira] [Updated] (IGNITE-8752) Deadlock when registering binary metadata while holding topology read lock

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8752:
-
Priority: Critical  (was: Major)

> Deadlock when registering binary metadata while holding topology read lock
> --
>
> Key: IGNITE-8752
> URL: https://issues.apache.org/jira/browse/IGNITE-8752
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Priority: Critical
> Fix For: 2.6
>
>
> The following deadlock was reproduced on ignite-2.4 version:
> {code}
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:284)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassName(BinaryContext.java:1191)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:773)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:751)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:622)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:164)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:396)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:875)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:825)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1783)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.runEntryProcessor(GridCacheMapEntry.java:5264)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4667)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4484)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3083)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2977)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1732)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1610)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1270)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1769)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2420)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
>   at 
> 

[jira] [Updated] (IGNITE-8752) Deadlock when registering binary metadata while holding topology read lock

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8752:
-
Fix Version/s: 2.6

> Deadlock when registering binary metadata while holding topology read lock
> --
>
> Key: IGNITE-8752
> URL: https://issues.apache.org/jira/browse/IGNITE-8752
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> The following deadlock was reproduced on ignite-2.4 version:
> {code}
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>   at 
> org.apache.ignite.internal.MarshallerContextImpl.registerClassName(MarshallerContextImpl.java:284)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassName(BinaryContext.java:1191)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:773)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:751)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:622)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:164)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:396)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:875)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:825)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1783)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.runEntryProcessor(GridCacheMapEntry.java:5264)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4667)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4484)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3083)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2977)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1732)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1610)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1270)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1769)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2420)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
>   at 
> 

[jira] [Created] (IGNITE-8752) Deadlock when registering binary metadata while holding topology read lock

2018-06-08 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8752:


 Summary: Deadlock when registering binary metadata while holding 
topology read lock
 Key: IGNITE-8752
 URL: https://issues.apache.org/jira/browse/IGNITE-8752
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8751) Possible race on node segmentation.

2018-06-08 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-8751:


 Summary: Possible race on node segmentation.
 Key: IGNITE-8751
 URL: https://issues.apache.org/jira/browse/IGNITE-8751
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Andrew Mashenkov


Segmentation policy may be ignored, probably, due to a race.
See [1] for details.

 [1] 
[http://apache-ignite-users.70518.x6.nabble.com/Node-pause-for-no-obvious-reason-td21923.html]

Logs from segmented node.
[08:42:42,290][INFO][tcp-disco-sock-reader-#15][TcpDiscoverySpi] Finished 
serving remote node connection [rmtAddr=/10.29.42.45:38712, rmtPort=38712 
[08:42:42,290][WARNING][disco-event-worker-#161][GridDiscoveryManager] Local 
node SEGMENTED: TcpDiscoveryNode [id=8333aa56-8bf4-4558-a387-809b1d2e2e5b, 
addrs=[10.29.42.44, 127.0.0.1], sockAddrs=[sap-datanode1/10.29.42.44:49500, 
/127.0.0.1:49500], discPort=49500, order=1, intOrder=1, 
lastExchangeTime=1528447362286, loc=true, ver=2.5.0#20180523-sha1:86e110c7, 
isClient=false] 
[08:42:42,294][SEVERE][tcp-disco-srvr-#2][] Critical system error detected. 
Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
tcp-disco-srvr-#2 is terminated unexpectedly.]] 
java.lang.IllegalStateException: Thread tcp-disco-srvr-#2 is terminated 
unexpectedly. 
        at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:5686)
 
        at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) 
[08:42:42,294][SEVERE][tcp-disco-srvr-#2][] JVM will be halted immediately 
due to the failure: [failureCtx=FailureContext 
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Thread 
tcp-disco-srvr-#2 is terminated unexpectedly.]] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5954) Ignite Cache Failover: GridCacheAtomicNearRemoveFailureTest.testPutAndRemove fails

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506057#comment-16506057
 ] 

ASF GitHub Bot commented on IGNITE-5954:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4052


> Ignite Cache Failover: GridCacheAtomicNearRemoveFailureTest.testPutAndRemove 
> fails
> --
>
> Key: IGNITE-5954
> URL: https://issues.apache.org/jira/browse/IGNITE-5954
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Probably, it's broken after IGNITE-5272.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7319) Memory leak during creating/destroying local cache

2018-06-08 Thread Andrey Aleksandrov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-7319:
---
Description: 
The following code creates local caches:
{code:java}
private IgniteCache createLocalCache(String name) { 
CacheConfiguration cCfg = new 
CacheConfiguration<>(); 
cCfg.setName(name); 
cCfg.setGroupName("localCaches"); // without group leak is much 
bigger! 
cCfg.setStoreKeepBinary(true); 
cCfg.setCacheMode(CacheMode.LOCAL); 
cCfg.setOnheapCacheEnabled(false); 
cCfg.setCopyOnRead(false); 
cCfg.setBackups(0); 
cCfg.setWriteBehindEnabled(false); 
cCfg.setReadThrough(false); 
cCfg.setReadFromBackup(false); 
cCfg.setQueryEntities(); 
return ignite.createCache(cCfg).withKeepBinary(); 
} 
{code}
The caches are placed in a queue and are picked up by a worker thread which 
just destroys them after removing them from the queue. 
This setup seems to generate a memory leak of about 1GB per day. 
When looking at the heap dump, I see all space is occupied by instances of 
java.util.concurrent.ConcurrentSkipListMap$Node.

User list: 
[http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html

Update:

When a local cache is created, a new CONTINUOUS_QUERY task is created too. This 
task should work until it is canceled, but in the Ignite code we don't store the 
CancelableTask anywhere. After the cache is destroyed, this task continues its 
work.
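
For reference, the create/destroy cycle described above can be reproduced with a loop like the following minimal sketch. The cache naming, iteration count and sample entry are arbitrary assumptions; the leak shows up on the heap as ever-growing continuous-query related entries.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalCacheChurnExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            for (int i = 0; i < 10_000; i++) {
                CacheConfiguration<Object, Object> cCfg = new CacheConfiguration<>("local-" + i);

                cCfg.setGroupName("localCaches");
                cCfg.setCacheMode(CacheMode.LOCAL);

                IgniteCache<Object, Object> cache = ignite.createCache(cCfg).withKeepBinary();

                cache.put(i, "value-" + i);

                // Destroying the cache should release all its resources; per this ticket
                // the CONTINUOUS_QUERY task created for the local cache keeps running.
                cache.destroy();
            }
        }
    }
}
{code}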

  was:
The following code creates local caches:
{code:java}
private IgniteCache createLocalCache(String name) { 
CacheConfiguration cCfg = new 
CacheConfiguration<>(); 
cCfg.setName(name); 
cCfg.setGroupName("localCaches"); // without group leak is much 
bigger! 
cCfg.setStoreKeepBinary(true); 
cCfg.setCacheMode(CacheMode.LOCAL); 
cCfg.setOnheapCacheEnabled(false); 
cCfg.setCopyOnRead(false); 
cCfg.setBackups(0); 
cCfg.setWriteBehindEnabled(false); 
cCfg.setReadThrough(false); 
cCfg.setReadFromBackup(false); 
cCfg.setQueryEntities(); 
return ignite.createCache(cCfg).withKeepBinary(); 
} 
{code}
The caches are placed in the queue and are picked up by the worker thread which 
just destroys them after removing from the queue. 
This setup seems to generate a memory leak of about 1GB per day. 
When looking at heap dump, I see all space is occupied by instances of 
java.util.concurrent.ConcurrentSkipListMap$Node.

User list: 
[http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html

U|http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html]pdate:

When local cache is created then new CONTINUOUS_QUERY task is created too. This 
task should work until it canceled but in Ignite code we don't store the 
CancelableTask somewhere. After destroying the cache this task continue its 
work.


> Memory leak during creating/destroying local cache
> --
>
> Key: IGNITE-7319
> URL: https://issues.apache.org/jira/browse/IGNITE-7319
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.6
>
> Attachments: Demo.java
>
>
> The following code creates local caches:
> {code:java}
> private IgniteCache createLocalCache(String name) { 
> CacheConfiguration cCfg = new 
> CacheConfiguration<>(); 
> cCfg.setName(name); 
> cCfg.setGroupName("localCaches"); // without group leak is much 
> bigger! 
> cCfg.setStoreKeepBinary(true); 
> cCfg.setCacheMode(CacheMode.LOCAL); 
> cCfg.setOnheapCacheEnabled(false); 
> cCfg.setCopyOnRead(false); 
> cCfg.setBackups(0); 
> cCfg.setWriteBehindEnabled(false); 
> cCfg.setReadThrough(false); 
> cCfg.setReadFromBackup(false); 
> cCfg.setQueryEntities(); 
> return ignite.createCache(cCfg).withKeepBinary(); 
> } 
> {code}
> The caches are placed in the queue and are picked up by the worker thread 
> which just destroys them after removing from the queue. 
> This setup seems to generate a memory leak of about 1GB per day. 
> When looking at heap dump, I see all space is occupied by instances of 
> java.util.concurrent.ConcurrentSkipListMap$Node.
> User list: 
> [http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html
> Update:
> When local cache is created then new CONTINUOUS_QUERY task is created too. 
> This task should work until it canceled but in Ignite code we don't 

[jira] [Created] (IGNITE-8750) IgniteWalFlushDefaultSelfTest.testFailAfterStart fails on TC

2018-06-08 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8750:
---

 Summary: IgniteWalFlushDefaultSelfTest.testFailAfterStart fails on 
TC
 Key: IGNITE-8750
 URL: https://issues.apache.org/jira/browse/IGNITE-8750
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.5
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko
 Fix For: 2.6


{noformat}
org.apache.ignite.IgniteException: Failed to get object field 
[obj=GridCacheSharedManagerAdapter [starting=true, stop=false], 
fieldNames=[mmap]]
Caused by: java.lang.NoSuchFieldException: mmap
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8749) Exception for "no space left" situation should be propagated to FailureHandler

2018-06-08 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506024#comment-16506024
 ] 

Dmitriy Pavlov commented on IGNITE-8749:


Ticket seems to be more or less similar to 
https://issues.apache.org/jira/browse/IGNITE-8742 problem. I've linked 2 
tickets as related.

> Exception for "no space left" situation should be propagated to FailureHandler
> --
>
> Key: IGNITE-8749
> URL: https://issues.apache.org/jira/browse/IGNITE-8749
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> For now, if a "no space left" situation is detected in the 
> FileWriteAheadLogManager#formatFile method and the corresponding exception is 
> thrown, the exception doesn't get propagated to the FailureHandler and the node 
> continues working.
> As "no space left" is a critical situation, the corresponding exception should be 
> propagated to the handler so that the necessary actions can be taken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8689) SQL query execution may lead to NullPointerException while node is stopped

2018-06-08 Thread Vyacheslav Koptilin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-8689:

Description: 
Let's consider the following scenario:
 * Start a new node (node 'A') and create a new partitioned cache that resides 
on that node

{code:java}
Ignite ignite = Ignition.start("examples/config/segmentation/node-A.xml");
IgniteCache cache = ignite.getOrCreateCache(new 
CacheConfiguration()
.setName("default")
.setIndexedTypes(String.class, String.class)
.setNodeFilter(new NodeFilter())
);

public class NodeFilter implements IgnitePredicate {
@Override public boolean apply(ClusterNode node) {
return node.attribute("test.attribute").equals("first-node");
}
}{code}
 * Start the second node (node 'B') with a custom connector configuration:

{code:java}








Ignite ignite = Ignition.start("examples/config/segmentation/node-B.xml");

Executors.newScheduledThreadPool(1).schedule(
new Runnable() {
@Override public void run() {
DiscoverySpi spi = ignite.configuration().getDiscoverySpi();
spi.failNode(ignite.cluster().localNode().id(), "test message");
}
},
30,
TimeUnit.SECONDS);{code}
 * Execute simple SQL query using sqlline for example (JDBC driver should be 
connected to the node 'B')

{code:java}
./sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1:2

select * from UNKNOWN_TABLE;{code}
In that case, {{IgniteH2Indexing.prepareStatement()}} throws 
{{SQLException(Table not found)}} and the implementation (see 
{{IgniteH2Indexing.prepareStatementAndCaches()}}) tries to start caches that 
are not started yet by sending {{ClientCacheChangeDummyDiscoveryMessage}} to 
'discovery-worker' thread,
which in turn posts that message to 'exchange-worker' thread.

Assume that while {{ClientCacheChangeDummyDiscoveryMessage}} is being processed by 
the 'exchange-worker', the discovery thread receives {{EVT_NODE_FAILED}} (as a 
result of segmentation), so the {{DiscoCache}} history is updated by removing 
the failed node from the list of alive nodes.
At the same time, the 'exchange-worker' detects that there is only one alive node 
(node 'B' in our case) and mistakenly believes that node 'B' is the coordinator:
{code:java|title=CacheAffinitySharedManager.java}
void processClientCachesChanges(ClientCacheChangeDummyDiscoveryMessage msg) {
    AffinityTopologyVersion topVer = cctx.exchange().readyAffinityVersion();

    DiscoCache discoCache = cctx.discovery().discoCache(topVer);

    // discoCache contains only the one node!
    boolean crd = cctx.localNode().equals(discoCache.oldestAliveServerNode());

    Map startedCaches = processClientCacheStartRequests(msg, crd, topVer, discoCache);

    Set closedCaches = processCacheCloseRequests(msg, crd, topVer);

    if (startedCaches != null || closedCaches != null)
        scheduleClientChangeMessage(startedCaches, closedCaches);
}
{code}
and results in the following {{NullPointerException}}:
{code:java}
[19:25:57,019][ERROR][exchange-worker-#42][GridCachePartitionExchangeManager] 
Failed to process custom exchange task: ClientCacheChangeDummyDiscoveryMessage 
[reqId=8c7904a2-4b70-4614-bf7b-f4434d274c30, cachesToClose=null, 
startCaches=[default]]
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:458)
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:621)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:363)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2207)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2296)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
{code}
As a result, the node cannot be stopped, for the following reasons:
 * the 'exchange' thread throws {{NullPointerException}} and therefore does not 
complete {{DynamicCacheStartFuture}}
 * the 'Client connector' thread is blocked on the {{DynamicCacheStartFuture.get()}} 
method, which never returns
 * the thread that performs the node stop procedure is blocked on {{busyLock}}

 Please see the following thread dump:
{code:java}
"Thread-117" #734 prio=5 os_prio=0 tid=0x558b117a9000 nid=0x437 waiting on 
condition [0x7f2466ba1000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 

[jira] [Commented] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506016#comment-16506016
 ] 

ASF GitHub Bot commented on IGNITE-8711:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4157


> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-7319) Memory leak during creating/destroying local cache

2018-06-08 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505967#comment-16505967
 ] 

Andrey Gura edited comment on IGNITE-7319 at 6/8/18 12:25 PM:
--

[~aealeksandrov] Please, don't use Java asserts in test assertions. Use JUnit 
assertXxx() methods instead.
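For illustration, a minimal hypothetical sketch of the difference (not code from the patch):

{code:java}
// Plain Java assert: silently skipped unless the JVM runs with -ea,
// so a broken expectation can go unnoticed in a test run.
assert map.size() == 1;

// JUnit assertion: always executed and produces a descriptive failure message.
assertEquals("Unexpected number of entries", 1, map.size());
{code}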


was (Author: agura):
[~aealeksandrov] Please, don't use Java asserts in test assertion. Use 
assertXxx() methods.

> Memory leak during creating/destroying local cache
> --
>
> Key: IGNITE-7319
> URL: https://issues.apache.org/jira/browse/IGNITE-7319
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.6
>
> Attachments: Demo.java
>
>
> The following code creates local caches:
> {code}
> private IgniteCache createLocalCache(String name) { 
> CacheConfiguration cCfg = new 
> CacheConfiguration<>(); 
> cCfg.setName(name); 
> cCfg.setGroupName("localCaches"); // without group leak is much 
> bigger! 
> cCfg.setStoreKeepBinary(true); 
> cCfg.setCacheMode(CacheMode.LOCAL); 
> cCfg.setOnheapCacheEnabled(false); 
> cCfg.setCopyOnRead(false); 
> cCfg.setBackups(0); 
> cCfg.setWriteBehindEnabled(false); 
> cCfg.setReadThrough(false); 
> cCfg.setReadFromBackup(false); 
> cCfg.setQueryEntities(); 
> return ignite.createCache(cCfg).withKeepBinary(); 
> } 
> {code}
> The caches are placed in the queue and are picked up by the worker thread 
> which just destroys them after removing from the queue. 
> This setup seems to generate a memory leak of about 1GB per day. 
> When looking at heap dump, I see all space is occupied by instances of 
> java.util.concurrent.ConcurrentSkipListMap$Node.
> User list: 
> http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7319) Memory leak during creating/destroying local cache

2018-06-08 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505967#comment-16505967
 ] 

Andrey Gura commented on IGNITE-7319:
-

[~aealeksandrov] Please, don't use Java asserts in test assertion. Use 
assertXxx() methods.

> Memory leak during creating/destroying local cache
> --
>
> Key: IGNITE-7319
> URL: https://issues.apache.org/jira/browse/IGNITE-7319
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Andrey Aleksandrov
>Priority: Major
> Fix For: 2.6
>
> Attachments: Demo.java
>
>
> The following code creates local caches:
> {code}
> private IgniteCache createLocalCache(String name) { 
> CacheConfiguration cCfg = new 
> CacheConfiguration<>(); 
> cCfg.setName(name); 
> cCfg.setGroupName("localCaches"); // without group leak is much 
> bigger! 
> cCfg.setStoreKeepBinary(true); 
> cCfg.setCacheMode(CacheMode.LOCAL); 
> cCfg.setOnheapCacheEnabled(false); 
> cCfg.setCopyOnRead(false); 
> cCfg.setBackups(0); 
> cCfg.setWriteBehindEnabled(false); 
> cCfg.setReadThrough(false); 
> cCfg.setReadFromBackup(false); 
> cCfg.setQueryEntities(); 
> return ignite.createCache(cCfg).withKeepBinary(); 
> } 
> {code}
> The caches are placed in the queue and are picked up by the worker thread 
> which just destroys them after removing from the queue. 
> This setup seems to generate a memory leak of about 1GB per day. 
> When looking at heap dump, I see all space is occupied by instances of 
> java.util.concurrent.ConcurrentSkipListMap$Node.
> User list: 
> http://apache-ignite-users.70518.x6.nabble.com/Memory-leak-in-GridCachePartitionExchangeManager-tt18995.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stoped execution

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505962#comment-16505962
 ] 

ASF GitHub Bot commented on IGNITE-8742:


GitHub user x-kreator opened a pull request:

https://github.com/apache/ignite/pull/4158

IGNITE-8742: research - test suite constriction.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/x-kreator/ignite ignite-8742

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4158.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4158


commit b1eb5f29a8b2fcf0f97e7bd216fd2d464ccaaabf
Author: Dmitriy Sorokin 
Date:   2018-06-08T12:19:44Z

IGNITE-8742: research - test suite constriction.




> Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: 
> WAL manager failure does not stoped execution
> ---
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Sorokin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/viewLog.html?buildId=1366882=buildResultsDiv=IgniteTests24Java8_PdsDirectIo2
> Test 
> org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
> emulates a problem with disk space by using an exception.
> In a direct IO environment real disk IO is performed, tmpfs is not used.
> Sometimes this error can come from rollover() of a segment, and the failure handler 
> reacts accordingly.
> {noformat}
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable 
> to write]]
> class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
> {noformat}
> But the test does not seem to be able to stop: the node-stopper thread tries to stop the 
> cache and flush the WAL, and the flush waits for a rollover which will never happen.
> {noformat}
> Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
> Lock 
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
> at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
> at 
> 

[jira] [Assigned] (IGNITE-8073) Cache read metric is calculated incorrectly in atomic cache.

2018-06-08 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-8073:


Assignee: Alexey Kuznetsov

> Cache read metric is calculated incorrectly in atomic cache.
> 
>
> Key: IGNITE-8073
> URL: https://issues.apache.org/jira/browse/IGNITE-8073
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: GridCacheNearAtomicMetricsSelfTest.java
>
>
> In an atomic cache with near cache enabled we perform put and remove operations.
> After that, a get operation is called.
> Now the cache 'read' metric is calculated incorrectly, because it takes the 
> near cache entry into account.
> The reproducer is attached.
> Note that the remove operation untracks the 'reader' node from the dht cache entry, but 
> the near cache entry still exists. The following test checks it:
> GridCacheAtomicNearCacheSelfTest#checkNearCache, see checkReaderRemove().
> See also 
> http://apache-ignite-developers.2346864.n4.nabble.com/Near-cache-entry-removal-td28698.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-584) Need to make sure that scan query returns consistent results on topology changes

2018-06-08 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505946#comment-16505946
 ] 

Stanilovsky Evgeny commented on IGNITE-584:
---

[~dpavlov], yep, checked it for 2.5, it is still actual.

> Need to make sure that scan query returns consistent results on topology 
> changes
> 
>
> Key: IGNITE-584
> URL: https://issues.apache.org/jira/browse/IGNITE-584
> Project: Ignite
>  Issue Type: Sub-task
>  Components: data structures
>Affects Versions: 1.9, 2.0, 2.1
>Reporter: Artem Shutak
>Assignee: Semen Boikov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.6
>
> Attachments: tc1.png
>
>
> Consistent results on topology changes were implemented for SQL queries, but it 
> looks like this still does not work for scan queries.
> This affects 'cache set' tests, since a set uses a scan query for set iteration 
> (to be unmuted on TC): 
> GridCacheSetAbstractSelfTest testNodeJoinsAndLeaves and 
> testNodeJoinsAndLeavesCollocated; 
> also see the TODOs in GridCacheSetFailoverAbstractSelfTest



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8749) Exception for "no space left" situation should be propagated to FailureHandler

2018-06-08 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8749:
---

 Summary: Exception for "no space left" situation should be 
propagated to FailureHandler
 Key: IGNITE-8749
 URL: https://issues.apache.org/jira/browse/IGNITE-8749
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Reporter: Sergey Chugunov
 Fix For: 2.6


For now if "no space left" situation is detected in 
FileWriteAheadLogManager#formatFile method and corresponding exception is 
thrown the exception doesn't get propagated to FailureHandler and node 
continues working.

As "no space left" is a critical situation, corresponding exception should be 
propagated to handler to make necessary actions.
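A rough sketch of the intended handling (illustrative only; the accessors and exact wiring are assumptions, not the actual patch):

{code:java}
// Hypothetical sketch: instead of swallowing the IOException,
// formatFile() wraps it and hands it to the failure processor.
try {
    fileIO.write(buf, 0, buf.length);
}
catch (IOException e) {
    StorageException ex = new StorageException("Failed to format WAL segment file", e);

    // The failure() accessor is illustrative; the real call site lives in internal code.
    cctx.kernalContext().failure().process(new FailureContext(FailureType.CRITICAL_ERROR, ex));

    throw ex;
}
{code}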



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8748) All FileIO#write methods should return number of written bytes

2018-06-08 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-8748:

Fix Version/s: 2.6

> All FileIO#write methods should return number of written bytes
> --
>
> Key: IGNITE-8748
> URL: https://issues.apache.org/jira/browse/IGNITE-8748
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> FileIO#write(byte[], int, int) doesn't return the number of written bytes, which 
> makes it impossible for callers to detect a "no space left on device" situation.
> The API should be changed to return the number of written bytes, and all callers of 
> this method should adopt the change so they can detect the "no space left" situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8748) All FileIO#write methods should return number of written bytes

2018-06-08 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8748:
---
Component/s: persistence

> All FileIO#write methods should return number of written bytes
> --
>
> Key: IGNITE-8748
> URL: https://issues.apache.org/jira/browse/IGNITE-8748
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Sergey Chugunov
>Priority: Major
>
> FileIO#write(byte[], int, int) doesn't return the number of written bytes, which 
> makes it impossible for callers to detect a "no space left on device" situation.
> The API should be changed to return the number of written bytes, and all callers of 
> this method should adopt the change so they can detect the "no space left" situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8748) All FileIO#write methods should return number of written bytes

2018-06-08 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8748:
---

 Summary: All FileIO#write methods should return number of written 
bytes
 Key: IGNITE-8748
 URL: https://issues.apache.org/jira/browse/IGNITE-8748
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Chugunov


FileIO#write(byte[], int, int) doesn't return the number of written bytes, which 
makes it impossible for callers to detect a "no space left on device" situation.

The API should be changed to return the number of written bytes, and all callers of 
this method should adopt the change so they can detect the "no space left" situation.
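A hedged sketch of how a caller could use the new return value (names and the exception choice are illustrative):

{code:java}
// Hypothetical caller-side check once write() returns the number of bytes written.
int written = fileIO.write(buf, 0, buf.length);

if (written < buf.length)
    throw new StorageException("Failed to write the whole buffer, possibly no space left on device " +
        "[expected=" + buf.length + ", written=" + written + ']');
{code}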



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7339) RENTING partition is not evicted after restore from storage

2018-06-08 Thread Pavel Kovalenko (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505935#comment-16505935
 ] 

Pavel Kovalenko commented on IGNITE-7339:
-

[~ascherbakov] I've looked at the changes and have 2 proposals to improve the 
solution:
1) To prevent partition eviction/renting you can explicitly reserve the partition 
using a group reservation 
(org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition#addReservation),
 so there is no need to introduce new debug/test variables (see the sketch below).
2) The logic that invokes "clearAsync" during the restore process can be moved to the 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopology#afterStateRestored
 callback, to avoid overloading the GridCacheDatabaseSharedManager restore logic.
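A minimal sketch of the reservation idea from point 1 (illustrative only; it uses the plain reserve()/release() counterpart of the group reservation mentioned above):

{code:java}
// Hypothetical sketch: keep the partition reserved while it is being cleared
// during restore, so the eviction/renting path cannot pick it up.
GridDhtLocalPartition part = top.localPartition(partId);

if (part != null && part.reserve()) {
    try {
        part.clearAsync();
    }
    finally {
        part.release();
    }
}
{code}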

> RENTING partition is not evicted after restore from storage
> ---
>
> Key: IGNITE-7339
> URL: https://issues.apache.org/jira/browse/IGNITE-7339
> Project: Ignite
>  Issue Type: Bug
>Reporter: Semen Boikov
>Assignee: Alexei Scherbakov
>Priority: Critical
>
> If a partition was in the RENTING state at the moment the node was stopped, then 
> after restart it is not evicted.
> It seems to be an issue in GridDhtLocalPartition.rent: 'tryEvictAsync' is not 
> called if the partition was already in the RENTING state.
> Also there is an error in GridDhtPartitionTopologyImpl.checkEvictions: the partition 
> state is always treated as changed after the part.rent call, even if part.rent 
> does not actually change the state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8180) ZookeeperDiscoverySpiTest#testQuorumRestore fails on TC

2018-06-08 Thread Amelchev Nikita (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita reassigned IGNITE-8180:
---

Assignee: Amelchev Nikita

> ZookeeperDiscoverySpiTest#testQuorumRestore fails on TC
> ---
>
> Key: IGNITE-8180
> URL: https://issues.apache.org/jira/browse/IGNITE-8180
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Test fails on TC with the following stack trace:
> {noformat}
> class org.apache.ignite.IgniteCheckedException: Failed to start manager: 
> GridManagerAdapter [enabled=true, 
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1698)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1007)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1977)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1720)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1148)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:646)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:882)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:845)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:833)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:799)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrids(GridAbstractTest.java:683)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoverySpiTest.testQuorumRestore(ZookeeperDiscoverySpiTest.java:1077)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2080)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1995)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> SPI: ZookeeperDiscoverySpi [zkRootPath=/apacheIgnite, 
> zkConnectionString=127.0.0.1:40921,127.0.0.1:35014,127.0.0.1:38754, 
> joinTimeout=0, sesTimeout=15000, clientReconnectDisabled=false, 
> internalLsnr=null]
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:905)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1693)
> ... 20 more
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to 
> initialize Zookeeper nodes
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:827)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoin(ZookeeperDiscoveryImpl.java:957)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.joinTopology(ZookeeperDiscoveryImpl.java:775)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.startJoinAndWait(ZookeeperDiscoveryImpl.java:693)
> at 
> org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi.spiStart(ZookeeperDiscoverySpi.java:471)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> ... 22 more
> Caused by: 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClientFailedException: 
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss for /apacheIgnite
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.onZookeeperError(ZookeeperClient.java:758)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperClient.exists(ZookeeperClient.java:276)
> at 
> org.apache.ignite.spi.discovery.zk.internal.ZookeeperDiscoveryImpl.initZkNodes(ZookeeperDiscoveryImpl.java:789)
> ... 27 more
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> 

[jira] [Commented] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505922#comment-16505922
 ] 

Sergey Chugunov commented on IGNITE-8711:
-

[~dpavlov],

I investigated the failing test; it became incorrect after IGNITE-5789 was 
implemented.

I refactored the test to take the new behavior into account, and now everything looks 
good.

Could you please review my changes?

> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505916#comment-16505916
 ] 

ASF GitHub Bot commented on IGNITE-8711:


GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/4157

IGNITE-8711 test was adopted to take into account changes from IGNITE-5789



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8711

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4157.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4157


commit 9d69d868fe6cc348c3087291da6a5391e88fd97b
Author: Sergey Chugunov 
Date:   2018-06-08T10:48:15Z

IGNITE-8711 test was adopted to take into account changes from IGNITE-5789




> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505911#comment-16505911
 ] 

Sergey Chugunov commented on IGNITE-8711:
-

After implementing IGNITE-5789, the test became incorrect.

> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov reassigned IGNITE-8711:
---

Assignee: Sergey Chugunov

> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8711) IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster fails in master

2018-06-08 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-8711:

Fix Version/s: 2.6

> IgniteStandByClientReconnectToNewClusterTest#testInactiveClientReconnectToActiveCluster
>  fails in master
> ---
>
> Key: IGNITE-8711
> URL: https://issues.apache.org/jira/browse/IGNITE-8711
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Test fails on TC as well as locally.
> In master it started failing after this set of changes was applied: 
> https://ci.ignite.apache.org/viewLog.html?buildId=1323957=buildChangesDiv=IgniteTests24Java8_ActivateDeactivateCluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8746) EVT_CACHE_REBALANCE_PART_DATA_LOST event received twice on the coordinator node

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8746:
-
Fix Version/s: 2.6

> EVT_CACHE_REBALANCE_PART_DATA_LOST event received twice on the coordinator 
> node
> ---
>
> Key: IGNITE-8746
> URL: https://issues.apache.org/jira/browse/IGNITE-8746
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
> Fix For: 2.6
>
> Attachments: EvtDataLostTwiceOnCoordinatorReprocuder.java
>
>
> After a node leaves the cluster, the coordinator receives the partition lost 
> event twice.
> The reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-5973) [Test Failed] GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe

2018-06-08 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505888#comment-16505888
 ] 

Pavel Pereslegin edited comment on IGNITE-5973 at 6/8/18 10:18 AM:
---

Log output from TeamCity:
{noformat}
class org.apache.ignite.IgniteInterruptedException: Got interrupted while 
waiting for future to complete.
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:829)
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:827)
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.processors.datastructures.GridCacheSemaphoreImpl.close(GridCacheSemaphoreImpl.java:969)
at 
org.apache.ignite.internal.processors.cache.datastructures.GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe(GridCacheAbstractDataStructuresFailoverSelfTest.java:481)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2086)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2001)
at java.lang.Thread.run(Thread.java:748)
{noformat}

IGNITE-6005 added the ability to close a data structure from an interrupted thread.
A retry of the "close" operation was added for the InterruptedException case, but in 
some cases InterruptedException is not thrown.
For example, GridFutureAdapter#get0 checks the interruption flag and throws 
IgniteInterruptedCheckedException instead.

was (Author: xtern):
Log output from TemCity:
{noformat}
class org.apache.ignite.IgniteInterruptedException: Got interrupted while 
waiting for future to complete.
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:829)
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:827)
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.processors.datastructures.GridCacheSemaphoreImpl.close(GridCacheSemaphoreImpl.java:969)
at 
org.apache.ignite.internal.processors.cache.datastructures.GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe(GridCacheAbstractDataStructuresFailoverSelfTest.java:481)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2086)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2001)
at java.lang.Thread.run(Thread.java:748)
{noformat}

In IGNITE-6005 was added ability to close datastructure on interrupted thread.
Retry of "close" operation was added in case of InterruptedException, but in 
some cases InterruptedException does not thrown.
For example GridFutureAdapter#get0 checks interruption flag and throws 
IgniteInterruptedCheckedException.

> [Test Failed] 
> GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe
> --
>
> Key: IGNITE-5973
> URL: https://issues.apache.org/jira/browse/IGNITE-5973
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Success rate is 93.3%. Fails locally.
> Example of failing - 
> http://ci.ignite.apache.org/viewLog.html?buildId=757906=buildResultsDiv=Ignite20Tests_IgniteDataStrucutures#testNameId-979977708202725050



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-5973) [Test Failed] GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe

2018-06-08 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505888#comment-16505888
 ] 

Pavel Pereslegin edited comment on IGNITE-5973 at 6/8/18 10:18 AM:
---

Log output from TemCity:
{noformat}
class org.apache.ignite.IgniteInterruptedException: Got interrupted while 
waiting for future to complete.
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:829)
at org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:827)
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.processors.datastructures.GridCacheSemaphoreImpl.close(GridCacheSemaphoreImpl.java:969)
at 
org.apache.ignite.internal.processors.cache.datastructures.GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe(GridCacheAbstractDataStructuresFailoverSelfTest.java:481)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2086)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:140)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2001)
at java.lang.Thread.run(Thread.java:748)
{noformat}

In IGNITE-6005 was added ability to close datastructure on interrupted thread.
Retry of "close" operation was added in case of InterruptedException, but in 
some cases InterruptedException does not thrown.
For example GridFutureAdapter#get0 checks interruption flag and throws 
IgniteInterruptedCheckedException.


was (Author: xtern):
In IGNITE-6005 was added ability to close datastructure on interrupted thread.
Retry of "close" operation was added in case of InterruptedException, but in 
some cases InterruptedException does not thrown.
For example GridFutureAdapter#get0 checks interruption flag and throws 
IgniteInterruptedCheckedException.

> [Test Failed] 
> GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe
> --
>
> Key: IGNITE-5973
> URL: https://issues.apache.org/jira/browse/IGNITE-5973
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Success rate is 93.3%. Fails locally.
> Example of failing - 
> http://ci.ignite.apache.org/viewLog.html?buildId=757906=buildResultsDiv=Ignite20Tests_IgniteDataStrucutures#testNameId-979977708202725050



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5960) Ignite Continuous Query (Queries 3): CacheContinuousQueryConcurrentPartitionUpdateTest::testConcurrentUpdatesAndQueryStartAtomic is flaky

2018-06-08 Thread Sunny Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505892#comment-16505892
 ] 

Sunny Chan commented on IGNITE-5960:


[~agoncharuk] Sure I will get the patch updated and get back to you shortly.

> Ignite Continuous Query (Queries 3): 
> CacheContinuousQueryConcurrentPartitionUpdateTest::testConcurrentUpdatesAndQueryStartAtomic
>  is flaky
> -
>
> Key: IGNITE-5960
> URL: https://issues.apache.org/jira/browse/IGNITE-5960
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Sergey Chugunov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, test-failure
> Fix For: 2.6
>
>
> According to [TC 
> history|http://ci.ignite.apache.org/project.html?projectId=Ignite20Tests=6546112007182082024=testDetails_Ignite20Tests=%3Cdefault%3E]
>  test is flaky.
> It is possible to reproduce it locally, sample run shows 9 failed tests out 
> of 30 overall executed.
> Test fails with jUnit assertion check: 
> {noformat}
> junit.framework.AssertionFailedError: 
> Expected :1
> Actual   :0
>  
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryConcurrentPartitionUpdateTest.concurrentUpdatesAndQueryStart(CacheContinuousQueryConcurrentPartitionUpdateTest.java:385)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryConcurrentPartitionUpdateTest.testConcurrentUpdatesAndQueryStartTx(CacheContinuousQueryConcurrentPartitionUpdateTest.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-7087) Ignite.destroyCache leave cache serialized config and prevent from repeatable cache creation.

2018-06-08 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny resolved IGNITE-7087.

   Resolution: Cannot Reproduce
Fix Version/s: 2.5

> Ignite.destroyCache leave cache serialized config and prevent from repeatable 
> cache creation.
> -
>
> Key: IGNITE-7087
> URL: https://issues.apache.org/jira/browse/IGNITE-7087
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, persistence
>Affects Versions: 2.4
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.5
>
>
> Enable PDS and run a simple test.
> {code}
> public void testCreateDestroyCaches() throws Exception {
> Ignite srv0 = startGrid(0);
> srv0.active(true);
> srv0.createCache(new CacheConfiguration("myCache"));
> srv0.destroyCache("myCache");
> stopAllGrids();
> srv0 = startGrid(0);
> srv0.active(true);
> srv0.createCache(new CacheConfiguration("myCache"));
> }
> {code}
> {code}
> [ERROR][main][root] Test failed.
> org.apache.ignite.cache.CacheExistsException: Failed to start cache (a cache 
> with the same name is already started): myCache
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stoped execution

2018-06-08 Thread Dmitriy Sorokin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Sorokin reassigned IGNITE-8742:
---

Assignee: Dmitriy Sorokin

> Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: 
> WAL manager failure does not stoped execution
> ---
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Sorokin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/viewLog.html?buildId=1366882=buildResultsDiv=IgniteTests24Java8_PdsDirectIo2
> Test 
> org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
> emulates problem with disc space using exception.
> In direct IO environment real IO with disk is performed, tmpfs is not used.
> Sometimes this error can come from rollover() of segment, failure handler 
> reacted accordingly.
> {noformat}
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable 
> to write]]
> class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
> {noformat}
> But test seems to be not able to stop, node stopper thread tries to stop 
> cache, flush WAL. flush wait for rollover, which will never happen.
> {noformat}
> Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
> Lock 
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
> at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.flush(FsyncModeFileWriteAheadLogManager.java:699)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1243)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:969)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:943)
> at o.a.i.i.IgniteKernal.stop0(IgniteKernal.java:2289)
> at o.a.i.i.IgniteKernal.stop(IgniteKernal.java:2167)
> at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
> - locked o.a.i.i.IgnitionEx$IgniteNamedInstance@90f6bfd
> at 

[jira] [Commented] (IGNITE-5973) [Test Failed] GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505889#comment-16505889
 ] 

ASF GitHub Bot commented on IGNITE-5973:


GitHub user xtern opened a pull request:

https://github.com/apache/ignite/pull/4156

IGNITE-5973 Flaky failures.

…(fix).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xtern/ignite IGNITE-5973

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4156


commit 48044b147d11992cefddf74502207dca355d9f62
Author: pereslegin-pa 
Date:   2018-06-08T09:46:13Z

IGNITE-5973 Add ability to close datastructure in interrupted thread (fix).




> [Test Failed] 
> GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe
> --
>
> Key: IGNITE-5973
> URL: https://issues.apache.org/jira/browse/IGNITE-5973
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Success rate is 93.3%. Fails locally.
> Example of failing - 
> http://ci.ignite.apache.org/viewLog.html?buildId=757906=buildResultsDiv=Ignite20Tests_IgniteDataStrucutures#testNameId-979977708202725050



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5973) [Test Failed] GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe

2018-06-08 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505888#comment-16505888
 ] 

Pavel Pereslegin commented on IGNITE-5973:
--

IGNITE-6005 added the ability to close a data structure from an interrupted thread.
A retry of the "close" operation was added for the InterruptedException case, but in 
some cases InterruptedException is not thrown.
For example, GridFutureAdapter#get0 checks the interruption flag and throws 
IgniteInterruptedCheckedException instead.
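A hedged sketch of the retry pattern being discussed (simplified names, not the actual patch):

{code:java}
// Hypothetical sketch: the close() retry must also cover the Ignite-specific
// interrupted exception, since the underlying future may throw it instead of
// a plain InterruptedException.
boolean interrupted = false;

while (true) {
    try {
        semaphore.close();

        break;
    }
    catch (IgniteInterruptedException e) {
        interrupted = true;
    }
}

if (interrupted)
    Thread.currentThread().interrupt();
{code}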

> [Test Failed] 
> GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe
> --
>
> Key: IGNITE-5973
> URL: https://issues.apache.org/jira/browse/IGNITE-5973
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Success rate is 93.3%. Fails locally.
> Example of failing - 
> http://ci.ignite.apache.org/viewLog.html?buildId=757906=buildResultsDiv=Ignite20Tests_IgniteDataStrucutures#testNameId-979977708202725050



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5960) Ignite Continuous Query (Queries 3): CacheContinuousQueryConcurrentPartitionUpdateTest::testConcurrentUpdatesAndQueryStartAtomic is flaky

2018-06-08 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505887#comment-16505887
 ] 

Alexey Goncharuk commented on IGNITE-5960:
--

[~Alexey Kuznetsov] If I understand correctly, in your last case the user will 
be notified with entry E2, but since the listeners map is re-acquired after the 
{{needVal}} flag evaluation, the event notification may see an incorrect previous 
value. Also note that other methods such as {{innerSet}} and {{innerRemove}} 
also suffer from this race.

I like the solution suggested by [~sunnychanclsa] better, because it linearizes 
the entry update and the CQ registration. 

[~sunnychanclsa], would you mind pulling master into your PR (there are some 
conflicts due to changes related to Java 9 compatibility) and replacing the 
{{ReentrantReadWriteLock}} with {{StripedCompositeReadWriteLock}} to reduce 
contention, because these updates are on a hot path? After that, we will need to 
run a benchmark to verify there is no performance regression.
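For reference, a hedged sketch of the suggested lock swap (the field name is hypothetical):

{code:java}
// Instead of a plain ReentrantReadWriteLock guarding listener registration:
// private final ReadWriteLock lsnrLock = new ReentrantReadWriteLock();

// ...a striped lock reduces read contention on the hot update path:
private final StripedCompositeReadWriteLock lsnrLock =
    new StripedCompositeReadWriteLock(Runtime.getRuntime().availableProcessors());
{code}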

> Ignite Continuous Query (Queries 3): 
> CacheContinuousQueryConcurrentPartitionUpdateTest::testConcurrentUpdatesAndQueryStartAtomic
>  is flaky
> -
>
> Key: IGNITE-5960
> URL: https://issues.apache.org/jira/browse/IGNITE-5960
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Sergey Chugunov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, test-failure
> Fix For: 2.6
>
>
> According to [TC 
> history|http://ci.ignite.apache.org/project.html?projectId=Ignite20Tests=6546112007182082024=testDetails_Ignite20Tests=%3Cdefault%3E]
>  test is flaky.
> It is possible to reproduce it locally, sample run shows 9 failed tests out 
> of 30 overall executed.
> Test fails with jUnit assertion check: 
> {noformat}
> junit.framework.AssertionFailedError: 
> Expected :1
> Actual   :0
>  
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryConcurrentPartitionUpdateTest.concurrentUpdatesAndQueryStart(CacheContinuousQueryConcurrentPartitionUpdateTest.java:385)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryConcurrentPartitionUpdateTest.testConcurrentUpdatesAndQueryStartTx(CacheContinuousQueryConcurrentPartitionUpdateTest.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8747:
-
Labels: MakeTeamcityGreenAgain tck  (was: tck)

> Remove\RemoveAll method should not count expired entry as removed.
> --
>
> Key: IGNITE-8747
> URL: https://issues.apache.org/jira/browse/IGNITE-8747
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, tck, test-failure
>
> We have 2 TCK 1.0 tests that pass only because eagerTtl=true by 
> default.
> The reason is that remove() returns true even if an expired entry was removed.
> Seems we have to evict the expired entry from the cache on remove(), but not 
> count it as removed.
> java.lang.AssertionError
>  at 
> org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)
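A hedged sketch of the expected semantics (hypothetical test code, following the TCK checks referenced above):

{code:java}
// Hypothetical expectation: removing an already expired entry evicts it,
// but reports neither a successful removal nor a removal statistic.
cache.put("k", "v");

// ... wait until the entry expires ...

assertFalse(cache.remove("k"));

assertEquals(0, statisticsMXBean.getCacheRemovals());
{code}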



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8747:
-
Labels: MakeTeamcityGreenAgain tck test-failure  (was: 
MakeTeamcityGreenAgain tck)

> Remove\RemoveAll method should not count expired entry as removed.
> --
>
> Key: IGNITE-8747
> URL: https://issues.apache.org/jira/browse/IGNITE-8747
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, tck, test-failure
>
> We have 2 TCK 1.0 tests that pass only because eagerTtl=true by default.
> The reason is that remove() returns true even if an expired entry was removed.
> It seems we have to evict the expired entry from the cache on remove(), but
> not count it as removed.
> java.lang.AssertionError
>  at 
> org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov reassigned IGNITE-8747:


Assignee: Andrew Mashenkov

> Remove\RemoveAll method should not count expired entry as removed.
> --
>
> Key: IGNITE-8747
> URL: https://issues.apache.org/jira/browse/IGNITE-8747
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: tck
>
> We have 2 TCK 1.0 tests that pass only because eagerTtl=true by default.
> The reason is that remove() returns true even if an expired entry was removed.
> It seems we have to evict the expired entry from the cache on remove(), but
> not count it as removed.
> java.lang.AssertionError
>  at 
> org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8503:
-
Labels: MakeTeamcityGreenAgain Muted_test tck  (was: MakeTeamcityGreenAgain 
Muted_test tck_issues)

> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to the IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure;
> the reason is "Entry which should be expired by TTL policy is available after
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> This test restarts the grid and checks that none of the entries are still
> present in the grid.
> But with high probability one of the 7000 entries that should have expired is
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5798755758125626876=testDetails_IgniteTests24Java8=%3Cdefault%3E]
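
For context, here is a rough, hypothetical sketch of the restart-and-verify pattern the test relies on (this is not the actual IgnitePdsWithTtlTest code; the cache name, TTL and entry count are illustrative):

{code:java}
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TtlAfterRestartSketch {
    /** Builds a fresh node configuration with persistence and a TTL cache. */
    private static IgniteConfiguration config() {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        CacheConfiguration<Integer, Integer> cacheCfg =
            new CacheConfiguration<Integer, Integer>("expirableCache")
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
                    new Duration(TimeUnit.SECONDS, 5)));

        return new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setCacheConfiguration(cacheCfg);
    }

    public static void main(String[] args) throws Exception {
        Ignite ignite = Ignition.start(config());
        ignite.cluster().active(true);

        IgniteCache<Integer, Integer> cache = ignite.cache("expirableCache");
        for (int i = 0; i < 7000; i++)
            cache.put(i, i);

        Ignition.stop(false);                       // simulate a node restart
        Thread.sleep(TimeUnit.SECONDS.toMillis(6)); // let the TTL pass

        ignite = Ignition.start(config());
        ignite.cluster().active(true);
        cache = ignite.cache("expirableCache");

        // Expected: every entry has expired; nothing is "resurrected" after restart.
        for (int i = 0; i < 7000; i++)
            assert cache.get(i) == null : "Entry survived restart: " + i;
    }
}
{code}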



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8548) Make Apache Ignite JCache 1.1 specification compliant

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8548:
-
Labels: newbie tck  (was: newbie)

> Make Apache Ignite JCache 1.1 specification compliant
> -
>
> Key: IGNITE-8548
> URL: https://issues.apache.org/jira/browse/IGNITE-8548
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis Magda
>Assignee: Alexander Menshikov
>Priority: Major
>  Labels: newbie, tck
> Fix For: 2.6
>
>
> The JCache specification's license was changed to Apache 2.0, and version 1.1
> was released:
> https://groups.google.com/forum/#!topic/jsr107/BC1qKqknzKU
> Ignite needs to:
> * Upgrade to JCache 1.1 in general to use the Apache 2.0 license
> * Become JCache 1.1 compliant by implementing the new interfaces and passing
> the TCK



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-08 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-8747:


 Summary: Remove\RemoveAll method should not count expired entry as 
removed.
 Key: IGNITE-8747
 URL: https://issues.apache.org/jira/browse/IGNITE-8747
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Reporter: Andrew Mashenkov


We have 2 TCK 1.0 tests that pass only because eagerTtl=true by default.
The reason is that remove() returns true even if an expired entry was removed.
It seems we have to evict the expired entry from the cache on remove(), but not
count it as removed.

java.lang.AssertionError
 at 
org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)

java.lang.AssertionError: expected:<0> but was:<1> at 
org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)
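
For reference, the expected JCache semantics can be illustrated with a minimal, self-contained sketch (assumed class and cache names, not the TCK code itself): removing an already-expired entry should return false and should not be counted in the removal statistics.

{code:java}
import java.util.concurrent.TimeUnit;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class ExpiredRemoveSketch {
    public static void main(String[] args) throws Exception {
        // Any JCache provider on the classpath (Ignite included) is picked up here.
        CacheManager mgr = Caching.getCachingProvider().getCacheManager();

        MutableConfiguration<String, String> cfg = new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            .setStatisticsEnabled(true)
            .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
                new Duration(TimeUnit.MILLISECONDS, 100)));

        Cache<String, String> cache = mgr.createCache("expirableCache", cfg);

        cache.put("k", "v");
        Thread.sleep(200); // let the entry expire

        // Expected by the TCK: the entry is already expired, so remove() finds
        // nothing to remove, returns false and does not bump the removals counter.
        boolean removed = cache.remove("k");
        System.out.println("removed = " + removed); // should be false
    }
}
{code}

With eagerTtl=true the expired entry happens to be purged before remove() runs, which is why the tests currently pass only by accident.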



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8747) Remove\RemoveAll method should not count expired entry as removed.

2018-06-08 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8747:
-
Labels: tck  (was: tck_issues)

> Remove\RemoveAll method should not count expired entry as removed.
> --
>
> Key: IGNITE-8747
> URL: https://issues.apache.org/jira/browse/IGNITE-8747
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrew Mashenkov
>Priority: Major
>  Labels: tck
>
> We have 2 TCK 1.0 tests that pass only because eagerTtl=true by default.
> The reason is that remove() returns true even if an expired entry was removed.
> It seems we have to evict the expired entry from the cache on remove(), but
> not count it as removed.
> java.lang.AssertionError
>  at 
> org.jsr107.tck.expiry.CacheExpiryTest.expire_whenAccessed(CacheExpiryTest.java:326)
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.jsr107.tck.expiry.CacheExpiryTest.testCacheStatisticsRemoveAll(CacheExpiryTest.java:160)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8745) Add ability to monitor TCP discovery ring information

2018-06-08 Thread Evgenii Zagumennov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zagumennov reassigned IGNITE-8745:
--

Assignee: Evgenii Zagumennov

> Add ability to monitor TCP discovery ring information
> -
>
> Key: IGNITE-8745
> URL: https://issues.apache.org/jira/browse/IGNITE-8745
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Evgenii Zagumennov
>Priority: Major
>
> We should add the following modifications:
> 1) Add a method on TCP discovery MBean to dump the ring structure on local 
> node and on all nodes in the grid
> 2) Make tcp-disco-worker thread name reflect the node to which the local node 
> is connected
> 3) Add a method on TCP discovery MBean to return current topology version



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-5973) [Test Failed] GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe

2018-06-08 Thread Pavel Pereslegin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin reassigned IGNITE-5973:


Assignee: Pavel Pereslegin

> [Test Failed] 
> GridCacheAbstractDataStructuresFailoverSelfTest.testSemaphoreNonFailoverSafe
> --
>
> Key: IGNITE-5973
> URL: https://issues.apache.org/jira/browse/IGNITE-5973
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Success rate is 93.3%. Fails locally.
> Example of failing - 
> http://ci.ignite.apache.org/viewLog.html?buildId=757906=buildResultsDiv=Ignite20Tests_IgniteDataStrucutures#testNameId-979977708202725050



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8661) WALIterator is not stopped if it cannot deserialize a record

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505850#comment-16505850
 ] 

ASF GitHub Bot commented on IGNITE-8661:


GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/4155

IGNITE-8661 WALItreater is not stopped if can not deserialize record



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8661

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4155.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4155


commit e57074824be2e114c8d8607f43f66c028a06d860
Author: Dmitriy Govorukhin 
Date:   2018-06-01T11:53:48Z

IGNITE-8661 add IteratorParametersBuilder + refactoring

commit fc63c038e15dd49e7114fad0083af2c0e90e2b98
Author: Dmitriy Govorukhin 
Date:   2018-06-01T13:13:57Z

IGNITE-8661  refactoring

commit 667fea64a99c2dcc4ad9c173a96f2d345fafffc2
Author: Dmitriy Govorukhin 
Date:   2018-06-07T11:10:27Z

IGNITE-8661

commit d64289500c21db130416de2e094a06a44486f2d1
Author: Dmitriy Govorukhin 
Date:   2018-06-07T16:51:51Z

IGNITE-8661 wip

commit 2706b2bc6cb008c3f921c2b1ba33a9aca18ab412
Author: Dmitriy Govorukhin 
Date:   2018-06-08T09:18:02Z

Merge branch 'master' into ignite-8661




> WALIterator is not stopped if it cannot deserialize a record
> -
>
> Key: IGNITE-8661
> URL: https://issues.apache.org/jira/browse/IGNITE-8661
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Dmitriy Govorukhin
>Priority: Major
> Fix For: 2.6
>
>
> Currently, we have the following code in RecordV1Serializer.readWithCrc:
> {code:java}
> static WALRecord readWithCrc(.) throws EOFException, 
> IgniteCheckedException {
>   
> try (FileInput.Crc32CheckingFileInput in = in0.startRead(skipCrc)) {
>   . 
> }
> catch (EOFException | SegmentEofException | 
> WalSegmentTailReachedException e) {
> throw e;
> }
> catch (Exception e) {
> throw new IgniteCheckedException("Failed to read WAL record at 
> position: " + startPos, e);
> }
> }
> {code}
> So, any runtime error will be remapped to IgniteCheckedException, which will
> lead to the iteration stopping due to the following code in
> AbstractWalRecordsIterator.advanceRecord:
> {code}
>try {
>  ..
> }
> catch (IOException | IgniteCheckedException e) {
> if (e instanceof WalSegmentTailReachedException)
> throw (WalSegmentTailReachedException)e;
> if (!(e instanceof SegmentEofException))
> handleRecordException(e, actualFilePtr);
> return null;
> }
> {code}
> Any IgniteCheckedException will be ignored and the iterator goes ahead to the
> next segment.
> I suggest making the following changes:
> 1) This is unexpected behavior and needs to be fixed. We should only stop
> iteration on known exceptions.
> 2) Also, we need to provide the ability to skip records by type or by a given
> pointer in the StandaloneWalRecordsIterator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505845#comment-16505845
 ] 

ASF GitHub Bot commented on IGNITE-8645:


GitHub user voipp reopened a pull request:

https://github.com/apache/ignite/pull/4154

IGNITE-8645 fix for client tx metrics aren't included to cache metrics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/voipp/ignite IGNITE-8645

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4154


commit c3ed2cf9ebadea7e78ef34de0914c16603d9352a
Author: voipp 
Date:   2018-06-01T16:28:55Z

IGNITE-8645 fix for client tx metrics aren't included to cache metrics




> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505843#comment-16505843
 ] 

ASF GitHub Bot commented on IGNITE-8645:


Github user voipp closed the pull request at:

https://github.com/apache/ignite/pull/4154


> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505839#comment-16505839
 ] 

ASF GitHub Bot commented on IGNITE-8645:


GitHub user voipp opened a pull request:

https://github.com/apache/ignite/pull/4154

IGNITE-8645 fix for client tx metrics aren't included to cache metrics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/voipp/ignite IGNITE-8645

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4154.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4154


commit c3ed2cf9ebadea7e78ef34de0914c16603d9352a
Author: voipp 
Date:   2018-06-01T16:28:55Z

IGNITE-8645 fix for client tx metrics aren't included to cache metrics




> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505835#comment-16505835
 ] 

ASF GitHub Bot commented on IGNITE-8645:


Github user voipp closed the pull request at:

https://github.com/apache/ignite/pull/4111


> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8745) Add ability to monitor TCP discovery ring information

2018-06-08 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8745:
-
Description: 
We should add the following modifications:
1) Add a method on TCP discovery MBean to dump the ring structure on local node 
and on all nodes in the grid
2) Make tcp-disco-worker thread name reflect the node to which the local node 
is connected
3) Add a method on TCP discovery MBean to return current topology version

> Add ability to monitor TCP discovery ring information
> -
>
> Key: IGNITE-8745
> URL: https://issues.apache.org/jira/browse/IGNITE-8745
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
>
> We should add the following modifications:
> 1) Add a method on TCP discovery MBean to dump the ring structure on local 
> node and on all nodes in the grid
> 2) Make tcp-disco-worker thread name reflect the node to which the local node 
> is connected
> 3) Add a method on TCP discovery MBean to return current topology version
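
A purely hypothetical sketch of what such MBean additions might look like (method names, signatures and descriptions are illustrative only; the real change would extend the existing TcpDiscoverySpiMBean):

{code:java}
import org.apache.ignite.mxbean.MXBeanDescription;

/** Illustrative only: not the final API shape. */
public interface DiscoveryRingMonitoringSketch {
    /** Dumps the discovery ring structure of the local node to the log. */
    @MXBeanDescription("Dumps discovery ring structure to the log.")
    void dumpRingStructure();

    /** @return Current discovery topology version. */
    @MXBeanDescription("Current topology version.")
    long getCurrentTopologyVersion();
}
{code}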



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8746) EVT_CACHE_REBALANCE_PART_DATA_LOST event received twice on the coordinator node

2018-06-08 Thread Pavel Vinokurov (JIRA)
Pavel Vinokurov created IGNITE-8746:
---

 Summary: EVT_CACHE_REBALANCE_PART_DATA_LOST event received twice 
on the coordinator node
 Key: IGNITE-8746
 URL: https://issues.apache.org/jira/browse/IGNITE-8746
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Vinokurov
 Attachments: EvtDataLostTwiceOnCoordinatorReprocuder.java

After a node leaves the cluster, the coordinator receives the partition-lost
event twice.
The reproducer is attached.
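
Not the attached reproducer, but a minimal sketch of how such a listener is typically registered, which makes a duplicate delivery on the coordinator easy to observe (cache and cluster setup omitted):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.EventType;

public class PartDataLostListenerSketch {
    public static void main(String[] args) {
        // The event type must be enabled explicitly, otherwise it is not recorded.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);

        Ignite ignite = Ignition.start(cfg);

        // Log every delivery so a duplicate event on the coordinator becomes visible.
        ignite.events().localListen(evt -> {
            CacheRebalancingEvent e = (CacheRebalancingEvent)evt;
            System.out.println("PART_DATA_LOST: cache=" + e.cacheName() + ", part=" + e.partition());
            return true; // keep listening
        }, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
    }
}
{code}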



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8745) Add ability to monitor TCP discovery ring information

2018-06-08 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8745:


 Summary: Add ability to monitor TCP discovery ring information
 Key: IGNITE-8745
 URL: https://issues.apache.org/jira/browse/IGNITE-8745
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-1260) S3 IP finder should have an option to use a subfolder instead of bucket root

2018-06-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505809#comment-16505809
 ] 

ASF GitHub Bot commented on IGNITE-1260:


GitHub user udaykale opened a pull request:

https://github.com/apache/ignite/pull/4153

IGNITE-1260 Added support for S3 keyPrefix in AWS S3-based IP finder

Resolves IGNITE-1260

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/udaykale/ignite IGNITE-1260

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4153.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4153


commit 515af752392b6872afe4085ed0555d6787f1d4f7
Author: uday 
Date:   2018-06-08T07:54:24Z

IGNITE-1260 Added support for S3 keyPrefix in AWS S3-based IP finder




> S3 IP finder should have an option to use a subfolder instead of bucket root
> 
>
> Key: IGNITE-1260
> URL: https://issues.apache.org/jira/browse/IGNITE-1260
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.1.4
>Reporter: Valentin Kulichenko
>Priority: Minor
>  Labels: newbie, usability, user-request
>
> The current implementation forces the user to use the bucket root, which is
> not always possible. We need to provide a configuration parameter that allows
> specifying a path in addition to the bucket name.
> Corresponding user@ thread: 
> http://apache-ignite-users.70518.x6.nabble.com/AWS-Integration-td495.html
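
A minimal sketch of how the S3 IP finder is configured today, with the requested option shown as a hypothetical, commented-out setter (credentials and bucket name are placeholders):

{code:java}
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

public class S3IpFinderSketch {
    public static void main(String[] args) {
        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setAwsCredentials(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        ipFinder.setBucketName("my-discovery-bucket");

        // Hypothetical setter for the requested option; the final property name
        // is up to the fix (the pull request above calls it keyPrefix).
        // ipFinder.setKeyPrefix("clusters/prod/");

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);

        Ignite ignite = Ignition.start(cfg);
    }
}
{code}

Keeping the prefix as a plain string property would let several clusters share one bucket by writing their node addresses under different key prefixes.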



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8722) Issue in REST API 2.5

2018-06-08 Thread Denis Dijak (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505777#comment-16505777
 ] 

Denis Dijak commented on IGNITE-8722:
-

[~kuaw26] thank you :)

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Priority: Major
>  Labels: rest
> Attachments: rest.api.zip
>
>
> In 2.5 the Ignite REST API doesn't show the cache value structure correctly.
> REST API 2.4:
> "0013289414": {
>   "timeFrom": 1527166800,
>   "timeTo": 1528199550,
>   "results": ["BUSINESS-EU"],
>   "child": {
>     "timeFrom": 1527166800,
>     "timeTo": 10413788400,
>     "results": ["BUSINESS-EU"],
>     "child": null
>   }
> }
> REST API 2.5:
> "0013289414": {
>   "timeFrom": 1527166800,
>   "timeTo": 1528199550,
>   "results": ["BUSINESS-EU"]
> }
> As you can see, the child is missing. If I switch back to the 2.4 REST API,
> everything works as expected.
> The structure above is the class ValidityNode, and the child that is missing
> in 2.5 is also a ValidityNode. The structure is meant to be a parent-child
> implementation.
> public class ValidityNode {
>     private long timeFrom;
>     private long timeTo;
>     private ArrayList results = null;
>     private ValidityNode child = null;
>
>     public ValidityNode() { /* default constructor */ }
>
>     public long getTimeFrom() { return timeFrom; }
>     public void setTimeFrom(long timeFrom) { this.timeFrom = timeFrom; }
>
>     public long getTimeTo() { return timeTo; }
>     public void setTimeTo(long timeTo) { this.timeTo = timeTo; }
>
>     public ArrayList getResults() { return results; }
>     public void setResults(ArrayList results) { this.results = results; }
>
>     public ValidityNode getChild() { return child; }
>     public void setChild(ValidityNode child) { this.child = child; }
>
>     @Override
>     public String toString() {
>         return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo +
>             ", results=" + results + ", child=" + child + "]";
>     }
> }
> Is this issue maybe related to the keyType and valueType that were introduced
> in 2.5?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)