[jira] [Commented] (IGNITE-8744) Web console: Incorrect behavior of cluster activation control

2018-06-07 Thread Andrey Novikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505707#comment-16505707
 ] 

Andrey Novikov commented on IGNITE-8744:


After digging deeper into cluster activation, I found that the cluster doesn't 
have the states "activation in progress" and "deactivation in progress". I think 
we should emulate them on the server side of Web Console. If we emulate them on 
the frontend side, the state will not be displayed on other tabs or for other 
users working with the same cluster.
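To make the idea concrete, here is a minimal sketch of server-side emulation of transitional cluster states. All class and method names below are hypothetical illustrations, not Ignite or Web Console APIs: the tracker marks a cluster as ACTIVATING/DEACTIVATING when a state-change request is issued, and settles on ACTIVE/INACTIVE once the cluster reports its real (binary) state again, so every connected session sees the same transitional state.

```java
// Hypothetical sketch: the cluster itself only reports ACTIVE/INACTIVE,
// so transitional states are tracked on the server side and shared by
// all user sessions. Names here are illustrative, not real Ignite APIs.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ClusterStateTracker {
    enum State { INACTIVE, ACTIVATING, ACTIVE, DEACTIVATING }

    private final ConcurrentMap<String, State> states = new ConcurrentHashMap<>();

    /** Called when any user requests an activation change. */
    void onChangeRequested(String clusterId, boolean activate) {
        states.put(clusterId, activate ? State.ACTIVATING : State.DEACTIVATING);
    }

    /** Called when the cluster next reports its actual binary state. */
    void onClusterReported(String clusterId, boolean active) {
        states.put(clusterId, active ? State.ACTIVE : State.INACTIVE);
    }

    State state(String clusterId) {
        return states.getOrDefault(clusterId, State.INACTIVE);
    }
}
```

Because the map lives on the server, a second browser tab querying the same cluster id would observe DEACTIVATING instead of a stale ACTIVE/ACTIVATING value.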

> Web console: Incorrect behavior of cluster activation control
> -
>
> Key: IGNITE-8744
> URL: https://issues.apache.org/jira/browse/IGNITE-8744
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Major
>
> # start node 
> # activate
> # go to Queries history tab, click Refresh
> # deactivate the cluster using the component - after several seconds the 
> component gets switched to the 'Activating...' stage and hangs in this state for about a minute



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8744) Web console: Incorrect behavior of cluster activation control

2018-06-07 Thread Andrey Novikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-8744:
---
Priority: Minor  (was: Major)

> Web console: Incorrect behavior of cluster activation control
> -
>
> Key: IGNITE-8744
> URL: https://issues.apache.org/jira/browse/IGNITE-8744
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Minor
>
> # start node 
> # activate
> # go to Queries history tab, click Refresh
> # deactivate the cluster using the component - after several seconds the 
> component gets switched to the 'Activating...' stage and hangs in this state for about a minute





[jira] [Updated] (IGNITE-8744) Web console: Incorrect behavior of cluster activation control

2018-06-07 Thread Pavel Konstantinov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8744:
---
Summary: Web console: Incorrect behavior of cluster activation control  
(was: Incorrect behavior of cluster activation control)

> Web console: Incorrect behavior of cluster activation control
> -
>
> Key: IGNITE-8744
> URL: https://issues.apache.org/jira/browse/IGNITE-8744
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Major
>
> # start node 
> # activate
> # go to Queries history tab, click Refresh
> # deactivate the cluster using the component - after several seconds the 
> component gets switched to the 'Activating...' stage and hangs in this state for about a minute





[jira] [Updated] (IGNITE-8744) Incorrect behavior of cluster activation control

2018-06-07 Thread Pavel Konstantinov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8744:
---
Component/s: wizards

> Incorrect behavior of cluster activation control
> 
>
> Key: IGNITE-8744
> URL: https://issues.apache.org/jira/browse/IGNITE-8744
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Andrey Novikov
>Priority: Major
>
> # start node 
> # activate
> # go to Queries history tab, click Refresh
> # deactivate the cluster using the component - after several seconds the 
> component gets switched to the 'Activating...' stage and hangs in this state for about a minute





[jira] [Created] (IGNITE-8744) Incorrect behavior of cluster activation control

2018-06-07 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-8744:
--

 Summary: Incorrect behavior of cluster activation control
 Key: IGNITE-8744
 URL: https://issues.apache.org/jira/browse/IGNITE-8744
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov
Assignee: Andrey Novikov


# start node 
# activate
# go to Queries history tab, click Refresh
# deactivate the cluster using the component - after several seconds the 
component gets switched to the 'Activating...' stage and hangs in this state for about a minute





[jira] [Commented] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505655#comment-16505655
 ] 

Sergey Kosarev commented on IGNITE-8736:


[~agoncharuk], please review my tiny changes.

> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Sergey Kosarev
>Priority: Major
> Fix For: 2.6
>
>
> This information may be useful in the printout of deadlocked and forcibly 
> rolled-back transactions.





[jira] [Commented] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505649#comment-16505649
 ] 

ASF GitHub Bot commented on IGNITE-8736:


GitHub user macrergate opened a pull request:

https://github.com/apache/ignite/pull/4152

IGNITE-8736 Add transaction label to CU.txString() method output



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8736

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4152.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4152


commit 2a0600acc524128335cffd06ae63e056b217d484
Author: Sergey Kosarev 
Date:   2018-06-08T03:12:30Z

IGNITE-8736 Add transaction label to CU.txString() method output




> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Sergey Kosarev
>Priority: Major
> Fix For: 2.6
>
>
> This information may be useful in the printout of deadlocked and forcibly 
> rolled-back transactions.





[jira] [Updated] (IGNITE-8698) JDBC metadata missing: Can not find tables in information_schema.columns which contain underlines in the name.

2018-06-07 Thread arklet (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

arklet updated IGNITE-8698:
---
Summary: JDBC metadata missing: Can not find tables in 
information_schema.columns which contain underlines in the name.  (was: JDBC 
metadata missing: Can not find tables in information_schema.columns contain 
underlines in the name.)

> JDBC metadata missing: Can not find tables in information_schema.columns 
> which contain underlines in the name.
> --
>
> Key: IGNITE-8698
> URL: https://issues.apache.org/jira/browse/IGNITE-8698
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.4, 2.5
> Environment: Presto plugin
>Reporter: arklet
>Priority: Major
> Attachments: presto-ignite.jar
>
>
> I tried to implement a Presto plugin to connect to Ignite. Presto can list two 
> schemas: information_schema and public. I have some tables in the public schema 
> with underlines in their names, but these tables can't be found in the table 
> information_schema.columns, and these tables can't be queried from Presto.
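One point worth checking when table lookups with underscores fail (this is an assumption about the cause, not a conclusion from the report): in JDBC `DatabaseMetaData` calls, the table-name argument is a pattern in which `_` matches any single character, so names containing underscores must be escaped with the driver's escape string (`DatabaseMetaData.getSearchStringEscape()`) before being passed as a pattern. A minimal, pure-string escaping helper could look like this (hypothetical helper name):

```java
// Sketch: escape JDBC metadata pattern wildcards ('_' and '%') in a
// literal table name, using the driver-supplied escape string.
class MetadataPatterns {
    static String escapePattern(String name, String escape) {
        return name.replace(escape, escape + escape) // escape the escape itself first
                   .replace("_", escape + "_")
                   .replace("%", escape + "%");
    }
}
```

With the usual backslash escape, `escapePattern("my_table", "\\")` yields `my\_table`, which matches only the literal table name.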





[jira] [Updated] (IGNITE-8698) JDBC metadata missing: Can not find tables in information_schema.columns contain underlines in the name.

2018-06-07 Thread arklet (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

arklet updated IGNITE-8698:
---
Summary: JDBC metadata missing: Can not find tables in 
information_schema.columns contain underlines in the name.  (was: Presto can't 
query tables in ignite with '_' in name.)

> JDBC metadata missing: Can not find tables in information_schema.columns 
> contain underlines in the name.
> 
>
> Key: IGNITE-8698
> URL: https://issues.apache.org/jira/browse/IGNITE-8698
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.4, 2.5
> Environment: Presto plugin
>Reporter: arklet
>Priority: Major
> Attachments: presto-ignite.jar
>
>
> I tried to implement a Presto plugin to connect to Ignite. Presto can list two 
> schemas: information_schema and public. I have some tables in the public schema 
> with underlines in their names, but these tables can't be found in the table 
> information_schema.columns, and these tables can't be queried from Presto.





[jira] [Issue Comment Deleted] (IGNITE-8198) Document how to use username/password for REST, drivers and thin clients

2018-06-07 Thread Vitaliy Osipov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Osipov updated IGNITE-8198:
---
Comment: was deleted

(was: 1)

> Document how to use username/password for REST, drivers and thin clients
> 
>
> Key: IGNITE-8198
> URL: https://issues.apache.org/jira/browse/IGNITE-8198
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.5
>Reporter: Prachi Garg
>Assignee: Denis Magda
>Priority: Major
> Fix For: 2.5
>
>
> Overall, we have to update the docs for the following protocols/drivers, 
> explaining how to open a secured connection:
>  * JDBC/ODBC
>  * Binary Client Protocol: 
> [https://apacheignite.readme.io/docs/binary-client-protocol#section-handshake]
>  * Thin clients (Java and Net)
>  * REST protocol - [https://apacheignite.readme.io/docs/rest-api]
> Set  in ignite 
> configuration when persistence is enabled.
> Talk to [~vozerov] and [~taras.ledkov] to get more details and support. 
> They've been working on this functionality: 
> https://issues.apache.org/jira/browse/IGNITE-7436





[jira] [Commented] (IGNITE-8198) Document how to use username/password for REST, drivers and thin clients

2018-06-07 Thread Vitaliy Osipov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505187#comment-16505187
 ] 

Vitaliy Osipov commented on IGNITE-8198:


1

> Document how to use username/password for REST, drivers and thin clients
> 
>
> Key: IGNITE-8198
> URL: https://issues.apache.org/jira/browse/IGNITE-8198
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.5
>Reporter: Prachi Garg
>Assignee: Denis Magda
>Priority: Major
> Fix For: 2.5
>
>
> Overall, we have to update the docs for the following protocols/drivers, 
> explaining how to open a secured connection:
>  * JDBC/ODBC
>  * Binary Client Protocol: 
> [https://apacheignite.readme.io/docs/binary-client-protocol#section-handshake]
>  * Thin clients (Java and Net)
>  * REST protocol - [https://apacheignite.readme.io/docs/rest-api]
> Set  in ignite 
> configuration when persistence is enabled.
> Talk to [~vozerov] and [~taras.ledkov] to get more details and support. 
> They've been working on this functionality: 
> https://issues.apache.org/jira/browse/IGNITE-7436





[jira] [Updated] (IGNITE-8740) Support reuse of already initialized Ignite in IgniteSpringBean

2018-06-07 Thread Denis Magda (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-8740:

Priority: Blocker  (was: Major)

> Support reuse of already initialized Ignite in IgniteSpringBean
> ---
>
> Key: IGNITE-8740
> URL: https://issues.apache.org/jira/browse/IGNITE-8740
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Priority: Blocker
> Fix For: 2.6
>
>
> See 
> http://apache-ignite-users.70518.x6.nabble.com/IgniteSpringBean-amp-Ignite-SpringTransactionManager-broken-with-2-4-td21667.html#a21724
>  (there's a patch available)
> The idea is to introduce a workaround for users hit by IGNITE-6555, which 
> unfortunately broke some scenarios.





[jira] [Updated] (IGNITE-8740) Support reuse of already initialized Ignite in IgniteSpringBean

2018-06-07 Thread Denis Magda (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-8740:

Fix Version/s: 2.6

> Support reuse of already initialized Ignite in IgniteSpringBean
> ---
>
> Key: IGNITE-8740
> URL: https://issues.apache.org/jira/browse/IGNITE-8740
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Priority: Major
> Fix For: 2.6
>
>
> See 
> http://apache-ignite-users.70518.x6.nabble.com/IgniteSpringBean-amp-Ignite-SpringTransactionManager-broken-with-2-4-td21667.html#a21724
>  (there's a patch available)
> The idea is to introduce a workaround for users hit by IGNITE-6555, which 
> unfortunately broke some scenarios.





[jira] [Commented] (IGNITE-8379) Add maven-surefire-plugin support for PDS Compatibility tests

2018-06-07 Thread Vyacheslav Daradur (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505087#comment-16505087
 ] 

Vyacheslav Daradur commented on IGNITE-8379:


I've put it off because I don't have enough time now. I will return to the 
ticket later.

> Add maven-surefire-plugin support for PDS Compatibility tests
> -
>
> Key: IGNITE-8379
> URL: https://issues.apache.org/jira/browse/IGNITE-8379
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.5
>Reporter: Peter Ivanov
>Assignee: Vyacheslav Daradur
>Priority: Major
> Fix For: 2.6
>
>
> In continuation of the work on the PDS Compatibility test suite, it is required 
> to add support for {{maven-surefire-plugin}} in the Compatibility Framework.
> See IGNITE-8275 for details.





[jira] [Assigned] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-8736:
--

Assignee: Sergey Kosarev

> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Sergey Kosarev
>Priority: Major
> Fix For: 2.6
>
>
> This information may be useful during deadlocked and forcibly rolled back 
> transactions printout





[jira] [Commented] (IGNITE-7015) SQL: Index should be updated only when relevant values changed

2018-06-07 Thread Nick Pordash (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504989#comment-16504989
 ] 

Nick Pordash commented on IGNITE-7015:
--

[~vozerov] do you have a rough ETA when this might be released? The performance 
implications are so critical that I'm considering manually applying the PR to 
each release and rolling a custom build until then, which is obviously not a 
great situation to be in. For context, without this optimization in place I 
would need to have a cluster 3x-4x bigger just to absorb the excessive B+Tree 
updates.

> SQL: Index should be updated only when relevant values changed
> --
>
> Key: IGNITE-7015
> URL: https://issues.apache.org/jira/browse/IGNITE-7015
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: iep-19, performance
>
> See the {{GridH2Table.update}} method. Whenever a value is updated, we propagate 
> it to all indexes. Consider the following case:
> 1) The old row is not null, so this is an "update", not a "create".
> 2) The link hasn't changed.
> 3) Indexed fields haven't changed.
> If all conditions are met, we can skip the index update completely, as the 
> state before and after will be the same. This is especially important when 
> persistence is enabled, because currently we generate unnecessary dirty pages, 
> which increases IO pressure.
> Suggested fix:
> 1) Iterate over index columns, skipping key and affinity columns (as they are 
> guaranteed to be the same);
> 2) Compare the relevant index columns of both the old and new rows;
> 3) If all columns are equal, do nothing.
> Fields should be read through {{GridH2KeyValueRowOnheap#getValue}}, because 
> in this case we will reuse the value cache transparently.
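The suggested check can be sketched in isolation as follows. This is an illustrative toy, not the actual GridH2Table code: rows are plain `Object[]` arrays, the link-equality condition is assumed to have been checked by the caller, and only the indexed-column comparison from the fix is shown.

```java
// Sketch of the "skip index update" decision: for an update (old row
// present), compare only the columns covered by the index; if none
// changed, the index state before and after is identical.
import java.util.Objects;

class IndexUpdateFilter {
    /**
     * @param oldRow  column values of the previous row; null means "create".
     * @param newRow  column values of the updated row.
     * @param idxCols positions of the columns covered by the index
     *                (key and affinity columns already excluded).
     */
    static boolean canSkip(Object[] oldRow, Object[] newRow, int[] idxCols) {
        if (oldRow == null)
            return false; // Create, not update: the index must be written.

        for (int c : idxCols) {
            if (!Objects.equals(oldRow[c], newRow[c]))
                return false; // An indexed field changed.
        }

        return true; // Identical indexed state before and after.
    }
}
```

In the real fix the column reads would go through the row's value cache (per the `GridH2KeyValueRowOnheap#getValue` note above) rather than raw array access.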





[jira] [Commented] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-07 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504922#comment-16504922
 ] 

Alexey Goncharuk commented on IGNITE-8657:
--

[~sergey-chugunov], I think I've found an issue in the tests:
Take a look at the latest run of Binary Objects (Simple Mapper Basic) 
https://ci.ignite.apache.org/viewLog.html?buildId=1367214&buildTypeId=IgniteTests24Java8_BinaryObjectsSimpleMapperBasic&tab=buildResultsDiv
 
I see the following assertion in the log
{code}
[16:30:59]W: [org.apache.ignite:ignite-core] 
java.lang.AssertionError: TcpDiscoveryNode 
[id=d089379e-11db-453f-99a0-a270bc22, addrs=[127.0.0.1], 
sockAddrs=[/127.0.0.1:47502], discPort=47502, order=341, intOrder=172, 
lastExchangeTime=1528378258963, loc=false, ver=2.6.0#20180607-sha1:8f8efe4f, 
isClient=false]
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.IgniteNeedReconnectException.(IgniteNeedReconnectException.java:38)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.forceClientReconnect(GridDhtPartitionsExchangeFuture.java:2051)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processSinglePartitionUpdate(GridCachePartitionExchangeManager.java:1569)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1000(GridCachePartitionExchangeManager.java:138)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:345)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:325)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2837)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:2816)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[16:30:59]W: [org.apache.ignite:ignite-core]at 
java.lang.Thread.run(Thread.java:745)
{code}

Looks like the exception may be deserialized on a non-client node, so the 
assertion should be removed and the case handled properly on receive.

> Simultaneous start of bunch of client nodes may lead to some clients hangs

[jira] [Updated] (IGNITE-8743) TcpCommunicationSpi hangs in rare circumstances on outgoing descriptor reservation.

2018-06-07 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-8743:
--
Fix Version/s: 2.6

> TcpCommunicationSpi hangs in rare circumstances on outgoing descriptor 
> reservation.
> ---
>
> Key: IGNITE-8743
> URL: https://issues.apache.org/jira/browse/IGNITE-8743
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.6
>
>
> Relevant stack trace:
> {noformat}
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.reserve(GridNioRecoveryDescriptor.java:275)
> - locked <0x7fca4b14f560> (a 
> org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3140)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2863)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2750)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2611)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2575)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1642)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1714)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1166)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:311)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:208)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.loadAsync(GridDhtColocatedCache.java:389)
> at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.loadMissing(GridNearTxLocal.java:2506)
> at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.checkMissed(GridNearTxLocal.java:3888)
> at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.getAllAsync(GridNearTxLocal.java:1927)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$4.op(GridDhtColocatedCache.java:197)
> {noformat}





[jira] [Created] (IGNITE-8743) TcpCommunicationSpi hangs in rare circumstances on outgoing descriptor reservation.

2018-06-07 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-8743:
-

 Summary: TcpCommunicationSpi hangs in rare circumstances on 
outgoing descriptor reservation.
 Key: IGNITE-8743
 URL: https://issues.apache.org/jira/browse/IGNITE-8743
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov
Assignee: Alexei Scherbakov


Relevant stack trace:

{noformat}
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.reserve(GridNioRecoveryDescriptor.java:275)
- locked <0x7fca4b14f560> (a 
org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3140)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2863)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2750)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2611)
at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2575)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1642)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1714)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1166)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:311)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:208)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.loadAsync(GridDhtColocatedCache.java:389)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.loadMissing(GridNearTxLocal.java:2506)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.checkMissed(GridNearTxLocal.java:3888)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.getAllAsync(GridNearTxLocal.java:1927)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$4.op(GridDhtColocatedCache.java:197)
{noformat}





[jira] [Commented] (IGNITE-8722) Issue in REST API 2.5

2018-06-07 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504902#comment-16504902
 ] 

Alexey Kuznetsov commented on IGNITE-8722:
--

[~skymania] Thanks! I will investigate this issue and let you know the result.

> Issue in REST API 2.5
> -
>
> Key: IGNITE-8722
> URL: https://issues.apache.org/jira/browse/IGNITE-8722
> Project: Ignite
>  Issue Type: Bug
>  Components: rest
>Affects Versions: 2.5
>Reporter: Denis Dijak
>Priority: Major
>  Labels: rest
> Attachments: rest.api.zip
>
>
> In 2.5, the Ignite REST API doesn't show the cache value structure correctly:
> rest-api 2.4:
> "0013289414": {
>   "timeFrom": 1527166800,
>   "timeTo": 1528199550,
>   "results": ["BUSINESS-EU"],
>   "child": {
>     "timeFrom": 1527166800,
>     "timeTo": 10413788400,
>     "results": ["BUSINESS-EU"],
>     "child": null
>   }
> }
>
> rest-api 2.5:
> "0013289414": {
>   "timeFrom": 1527166800,
>   "timeTo": 1528199550,
>   "results": ["BUSINESS-EU"]
> }
> As you can see, the child is missing. If I switch back to the 2.4 REST API, 
> everything works as expected. 
> The above structure is the class ValidityNode, and the child that is missing in 
> 2.5 is also a ValidityNode. The structure is meant to be a parent-child 
> implementation.
> public class ValidityNode {
>     private long timeFrom;
>     private long timeTo;
>     private ArrayList results = null;
>     private ValidityNode child = null;
>
>     public ValidityNode() { /* default constructor */ }
>
>     public long getTimeFrom() { return timeFrom; }
>     public void setTimeFrom(long timeFrom) { this.timeFrom = timeFrom; }
>
>     public long getTimeTo() { return timeTo; }
>     public void setTimeTo(long timeTo) { this.timeTo = timeTo; }
>
>     public ArrayList getResults() { return results; }
>     public void setResults(ArrayList results) { this.results = results; }
>
>     public ValidityNode getChild() { return child; }
>     public void setChild(ValidityNode child) { this.child = child; }
>
>     @Override
>     public String toString() {
>         return "ValidityNode [timeFrom=" + timeFrom + ", timeTo=" + timeTo +
>             ", results=" + results + ", child=" + child + "]";
>     }
> }
> Is this issue maybe related to the keyType and valueType that were introduced in 
> 2.5?
>  
>  
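The symptom above (a nested `child` dropped from the JSON output) is what happens when a serializer stops at the first level instead of recursing into same-typed children. As a self-contained illustration only (not the Ignite REST code), a serializer for such a parent-child chain has to recurse through `child` to reproduce the 2.4-style nested output:

```java
// Toy parent-child node mirroring the ValidityNode shape: serialization
// must recurse into 'child', otherwise the nested node is silently lost,
// as in the 2.5 output quoted above.
class Node {
    long timeFrom, timeTo;
    Node child;

    Node(long from, long to, Node child) {
        this.timeFrom = from;
        this.timeTo = to;
        this.child = child;
    }

    String toJson() {
        return "{\"timeFrom\": " + timeFrom + ", \"timeTo\": " + timeTo
            + ", \"child\": " + (child == null ? "null" : child.toJson()) + "}";
    }
}
```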





[jira] [Commented] (IGNITE-5954) Ignite Cache Failover: GridCacheAtomicNearRemoveFailureTest.testPutAndRemove fails

2018-06-07 Thread Anton Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504883#comment-16504883
 ] 

Anton Kalashnikov commented on IGNITE-5954:
---

Looks OK to me. [~dpavlov], could you please merge it?

> Ignite Cache Failover: GridCacheAtomicNearRemoveFailureTest.testPutAndRemove 
> fails
> --
>
> Key: IGNITE-5954
> URL: https://issues.apache.org/jira/browse/IGNITE-5954
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> Probably, it's broken after IGNITE-5272.





[jira] [Commented] (IGNITE-8509) A lot of "Execution timeout" result for Cache 6 suite

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504878#comment-16504878
 ] 

ASF GitHub Bot commented on IGNITE-8509:


GitHub user ascherbakoff opened a pull request:

https://github.com/apache/ignite/pull/4150

IGNITE-8509 Fix cache 6 suite flaky tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8509

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4150.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4150


commit e1b4ff58dd522d6b6c5ac557be437576ded1
Author: Aleksei Scherbakov 
Date:   2018-06-07T16:14:49Z

IGNITE-8509 Fixed race in TxRollbackAsyncTest.

commit 607b091da484c6fb1c07092880f51e1d34292442
Author: Aleksei Scherbakov 
Date:   2018-06-07T16:19:43Z

IGNITE-8509 Fixed race in TxRollbackAsyncTest.




> A lot of "Execution timeout" result for Cache 6 suite
> -
>
> Key: IGNITE-8509
> URL: https://issues.apache.org/jira/browse/IGNITE-8509
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Alexei Scherbakov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> *Summary*
> Suite Cache 6 fails with execution timeout:
> {code:java}
> [org.apache.ignite:ignite-core] [2018-05-15 02:35:14,143][WARN 
> ][grid-timeout-worker-#71656%transactions.TxRollbackOnTimeoutNearCacheTest0%][diagnostic]
>  Found long running transaction [startTime=02:32:57.989, 
> curTime=02:35:14.136, tx=GridDhtTxRemote
> {code}
> *Please refer to the following for more details:* 
> [https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache6&page=1&tab=buildTypeHistoryList&branch_IgniteTests24Java8=%3Cdefault%3E]
> *Statistics Cache 6 Suite*
>  Recent fails: 42.0% [21 fails / 50 runs]; 
>  Critical recent fails: 10.0% [5 fails / 50 runs];
> Last month (15.04 – 15.05)
> Execution timeout: 21.0% [84 fails / 400 runs];





[jira] [Updated] (IGNITE-8702) Crash in ODBC driver under Informatica connection checker

2018-06-07 Thread Igor Sapego (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-8702:

Fix Version/s: 2.6

> Crash in ODBC driver under Informatica connection checker
> -
>
> Key: IGNITE-8702
> URL: https://issues.apache.org/jira/browse/IGNITE-8702
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.6
>
>
> I'm trying to connect Informatica to Ignite via ODBC.
> When I try to specify my connection as a ready-made DSN by its name, it starts 
> connecting to the remote host but then fails:
> {code}
> [ikasnacheev@lab15 ODBC7.1]$ IGNITE_ODBC_LOG_PATH=/home/ikasnacheev/odbc2.log 
> INFA_HOME=/storage/ssd/ikasnacheev 
> LD_LIBRARY_PATH=/storage/ssd/ikasnacheev/ODBC7.1/lib:$LD_LIBRARY_PATH:/storage/ssd/ikasnacheev/services/shared/bin
>  /storage/ssd/ikasnacheev/java/jre/bin/java -d64 -DpwdDecrypt=true 
> -DconnectionName=Lab -DuserName=lab -Dpassword="nq/Jypc7Q2EhoQ2iAQlOCA==" 
> -DconnectionString=LABignite -DdataStoreType=ODBC 
> -DINFA_HOME=/storage/ssd/ikasnacheev -classpath 
> '.:/storage/ssd/ikasnacheev/services/AdministratorConsole/webapps/administrator/WEB-INF/lib/*:/storage/ssd/ikasnacheev/services/shared/jars/platform/*:/storage/ssd/ikasnacheev/services/shared/jars/thirdparty/*:/storage/ssd/ikasnacheev/plugins/osgi/*:/storage/ssd/ikasnacheev/plugins/infa/*:/storage/ssd/ikasnacheev/plugins/dynamic/*'
>  com.informatica.adminconsole.app.chain.commands.TestODBCConnection
> ...
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7faeb806d5e4, pid=26471, tid=140392269498112
> #
> # JRE version: Java(TM) SE Runtime Environment (8.0_77-b03) (build 
> 1.8.0_77-b03)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libignite-odbc.so+0x2c5e4]  
> ignite::odbc::system::TcpSocketClient::Connect(char const*, unsigned short, 
> int, ignite::odbc::diagnostic::Diagnosable&)+0x7b4
> {code}
> The contents of the Ignite driver log file are as follows:
> {code}
> SQLAllocEnv: SQLAllocEnv called
> SQLSetEnvAttr: SQLSetEnvAttr called
> AddStatusRecord: Adding new record: ODBC version is not supported., rowNum: 
> 0, columnNum: 0
> SQLAllocConnect: SQLAllocConnect called
> SQLGetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> GetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> SQLSetConnectOption: SQLSetConnectOption called
> SQLConnect: SQLConnect called
> SQLConnect: DSN: LABignite
> Connect: Host: 172.25.1.16, port: 10800
> Connect: Addr: 172.25.1.16
> {code}





[jira] [Commented] (IGNITE-8702) Crash in ODBC driver under Informatica connection checker

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504823#comment-16504823
 ] 

ASF GitHub Bot commented on IGNITE-8702:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4144


> Crash in ODBC driver under Informatica connection checker
> -
>
> Key: IGNITE-8702
> URL: https://issues.apache.org/jira/browse/IGNITE-8702
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Assignee: Igor Sapego
>Priority: Major
>
> I'm trying to connect Informatica to Ignite via ODBC.
> When I try to specify my connection as a ready-made DSN by its name, it starts 
> connecting to the remote host but then fails:
> {code}
> [ikasnacheev@lab15 ODBC7.1]$ IGNITE_ODBC_LOG_PATH=/home/ikasnacheev/odbc2.log 
> INFA_HOME=/storage/ssd/ikasnacheev 
> LD_LIBRARY_PATH=/storage/ssd/ikasnacheev/ODBC7.1/lib:$LD_LIBRARY_PATH:/storage/ssd/ikasnacheev/services/shared/bin
>  /storage/ssd/ikasnacheev/java/jre/bin/java -d64 -DpwdDecrypt=true 
> -DconnectionName=Lab -DuserName=lab -Dpassword="nq/Jypc7Q2EhoQ2iAQlOCA==" 
> -DconnectionString=LABignite -DdataStoreType=ODBC 
> -DINFA_HOME=/storage/ssd/ikasnacheev -classpath 
> '.:/storage/ssd/ikasnacheev/services/AdministratorConsole/webapps/administrator/WEB-INF/lib/*:/storage/ssd/ikasnacheev/services/shared/jars/platform/*:/storage/ssd/ikasnacheev/services/shared/jars/thirdparty/*:/storage/ssd/ikasnacheev/plugins/osgi/*:/storage/ssd/ikasnacheev/plugins/infa/*:/storage/ssd/ikasnacheev/plugins/dynamic/*'
>  com.informatica.adminconsole.app.chain.commands.TestODBCConnection
> ...
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7faeb806d5e4, pid=26471, tid=140392269498112
> #
> # JRE version: Java(TM) SE Runtime Environment (8.0_77-b03) (build 
> 1.8.0_77-b03)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libignite-odbc.so+0x2c5e4]  
> ignite::odbc::system::TcpSocketClient::Connect(char const*, unsigned short, 
> int, ignite::odbc::diagnostic::Diagnosable&)+0x7b4
> {code}
> The contents of the Ignite driver log file are as follows:
> {code}
> SQLAllocEnv: SQLAllocEnv called
> SQLSetEnvAttr: SQLSetEnvAttr called
> AddStatusRecord: Adding new record: ODBC version is not supported., rowNum: 
> 0, columnNum: 0
> SQLAllocConnect: SQLAllocConnect called
> SQLGetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> GetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> SQLSetConnectOption: SQLSetConnectOption called
> SQLConnect: SQLConnect called
> SQLConnect: DSN: LABignite
> Connect: Host: 172.25.1.16, port: 10800
> Connect: Addr: 172.25.1.16
> {code}





[jira] [Updated] (IGNITE-8724) Skip logging 3-rd parameter while calling U.warn with initialized logger.

2018-06-07 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-8724:
---
Attachment: tc.png

> Skip logging 3-rd parameter while calling U.warn with initialized logger.
> -
>
> Key: IGNITE-8724
> URL: https://issues.apache.org/jira/browse/IGNITE-8724
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.6
>
> Attachments: tc.png
>
>
> There are a lot of places where an exception needs to be logged, for example:
> {code:java}
> U.warn(log,"Unable to await partitions release future", e);
> {code}
> but the current U.warn implementation silently swallows it.
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Object shortMsg) {
> assert longMsg != null;
> assert shortMsg != null;
> if (log != null)
> log.warning(compact(longMsg.toString()));
> else
> X.println("[" + SHORT_DATE_FMT.format(new java.util.Date()) + "] 
> (wrn) " +
> compact(shortMsg.toString()));
> }
> {code}
> The fix looks like a simple addition of an overload:
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Throwable ex) {
> {code}
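The proposed overload can be sketched as follows. This is a minimal illustration only: `WarnSketch` and its nested `Logger` interface are hypothetical stand-ins for the real `IgniteUtils`/`IgniteLogger` machinery, and the plain `System.out` fallback stands in for `X.println`.

```java
/**
 * Minimal sketch of the proposed fix: an overload whose third parameter is a
 * Throwable, so the exception is forwarded to the logger instead of being
 * silently dropped. All names here are illustrative, not Ignite's actual code.
 */
public class WarnSketch {
    /** Hypothetical stand-in for IgniteLogger. */
    public interface Logger {
        void warning(String msg);
        void warning(String msg, Throwable e);
    }

    /** Existing behavior: shortMsg is used only when the logger is null;
     *  with a live logger a Throwable passed here is swallowed entirely. */
    public static void warn(Logger log, Object longMsg, Object shortMsg) {
        if (log != null)
            log.warning(longMsg.toString());
        else
            System.out.println("(wrn) " + shortMsg);
    }

    /** Proposed overload: the Throwable always reaches the logger. */
    public static void warn(Logger log, Object longMsg, Throwable ex) {
        if (log != null)
            log.warning(longMsg.toString(), ex);   // exception no longer swallowed
        else {
            System.out.println("(wrn) " + longMsg);
            ex.printStackTrace();
        }
    }
}
```

Because Java overload resolution picks the most specific applicable parameter type, existing call sites such as `U.warn(log, "Unable to await partitions release future", e)` would route through the new overload without any change at the call site.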





[jira] [Commented] (IGNITE-8724) Skip logging 3-rd parameter while calling U.warn with initialized logger.

2018-06-07 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504802#comment-16504802
 ] 

Stanilovsky Evgeny commented on IGNITE-8724:


TC looks ok,
 !tc.png! 

> Skip logging 3-rd parameter while calling U.warn with initialized logger.
> -
>
> Key: IGNITE-8724
> URL: https://issues.apache.org/jira/browse/IGNITE-8724
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.6
>
> Attachments: tc.png
>
>
> There are a lot of places where an exception needs to be logged, for example:
> {code:java}
> U.warn(log,"Unable to await partitions release future", e);
> {code}
> but the current U.warn implementation silently swallows it.
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Object shortMsg) {
> assert longMsg != null;
> assert shortMsg != null;
> if (log != null)
> log.warning(compact(longMsg.toString()));
> else
> X.println("[" + SHORT_DATE_FMT.format(new java.util.Date()) + "] 
> (wrn) " +
> compact(shortMsg.toString()));
> }
> {code}
> The fix looks like a simple addition of an overload:
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Throwable ex) {
> {code}





[jira] [Commented] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stop execution

2018-06-07 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504796#comment-16504796
 ] 

Dmitriy Pavlov commented on IGNITE-8742:


The test was disabled for direct IO; the new test name in the direct IO suite is 
org.apache.ignite.internal.processors.cache.persistence.db.wal.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart()

> Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: 
> WAL manager failure does not stop execution
> ---
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/viewLog.html?buildId=1366882&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PdsDirectIo2
> Test 
> org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
> emulates a problem with disk space using an exception.
> In a direct IO environment, real disk IO is performed; tmpfs is not used.
> Sometimes this error can come from a segment rollover(), and the failure 
> handler reacts accordingly.
> {noformat}
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable 
> to write]]
> class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
> {noformat}
> But the test does not seem able to stop: the node-stopper thread tries to stop 
> the cache and flush the WAL, and the flush waits for a rollover that will never happen.
> {noformat}
> Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
> Lock 
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
> at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.flush(FsyncModeFileWriteAheadLogManager.java:699)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1243)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:969)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:943)
> at o.a.i.i.IgniteKernal.stop0(IgniteKernal.java:2289)
> at o.a.i.i.IgniteKernal.stop(IgniteKernal.java:2167)
> a

[jira] [Commented] (IGNITE-8030) Cluster hangs on deactivation process in time stopping indexed cache

2018-06-07 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504791#comment-16504791
 ] 

Vladislav Pyatkov commented on IGNITE-8030:
---

From my point of view, this behavior is not possible with default timeouts on 
cache operations.

> Cluster hangs on deactivation process in time stopping indexed cache
> 
>
> Key: IGNITE-8030
> URL: https://issues.apache.org/jira/browse/IGNITE-8030
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
> Attachments: ignite.log, td.tdump, thrdump-server.log
>
>
> {noformat}
> "sys-#10283%DPL_GRID%DplGridNodeName%" #13068 prio=5 os_prio=0 
> tid=0x7f07040eb000 nid=0x2e0f waiting on condition [0x7e6deb9b8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f0bd2b0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireInterruptibly(AbstractQueuedSynchronizer.java:897)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1222)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lockInterruptibly(ReentrantReadWriteLock.java:998)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.lock(GridH2Table.java:292)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.lock(GridH2Table.java:253)
>   at org.h2.command.ddl.DropTable.prepareDrop(DropTable.java:87)
>   at org.h2.command.ddl.DropTable.update(DropTable.java:113)
>   at org.h2.command.CommandContainer.update(CommandContainer.java:101)
>   at org.h2.command.Command.executeUpdate(Command.java:260)
>   - locked <0x7f0c276c85b8> (a org.h2.engine.Session)
>   at 
> org.h2.jdbc.JdbcStatement.executeUpdateInternal(JdbcStatement.java:137)
>   - locked <0x7f0c276c85b8> (a org.h2.engine.Session)
>   at org.h2.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:122)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.dropTable(IgniteH2Indexing.java:654)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.unregisterCache(IgniteH2Indexing.java:2482)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStop0(GridQueryProcessor.java:1684)
>   - locked <0x7f0b69f822d0> (a java.lang.Object)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStop(GridQueryProcessor.java:879)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1189)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStop(GridCacheProcessor.java:2063)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2219)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1518)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2538)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2297)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processSingleMessage(GridDhtPartitionsExchangeFuture.java:2034)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$100(GridDhtPartitionsExchangeFuture.java:122)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:1891)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:1879)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveSin

[jira] [Updated] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stop execution

2018-06-07 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8742:
---
Labels: MakeTeamcityGreenAgain  (was: )

> Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: 
> WAL manager failure does not stop execution
> ---
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/viewLog.html?buildId=1366882&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PdsDirectIo2
> Test 
> org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
> emulates a problem with disk space using an exception.
> In a direct IO environment, real disk IO is performed; tmpfs is not used.
> Sometimes this error can come from a segment rollover(), and the failure 
> handler reacts accordingly.
> {noformat}
> detected. Will be handled accordingly to configured handler [hnd=class 
> o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable 
> to write]]
> class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
> {noformat}
> But the test does not seem able to stop: the node-stopper thread tries to stop 
> the cache and flush the WAL, and the flush waits for a rollover that will never happen.
> {noformat}
> Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
> Lock 
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
> at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
> at 
> o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.flush(FsyncModeFileWriteAheadLogManager.java:699)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1243)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:969)
> at 
> o.a.i.i.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:943)
> at o.a.i.i.IgniteKernal.stop0(IgniteKernal.java:2289)
> at o.a.i.i.IgniteKernal.stop(IgniteKernal.java:2167)
> at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
> - locked o.a.i.i.IgnitionEx$IgniteNamedInstance@90f6bfd
> at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.ja

[jira] [Updated] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stop execution

2018-06-07 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8742:
---
Description: 
https://ci.ignite.apache.org/viewLog.html?buildId=1366882&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PdsDirectIo2

Test 
org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
emulates a problem with disk space using an exception.

In a direct IO environment, real disk IO is performed; tmpfs is not used.

Sometimes this error can come from a segment rollover(), and the failure 
handler reacts accordingly.
{noformat}
detected. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable to 
write]]
class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
{noformat}

But the test does not seem able to stop: the node-stopper thread tries to stop the 
cache and flush the WAL, and the flush waits for a rollover that will never happen.
{noformat}
Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
Lock 
[object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
 ownerName=null, ownerId=-1]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.flush(FsyncModeFileWriteAheadLogManager.java:699)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1243)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:969)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:943)
at o.a.i.i.IgniteKernal.stop0(IgniteKernal.java:2289)
at o.a.i.i.IgniteKernal.stop(IgniteKernal.java:2167)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance@90f6bfd
at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2551)
at o.a.i.i.IgnitionEx.stop(IgnitionEx.java:372)
at 
o.a.i.failure.StopNodeFailureHandler$1.run(StopNodeFailureHandler.java:36)
at java.lang.Thread.run(Thread.java:748)
{noformat}


It seems that invalidating the WAL manager's environment is not working properly.
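The hang above follows a familiar pattern: a thread parked until the next rollover, with no one left to signal it once the WAL manager has failed. A minimal sketch of a fix under that assumption — all class and method names below are hypothetical illustrations, not Ignite's actual API — is to record the failure when the environment is invalidated and wake every waiter, so the flush can fail fast instead of parking forever:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Toy model of the hang and its fix. flushOrWait() parks until the next
 * rollover; after a storage failure no rollover will ever happen, so
 * invalidate() must release the waiters itself. All names are illustrative.
 */
public class WalHangSketch {
    private final CountDownLatch rolloverOrFailure = new CountDownLatch(1);
    private final AtomicReference<Exception> failure = new AtomicReference<>();

    /** Called by the node-stopper thread while flushing the WAL. */
    public void flushOrWait() throws Exception {
        rolloverOrFailure.await();          // parks like awaitQuiet() in the trace
        Exception e = failure.get();
        if (e != null)
            throw e;                        // propagate instead of hanging forever
    }

    /** Normal path: a completed rollover releases the waiters. */
    public void onRollover() {
        rolloverOrFailure.countDown();
    }

    /** Failure path: record the error first, then wake every parked waiter. */
    public void invalidate(Exception e) {
        failure.set(e);
        rolloverOrFailure.countDown();
    }
}
```

With this shape, the node-stopper thread from the trace would observe the StorageException from flushOrWait() and let the stop sequence proceed, rather than waiting uninterruptibly on a condition that is never signalled.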

  was:
Test 
org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
emulates problem with disc space using exception.

In direct IO environment real IO with disk is performed, tmpfs is not used.

Sometimes this error can come from rollover() of segment, failure handler 
reacted accordingly.
{noformat}
detected. Will be handled accordingly to configured h

[jira] [Created] (IGNITE-8742) Direct IO 2 suite is timed out by out of disk space failure emulation test: WAL manager failure does not stop.

2018-06-07 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-8742:
--

 Summary: Direct IO 2 suite is timed out by out of disk space 
failure emulation test: WAL manager failure does not stop.
 Key: IGNITE-8742
 URL: https://issues.apache.org/jira/browse/IGNITE-8742
 Project: Ignite
  Issue Type: Test
  Components: persistence
Reporter: Dmitriy Pavlov


Test 
org.apache.ignite.internal.processors.cache.persistence.IgniteNativeIoWalFlushFsyncSelfTest#testFailAfterStart
emulates a problem with disk space using an exception.

In a direct IO environment, real disk IO is performed; tmpfs is not used.

Sometimes this error can come from a segment rollover(), and the failure 
handler reacts accordingly.
{noformat}
detected. Will be handled accordingly to configured handler [hnd=class 
o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException: Unable to 
write]]
class org.apache.ignite.internal.pagemem.wal.StorageException: Unable to write
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.writeBuffer(FsyncModeFileWriteAheadLogManager.java:2964)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2640)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flush(FsyncModeFileWriteAheadLogManager.java:2572)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2525)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.close(FsyncModeFileWriteAheadLogManager.java:2795)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$700(FsyncModeFileWriteAheadLogManager.java:2340)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.rollOver(FsyncModeFileWriteAheadLogManager.java:1029)
at 
org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.log(FsyncModeFileWriteAheadLogManager.java:673)
{noformat}

But the test is not able to stop: the node-stopper thread tries to stop the 
cache and flush the WAL, and flush waits for a rollover that will never happen.
{noformat}
Thread [name="node-stopper", id=2836, state=WAITING, blockCnt=7, waitCnt=9]
Lock 
[object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@47f6473,
 ownerName=null, ownerId=-1]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
at o.a.i.i.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7473)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.flushOrWait(FsyncModeFileWriteAheadLogManager.java:2546)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.fsync(FsyncModeFileWriteAheadLogManager.java:2750)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager$FileWriteHandle.access$2000(FsyncModeFileWriteAheadLogManager.java:2340)
at 
o.a.i.i.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager.flush(FsyncModeFileWriteAheadLogManager.java:699)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1243)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:969)
at 
o.a.i.i.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:943)
at o.a.i.i.IgniteKernal.stop0(IgniteKernal.java:2289)
at o.a.i.i.IgniteKernal.stop(IgniteKernal.java:2167)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2588)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance@90f6bfd
at o.a.i.i.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2551)
at o.a.i.i.IgnitionEx.stop(IgnitionEx.java:372)
at 
o.a.i.failure.StopNodeFailureHandler$1.run(StopNodeFailureHandler.java:36)
at java.lang.Thread.run(Thread.java:748)
{noformat}


It seems that invalidating the WAL manager's environment is not working properly.
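The hang pattern above — a stopper thread waiting on a flush condition that the failed WAL will never signal — can be sketched in plain Java. This is a simplification with hypothetical names, not Ignite's actual FsyncModeFileWriteAheadLogManager code; the point is that the failure handler must invalidate the environment and wake every waiter, instead of leaving them parked in an uninterruptible await:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition flushed = lock.newCondition();
    static long flushedUpTo = 0;
    static volatile boolean failed = false;

    // Wait until position pos is flushed, or the WAL is invalidated by a
    // failure. Without the 'failed' check this loop hangs forever once the
    // segment can no longer roll over.
    static boolean flushOrWait(long pos) throws InterruptedException {
        lock.lock();
        try {
            while (flushedUpTo < pos && !failed)
                flushed.await();
            return !failed;
        } finally {
            lock.unlock();
        }
    }

    // Failure handler: invalidate the environment and wake every waiter.
    static void invalidate() {
        lock.lock();
        try {
            failed = true;
            flushed.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread stopper = new Thread(() -> {
            try {
                boolean ok = flushOrWait(100);
                System.out.println("flush completed normally: " + ok);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "node-stopper");
        stopper.start();

        Thread.sleep(100);   // let the stopper block inside flushOrWait()
        invalidate();        // simulate the out-of-disk-space failure
        stopper.join(5000);

        System.out.println("stopper alive: " + stopper.isAlive());
    }
}
```

With the invalidation step the stopper thread terminates instead of hanging the suite until the TeamCity timeout.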



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8742) Direct IO 2 suite is timed out by out of disk space failure emulation test: WAL manager failure does not stoped execution

2018-06-07 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8742:
---
Summary: Direct IO 2 suite is timed out by out of disk space failure 
emulation test: WAL manager failure does not stoped execution  (was: Direct IO 
2 suite is timed out by out of disk space failure emulation test: WAL manager 
failure does not stoped.)

> Direct IO 2 suite is timed out by out of disk space failure emulation test: 
> WAL manager failure does not stoped execution
> -
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Priority: Major
>

[jira] [Updated] (IGNITE-8742) Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: WAL manager failure does not stoped execution

2018-06-07 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8742:
---
Summary: Direct IO 2 suite is timed out by 'out of disk space' failure 
emulation test: WAL manager failure does not stoped execution  (was: Direct IO 
2 suite is timed out by out of disk space failure emulation test: WAL manager 
failure does not stoped execution)

> Direct IO 2 suite is timed out by 'out of disk space' failure emulation test: 
> WAL manager failure does not stoped execution
> ---
>
> Key: IGNITE-8742
> URL: https://issues.apache.org/jira/browse/IGNITE-8742
> Project: Ignite
>  Issue Type: Test
>  Components: persistence
>Reporter: Dmitriy Pavlov
>Priority: Major
>

[jira] [Created] (IGNITE-8741) [ML] Make a tutorial for data preprocessing

2018-06-07 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-8741:
--

 Summary: [ML] Make a tutorial for data preprocessing
 Key: IGNITE-8741
 URL: https://issues.apache.org/jira/browse/IGNITE-8741
 Project: Ignite
  Issue Type: Wish
  Components: ml
Reporter: Yury Babak
Assignee: Aleksey Zinoviev
 Fix For: 2.6






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-06-07 Thread Roman Shtykh (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shtykh reassigned IGNITE-8697:


Assignee: (was: Roman Shtykh)

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>Priority: Blocker
>
> If I submit the application to the Flink cluster using the Ignite Flink sink, 
> I get this error:
>  
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
>   at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: 
> Cache name must not be null or empty.
>   at 
> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
>   at 
> org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext$Holder.(IgniteSink.java:183)
>   ... 27 more
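The root cause at the bottom of the trace is the argument check on the cache name: by the time the sink runs on a task manager, its cache name is null. The sketch below is a simplified stand-in for that check (not the actual GridCacheUtils code); the practical takeaway is that IgniteSink must be constructed with a non-empty cache name that survives serialization to the cluster:

```java
public class Main {
    // Simplified stand-in for GridCacheUtils.validateCacheName.
    static void validateCacheName(String name) {
        if (name == null || name.isEmpty())
            throw new IllegalArgumentException(
                "Ouch! Argument is invalid: Cache name must not be null or empty.");
    }

    public static void main(String[] args) {
        // In cluster mode the sink's cache name can arrive as null on the
        // task manager, which reproduces the reported exception:
        try {
            validateCacheName(null);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }

        validateCacheName("myCache"); // a non-empty name passes the check
        System.out.println("ok");
    }
}
```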



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8702) Crash in ODBC driver under Informatica connection checker

2018-06-07 Thread Sergey Kalashnikov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504760#comment-16504760
 ] 

Sergey Kalashnikov commented on IGNITE-8702:


[~isapego], Looks good to me.

> Crash in ODBC driver under Informatica connection checker
> -
>
> Key: IGNITE-8702
> URL: https://issues.apache.org/jira/browse/IGNITE-8702
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.4
>Reporter: Ilya Kasnacheev
>Assignee: Igor Sapego
>Priority: Major
>
> I'm trying to connect Informatica to Ignite via ODBC.
> When I try to specify my connection as a ready-made DSN by its name, it 
> starts connecting to remote but then fails:
> {code}
> [ikasnacheev@lab15 ODBC7.1]$ IGNITE_ODBC_LOG_PATH=/home/ikasnacheev/odbc2.log 
> INFA_HOME=/storage/ssd/ikasnacheev 
> LD_LIBRARY_PATH=/storage/ssd/ikasnacheev/ODBC7.1/lib:$LD_LIBRARY_PATH:/storage/ssd/ikasnacheev/services/shared/bin
>  /storage/ssd/ikasnacheev/java/jre/bin/java -d64 -DpwdDecrypt=true 
> -DconnectionName=Lab -DuserName=lab -Dpassword="nq/Jypc7Q2EhoQ2iAQlOCA==" 
> -DconnectionString=LABignite -DdataStoreType=ODBC 
> -DINFA_HOME=/storage/ssd/ikasnacheev -classpath 
> '.:/storage/ssd/ikasnacheev/services/AdministratorConsole/webapps/administrator/WEB-INF/lib/*:/storage/ssd/ikasnacheev/services/shared/jars/platform/*:/storage/ssd/ikasnacheev/services/shared/jars/thirdparty/*:/storage/ssd/ikasnacheev/plugins/osgi/*:/storage/ssd/ikasnacheev/plugins/infa/*:/storage/ssd/ikasnacheev/plugins/dynamic/*'
>  com.informatica.adminconsole.app.chain.commands.TestODBCConnection
> ...
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7faeb806d5e4, pid=26471, tid=140392269498112
> #
> # JRE version: Java(TM) SE Runtime Environment (8.0_77-b03) (build 
> 1.8.0_77-b03)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.77-b03 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libignite-odbc.so+0x2c5e4]  
> ignite::odbc::system::TcpSocketClient::Connect(char const*, unsigned short, 
> int, ignite::odbc::diagnostic::Diagnosable&)+0x7b4
> {code}
> The contents of Ignite driver log file as follows:
> {code}
> SQLAllocEnv: SQLAllocEnv called
> SQLSetEnvAttr: SQLSetEnvAttr called
> AddStatusRecord: Adding new record: ODBC version is not supported., rowNum: 
> 0, columnNum: 0
> SQLAllocConnect: SQLAllocConnect called
> SQLGetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> GetInfo: SQLGetInfo called: 77 (SQL_DRIVER_ODBC_VER), 7faec08d1450, 6, 
> 7faf9f5a29ee
> SQLSetConnectOption: SQLSetConnectOption called
> SQLConnect: SQLConnect called
> SQLConnect: DSN: LABignite
> Connect: Host: 172.25.1.16, port: 10800
> Connect: Addr: 172.25.1.16
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8740) Support reuse of already initialized Ignite in IgniteSpringBean

2018-06-07 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-8740:
---

 Summary: Support reuse of already initialized Ignite in 
IgniteSpringBean
 Key: IGNITE-8740
 URL: https://issues.apache.org/jira/browse/IGNITE-8740
 Project: Ignite
  Issue Type: Improvement
  Components: spring
Affects Versions: 2.4
Reporter: Ilya Kasnacheev


See 
http://apache-ignite-users.70518.x6.nabble.com/IgniteSpringBean-amp-Ignite-SpringTransactionManager-broken-with-2-4-td21667.html#a21724
 (there's a patch available)

The idea is to introduce a workaround for users hit by IGNITE-6555, which 
unfortunately broke some scenarios.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8668) K-fold cross validation of models

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504713#comment-16504713
 ] 

ASF GitHub Bot commented on IGNITE-8668:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4143


> K-fold cross validation of models
> -
>
> Key: IGNITE-8668
> URL: https://issues.apache.org/jira/browse/IGNITE-8668
> Project: Ignite
>  Issue Type: New Feature
>  Components: ml
>Reporter: Yury Babak
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.6
>
>
> Cross-validation is a well-known approach that helps avoid overfitting and 
> therefore improves model quality. K-fold cross-validation is based on 
> splitting the dataset into _k_ disjoint subsets, using _k-1_ of them as the 
> train subset and the remaining subset for testing (with all possible 
> combinations).
> The goal of this task is to implement K-fold cross-validation based on the 
> ability to filter a dataset, added recently in IGNITE-8666.
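The k-fold scheme described above — k disjoint subsets, each serving once as the test subset — can be sketched independently of the Ignite ML API (a hypothetical index-splitting helper, not the committed implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Split indices 0..n-1 into k disjoint folds; fold i serves as the test
    // subset while the other k-1 folds together form the train subset.
    static List<List<Integer>> folds(int n, int k) {
        List<List<Integer>> res = new ArrayList<>();
        for (int i = 0; i < k; i++)
            res.add(new ArrayList<>());
        for (int idx = 0; idx < n; idx++)
            res.get(idx % k).add(idx);   // round-robin keeps fold sizes balanced
        return res;
    }

    public static void main(String[] args) {
        List<List<Integer>> f = folds(10, 3);
        System.out.println(f);

        // Every index lands in exactly one fold.
        int total = 0;
        for (List<Integer> fold : f)
            total += fold.size();
        System.out.println("total=" + total);
    }
}
```

Running the cross-validation then means iterating over the folds, training on all but fold i, and scoring on fold i.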



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8712) IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes in master.

2018-06-07 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504544#comment-16504544
 ] 

Pavel Pereslegin edited comment on IGNITE-8712 at 6/7/18 1:49 PM:
--

Data structures store information about their state (meta information) in the 
metacache (the cache key is the name of the data structure instance).
 When a data structure is accessed, a separate instance (*one per node*) is 
created and cached in the local data structures Map.
 When the data structure's close() method is invoked, the item is removed from 
the metacache and a continuous query listener is used to notify all nodes to 
remove the associated local instance from the local data structures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated the update event) ends earlier.
 The listener on a *non-affinity node* is notified *synchronously* only if the 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except for JCacheQuery). I updated the signature 
of the executeInternalQuery() method in CacheContinuousQueryManager to change 
the value of this flag.

Btw, I found that this flaky failure has the same root cause and will be fixed: 
[IgnitePartitionedSemaphoreSelfTest.testIsolation|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=-1569365879053619530&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=&itemsCount=50]

And I believe that this flaky failure will be fixed too: 
[IgniteDataStructureUniqueNameTest.testCreateRemove|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=9004139438366655173&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=&itemsCount=50]
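The race can be illustrated with plain maps and a latch that holds back the asynchronous listener (a simplification, not Ignite's actual continuous-query machinery): the metacache entry is gone when the transaction ends, but the stale local instance is still visible until the listener runs.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Main {
    static final ConcurrentMap<String, String> metaCache = new ConcurrentHashMap<>();
    static final ConcurrentMap<String, String> localInstances = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        metaCache.put("set1", "meta");
        localInstances.put("set1", "instance");

        CountDownLatch hold = new CountDownLatch(1);
        ExecutorService listener = Executors.newSingleThreadExecutor();

        // close(): the metacache entry is removed inside the transaction...
        metaCache.remove("set1");

        // ...but the continuous-query listener that clears the local instance
        // map is notified asynchronously (held back here to expose the window).
        Future<?> notification = listener.submit(() -> {
            try {
                hold.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            localInstances.remove("set1");
        });

        // The transaction has ended, yet the stale local instance is still
        // visible: exactly the window that synchronous notification closes.
        System.out.println("stale visible: " + localInstances.containsKey("set1"));

        hold.countDown();
        notification.get();
        listener.shutdown();
        System.out.println("after listener: " + localInstances.containsKey("set1"));
    }
}
```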



was (Author: xtern):
Data structures store information about their state (meta information) in the 
metacache (the cache key is the name of the data structure instance).
 When a data structure is accessed, a separate instance (*one per node*) is 
created and cached in the local data structures Map.
 When the data structure's close() method is invoked, the item is removed from 
the metacache and a continuous query listener is used to notify all nodes to 
remove the associated local instance from the local data structures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated the update event) ends earlier.
 The listener on a *non-affinity node* is notified *synchronously* only if the 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except for JCacheQuery). I updated the signature 
of the executeInternalQuery() method in CacheContinuousQueryManager to change 
the value of this flag.

> IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes 
> in master.
> 
>
> Key: IGNITE-8712
> URL: https://issues.apache.org/jira/browse/IGNITE-8712
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=5920780021361517364&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=%3Cdefault%3E&itemsCount=10
> Typical output:
> {noformat}
> junit.framework.AssertionFailedError: expected: org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy> but 
> was: org.apache.ignite.internal.processors.datastructures.GridCacheAtomicStampedImpl>
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueName(IgniteDataStructureUniqueNameTest.java:385)
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueNameMultithreaded(IgniteDataStructureUniqueNameTest.java:85)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8739) Implement WA for TCP communication related to hanging on descriptor reservation

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504673#comment-16504673
 ] 

ASF GitHub Bot commented on IGNITE-8739:


GitHub user akalash opened a pull request:

https://github.com/apache/ignite/pull/4148

IGNITE-8739 cherry picked from GG-13874 Implement WA for TCP communic…

…ation related to hanging on descriptor reservation.

(cherry picked from commit f2a6133)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8739

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4148


commit e25ef8f076ce01a43b68eee20556041757d003e8
Author: Aleksei Scherbakov 
Date:   2018-05-31T08:01:30Z

IGNITE-8739 cherry picked from GG-13874 Implement WA for TCP communication 
related to hanging on descriptor reservation.

(cherry picked from commit f2a6133)




> Implement WA for TCP communication related to hanging on descriptor 
> reservation
> ---
>
> Key: IGNITE-8739
> URL: https://issues.apache.org/jira/browse/IGNITE-8739
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8030) Cluster hangs on deactivation process in time stopping indexed cache

2018-06-07 Thread Vladislav Pyatkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-8030:
--
Attachment: ignite.log
td.tdump

> Cluster hangs on deactivation process in time stopping indexed cache
> 
>
> Key: IGNITE-8030
> URL: https://issues.apache.org/jira/browse/IGNITE-8030
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
> Attachments: ignite.log, td.tdump, thrdump-server.log
>
>
> {noformat}
> "sys-#10283%DPL_GRID%DplGridNodeName%" #13068 prio=5 os_prio=0 
> tid=0x7f07040eb000 nid=0x2e0f waiting on condition [0x7e6deb9b8000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x7f0bd2b0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireInterruptibly(AbstractQueuedSynchronizer.java:897)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1222)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lockInterruptibly(ReentrantReadWriteLock.java:998)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.lock(GridH2Table.java:292)
>   at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.lock(GridH2Table.java:253)
>   at org.h2.command.ddl.DropTable.prepareDrop(DropTable.java:87)
>   at org.h2.command.ddl.DropTable.update(DropTable.java:113)
>   at org.h2.command.CommandContainer.update(CommandContainer.java:101)
>   at org.h2.command.Command.executeUpdate(Command.java:260)
>   - locked <0x7f0c276c85b8> (a org.h2.engine.Session)
>   at 
> org.h2.jdbc.JdbcStatement.executeUpdateInternal(JdbcStatement.java:137)
>   - locked <0x7f0c276c85b8> (a org.h2.engine.Session)
>   at org.h2.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:122)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.dropTable(IgniteH2Indexing.java:654)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.unregisterCache(IgniteH2Indexing.java:2482)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStop0(GridQueryProcessor.java:1684)
>   - locked <0x7f0b69f822d0> (a java.lang.Object)
>   at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStop(GridQueryProcessor.java:879)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCache(GridCacheProcessor.java:1189)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStop(GridCacheProcessor.java:2063)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2219)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1518)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2538)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2297)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processSingleMessage(GridDhtPartitionsExchangeFuture.java:2034)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$100(GridDhtPartitionsExchangeFuture.java:122)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:1891)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:1879)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveSingleMessage(GridDhtPartitionsExchangeFuture.java:1879)
>   at 
> org.apache.ignite.internal.processo

[jira] [Created] (IGNITE-8739) Implement WA for TCP communication related to hanging on descriptor reservation

2018-06-07 Thread Anton Kalashnikov (JIRA)
Anton Kalashnikov created IGNITE-8739:
-

 Summary: Implement WA for TCP communication related to hanging on 
descriptor reservation
 Key: IGNITE-8739
 URL: https://issues.apache.org/jira/browse/IGNITE-8739
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Kalashnikov
Assignee: Anton Kalashnikov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8657) Simultaneous start of bunch of client nodes may lead to some clients hangs

2018-06-07 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504641#comment-16504641
 ] 

Alexey Goncharuk commented on IGNITE-8657:
--

The fix looks good to me. [~Jokser], would you mind also taking a look at this?

> Simultaneous start of bunch of client nodes may lead to some clients hangs
> --
>
> Key: IGNITE-8657
> URL: https://issues.apache.org/jira/browse/IGNITE-8657
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.6
>
>
> h3. Description
> PartitionExchangeManager uses the system property 
> *IGNITE_EXCHANGE_HISTORY_SIZE* to cap the number of retained exchange objects 
> and optimize memory consumption.
> The default value of the property is 1000, but in scenarios with many caches 
> and partitions it is reasonable to set the exchange history size to a smaller 
> value of a few dozen.
> If a user then starts more client nodes at once than the history size, some 
> clients may hang because their exchange information was preempted and is no 
> longer available.
> h3. Workarounds
> Two workarounds are possible: 
> * Do not start more clients at once than the history size.
> * Restart the hanging client node.
> h3. Solution
> Forcing a client node to reconnect when the server detects that the client's 
> exchange information has been lost prevents client nodes from hanging.
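The hang and the proposed fix can be sketched with plain collections (class and method names below are illustrative, not Ignite's internals): a bounded history evicts the oldest exchange results, and a lookup miss should trigger a forced client reconnect instead of an indefinite wait.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExchangeHistorySketch {
    /** Bounded history size; illustrative only (Ignite's default is 1000). */
    static final int HISTORY_SIZE = 3;

    /** Insertion-ordered map that drops the eldest entry once full. */
    static final Map<Integer, String> history =
        new LinkedHashMap<Integer, String>(16, 0.75f, false) {
            @Override protected boolean removeEldestEntry(Map.Entry<Integer, String> e) {
                return size() > HISTORY_SIZE;
            }
        };

    /** Server-side lookup: a null result means the entry was preempted,
     *  so the client should be forced to reconnect (the fix) instead of
     *  waiting for a reply that will never come (the hang). */
    static String lookup(int exchangeId) {
        return history.get(exchangeId);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++)
            history.put(i, "exchange-" + i);

        // Entries 1 and 2 were evicted: clients waiting on them would hang.
        System.out.println(lookup(1) == null ? "reconnect" : "reply"); // reconnect
        System.out.println(lookup(5) == null ? "reconnect" : "reply"); // reply
    }
}
```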



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8738) Improve coordinator change information

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8738:
-
Description: 
When the topology changes and the coordinator also changes, we need to print 
this out alongside the topology information.
An example of such message:
{{Coordinator changed [prev=node.tostring(), cur=node.tostr()]}}
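The proposed message could be built as in the following sketch; the string-based node ids and the helper method are illustrative assumptions, not Ignite's actual discovery API.

```java
import java.util.Objects;

public class CoordinatorChangeLogSketch {
    /** Builds the proposed log line when the coordinator differs between two
     *  topology snapshots; returns null when the coordinator is unchanged, in
     *  which case only the usual topology info would be logged. */
    static String coordinatorChangedMessage(String prevCrd, String curCrd) {
        if (Objects.equals(prevCrd, curCrd))
            return null;

        return "Coordinator changed [prev=" + prevCrd + ", cur=" + curCrd + "]";
    }

    public static void main(String[] args) {
        System.out.println(coordinatorChangedMessage("node-1", "node-2"));
        System.out.println(coordinatorChangedMessage("node-2", "node-2"));
    }
}
```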

> Improve coordinator change information
> --
>
> Key: IGNITE-8738
> URL: https://issues.apache.org/jira/browse/IGNITE-8738
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
>
> When the topology changes and the coordinator also changes, we need to print 
> this out alongside the topology information.
> An example of such message:
> {{Coordinator changed [prev=node.tostring(), cur=node.tostr()]}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8738) Improve coordinator change information

2018-06-07 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8738:


 Summary: Improve coordinator change information
 Key: IGNITE-8738
 URL: https://issues.apache.org/jira/browse/IGNITE-8738
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8737) Improve checkpoint logging information

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8737:
-
Fix Version/s: 2.6

> Improve checkpoint logging information
> --
>
> Key: IGNITE-8737
> URL: https://issues.apache.org/jira/browse/IGNITE-8737
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> 1) Move log rollover and log archiving events to INFO level
> 2) Make sure log rollover and archiving errors are logged
> 3) When a checkpoint finishes, print out which segments were fully 
> covered by it in the "Checkpoint finished ..." message



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8737) Improve checkpoint logging information

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8737:
-
Description: 
1) Move log rollover and log archiving events to INFO level
2) Make sure log rollover and archiving errors are logged
3) When a checkpoint finishes, print out which segments were fully 
covered by it in the "Checkpoint finished ..." message

> Improve checkpoint logging information
> --
>
> Key: IGNITE-8737
> URL: https://issues.apache.org/jira/browse/IGNITE-8737
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> 1) Move log rollover and log archiving events to INFO level
> 2) Make sure log rollover and archiving errors are logged
> 3) When a checkpoint finishes, print out which segments were fully 
> covered by it in the "Checkpoint finished ..." message



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8736:
-
Description: This information may be useful when printing out deadlocked and 
forcibly rolled-back transactions

> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> This information may be useful when printing out deadlocked and forcibly 
> rolled-back transactions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8737) Improve checkpoint logging information

2018-06-07 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8737:


 Summary: Improve checkpoint logging information
 Key: IGNITE-8737
 URL: https://issues.apache.org/jira/browse/IGNITE-8737
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8736:
-
Fix Version/s: 2.6

> Add transaction label to CU.txString() method output
> 
>
> Key: IGNITE-8736
> URL: https://issues.apache.org/jira/browse/IGNITE-8736
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8736) Add transaction label to CU.txString() method output

2018-06-07 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8736:


 Summary: Add transaction label to CU.txString() method output
 Key: IGNITE-8736
 URL: https://issues.apache.org/jira/browse/IGNITE-8736
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-06-07 Thread Andrew Mashenkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504591#comment-16504591
 ] 

Andrew Mashenkov commented on IGNITE-8697:
--

Static fields look very suspicious.

Is it possible that the Sink object is created with one class loader and then 
transferred to a thread with another class loader? In that case the static 
content will not be transferred with the object, as it does not belong to the 
object.

Also, why do we have a static data streamer instance here?
It is a closeable object, and it looks like it can be reused by several 
threads. And once the streamer has been closed, it can no longer be used.
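The risk called out above can be illustrated with a minimal stand-in for a shared, closeable streamer (hypothetical demo code, not Ignite's IgniteSink or IgniteDataStreamer API): once any user closes the single static instance, every other user of it starts failing.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SharedStreamerSketch {
    /** Minimal stand-in for a closeable streamer; not the real IgniteDataStreamer. */
    static class FakeStreamer implements AutoCloseable {
        private final AtomicBoolean closed = new AtomicBoolean(false);

        void addData(String key, String val) {
            if (closed.get())
                throw new IllegalStateException("Streamer is closed");
            // ... buffer the entry for streaming ...
        }

        @Override public void close() { closed.set(true); }
    }

    // The anti-pattern from the comment: one static instance shared by all sinks.
    static final FakeStreamer SHARED = new FakeStreamer();

    public static void main(String[] args) {
        SHARED.addData("k", "v");   // works while open
        SHARED.close();             // some task closes the shared instance...
        try {
            SHARED.addData("k2", "v2"); // ...and every other user now fails
            System.out.println("unexpected: add succeeded");
        } catch (IllegalStateException e) {
            System.out.println("closed streamer rejected data");
        }
    }
}
```

A per-sink (non-static) streamer instance avoids both the shared-close hazard and the per-classloader static-state surprise.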

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>Assignee: Roman Shtykh
>Priority: Blocker
>
> if I submit the Application to the Flink Cluster using Ignite flink sink I 
> get this error
>  
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
>   at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: 
> Cache name must not be null or empty.
>   at 
> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
>   at 
> org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext$Holder.(IgniteSink.java:183)
>   ... 27 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504586#comment-16504586
 ] 

Alexey Kuznetsov commented on IGNITE-8645:
--

[~guseinov] ok

> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-07 Thread Roman Guseinov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504582#comment-16504582
 ] 

Roman Guseinov commented on IGNITE-8645:


Hi [~Alexey Kuznetsov] ,

I don't mind. Could you please create an additional ticket for that?

> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8645:
-
Comment: was deleted

(was: [~guseinov] , [~kuaw26] , [~dpavlov] hi.
 I have nearly fixed cache metrics(now, calling 
_cache(CACHE_NAME).metrics().getCacheTxCommits()_ result in correct value).

But have problems with fixing Visor metrics.

I propose to create a separate ticket for fixing Visor metrics, and assign it 
to [~kuaw26] (not me)

Are you agree ?)

> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6010) ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper fails sometimes

2018-06-07 Thread Amelchev Nikita (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504572#comment-16504572
 ] 

Amelchev Nikita commented on IGNITE-6010:
-

The test became flaky after commit e05c012, where the timeout was decreased to 
20s. Sometimes the curators were closed on the session timeout (60 sec) and 
failed the test.

I prepared a PR to fix the test:
- I added an additional block to ensure that the clients were connected.
- Now the session closes as expected without timeouts, and the "wait for 
condition" block is redundant.
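The "ensure the clients are connected" step can be sketched with a plain latch (illustrative stdlib code, not the actual PR; Curator itself also exposes a blockUntilConnected() helper for this purpose): block on a connection event with a bounded timeout instead of polling a condition.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitConnectedSketch {
    /** Waits for a connection signal with a bounded timeout; returns false
     *  on timeout so the test can fail fast with a clear reason. */
    static boolean awaitConnected(CountDownLatch connected, long timeoutSec)
        throws InterruptedException {
        return connected.await(timeoutSec, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);

        // Simulated connection listener firing from another thread.
        new Thread(connected::countDown).start();

        System.out.println(awaitConnected(connected, 20) ? "connected" : "timed out");
    }
}
```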

> ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper fails sometimes
> ---
>
> Key: IGNITE-6010
> URL: https://issues.apache.org/jira/browse/IGNITE-6010
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Affects Versions: 2.1
>Reporter: Ilya Lantukh
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> {noformat}
> junit.framework.AssertionFailedError: null
> at junit.framework.Assert.fail(Assert.java:55)
> at junit.framework.Assert.assertTrue(Assert.java:22)
> at junit.framework.Assert.assertTrue(Assert.java:31)
> at junit.framework.TestCase.assertTrue(TestCase.java:201)
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.zk.ZookeeperIpFinderTest.testFourNodesKillRestartZookeeper(ZookeeperIpFinderTest.java:365)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8706) IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master

2018-06-07 Thread Alexey Goncharuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8706:
-
Fix Version/s: 2.6

> IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master
> ---
>
> Key: IGNITE-8706
> URL: https://issues.apache.org/jira/browse/IGNITE-8706
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> The test fails because FilePageStore decrements the pages metric after 
> allocated pages count is set to 0.
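The failure mode in the description (a decrement arriving after the counter was already reset to 0) can be reproduced with a bare counter; the names below are illustrative, not the FilePageStore API. A lower-bound guard is one way to keep the metric non-negative.

```java
import java.util.concurrent.atomic.AtomicLong;

public class PagesMetricSketch {
    static final AtomicLong allocatedPages = new AtomicLong();

    /** Buggy order of operations from the ticket: the counter is reset to 0
     *  on cache stop, then a late decrement drives it negative. */
    static long buggyShutdown() {
        allocatedPages.set(0);                    // metric reset
        return allocatedPages.decrementAndGet();  // late decrement -> -1
    }

    /** Guarded variant: never drop below zero. */
    static long guardedDecrement() {
        return allocatedPages.updateAndGet(v -> Math.max(0, v - 1));
    }

    public static void main(String[] args) {
        System.out.println(buggyShutdown());    // -1
        allocatedPages.set(0);
        System.out.println(guardedDecrement()); // 0
    }
}
```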



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8706) IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master

2018-06-07 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504565#comment-16504565
 ] 

Alexey Goncharuk commented on IGNITE-8706:
--

TC run: 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&branch_IgniteTests24Java8=ignite-8706

> IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master
> ---
>
> Key: IGNITE-8706
> URL: https://issues.apache.org/jira/browse/IGNITE-8706
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> The test fails because FilePageStore decrements the pages metric after 
> allocated pages count is set to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
{{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Potential Solutions*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on the reducer - this is the most promising and general 
technique, described in more detail below
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from 
the first phase]) // Get "outer join" part
{code}

*Reducer Deduplication*
The idea is to get all data locally and then perform final deduplication. This 
may incur high network overhead, because a lot of duplicated left parts would 
be transferred. However, this could be optimized greatly with the following 
techniques applied one after another:
# Semi-joins: {{left}} is joined on the mapper node, but instead of sending 
the {{(left, right)}} relation, we send {{(left) + (right)}}
# In case the {{left}} part is known to be idempotent (i.e. it produces the same 
result set on all nodes), only one node will send {{(left) + (right)}}, other 
nodes will send {{(right)}} only
# Merge the {{left}} results if needed (i.e. if the idempotence-related 
optimization was not applicable)
# Join the {{left}} and {{right}} parts on the reducer



  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
{{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Potential Solutions*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on the reducer - this is the most promising and general 
technique, described in more detail below
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from 
the first phase]) // Get "outer join" part
{code}

*Reducer Deduplication*
The idea is to get all data locally and then perform final deduplication. This 
may incur high network overhead, because a lot of duplicated left parts would 
be transferred. However, this could be optimized greatly with the following 
techniques applied one after another:
# Semi-joins: {{left}} is joined on the mapper node, but instead of sending 
the {{(left, right)}} relation, we send {{(left) + (right)}}
# In case the {{left}} part is known to be idempotent (i.e. it produces the same 
result set on all nodes), only one node will send {{(left) + (right)}}, other 
nodes will send {{(right)}} only
# Merge the {{left}} results if needed (i.e. if the idempotence-related 
optimization was not applicable)
# Join the {{left}} and {{right}} parts on the reducer




> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: sql-engine
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
> expression. Currently we perform this scan on every node and then simply 
> merge results on reducer. Two nodes, two scans of {{REPLICATED}} cach

[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
{{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Potential Solutions*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on the reducer - this is the most promising and general 
technique, described in more detail below
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from 
the first phase]) // Get "outer join" part
{code}

*Reducer Deduplication*
The idea is to get all data locally and then perform final deduplication. This 
may incur high network overhead, because a lot of duplicated left parts would 
be transferred. However, this could be optimized greatly with the following 
techniques applied one after another:
# Semi-joins: {{left}} is joined on the mapper node, but instead of sending 
the {{(left, right)}} relation, we send {{(left) + (right)}}
# In case the {{left}} part is known to be idempotent (i.e. it produces the same 
result set on all nodes), only one node will send {{(left) + (right)}}, other 
nodes will send {{(right)}} only
# Merge the {{left}} results if needed (i.e. if the idempotence-related 
optimization was not applicable)
# Join the {{left}} and {{right}} parts on the reducer



  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
{{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from 
the first phase]) // Get "outer join" part
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: sql-engine
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
> expression. Currently we perform this scan on every node and then simply 
> merge results on reducer. Two nodes, two scans of {{REPLICATED}} cache, 2x 
> results.
> *Potential Solutions*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on the reducer - this is the most promising and 
> general technique, described in more detail below
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
> // Get "inner join" part
> UNION
> UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids 
> from the first phase]) // Get "outer join" part
> {code}
> *Reducer Deduplication*
> The

[jira] [Commented] (IGNITE-8706) IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master

2018-06-07 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504563#comment-16504563
 ] 

Alexey Goncharuk commented on IGNITE-8706:
--

Branch is ignite-8706.

> IgnitePdsDataRegionMetricsTest#testMemoryUsageMultipleNodes fails in master
> ---
>
> Key: IGNITE-8706
> URL: https://issues.apache.org/jira/browse/IGNITE-8706
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> The test fails because FilePageStore decrements the pages metric after 
> allocated pages count is set to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8735) Metastorage creates its own index partition

2018-06-07 Thread Ivan Rakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Rakov updated IGNITE-8735:
---
Description: By design, all metastorage data should be stored in a single 
partition with index = 0. However, allocatePageNoReuse is not overridden in 
MetastorageTree, which causes allocation of extra pages for the tree in the index 
partition.  (was: By design, all metastorage data should be stored in single 
partition with index = 0. However, allocatePageNoReuse is not overriden in 
MetastorageTree, which cause allocation of extra pages for the tree in index 
partition.)

> Metastorage creates its own index partition
> ---
>
> Key: IGNITE-8735
> URL: https://issues.apache.org/jira/browse/IGNITE-8735
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ivan Rakov
>Priority: Major
> Fix For: 2.6
>
>
> By design, all metastorage data should be stored in a single partition with 
> index = 0. However, allocatePageNoReuse is not overridden in MetastorageTree, 
> which causes allocation of extra pages for the tree in the index partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8735) Metastorage creates its own index partition

2018-06-07 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-8735:
--

 Summary: Metastorage creates its own index partition
 Key: IGNITE-8735
 URL: https://issues.apache.org/jira/browse/IGNITE-8735
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Reporter: Ivan Rakov
 Fix For: 2.6


By design, all metastorage data should be stored in a single partition with index 
= 0. However, allocatePageNoReuse is not overridden in MetastorageTree, which 
causes allocation of extra pages for the tree in the index partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Labels: sql-engine  (was: )

> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: sql-engine
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
> expression. Currently we perform this scan on every node and then simply 
> merge the results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
> // Get "inner join" part
> UNION
> UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids 
> from the first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8734) Visor metrics don't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8734:
-
Description: 
The problem arises from the method VisorNodeDataCollectorJob#caches().
When we are collecting metrics in this method, the client node is filtered out by
VisorNodeDataCollectorJob#proxyCache().

A test reproducing the bug is attached.

  was:
Problem arizes from method VisorNodeDataCollectorJob#caches().
When we are collecting metrics in this method, client node is filtered out by
VisorNodeDataCollectorJob#proxyCache().


> Visor metrics don't include transactions started on client node
> ---
>
> Key: IGNITE-8734
> URL: https://issues.apache.org/jira/browse/IGNITE-8734
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The problem arises from the method VisorNodeDataCollectorJob#caches().
> When we are collecting metrics in this method, the client node is filtered out by
> VisorNodeDataCollectorJob#proxyCache().
> A test reproducing the bug is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8645:
-
Fix Version/s: 2.6

> CacheMetrics.getCacheTxCommits() doesn't include transactions started on 
> client node
> 
>
> Key: IGNITE-8645
> URL: https://issues.apache.org/jira/browse/IGNITE-8645
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Roman Guseinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The test is attached [^CacheTxCommitsMetricTest.java]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8734) Visor metrics don't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8734:
-
Affects Version/s: 2.5

> Visor metrics don't include transactions started on client node
> ---
>
> Key: IGNITE-8734
> URL: https://issues.apache.org/jira/browse/IGNITE-8734
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The problem arises from the method VisorNodeDataCollectorJob#caches().
> When we are collecting metrics in this method, the client node is filtered out by
> VisorNodeDataCollectorJob#proxyCache().
> A test reproducing the bug is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8734) Visor metrics don't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8734:
-
Attachment: CacheTxCommitsMetricTest.java

> Visor metrics don't include transactions started on client node
> ---
>
> Key: IGNITE-8734
> URL: https://issues.apache.org/jira/browse/IGNITE-8734
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Attachments: CacheTxCommitsMetricTest.java
>
>
> The problem arises from the method VisorNodeDataCollectorJob#caches().
> When we are collecting metrics in this method, the client node is filtered out by
> VisorNodeDataCollectorJob#proxyCache().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8734) Visor metrics don't include transactions started on client node

2018-06-07 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-8734:


 Summary: Visor metrics don't include transactions started on 
client node
 Key: IGNITE-8734
 URL: https://issues.apache.org/jira/browse/IGNITE-8734
 Project: Ignite
  Issue Type: Bug
Reporter: Alexey Kuznetsov
Assignee: Alexey Kuznetsov


The problem arises from the method VisorNodeDataCollectorJob#caches().
When we are collecting metrics in this method, the client node is filtered out by
VisorNodeDataCollectorJob#proxyCache().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8712) IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes in master.

2018-06-07 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504544#comment-16504544
 ] 

Pavel Pereslegin edited comment on IGNITE-8712 at 6/7/18 11:27 AM:
---

Data structures store information about their state (meta information) in the 
metacache (the cache key is the name of the data structure instance).
 When a data structure is accessed, a separate instance (*one per node*) is created 
and cached in the local datastructures Map.
 When the data structure's close() method is invoked, the item is removed from the 
metacache and a continuous query listener is used to notify all nodes to remove the 
associated local instance from the local datastructures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated the update event) ends earlier.
 The listener on a *non-affinity node* is notified *synchronously* only if the 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except for JCacheQuery). I updated the signature of 
executeInternalQuery() in CacheContinuousQueryManager to allow changing the value 
of this flag.


was (Author: xtern):
Data structures store information about their state (meta information) in the 
metacache (cache key is the name of datastructure instance).
 When accessing data structure, a separate instance (*one per node*) is created 
and cached in the local datastructures Map.
 When data structure method close() is invoked the item is removed from the 
metacache and continuous query listener uses to nofify all nodes in the cluster 
to remove associated local instance from local datastruсtures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated update event) ends earlier.
 The listener on the *non-affinity node* is notified *synchronously* only if 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except JCacheQuery). I updated signature of method 
executeInternalQuery() in CacheContinuousQueryManager to change the value of 
this flag.

> IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes 
> in master.
> 
>
> Key: IGNITE-8712
> URL: https://issues.apache.org/jira/browse/IGNITE-8712
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=5920780021361517364&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=%3Cdefault%3E&itemsCount=10
> Typical output:
> {noformat}
> junit.framework.AssertionFailedError: expected: <org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy> but 
> was: <org.apache.ignite.internal.processors.datastructures.GridCacheAtomicStampedImpl>
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueName(IgniteDataStructureUniqueNameTest.java:385)
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueNameMultithreaded(IgniteDataStructureUniqueNameTest.java:85)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8712) IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes in master.

2018-06-07 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504544#comment-16504544
 ] 

Pavel Pereslegin edited comment on IGNITE-8712 at 6/7/18 11:24 AM:
---

Data structures store information about their state (meta information) in the 
metacache (the cache key is the name of the data structure instance).
 When a data structure is accessed, a separate instance (*one per node*) is created 
and cached in the local datastructures Map.
 When the data structure's close() method is invoked, the item is removed from the 
metacache and a continuous query listener is used to notify all nodes in the cluster 
to remove the associated local instance from the local datastructures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated the update event) ends earlier.
 The listener on a *non-affinity node* is notified *synchronously* only if the 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except for JCacheQuery). I updated the signature of 
executeInternalQuery() in CacheContinuousQueryManager to allow changing the value 
of this flag.


was (Author: xtern):
Data structures store information about their state (meta information) in the 
metacache (cache-key is the name of the instance of the data structure).
 When accessing data structure, a separate instance (*one per node*) is created 
and cached in the local datastructures Map.
 When data structure method close() is invoked the item is removed from the 
metacache and continuous query listener uses to nofify all nodes in the cluster 
to remove associated local instance from local datastruсtures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated update event) ends earlier.
 The listener on the *non-affinity node* is notified *synchronously* only if 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except JCacheQuery). I updated signature of method 
executeInternalQuery() in CacheContinuousQueryManager to change the value of 
this flag.

> IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes 
> in master.
> 
>
> Key: IGNITE-8712
> URL: https://issues.apache.org/jira/browse/IGNITE-8712
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=5920780021361517364&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=%3Cdefault%3E&itemsCount=10
> Typical output:
> {noformat}
> junit.framework.AssertionFailedError: expected: <org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy> but 
> was: <org.apache.ignite.internal.processors.datastructures.GridCacheAtomicStampedImpl>
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueName(IgniteDataStructureUniqueNameTest.java:385)
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueNameMultithreaded(IgniteDataStructureUniqueNameTest.java:85)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8712) IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes in master.

2018-06-07 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504544#comment-16504544
 ] 

Pavel Pereslegin commented on IGNITE-8712:
--

Data structures store information about their state (meta information) in the 
metacache (the cache key is the name of the data structure instance).
 When a data structure is accessed, a separate instance (*one per node*) is created 
and cached in the local datastructures Map.
 When the data structure's close() method is invoked, the item is removed from the 
metacache and a continuous query listener is used to notify all nodes in the cluster 
to remove the associated local instance from the local datastructures Map.
 The problem occurs if the listener is notified asynchronously and the 
transaction (that initiated the update event) ends earlier.
 The listener on a *non-affinity node* is notified *synchronously* only if the 
{{sync}} flag of CacheContinuousQueryHandler is set to true, but for now this 
flag is always set to false (except for JCacheQuery). I updated the signature of 
executeInternalQuery() in CacheContinuousQueryManager to allow changing the value 
of this flag.
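A minimal model of that race in plain Java (my own simplification with hypothetical names, not Ignite code): the shared metacache entry disappears immediately, but the per-node local map is cleaned by a listener, so with asynchronous delivery a reader may still observe the stale local instance until the listener runs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncListenerRaceSketch {
    // Runs the close() sequence; returns whether the stale local instance
    // is still visible after waiting for the listener (it must not be).
    static boolean staleAfterListener() throws Exception {
        Map<String, String> metacache = new ConcurrentHashMap<>();       // shared meta info
        Map<String, String> localInstances = new ConcurrentHashMap<>();  // one map per node
        ExecutorService listenerPool = Executors.newSingleThreadExecutor();
        try {
            metacache.put("set1", "GridCacheSetProxy");
            localInstances.put("set1", "GridCacheSetProxy");

            metacache.remove("set1");  // close() removes the meta entry...
            CountDownLatch notified = new CountDownLatch(1);
            listenerPool.submit(() -> {           // ...and a listener cleans the
                localInstances.remove("set1");    // local map asynchronously.
                notified.countDown();
            });

            // Before notified.await(), the stale instance may still be visible:
            // that is exactly the race window when the sync flag is false.
            notified.await();  // sync == true behaves like waiting here
            return localInstances.containsKey("set1");
        } finally {
            listenerPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(staleAfterListener());  // false
    }
}
```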

> IgniteDataStructureUniqueNameTest#testUniqueNameMultithreaded fails sometimes 
> in master.
> 
>
> Key: IGNITE-8712
> URL: https://issues.apache.org/jira/browse/IGNITE-8712
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Minor
>  Labels: MakeTeamcityGreenAgain
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&buildTypeId=&tab=testDetails&testNameId=5920780021361517364&order=TEST_STATUS_DESC&branch_IgniteTests24Java8=%3Cdefault%3E&itemsCount=10
> Typical output:
> {noformat}
> junit.framework.AssertionFailedError: expected: <org.apache.ignite.internal.processors.datastructures.GridCacheSetProxy> but 
> was: <org.apache.ignite.internal.processors.datastructures.GridCacheAtomicStampedImpl>
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueName(IgniteDataStructureUniqueNameTest.java:385)
> at 
> org.apache.ignite.internal.processors.cache.datastructures.IgniteDataStructureUniqueNameTest.testUniqueNameMultithreaded(IgniteDataStructureUniqueNameTest.java:85)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8733) Add benchmarks for NodeJS thin client

2018-06-07 Thread Ilya Suntsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Suntsov updated IGNITE-8733:
-
Component/s: yardstick

> Add benchmarks for NodeJS thin client
> -
>
> Key: IGNITE-8733
> URL: https://issues.apache.org/jira/browse/IGNITE-8733
> Project: Ignite
>  Issue Type: Improvement
>  Components: yardstick
>Affects Versions: 2.5
>Reporter: Ilya Suntsov
>Priority: Major
>
> We have several benchmarks for the Java thin client 
> ([PR|https://github.com/apache/ignite/pull/3942]). The same set should be 
> implemented for NodeJS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8733) Add benchmarks for NodeJS thin client

2018-06-07 Thread Ilya Suntsov (JIRA)
Ilya Suntsov created IGNITE-8733:


 Summary: Add benchmarks for NodeJS thin client
 Key: IGNITE-8733
 URL: https://issues.apache.org/jira/browse/IGNITE-8733
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.5
Reporter: Ilya Suntsov


We have several benchmarks for the Java thin client 
([PR|https://github.com/apache/ignite/pull/3942]). The same set should be 
implemented for NodeJS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
The table below is copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column is new behavior for #getValue().

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Note that testFilteredListener and testCacheEntryListener also fail 
in the `afterTests` section because of IGNITE-8715, so the PR will not change the 
number of failed tests while IGNITE-8715 is unfixed. But it will change the failure 
reason – please see the log.

Please see the links on the JCache TCK and spec 1.1.0 changes.

  was:
A table is copied from speck changing discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column is new behavior for #getValue().

Сurrent master fails with this reason on following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

I have to say tests testFilteredListener and testCacheEntryListener also fail 
in `afterTests` section because of IGNITE-8715, so PR will not change the 
number of failed tests if ignite-8715 unfixed.

Please see link on JCache TCK and speck 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> The table below is copied from the spec change discussion:
> When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
> ||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
> Listener||
> |getValue()|_value_|_value_|_oldValue_|
> |getOldValue()|null|_oldValue_|_oldValue_|
> The last column is new behavior for #getValue().
> Current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> Note that testFilteredListener and testCacheEntryListener also fail 
> in the `afterTests` section because of IGNITE-8715, so the PR will not change the 
> number of failed tests while IGNITE-8715 is unfixed. But it will change the failure 
> reason – please see the log.
> Please see the links on the JCache TCK and spec 1.1.0 changes.
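The event-value semantics in the table above can be modeled in a few lines of plain Java (my own sketch of the spec 1.1 behavior, not the javax.cache API itself):

```java
public class EntryEventSemanticsSketch {
    enum EventType { CREATED, UPDATED, REMOVED, EXPIRED }

    static class Event {
        final EventType type;
        final String value;     // new value (CREATED/UPDATED)
        final String oldValue;  // previous value (null for CREATED)

        Event(EventType type, String value, String oldValue) {
            this.type = type;
            this.value = value;
            this.oldValue = oldValue;
        }

        // Removed/Expired listeners observe the old value through getValue().
        String getValue() {
            return type == EventType.REMOVED || type == EventType.EXPIRED ? oldValue : value;
        }

        // getOldValue() is null only for CREATED events.
        String getOldValue() {
            return type == EventType.CREATED ? null : oldValue;
        }
    }

    public static void main(String[] args) {
        Event created = new Event(EventType.CREATED, "v1", null);
        Event updated = new Event(EventType.UPDATED, "v2", "v1");
        Event removed = new Event(EventType.REMOVED, null, "v2");

        System.out.println(created.getValue() + " / " + created.getOldValue()); // v1 / null
        System.out.println(updated.getValue() + " / " + updated.getOldValue()); // v2 / v1
        System.out.println(removed.getValue() + " / " + removed.getOldValue()); // v2 / v2
    }
}
```

Each println corresponds to one row of the table for the respective listener type.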



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8599) Remove LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge from Direct IO suite

2018-06-07 Thread Dmitriy Pavlov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8599:
---
Labels: MakeTeamcityGreenAgain  (was: )

> Remove  LocalWalModeChangeDuringRebalancingSelfTest.testWithExchangesMerge 
> from Direct IO suite
> ---
>
> Key: IGNITE-8599
> URL: https://issues.apache.org/jira/browse/IGNITE-8599
> Project: Ignite
>  Issue Type: Test
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=6346758468206865681&branch=%3Cdefault%3E&tab=testDetails
> It fails only in the Direct IO suite.
> It is necessary to exclude it from the Direct IO suite because it creates a lot of load.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8203) Interrupting task can cause node fail with PersistenceStorageIOException.

2018-06-07 Thread Ivan Daschinskiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinskiy reassigned IGNITE-8203:


Assignee: (was: Ivan Daschinskiy)

> Interrupting task can cause node fail with PersistenceStorageIOException. 
> --
>
> Key: IGNITE-8203
> URL: https://issues.apache.org/jira/browse/IGNITE-8203
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.6
>
> Attachments: GridFailNodesOnCanceledTaskTest.java
>
>
> Interrupting a task performing simple cache operations (i.e. get, put) can cause 
> PersistenceStorageIOException. The main cause of this failure is the lack of 
> proper handling of InterruptedException in FilePageStore.init() etc. This causes 
> ClosedByInterruptException to be thrown by FileChannel.write() and so on. 
> PersistenceStorageIOException is a critical failure and typically makes a 
> node stop. As a workaround, I would suggest enabling AsyncFileIO by 
> default until a fix is available.
> A reproducer is attached.
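The JDK behavior at the heart of this, independent of Ignite: FileChannel is an InterruptibleChannel, so an I/O call made while the thread's interrupt status is set closes the channel and throws ClosedByInterruptException. A standalone demo:

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptedChannelDemo {
    // Attempts a FileChannel write with the interrupt flag already set.
    // Returns true iff ClosedByInterruptException was thrown.
    static boolean interruptedWriteFails(Path file) throws Exception {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            Thread.currentThread().interrupt();  // simulate a cancelled task
            try {
                ch.write(ByteBuffer.wrap(new byte[] {1, 2, 3}));
                return false;
            } catch (ClosedByInterruptException e) {
                // The channel is now closed; in Ignite this surfaces as a
                // critical PersistenceStorageIOException and stops the node.
                return true;
            } finally {
                Thread.interrupted();  // clear the flag so cleanup can proceed
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("demo", ".bin");
        try {
            System.out.println(interruptedWriteFails(file));  // true
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

AsyncFileIO avoids this because AsynchronousFileChannel does not close itself on thread interrupt, which is presumably why it was suggested as a workaround.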



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-8203) Interrupting task can cause node fail with PersistenceStorageIOException.

2018-06-07 Thread Ivan Daschinskiy (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504488#comment-16504488
 ] 

Ivan Daschinskiy edited comment on IGNITE-8203 at 6/7/18 10:02 AM:
---

I'm a bit stuck on fixing this issue. Currently, I have fixed the initialization 
of the page store and my reproducer passes. But, after a discussion with Alexey, I 
realized that this is not enough. Read and write are unfortunately still 
vulnerable to interruptions. Unfortunately, my reproducer doesn't cover these 
situations.


was (Author: ivandasch):
I'm a little bit stuck in fixing this issue. Currently, I fixed initialization 
and my reproducer passes. But, after discussion with Alexey, I realized that 
this is not enough. Read and write are unfortunatelly still vulnerable to this 
issue. 

> Interrupting task can cause node fail with PersistenceStorageIOException. 
> --
>
> Key: IGNITE-8203
> URL: https://issues.apache.org/jira/browse/IGNITE-8203
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.6
>
> Attachments: GridFailNodesOnCanceledTaskTest.java
>
>
> Interrupting a task performing simple cache operations (i.e. get, put) can cause 
> PersistenceStorageIOException. The main cause of this failure is the lack of 
> proper handling of InterruptedException in FilePageStore.init() etc. This causes 
> ClosedByInterruptException to be thrown by FileChannel.write() and so on. 
> PersistenceStorageIOException is a critical failure and typically makes a 
> node stop. As a workaround, I would suggest enabling AsyncFileIO by 
> default until a fix is available.
> A reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8203) Interrupting task can cause node fail with PersistenceStorageIOException.

2018-06-07 Thread Ivan Daschinskiy (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504488#comment-16504488
 ] 

Ivan Daschinskiy commented on IGNITE-8203:
--

I'm a bit stuck on fixing this issue. Currently, I have fixed the initialization 
and my reproducer passes. But, after a discussion with Alexey, I realized that 
this is not enough. Read and write are unfortunately still vulnerable to this 
issue.

> Interrupting task can cause node fail with PersistenceStorageIOException. 
> --
>
> Key: IGNITE-8203
> URL: https://issues.apache.org/jira/browse/IGNITE-8203
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.6
>
> Attachments: GridFailNodesOnCanceledTaskTest.java
>
>
> Interrupting a task performing simple cache operations (i.e. get, put) can cause 
> PersistenceStorageIOException. The main cause of this failure is the lack of 
> proper handling of InterruptedException in FilePageStore.init() etc. This causes 
> ClosedByInterruptException to be thrown by FileChannel.write() and so on. 
> PersistenceStorageIOException is a critical failure and typically makes a 
> node stop. As a workaround, I would suggest enabling AsyncFileIO by 
> default until a fix is available.
> A reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
The table below is copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column is new behavior for #getValue().

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Note that testFilteredListener and testCacheEntryListener also fail 
in the `afterTests` section because of IGNITE-8715, so the PR will not change the 
number of failed tests while IGNITE-8715 is unfixed.

Please see the links on the JCache TCK and spec 1.1.0 changes.

  was:
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

And current master fails with this reason on following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Copied from speck changing discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column is new behavior for #getValue().

I have to say tests testFilteredListener and testCacheEntryListener also fail 
in `afterTests` section because of IGNITE-8715, so PR will not change the 
number of failed tests if ignite-8715 unfixed.

Please see link on JCache TCK and speck 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> The table below is copied from the spec change discussion:
> When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
> ||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
> Listener||
> |getValue()|_value_|_value_|_oldValue_|
> |getOldValue()|null|_oldValue_|_oldValue_|
> The last column is new behavior for #getValue().
> Current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> Note that testFilteredListener and testCacheEntryListener also fail 
> in the `afterTests` section because of IGNITE-8715, so the PR will not change the 
> number of failed tests while IGNITE-8715 is unfixed.
> Please see the links on the JCache TCK and spec 1.1.0 changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
 

 

JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

And current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column is new behavior for #getValue().

Note that testFilteredListener and testCacheEntryListener also fail 
in the `afterTests` section because of 
[IGNITE-8715|https://issues.apache.org/jira/browse/IGNITE-8715], so the PR will not 
change the number of failed tests while IGNITE-8715 is unfixed.

Please see the links on the JCache TCK and spec 1.1.0 changes.

  was:
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

And current master fails with this reason on following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

 

Copied from speck changing discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

 

Please see link on JCache TCK 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
>  
>  
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.
> And current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> Copied from the spec change discussion:
> When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
> ||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
> Listener||
> |getValue()|_value_|_value_|_oldValue_|
> |getOldValue()|null|_oldValue_|_oldValue_|
> The last column is new behavior for #getValue().
> Note that testFilteredListener and testCacheEntryListener also fail 
> in the `afterTests` section because of 
> [IGNITE-8715|https://issues.apache.org/jira/browse/IGNITE-8715], so the PR will 
> not change the number of failed tests while IGNITE-8715 is unfixed.
> Please see link on JCache TCK and speck 1.1.0 changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column shows the new behavior of #getValue().

Note that testFilteredListener and testCacheEntryListener also fail in the 
`afterTests` section because of IGNITE-8715, so the PR will not change the 
number of failing tests while IGNITE-8715 remains unfixed.

Please see the linked JCache TCK and spec 1.1.0 changes.

  was:
 

 

JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

Copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

The last column shows the new behavior of #getValue().

Note that testFilteredListener and testCacheEntryListener also fail in the 
`afterTests` section because of 
[IGNITE-8715|https://issues.apache.org/jira/browse/IGNITE-8715], so the PR will 
not change the number of failing tests while IGNITE-8715 remains unfixed.

Please see the linked JCache TCK and spec 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.
> Current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> Copied from the spec change discussion:
> When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
> ||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
> Listener||
> |getValue()|_value_|_value_|_oldValue_|
> |getOldValue()|null|_oldValue_|_oldValue_|
> The last column shows the new behavior of #getValue().
> Note that testFilteredListener and testCacheEntryListener also fail in the 
> `afterTests` section because of IGNITE-8715, so the PR will not change the 
> number of failing tests while IGNITE-8715 remains unfixed.
> Please see the linked JCache TCK and spec 1.1.0 changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8186) SQL: Create test base to cover sql by features with flexible configuration

2018-06-07 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504480#comment-16504480
 ] 

Vladimir Ozerov commented on IGNITE-8186:
-

[~pkouznet], merged to master. Thank you.

> SQL: Create test base to cover sql by features with flexible configuration
> --
>
> Key: IGNITE-8186
> URL: https://issues.apache.org/jira/browse/IGNITE-8186
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Pavel Kuznetsov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.6
>
>
> We need to cover SQL feature by feature.
> We need to be able to run the same test cases with different configurations.
> Configurations currently in scope:
> 1) In-memory/persistence
> 2) Distributed joins: on/off 
> 3) Cache mode: PARTITIONED/REPLICATED
> Features in scope:
> 1) Simple SELECT
> 2) JOIN (distributed and local)
> 3) GROUP BY
> Data model:
> Employee (1000)
> Department (50-100)
> Whether distributed joins are enabled affects the affinity key of the data model.
> Test cluster should contain 1 client and 2 server nodes.
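The configuration matrix above is a small cartesian product. A sketch of enumerating it (the enum and method names are illustrative, not the actual test API):

```java
import java.util.ArrayList;
import java.util.List;

public class SqlTestMatrix {
    enum Storage { IN_MEMORY, PERSISTENCE }
    enum DistributedJoins { ON, OFF }
    enum CacheMode { PARTITIONED, REPLICATED }

    /** Enumerates every configuration the same test cases should run under. */
    static List<String> configurations() {
        List<String> cfgs = new ArrayList<>();
        for (Storage s : Storage.values())
            for (DistributedJoins dj : DistributedJoins.values())
                for (CacheMode cm : CacheMode.values())
                    cfgs.add(s + "/" + dj + "/" + cm);
        return cfgs;
    }

    public static void main(String[] args) {
        // 2 * 2 * 2 = 8 configurations per feature (SELECT, JOIN, GROUP BY).
        System.out.println(configurations().size());
    }
}
```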



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8186) SQL: Create test base to cover sql by features with flexible configuration

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8186:

Fix Version/s: 2.6

> SQL: Create test base to cover sql by features with flexible configuration
> --
>
> Key: IGNITE-8186
> URL: https://issues.apache.org/jira/browse/IGNITE-8186
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Pavel Kuznetsov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.6
>
>
> We need to cover sql feature by feature.
> We need to be able to run the same test cases with different configurations.
> Configurations currently in scope:
> 1) In-memory/persistence
> 2) Distributed joins: on/off 
> 3) Cache mode: PARTITIONED/REPLICATED
> Features in scope:
> 1) Simple SELECT
> 2) JOIN (distributed and local)
> 3) GROUP BY
> Data model:
> Employee (1000)
> Department (50-100)
> Whether distributed joins are enabled affects the affinity key of the data model.
> Test cluster should contain 1 client and 2 server nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

 

Copied from the spec change discussion:

When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
Listener||
|getValue()|_value_|_value_|_oldValue_|
|getOldValue()|null|_oldValue_|_oldValue_|

 

Please see the linked JCache TCK 1.1.0 changes.

  was:
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

 

Coped from spek changing discution:
||Heading 1||Heading 2||
|Col A1|Col A2|

 

Please see link on JCache TCK 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.
> Current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
>  
> Copied from the spec change discussion:
> When CacheEntryListenerConfiguration#isOldValueRequired is *true*:
> ||CacheEntryEvent Method||CreatedListener||UpdatedListener||Removed/Expired 
> Listener||
> |getValue()|_value_|_value_|_oldValue_|
> |getOldValue()|null|_oldValue_|_oldValue_|
>  
> Please see the linked JCache TCK 1.1.0 changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}
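To sanity-check option 3, the two-phase rewrite can be modeled on plain Java collections: the result of phase one ("inner join") unioned with the null-padded left rows whose ids did not match must equal left-join semantics, with every left row appearing exactly once. A sketch over hypothetical data (table shapes and names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LeftJoinRewrite {
    /**
     * Models the two-phase plan: LEFT JOIN = (INNER JOIN part) UNION
     * (left rows whose id is NOT IN the ids matched in phase one, null-padded).
     */
    static List<String> leftJoin(Map<Integer, String> left, Map<Integer, String> right) {
        List<String> rows = new ArrayList<>();
        Set<Integer> matched = new HashSet<>();
        // Phase 1: "inner join" part.
        for (Map.Entry<Integer, String> e : left.entrySet()) {
            String r = right.get(e.getKey());
            if (r != null) {
                rows.add(e.getValue() + "," + r);
                matched.add(e.getKey());
            }
        }
        // Phase 2: "outer join" part -- unmatched left rows with NULL right columns.
        for (Map.Entry<Integer, String> e : left.entrySet())
            if (!matched.contains(e.getKey()))
                rows.add(e.getValue() + ",null");
        Collections.sort(rows); // deterministic order for comparison
        return rows;
    }

    public static void main(String[] args) {
        Map<Integer, String> left = Map.of(1, "a", 2, "b", 3, "c");
        Map<Integer, String> right = Map.of(1, "x", 3, "y");
        // Every left row appears exactly once -- no 2x duplication.
        System.out.println(leftJoin(left, right));
    }
}
```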


  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
   // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the 
> left expression. Currently we perform this scan on every node and then simply 
> merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
> // Get "inner join" part
> UNION
> SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
> first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; // Get "inner 
join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}


  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; // Get common 
part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase])
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the 
> left expression. Currently we perform this scan on every node and then simply 
> merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; // Get 
> "inner join" part
> UNION
> SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
> first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
 // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}


  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; // Get "inner 
join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the 
> left expression. Currently we perform this scan on every node and then simply 
> merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
>// Get "inner join" part
> UNION
> SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
> first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from 
the first phase]) // Get "outer join" part
{code}


  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
  // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the 
> left expression. Currently we perform this scan on every node and then simply 
> merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
> // Get "inner join" part
> UNION
> UNICAST SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids 
> from the first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8732:

Description: 
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
   // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}


  was:
*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;
 // Get "inner join" part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase]) // Get "outer join" part
{code}



> SQL: REPLICATED cache cannot be left-joined to PARTITIONED
> --
>
> Key: IGNITE-8732
> URL: https://issues.apache.org/jira/browse/IGNITE-8732
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.5
>Reporter: Vladimir Ozerov
>Priority: Major
>
> *Steps to reproduce*
> # Run 
> {{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
> # Observe that we have 2x results on 2-node cluster
> *Root Cause*
> The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the 
> left expression. Currently we perform this scan on every node and then simply 
> merge results on the reducer. Two nodes, two scans of the {{REPLICATED}} 
> cache, 2x results.
> *Solution*
> We may consider several solutions. Deeper analysis is required to understand 
> which is the right one.
> # Perform deduplication on reducer
> # Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
> pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
> than {{PARTITIONED}}? We cannot rely on primary/backup in this case
> # Implement additional execution phase as follows: 
> {code}
> SELECT left.cols, right.cols FROM left INNER JOIN right ON cond;  
>  // Get "inner join" part
> UNION
> SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
> first phase]) // Get "outer join" part
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8732) SQL: REPLICATED cache cannot be left-joined to PARTITIONED

2018-06-07 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-8732:
---

 Summary: SQL: REPLICATED cache cannot be left-joined to PARTITIONED
 Key: IGNITE-8732
 URL: https://issues.apache.org/jira/browse/IGNITE-8732
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.5
Reporter: Vladimir Ozerov


*Steps to reproduce*
# Run 
{{org.apache.ignite.sqltests.ReplicatedSqlTest#testLeftJoinReplicatedPartitioned}}
# Observe that we have 2x results on 2-node cluster

*Root Cause*
The {{left LEFT JOIN right ON cond}} operation assumes a full scan of the left 
expression. Currently we perform this scan on every node and then simply merge 
results on the reducer. Two nodes, two scans of the {{REPLICATED}} cache, 2x results.

*Solution*
We may consider several solutions. Deeper analysis is required to understand 
which is the right one.

# Perform deduplication on reducer
# Treat {{REPLICATED}} cache as {{PARTITIONED}}. Essentially, we just need to 
pass proper backup filter. But what if {{REPLICATED}} cache spans more nodes 
than {{PARTITIONED}}? We cannot rely on primary/backup in this case
# Implement additional execution phase as follows: 
{code}
SELECT left.cols, right.cols FROM left INNER JOIN right ON cond; // Get common 
part
UNION
SELECT left.cols, [NULL].cols FROM left WHERE left.id NOT IN ([ids from the 
first phase])
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov updated IGNITE-8714:

Description: 
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

 

Coped from spek changing discution:
||Heading 1||Heading 2||
|Col A1|Col A2|

 

Please see the linked JCache TCK 1.1.0 changes.

  was:
JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.

Current master fails for this reason on the following tests:
 * CacheListenerTest.testFilteredListener
 * CacheListenerTest.testCacheEntryListener

In some cases, the old value is not available when it should be.

Looks like a bug.

Please see the linked JCache TCK 1.1.0 changes.


> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior.
> Current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
>  
> Coped from spek changing discution:
> ||Heading 1||Heading 2||
> |Col A1|Col A2|
>  
> Please see the linked JCache TCK 1.1.0 changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504468#comment-16504468
 ] 

ASF GitHub Bot commented on IGNITE-8714:


GitHub user SharplEr opened a pull request:

https://github.com/apache/ignite/pull/4146

Ignite-8714

For [Ignite-8714](https://issues.apache.org/jira/browse/IGNITE-8714)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SharplEr/ignite ignite-8714

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4146


commit ef809b3f0dc6089c5cbd6a108d980959ab8c5c56
Author: Alexander Menshikov 
Date:   2018-06-04T09:11:31Z

ignite-8687 Add JCache TCK 1.1.0 to TC

commit 92c2a9621d497cbc6069716fae7d7a334d3ca211
Author: Alexander Menshikov 
Date:   2018-06-07T09:37:51Z

ignite-8714 CacheEntryEvent.getOldValue should be available




> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior,
> and the current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> In some cases the old value is not available when it should be.
> This looks like a bug.
> Please see the link to the JCache TCK 1.1.0 changes.





[jira] [Assigned] (IGNITE-8714) CacheEntryEvent.getOldValue should be available

2018-06-07 Thread Alexander Menshikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Menshikov reassigned IGNITE-8714:
---

Assignee: Alexander Menshikov

> CacheEntryEvent.getOldValue should be available
> ---
>
> Key: IGNITE-8714
> URL: https://issues.apache.org/jira/browse/IGNITE-8714
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>
> JCache TCK 1.1.0 now tests CacheEntryEvent#getOldValue() behavior,
> and the current master fails for this reason on the following tests:
>  * CacheListenerTest.testFilteredListener
>  * CacheListenerTest.testCacheEntryListener
> In some cases the old value is not available when it should be.
> This looks like a bug.
> Please see the link to the JCache TCK 1.1.0 changes.





[jira] [Commented] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-07 Thread Andrew Mashenkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504460#comment-16504460
 ] 

Andrew Mashenkov commented on IGNITE-8503:
--

I've restarted the 3 suspicious test suites; the others look good.

> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck_issues
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to an IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure with 
> the reason "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> It restarts the grid and checks that no entries are present in the grid.
> But with high probability one of the 7000 entries that should expire is 
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]
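What the failing test asserts can be sketched with a self-contained toy model. All names here are illustrative, not Ignite code: the point is that an entry whose TTL elapsed while the grid was down must be treated as absent after restart, not resurrected.

```java
import java.util.HashMap;
import java.util.Map;

// A toy model (NOT Ignite code) of what testTtlIsAppliedAfterRestart asserts:
// entries whose TTL elapsed while the grid was down must not be served after
// restart.
public class TtlModel {

    /** Persisted entry: value plus its absolute expiry time. */
    static class Entry {
        final int value;
        final long expireTimeMillis;

        Entry(int value, long expireTimeMillis) {
            this.value = value;
            this.expireTimeMillis = expireTimeMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    void put(String key, int value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry(value, nowMillis + ttlMillis));
    }

    /** A correct get treats an expired entry as absent, even after "restart". */
    Integer get(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null || nowMillis >= e.expireTimeMillis)
            return null; // expired: must not be resurrected
        return e.value;
    }

    public static void main(String[] args) {
        TtlModel grid = new TtlModel();
        grid.put("k", 42, 1000, 0);              // TTL = 1 s, written at t = 0
        // ...simulated shutdown and restart; clock is now t = 5000...
        System.out.println(grid.get("k", 5000)); // prints "null": entry expired
    }
}
```

The reported bug is precisely a violation of this check: a wrongly initialized startVersion lets one of the expired entries come back through a cache get.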





[jira] [Updated] (IGNITE-8724) Skip logging 3-rd parameter while calling U.warn with initialized logger.

2018-06-07 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-8724:
---
Fix Version/s: 2.6

> Skip logging 3-rd parameter while calling U.warn with initialized logger.
> -
>
> Key: IGNITE-8724
> URL: https://issues.apache.org/jira/browse/IGNITE-8724
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.6
>
>
> There are a lot of places where an exception needs to be logged, for example:
> {code:java}
> U.warn(log,"Unable to await partitions release future", e);
> {code}
> but the current U.warn implementation silently swallows it.
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Object shortMsg) {
> assert longMsg != null;
> assert shortMsg != null;
> if (log != null)
> log.warning(compact(longMsg.toString()));
> else
> X.println("[" + SHORT_DATE_FMT.format(new java.util.Date()) + "] 
> (wrn) " +
> compact(shortMsg.toString()));
> }
> {code}
> The fix looks like simply adding an overload:
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Throwable ex) {
> {code}
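The proposed overload can be sketched in a self-contained way. SimpleLogger and the String return value are illustrative, not the actual IgniteUtils signature (the real method takes an IgniteLogger and returns void): the point is that the Throwable's stack trace is appended to the logged message instead of being discarded.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Self-contained sketch of the proposed Throwable-aware warn() overload.
// SimpleLogger and the String return value are illustrative; they are not
// the actual Ignite API.
public class WarnSketch {

    /** Minimal stand-in for IgniteLogger. */
    interface SimpleLogger {
        void warning(String msg);
    }

    /** Proposed overload: the exception is logged, not silently swallowed. */
    public static String warn(SimpleLogger log, Object longMsg, Throwable ex) {
        StringWriter sw = new StringWriter();
        ex.printStackTrace(new PrintWriter(sw, true));
        String full = longMsg + System.lineSeparator() + sw;
        if (log != null)
            log.warning(full);
        else
            System.err.println(full);
        return full; // returned only so the sketch is easy to check
    }

    public static void main(String[] args) {
        StringBuilder captured = new StringBuilder();
        String out = warn(captured::append,
            "Unable to await partitions release future",
            new RuntimeException("boom"));
        System.out.println(out.contains("boom")); // prints "true"
    }
}
```

With such an overload, call sites like `U.warn(log, "Unable to await partitions release future", e)` would resolve to the Throwable variant and the exception would reach the log.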





[jira] [Commented] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-07 Thread Andrew Mashenkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504457#comment-16504457
 ] 

Andrew Mashenkov commented on IGNITE-8503:
--

I've corrected the issue title, as it is actually a more general issue.

> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck_issues
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to an IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure with 
> the reason "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> It restarts the grid and checks that no entries are present in the grid.
> But with high probability one of the 7000 entries that should expire is 
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]





[jira] [Updated] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-07 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8503:
-
Description: 
GridCacheMapEntry initializa startVersion in wrong way.

This leads to IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure and 
reason is "Entry which should be expired by TTL policy is available after grid 
restart."

 

Test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
development.

This test restarts grid and checks all entries are not present in grid.

But with high possiblity one from 7000 entries to be expired is resurrected 
instead and returned by cache get.
{noformat}
After timeout {{
>>> 
>>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>  Cache size: 0
>>>  Cache partition topology stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
>>> 
>>> Cache event manager memory stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
>>> stats=N/A]
>>>
>>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   threadsSize: 0
>>>   futsSize: 0
>>>
>>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   pendingEntriesSize: 0
}} After timeout
{noformat}
[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]

  was:
GridCacheMapEntry initializa startVersion in wrong way.

This leads to 

 

Test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
development.

This test restarts grid and checks all entries are not present in grid.

But with high possiblity one from 7000 entries to be expired is resurrected 
instead and returned by cache get.
{noformat}
After timeout {{
>>> 
>>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>  Cache size: 0
>>>  Cache partition topology stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
>>> 
>>> Cache event manager memory stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
>>> stats=N/A]
>>>
>>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   threadsSize: 0
>>>   futsSize: 0
>>>
>>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   pendingEntriesSize: 0
}} After timeout
{noformat}
[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]


> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck_issues
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to an IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure with 
> the reason "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> It restarts the grid and checks that no entries are present in the grid.
> But with high probability one of the 7000 entries that should expire is 
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]





[jira] [Updated] (IGNITE-8503) Fix wrong GridCacheMapEntry startVersion initialization.

2018-06-07 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-8503:
-
Description: 
GridCacheMapEntry initialize startVersion in wrong way.
This leads to IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure and 
reason is "Entry which should be expired by TTL policy is available after grid 
restart."

 

Test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
development.

This test restarts grid and checks all entries are not present in grid.

But with high possiblity one from 7000 entries to be expired is resurrected 
instead and returned by cache get.
{noformat}
After timeout {{
>>> 
>>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>  Cache size: 0
>>>  Cache partition topology stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
>>> 
>>> Cache event manager memory stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
>>> stats=N/A]
>>>
>>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   threadsSize: 0
>>>   futsSize: 0
>>>
>>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   pendingEntriesSize: 0
}} After timeout
{noformat}
[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]

  was:
GridCacheMapEntry initializa startVersion in wrong way.

This leads to IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure and 
reason is "Entry which should be expired by TTL policy is available after grid 
restart."

 

Test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
development.

This test restarts grid and checks all entries are not present in grid.

But with high possiblity one from 7000 entries to be expired is resurrected 
instead and returned by cache get.
{noformat}
After timeout {{
>>> 
>>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>  Cache size: 0
>>>  Cache partition topology stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
>>> 
>>> Cache event manager memory stats 
>>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
>>> stats=N/A]
>>>
>>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   threadsSize: 0
>>>   futsSize: 0
>>>
>>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
>>> cache=expirableCache]
>>>   pendingEntriesSize: 0
}} After timeout
{noformat}
[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]


> Fix wrong GridCacheMapEntry startVersion initialization.
> 
>
> Key: IGNITE-8503
> URL: https://issues.apache.org/jira/browse/IGNITE-8503
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, persistence
>Reporter: Dmitriy Pavlov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, tck_issues
>
> GridCacheMapEntry initializes startVersion in the wrong way.
> This leads to an IgnitePdsWithTtlTest.testTtlIsAppliedAfterRestart failure with 
> the reason "Entry which should be expired by TTL policy is available after 
> grid restart."
>  
> The test was added during https://issues.apache.org/jira/browse/IGNITE-5874 
> development.
> It restarts the grid and checks that no entries are present in the grid.
> But with high probability one of the 7000 entries that should expire is 
> resurrected instead and returned by a cache get.
> {noformat}
> After timeout {{
> >>> 
> >>> Cache memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>  Cache size: 0
> >>>  Cache partition topology stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, grp=group1]
> >>> 
> >>> Cache event manager memory stats 
> >>> [igniteInstanceName=db.IgnitePdsWithTtlTest0, cache=expirableCache, 
> >>> stats=N/A]
> >>>
> >>> Query manager memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   threadsSize: 0
> >>>   futsSize: 0
> >>>
> >>> TTL processor memory stats [igniteInstanceName=db.IgnitePdsWithTtlTest0, 
> >>> cache=expirableCache]
> >>>   pendingEntriesSize: 0
> }} After timeout
> {noformat}
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=5798755758125626876&tab=testDetails&branch_IgniteTests24Java8=%3Cdefault%3E]





[jira] [Commented] (IGNITE-8724) Skip logging 3-rd parameter while calling U.warn with initialized logger.

2018-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504454#comment-16504454
 ] 

ASF GitHub Bot commented on IGNITE-8724:


GitHub user zstan opened a pull request:

https://github.com/apache/ignite/pull/4145

IGNITE-8724 U.warn mislead implementation fix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8724

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4145.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4145


commit 1eacc9477ba05cc5b7908718951bbb1f0999b877
Author: Evgeny Stanilovskiy 
Date:   2018-06-07T09:27:31Z

IGNITE-8724 U.warn mislead implementation fix




> Skip logging 3-rd parameter while calling U.warn with initialized logger.
> -
>
> Key: IGNITE-8724
> URL: https://issues.apache.org/jira/browse/IGNITE-8724
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
>
> There are a lot of places where an exception needs to be logged, for example:
> {code:java}
> U.warn(log,"Unable to await partitions release future", e);
> {code}
> but the current U.warn implementation silently swallows it.
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Object shortMsg) {
> assert longMsg != null;
> assert shortMsg != null;
> if (log != null)
> log.warning(compact(longMsg.toString()));
> else
> X.println("[" + SHORT_DATE_FMT.format(new java.util.Date()) + "] 
> (wrn) " +
> compact(shortMsg.toString()));
> }
> {code}
> The fix looks like simply adding an overload:
> {code:java}
> public static void warn(@Nullable IgniteLogger log, Object longMsg, 
> Throwable ex) {
> {code}




