[jira] [Assigned] (IGNITE-8126) Web console: the method 'LoadCaches' should not be generated if cache doesn't contain cache store configuration

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8126:
--

Assignee: Alexey Kuznetsov  (was: Alexey Kuznetsov)

> Web console: the method 'LoadCaches' should not be generated if cache doesn't 
> contain cache store configuration
> ---
>
> Key: IGNITE-8126
> URL: https://issues.apache.org/jira/browse/IGNITE-8126
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
>
> # create cluster, save
> # create cache, save
> # see project structure
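The fix this issue asks for can be sketched as a guard in the code generator. The class and method names below are hypothetical stand-ins (the real Web Console generator inspects the cache configuration model); the point is only that a loadCache call is emitted solely for caches that actually have a store configured.

```java
import java.util.ArrayList;
import java.util.List;

public class LoadCachesGuard {
    /** Hypothetical stand-in for a Web Console cache model. */
    public static class CacheModel {
        public final String name;
        public final boolean hasStoreFactory;

        public CacheModel(String name, boolean hasStoreFactory) {
            this.name = name;
            this.hasStoreFactory = hasStoreFactory;
        }
    }

    /** Emit a loadCache call only for caches with a cache store configuration. */
    public static List<String> generateLoadCaches(List<CacheModel> caches) {
        List<String> out = new ArrayList<>();

        for (CacheModel c : caches)
            if (c.hasStoreFactory) // the guard this issue asks for
                out.add("ignite.cache(\"" + c.name + "\").loadCache(null);");

        return out;
    }

    public static void main(String[] args) {
        List<CacheModel> caches = new ArrayList<>();
        caches.add(new CacheModel("withStore", true));
        caches.add(new CacheModel("noStore", false));

        // Only the cache with a store factory produces a loadCache line.
        System.out.println(generateLoadCaches(caches));
    }
}
```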



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8126) Web console: the method 'LoadCaches' should not be generated if cache doesn't contain cache store configuration

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8126:
--

Assignee: Alexey Kuznetsov  (was: Pavel Konstantinov)

> Web console: the method 'LoadCaches' should not be generated if cache doesn't 
> contain cache store configuration
> ---
>
> Key: IGNITE-8126
> URL: https://issues.apache.org/jira/browse/IGNITE-8126
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
>
> # create cluster, save
> # create cache, save
> # see project structure





[jira] [Commented] (IGNITE-8126) Web console: the method 'LoadCaches' should not be generated if cache doesn't contain cache store configuration

2018-04-09 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431663#comment-16431663
 ] 

Pavel Konstantinov commented on IGNITE-8126:


Tested

> Web console: the method 'LoadCaches' should not be generated if cache doesn't 
> contain cache store configuration
> ---
>
> Key: IGNITE-8126
> URL: https://issues.apache.org/jira/browse/IGNITE-8126
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Major
>
> # create cluster, save
> # create cache, save
> # see project structure





[jira] [Updated] (IGNITE-8200) Web console: unexpected confirmation

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8200:
---
Fix Version/s: 2.6

> Web console: unexpected confirmation
> 
>
> Key: IGNITE-8200
> URL: https://issues.apache.org/jira/browse/IGNITE-8200
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Major
> Fix For: 2.6
>
>
> # initial state - there are no clusters
> # create a new one, do not save
> # open Advanced screen, save
> # import from DB, save
> # open Caches screen - unexpected confirmation about unsaved changes appears
> # click Cancel, save, open Caches again - unexpected confirmation again
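The symptom (the confirmation still appearing after a save) is consistent with a dirty flag that is not reset when the form is saved. A minimal illustrative sketch of such tracking, purely hypothetical and not the actual Web Console code:

```java
// Illustrative only: a form that tracks unsaved changes by comparing the
// current value against the last-saved value.
public class FormState {
    private String saved = "";
    private String current = "";

    public void edit(String value) {
        current = value;
    }

    /** Resetting the saved snapshot here is the step whose absence would match the reported symptom. */
    public void save() {
        saved = current;
    }

    /** True when leaving the screen should trigger the unsaved-changes confirmation. */
    public boolean isDirty() {
        return !current.equals(saved);
    }
}
```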





[jira] [Created] (IGNITE-8200) Web console: unexpected confirmation

2018-04-09 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-8200:
--

 Summary: Web console: unexpected confirmation
 Key: IGNITE-8200
 URL: https://issues.apache.org/jira/browse/IGNITE-8200
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov


# initial state - there are no clusters
# create a new one, do not save
# open Advanced screen, save
# import from DB, save
# open Caches screen - unexpected confirmation about unsaved changes appears
# click Cancel, save, open Caches again - unexpected confirmation again





[jira] [Assigned] (IGNITE-8200) Web console: unexpected confirmation

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8200:
--

Assignee: Ilya Borisov

> Web console: unexpected confirmation
> 
>
> Key: IGNITE-8200
> URL: https://issues.apache.org/jira/browse/IGNITE-8200
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Major
>
> # initial state - there are no clusters
> # create a new one, do not save
> # open Advanced screen, save
> # import from DB, save
> # open Caches screen - unexpected confirmation about unsaved changes appears
> # click Cancel, save, open Caches again - unexpected confirmation again





[jira] [Updated] (IGNITE-8199) Web console: Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Component/s: wizards

> Web console: Make the Confirmation dialog a more clear
> --
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Minor
> Fix For: 2.6
>
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I suggest changing the text 'Click here to see changes' to make it 
> more obvious.





[jira] [Updated] (IGNITE-8199) Web console: Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Fix Version/s: 2.6

> Web console: Make the Confirmation dialog a more clear
> --
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Minor
> Fix For: 2.6
>
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I suggest changing the text 'Click here to see changes' to make it 
> more obvious.





[jira] [Assigned] (IGNITE-8199) Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8199:
--

Assignee: Ilya Borisov

> Make the Confirmation dialog a more clear
> -
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I suggest changing the text 'Click here to see changes' to make it 
> more obvious.





[jira] [Updated] (IGNITE-8199) Web console: Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Summary: Web console: Make the Confirmation dialog a more clear  (was: Make 
the Confirmation dialog a more clear)

> Web console: Make the Confirmation dialog a more clear
> --
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Konstantinov
>Assignee: Ilya Borisov
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I suggest changing the text 'Click here to see changes' to make it 
> more obvious.





[jira] [Updated] (IGNITE-8199) Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Description: 
In case of unsaved changes, we show the following confirmation
 !screenshot-1.png! 
It is unclear what I have to do to see the changes.
I suggest changing the text 'Click here to see changes' to make it 
more obvious.

  was:
In case of unsaved changes, we show the following confirmation
 !screenshot-1.png! 
It is unclear what I have to do to see the changes.
I sudjest 


> Make the Confirmation dialog a more clear
> -
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Konstantinov
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I suggest changing the text 'Click here to see changes' to make it 
> more obvious.





[jira] [Updated] (IGNITE-8199) Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Attachment: screenshot-1.png

> Make the Confirmation dialog a more clear
> -
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Konstantinov
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
> It is unclear what I have to do to see the changes





[jira] [Created] (IGNITE-8199) Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-8199:
--

 Summary: Make the Confirmation dialog a more clear
 Key: IGNITE-8199
 URL: https://issues.apache.org/jira/browse/IGNITE-8199
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Konstantinov
 Attachments: screenshot-1.png

In case of unsaved changes, we show the following confirmation

It is unclear what I have to do to see the changes





[jira] [Updated] (IGNITE-8199) Make the Confirmation dialog a more clear

2018-04-09 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-8199:
---
Description: 
In case of unsaved changes, we show the following confirmation
 !screenshot-1.png! 
It is unclear what I have to do to see the changes.
I sudjest 

  was:
In case of unsaved changes, we show the following confirmation

It is unclear what I have to do to see the changes


> Make the Confirmation dialog a more clear
> -
>
> Key: IGNITE-8199
> URL: https://issues.apache.org/jira/browse/IGNITE-8199
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Konstantinov
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> In case of unsaved changes, we show the following confirmation
>  !screenshot-1.png! 
> It is unclear what I have to do to see the changes.
> I sudjest 





[jira] [Created] (IGNITE-8198) Document how to use username/password for REST and thin protocol connections

2018-04-09 Thread Prachi Garg (JIRA)
Prachi Garg created IGNITE-8198:
---

 Summary: Document how to use username/password for REST and thin 
protocol connections
 Key: IGNITE-8198
 URL: https://issues.apache.org/jira/browse/IGNITE-8198
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.5
Reporter: Prachi Garg
Assignee: Prachi Garg
 Fix For: 2.5


Update REST protocol - [https://apacheignite.readme.io/docs/rest-api]

and binary protocol documentation - 
https://apacheignite.readme.io/docs/binary-client-protocol#section-handshake





[jira] [Created] (IGNITE-8197) ignite won't start with spring-boot 1.5.11 - h2 property NESTED_JOINS doesn't exist

2018-04-09 Thread Scott Feldstein (JIRA)
Scott Feldstein created IGNITE-8197:
---

 Summary: ignite won't start with spring-boot 1.5.11 - h2 property 
NESTED_JOINS doesn't exist
 Key: IGNITE-8197
 URL: https://issues.apache.org/jira/browse/IGNITE-8197
 Project: Ignite
  Issue Type: Bug
Reporter: Scott Feldstein


I just upgraded to spring-boot 1.5.11 and am seeing the error below. I think 
this is an issue with the version of h2 associated with spring boot 1.5.11. In 
1.5.10 the h2 version was 1.4.196 and with 1.5.11 it is 1.4.197. The 
NESTED_JOINS property comes from 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing; I assume it 
was deprecated, but I'm not sure. When I lock my h2 version to 1.4.196 by 
overriding the spring-dependencies parent, everything works fine.
{code:java}
Caused by: org.h2.jdbc.JdbcSQLException: Unsupported connection setting 
"NESTED_JOINS" [90113-197]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:357) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.message.DbException.get(DbException.java:179) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.message.DbException.get(DbException.java:155) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.engine.ConnectionInfo.readSettingsFromURL(ConnectionInfo.java:268) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.engine.ConnectionInfo.(ConnectionInfo.java:76) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.jdbc.JdbcConnection.(JdbcConnection.java:103) 
~[h2-1.4.197.jar:1.4.197]
at org.h2.Driver.connect(Driver.java:69) ~[h2-1.4.197.jar:1.4.197]
at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[?:1.8.0_131]
at java.sql.DriverManager.getConnection(DriverManager.java:270) ~[?:1.8.0_131]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$1.initialValue(IgniteH2Indexing.java:317)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$1.initialValue(IgniteH2Indexing.java:288)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:180) ~[?:1.8.0_131]
at java.lang.ThreadLocal.get(ThreadLocal.java:170) ~[?:1.8.0_131]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$1.get(IgniteH2Indexing.java:290)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$1.get(IgniteH2Indexing.java:288)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForThread(IgniteH2Indexing.java:514)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeStatement(IgniteH2Indexing.java:582)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.createSchema(IgniteH2Indexing.java:551)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.registerCache(IgniteH2Indexing.java:2667)
 ~[ignite-indexing-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:1594)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:800)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:861)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1158)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1900)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1764)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:744)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:626)
 ~[ignite-core-2.4.0.jar:2.4.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2337)
 ~[ignite-core-2.4.0.jar:2.4.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
~[ignite-core-2.4.0.jar:2.4.0]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]{code}
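The failure happens because H2 1.4.197 no longer accepts the NESTED_JOINS setting that Ignite 2.4 appends to its JDBC URL, and H2 rejects any unrecognized connection setting while parsing the URL. A simplified, stdlib-only sketch of that validation behavior (the whitelist below is invented for illustration; H2's real list of supported settings is much longer):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ConnSettings {
    // Settings this hypothetical engine knows about; note NESTED_JOINS is absent,
    // mirroring its removal in H2 1.4.197.
    static final Set<String> SUPPORTED =
        new HashSet<>(Arrays.asList("LOCK_MODE", "MULTI_THREADED", "DB_CLOSE_ON_EXIT"));

    /** Parse "KEY=VALUE;KEY=VALUE" settings from a JDBC-style URL tail and reject unknown keys. */
    public static void validate(String urlTail) {
        for (String setting : urlTail.split(";")) {
            String key = setting.split("=")[0];

            if (!SUPPORTED.contains(key))
                throw new IllegalArgumentException(
                    "Unsupported connection setting \"" + key + "\"");
        }
    }

    public static void main(String[] args) {
        validate("LOCK_MODE=3"); // accepted
        validate("NESTED_JOINS=1"); // throws, like the stack trace above
    }
}
```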





[jira] [Commented] (IGNITE-7712) Add an ability to globally enable 'lazy' flag for SQL queries

2018-04-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430997#comment-16430997
 ] 

ASF GitHub Bot commented on IGNITE-7712:


GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/3783

IGNITE-7712 Fix for tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7712v1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3783.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3783


commit e5b2e3a9fa4aca6f74b58bad5b1e58afda39f257
Author: Alexey Kuznetsov 
Date:   2017-10-06T17:11:37Z

IGNITE-6463 Web Console: Fixed output of big numbers in SQL query results.
(cherry picked from commit 35589a7)

commit c32f0af1c2dfb14dc4d52a282c1d2e50bddcd066
Author: Alexey Kuznetsov 
Date:   2017-10-06T18:10:08Z

IGNITE-6574 Remove pending requests in case STATUS_AUTH_FAILURE && 
credentials == null.
(cherry picked from commit 85261a3)

commit f62884b63663bebd9630ef01df59550033b85f1c
Author: Vasiliy Sisko 
Date:   2017-10-09T10:55:23Z

IGNITE-5767 Web console: Use byte array type instead of java.lang.Object 
for binary JDBC types.
(cherry picked from commit 3184437)

commit aa9093a26ddaf91e7f068663c52e090480cdfe6d
Author: Vasiliy Sisko 
Date:   2017-10-09T12:23:23Z

IGNITE-6287 Web Console: Improved DDL support: added checkbox "Use selected 
cache as default schema name".
(cherry picked from commit a45677c)

commit 912ae4b0fa3971499c1e8f9c4272c9b56b0355d2
Author: Sergey Chugunov 
Date:   2017-10-09T15:35:11Z

IGNITE-6583 Proper getters for rebalance metrics were added; ignite-style 
getters (without get) were deprecated

Signed-off-by: Andrey Gura 

commit aceed9498550833f5a0dcf7fcc003ea2f83378fa
Author: AMRepo 
Date:   2017-10-10T08:57:20Z

IGNITE-6545: Failure during Ignite Service.cancel() can break normal 
shutdown process.

commit f006500391c9712d68d5b90f3da72a421fbda48a
Author: vsisko 
Date:   2017-10-02T16:08:40Z

IGNITE-6422 Visor CMD: Fixed cache statistics output.
(cherry picked from commit 16d2370)

commit 252cb5d2a1962731b39505d6c0d711701a525724
Author: Krzysztof Chmielewski 
Date:   2017-10-10T14:50:59Z

Fixed "IGNITE-6234 Initialize schemaIds to empty set if schemas field is 
null during the deserialization".

Signed-off-by: nikolay_tikhonov 

commit 8eaacd10953f31e75433847747ea7fcf4f129d3b
Author: Alexey Kuznetsov 
Date:   2017-10-12T15:48:35Z

IGNITE-6127 Fixed bytes encoding.
(cherry picked from commit 0f3f7d2)

commit d9bba724c841e99d1374368654ddaa95cacf2ba9
Author: Alexey Popov 
Date:   2017-10-06T09:18:38Z

IGNITE-5224 .NET: PadLeft and PadRight support in LINQ

This closes #2808

commit d2b0986d516503ebb46dac19ea1cb074efacc865
Author: Alexey Popov 
Date:   2017-10-13T11:19:14Z

IGNITE-4723 .NET: Support REGEXP_LIKE in LINQ

This closes #2842

commit 0f0194d5e254181afd7f4a4745899d87f5430861
Author: NSAmelchev 
Date:   2017-09-06T14:32:42Z

Backport of IGNITE-2779 BinaryMarshaller caches must be cleaned during 
client reconnect

(cherry picked from commit c6ac6a5)

commit 9b730195dda83820479415abc3569c6076b69b44
Author: Pavel Tupitsyn 
Date:   2017-08-04T09:34:05Z

IGNITE-5927 .NET: Fix DataTable serialization

This closes #2395

commit 3906e5e1abe2d50996d449748be68d5667c0f34d
Author: Alexey Popov 
Date:   2017-10-17T11:45:42Z

IGNITE-6627 .NET: Fix serialization of enums within generic collections

* Fix EnumEqualityComparer serialization
* Fix enum arrays serialization
* Fix empty objects missing metadata

This closes #2864

commit 949bfcca99348c010fcf4a1251c6057911c77db2
Author: Sergey Chugunov 
Date:   2017-10-11T12:33:23Z

IGNITE-6536 Node fails when detects mapping storage corruption

Signed-off-by: Andrey Gura 

commit 0a2ef5929d0453957debdf743cabd46d041c72ae
Author: Alexey Kuznetsov 
Date:   2017-10-19T02:43:20Z

IGNITE-6647 Web Console: Implemented support of schema migration scripts.
(cherry picked from commit c65399c)

commit a16e9d92a57e39ec3d380ce8af9f97250c91594f
Author: Pavel Tupitsyn 
Date:   2017-10-19T09:36:39Z

IGNITE-6627 .NET: Fix repeated known metadata updates

This closes #2876

commit fadad75d80f76569afb3aa9e2dbf0c47a1d1d6af
Author: apopov 

[jira] [Commented] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-09 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430987#comment-16430987
 ] 

Dmitriy Pavlov commented on IGNITE-8110:


The fix itself looks good; I left several proposals in the PR.

> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
> Fix For: 2.6
>
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}
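The bug quoted above is the conversion factor: milliseconds to nanoseconds requires multiplying by 1,000,000, not 1,000, and using TimeUnit makes the intent explicit. A minimal demonstration (the constant name is borrowed from the snippet above):

```java
import java.util.concurrent.TimeUnit;

public class FlushFreq {
    /** Default flush frequency in milliseconds, as in the quoted code. */
    public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;

    public static void main(String[] args) {
        // The buggy conversion: 5000 ms * 1000 = 5,000,000 ns, i.e. only 5 ms.
        long wrong = DFLT_WRITE_BEHIND_FLUSH_FREQUENCY * 1000;

        // The correct conversion: 5000 ms = 5,000,000,000 ns.
        long right = TimeUnit.MILLISECONDS.toNanos(DFLT_WRITE_BEHIND_FLUSH_FREQUENCY);

        System.out.println(wrong + " ns vs " + right + " ns");
    }
}
```

The net effect of the bug is a flusher that wakes up 1000x more often than configured.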





[jira] [Updated] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8110:
---
Fix Version/s: 2.6

> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
> Fix For: 2.6
>
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}





[jira] [Commented] (IGNITE-8110) GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from milliseconds to nanoseconds.

2018-04-09 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430983#comment-16430983
 ] 

Dmitriy Pavlov commented on IGNITE-8110:


Not sure it is related to this fix, but anyway, there is a strange failure in
{noformat}
IgniteCacheTestSuite2: 
GridCachePartitionedTxSingleThreadedSelfTest.testOptimisticRepeatableReadRollback
 (master fail rate 0,0%)  
java.lang.AssertionError: Invalid cached value [key=1, v1=null, v2=1, grid=2] 
java.lang.AssertionError: Invalid cached value [key=1, v1=null, v2=1, grid=2] 
 
and
IgniteCacheTestSuite2: 
GridCachePartitionedTxSingleThreadedSelfTest.testPessimisticReadCommittedCommit 
(master fail rate 0,0%)  

{noformat}
Could you please check if it is related?

> GridCacheWriteBehindStore.Flusher thread uses the wrong transformation from 
> milliseconds to nanoseconds.
> 
>
> Key: IGNITE-8110
> URL: https://issues.apache.org/jira/browse/IGNITE-8110
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Anton Kurbanov
>Priority: Minor
>
> The initial value of a cache flushing frequency is defined as follows:
> {code}
> /** Cache flushing frequence in nanos. */
> protected long cacheFlushFreqNanos = cacheFlushFreq * 1000;
> {code}
> where {{cacheFlushFreq}} is equal to
> {code}
> /** Default flush frequency for write-behind cache store in milliseconds. 
> */
> public static final long DFLT_WRITE_BEHIND_FLUSH_FREQUENCY = 5000;
> {code}





[jira] [Updated] (IGNITE-8084) Unlock write lock in DataStreamerImpl

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8084:
---
Fix Version/s: (was: 2.5)
   2.6

> Unlock write lock in DataStreamerImpl
> -
>
> Key: IGNITE-8084
> URL: https://issues.apache.org/jira/browse/IGNITE-8084
> Project: Ignite
>  Issue Type: Improvement
>  Components: streaming
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: usability
> Fix For: 2.6
>
>
> In the method DataStreamerImpl.closeEx there is a write lock that is taken but 
> never unlocked [1]. I think the intent is to make it impossible to call, after 
> closing, any other public method of DataStreamer that takes the read lock.
> It is not correct to leave the write lock held after the streamer is closed: I 
> think we can use a *closed* flag instead and throw an exception if the 
> streamer is used after closing.
> [1]https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerImpl.java#L1217
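The *closed*-flag approach suggested in the description can be sketched as follows. This is an illustrative pattern, not the actual DataStreamerImpl code: public operations take the read lock and check the flag, and close() releases the write lock instead of holding it forever.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CloseableResource {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    private volatile boolean closed;

    /** A public operation: guarded by the read lock plus the closed flag. */
    public void doWork() {
        lock.readLock().lock();

        try {
            if (closed)
                throw new IllegalStateException("Data streamer has been closed.");

            // ... actual work under the read lock ...
        }
        finally {
            lock.readLock().unlock();
        }
    }

    /** Close: the flag, not a permanently held lock, prevents further use. */
    public void close() {
        lock.writeLock().lock();

        try {
            closed = true;
        }
        finally {
            lock.writeLock().unlock(); // released, unlike in the code referenced above
        }
    }
}
```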





[jira] [Updated] (IGNITE-8054) Let serialize only valuable part of GridLongList

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8054:
---
Fix Version/s: (was: 2.5)
   2.6

> Let serialize only valuable part of GridLongList
> 
>
> Key: IGNITE-8054
> URL: https://issues.apache.org/jira/browse/IGNITE-8054
> Project: Ignite
>  Issue Type: Improvement
>  Components: messaging
>Affects Versions: 2.4
>Reporter: Alexander Menshikov
>Assignee: Alexander Menshikov
>Priority: Major
>  Labels: easyfix
> Fix For: 2.6
>
>
> Here in GridLongList we serialize all elements and don't take the `idx` value 
> into account:
> {code:java}
> @Override public boolean writeTo(ByteBuffer buf, MessageWriter writer) { 
>     writer.setBuffer(buf); 
>   
>     if (!writer.isHeaderWritten()) { 
>     if (!writer.writeHeader(directType(), fieldsCount())) 
>     return false; 
>   
>     writer.onHeaderWritten(); 
>     } 
>   
>     switch (writer.state()) { 
>     case 0: 
>     if (!writer.writeLongArray("arr", arr)) 
>     return false; 
>   
>     writer.incrementState(); 
>   
>     case 1: 
>     if (!writer.writeInt("idx", idx)) 
>     return false; 
>   
>     writer.incrementState(); 
>   
>     } 
>   
>     return true; 
>     } {code}
> This does not happen in the other serialization method in the same class:
> {code:java}
> public static void writeTo(DataOutput out, @Nullable GridLongList list) 
> throws IOException { 
>     out.writeInt(list != null ? list.idx : -1); 
>   
>     if (list != null) { 
>     for (int i = 0; i < list.idx; i++) 
>     out.writeLong(list.arr[i]); 
>     } 
> } {code}
> So we can simply reduce message size by sending only the valuable part of the 
> array.
> I created this issue according to a discussion on the mailing list:
> [http://apache-ignite-developers.2346864.n4.nabble.com/Optimize-GridLongList-serialization-td28571.html]
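The proposed optimization can be illustrated with plain java.io: write only the idx elements in use instead of the whole backing array. This is a stand-alone sketch, not GridLongList's actual message codec:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PartialWrite {
    /** Serialize only the first idx elements, as the DataOutput-based writeTo already does. */
    public static byte[] write(long[] arr, int idx) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);

            out.writeInt(idx); // number of elements actually in use

            for (int i = 0; i < idx; i++)
                out.writeLong(arr[i]);

            return bos.toByteArray();
        }
        catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for an in-memory stream
        }
    }

    public static void main(String[] args) {
        long[] arr = new long[1024]; // backing array grown ahead of use, only 3 slots filled

        int fullSize = 4 + arr.length * 8; // writing the whole array: 8196 bytes
        int usedSize = write(arr, 3).length; // 4 + 3 * 8 = 28 bytes

        System.out.println(fullSize + " bytes vs " + usedSize + " bytes");
    }
}
```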





[jira] [Updated] (IGNITE-7844) Transaction incorrect state after client reconnected

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-7844:
---
Fix Version/s: (was: 2.5)
   2.6

> Transaction incorrect state after client reconnected
> 
>
> Key: IGNITE-7844
> URL: https://issues.apache.org/jira/browse/IGNITE-7844
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
>
> A transaction is started on a client node.
>  The client reconnects and the transaction rolls back, but its state is left 
> ACTIVE, which is incorrect.





[jira] [Updated] (IGNITE-5136) GridLogThrottle memory leak

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5136:
---
Fix Version/s: (was: 2.5)
   2.6

> GridLogThrottle memory leak
> ---
>
> Key: IGNITE-5136
> URL: https://issues.apache.org/jira/browse/IGNITE-5136
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Stanilovsky Evgeny
>Assignee: Ryabov Dmitrii
>Priority: Major
> Fix For: 2.6
>
>
> The GridLogThrottle class stores throttle info in a map, and no one ever clears it.





[jira] [Updated] (IGNITE-8111) Add extra validation for WAL segment size

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8111:
---
Fix Version/s: (was: 2.5)
   2.6

> Add extra validation for WAL segment size
> -
>
> Key: IGNITE-8111
> URL: https://issues.apache.org/jira/browse/IGNITE-8111
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.4
>Reporter: Ivan Rakov
>Assignee: Denis Garus
>Priority: Major
>  Labels: newbie
> Fix For: 2.6
>
>
> Currently we can set an extra-small DataStorageConfiguration#walSegmentSize (10 
> pages, or even less than one page), which triggers multiple assertion errors in 
> the code.
> We have to implement validation on node start that the WAL segment size has a 
> reasonable value (512KB or more).





[jira] [Commented] (IGNITE-8133) Baseline topology documentation improvement

2018-04-09 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430926#comment-16430926
 ] 

Denis Magda commented on IGNITE-8133:
-

Another discussion whose points have to be considered: 
http://apache-ignite-users.70518.x6.nabble.com/Baseline-Topology-and-Node-Failure-td20866.html

> Baseline topology documentation improvement
> ---
>
> Key: IGNITE-8133
> URL: https://issues.apache.org/jira/browse/IGNITE-8133
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4
>Reporter: Stanislav Lukyanov
>Assignee: Stanislav Lukyanov
>Priority: Critical
> Fix For: 2.5
>
>
> Baseline topology concept was added to Ignite in 2.4 by IEP-4. This changed 
> Ignite cluster behavior when persistence is enabled (first of all, activation 
> and rebalancing timings).
> It seems that the current documentation may be confusing.
> For example, the sentence
> {quote}Note that the baseline topology is not set when the cluster is started 
> for the first time; that's the only time when a manual intervention is 
> needed.{quote}
> may lead one to think that baseline topology is not used by default and needs 
> to be enabled only if one wants to use it.
> Also, the documentation describes the tools and commands that are used to 
> manage the baseline topology and activation, but doesn't give guidelines on 
> which nodes should be in the topology, when should it be changed, etc.
> The documentation should be enhanced to
> - give clear understanding that baseline topology always needs to be 
> considered as a part of the cluster architecture when persistence is enabled;
> - provide overview of the behavioral changes compared to AI 2.3 (use a 
> note/warning block for that to separate it from the main text?);
> - provide basic guidelines and suggestions of how one can start a new cluster 
> and manage it (when to activate/deactivate, when to change baseline topology, 
> what happens and what needs to be done when a node fails or joins, how to use 
> consistentId)





[jira] [Commented] (IGNITE-7809) Ignite PDS 2 & PDS 2 Direct IO: stable failures of IgniteWalFlushDefaultSelfTest

2018-04-09 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430924#comment-16430924
 ] 

Dmitriy Pavlov commented on IGNITE-7809:


[~ilantukh] if the tests are failing for other reasons, I think we could 
merge this fix and then create separate tickets for those reasons. Agree?


> Ignite PDS 2 & PDS 2 Direct IO: stable failures of 
> IgniteWalFlushDefaultSelfTest
> 
>
> Key: IGNITE-7809
> URL: https://issues.apache.org/jira/browse/IGNITE-7809
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Dmitriy Pavlov
>Assignee: Ilya Lantukh
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> Probably after the last WAL default changes ('IGNITE-7594 Fixed performance drop 
> after WAL optimization for FSYNC'), 2 tests in 2 build configs began to fail:
>Ignite PDS 2 (Direct IO) [ tests 2 ]  
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailAfterStart (fail rate 13,0%) 
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailWhileStart (fail rate 13,0%) 
>Ignite PDS 2 [ tests 2 ]  
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailAfterStart 
> (fail rate 8,4%) 
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailWhileStart 
> (fail rate 8,4%) 





[jira] [Commented] (IGNITE-7222) DiscoverySpi based on Apache ZooKeeper

2018-04-09 Thread Sergey Chugunov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430877#comment-16430877
 ] 

Sergey Chugunov commented on IGNITE-7222:
-

[~agoncharuk],

I created tickets for all TODO items:
1) IGNITE-8189
2) IGNITE-8187
3) IGNITE-8188
4), 6) IGNITE-8193
5) IGNITE-8194

A TODO item was created for the commented-out check in 
ZookeeperDiscoverySpiTest#afterTest.

Tickets for documentation were created: [for 
javadoc|https://issues.apache.org/jira/browse/IGNITE-8195], [for 
readme.io|https://issues.apache.org/jira/browse/IGNITE-8196].

ZkRuntimeState has not been changed yet, though; this work is in progress.

> DiscoverySpi based on Apache ZooKeeper
> --
>
> Key: IGNITE-7222
> URL: https://issues.apache.org/jira/browse/IGNITE-7222
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Semen Boikov
>Assignee: Semen Boikov
>Priority: Major
>  Labels: iep-15
> Fix For: 2.5
>
>






[jira] [Assigned] (IGNITE-8122) Partition state restored from WAL may be lost if no checkpoints are done

2018-04-09 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko reassigned IGNITE-8122:
---

Assignee: Alexey Goncharuk  (was: Pavel Kovalenko)

Fix resolution:
1) When persistence is enabled, partitions are created in state OWNING.
2) On a cache group start request, the partition states belonging to that cache 
group are restored from page memory.
Public TC: 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8_IgniteTests24Java8=pull%2F3745%2Fhead
Private TC: 
https://ggtc.gridgain.com/project.html?projectId=id8xIgniteGridGainTestsJava8_id8xIgniteGridGainTestsJava8=ignite-8122

> Partition state restored from WAL may be lost if no checkpoints are done
> 
>
> Key: IGNITE-8122
> URL: https://issues.apache.org/jira/browse/IGNITE-8122
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Alexey Goncharuk
>Priority: Minor
> Fix For: 2.5
>
>
> Problem:
> 1) Start several nodes with enabled persistence.
> 2) Make sure that all partitions for 'ignite-sys-cache' have status OWN on 
> all nodes and appropriate PartitionMetaStateRecord record is logged to WAL
> 3) Stop all nodes, start them again, and activate the cluster. The checkpoint for 
> 'ignite-sys-cache' is empty because there was no data in the cache.
> 4) The state of all partitions will be restored to OWN 
> (GridCacheDatabaseSharedManager#restoreState) from WAL, but not recorded to 
> page memory, because there were no checkpoints and no data in the cache. The 
> store manager doesn't have any allocated pages (including meta) for such partitions.
> 5) On exchange done we try to restore the states of partitions 
> (initPartitionsWhenAffinityReady) on all nodes. Because page memory is empty, 
> the states of all partitions will be restored to MOVING by default.
> 6) All nodes start to rebalance partitions from each other, and this process 
> becomes unpredictable because we're trying to rebalance from MOVING partitions.





[jira] [Commented] (IGNITE-8078) Add new metrics for data storage

2018-04-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430885#comment-16430885
 ] 

ASF GitHub Bot commented on IGNITE-8078:


GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/3782

IGNITE-8078



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8078

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3782.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3782


commit 6a4693b84c783ec2a87d399f92bddb82c36aae13
Author: Dmitriy Govorukhin 
Date:   2018-04-02T07:26:28Z

IGNITE-8078 added new method on IgniteMXBean. getCurrentCoordinator

commit e199fe2e700514bb9b6a0474257e8152b60bf527
Author: Dmitriy Govorukhin 
Date:   2018-04-02T12:24:04Z

IGNITE-8078 rename refactoring

commit 93a01affab20f7736cc3eadd8e6360786ad52d1e
Author: Dmitriy Govorukhin 
Date:   2018-04-02T12:25:46Z

IGNITE-8078 rename refactoring

commit 6922e1c4dc78667523f421e339f37fd162d128a0
Author: Dmitriy Govorukhin 
Date:   2018-04-02T14:09:23Z

IGNITE-8078 WIP refactoring

commit 43f97954737fef14c09f66b23c60f48d4f114eac
Author: Dmitriy Govorukhin 
Date:   2018-04-02T16:22:57Z

IGNITE-8078 PkIndex tracker

commit aaba63fa971a685703129b72f4a45a33a2ac8276
Author: Dmitriy Govorukhin 
Date:   2018-04-02T16:38:31Z

IGNITE-8078 ReuseList tracker

commit e99a1353c0aae3e464751ee8dc557441794c52c2
Author: Dmitriy Govorukhin 
Date:   2018-04-02T17:52:26Z

IGNITE-8078 Pure data tracker

commit 0af44ea316edc27ec713fe43e8b5e8ea7e2646cf
Author: Dmitriy Govorukhin 
Date:   2018-04-03T07:33:56Z

IGNITE-8078 add methods to CacheGroupMetrics interface, implement 
partitionIndexes and group type

commit 02740ac701bd5b5140a7066b91fd10a56c327ee3
Author: Dmitriy Govorukhin 
Date:   2018-04-03T07:35:46Z

IGNITE-8078 minor refactoring

commit 8280405e6e714c738c26817d7df7c8cc60a0ae6b
Author: Dmitriy Govorukhin 
Date:   2018-04-03T08:11:11Z

IGNITE-8078 data structure total size count

commit dece316c8c5fcd99e8bb5eb322694d715e4b79d3
Author: Dmitriy Govorukhin 
Date:   2018-04-03T08:23:52Z

IGNITE-8078 secondary indexes size

commit d7ef793b780e2e0c0cd197754bf1e3f34404f81e
Author: Dmitriy Govorukhin 
Date:   2018-04-03T09:51:36Z

IGNITE-8078 total allocated size

commit 352019f75488011f17529f3b61f2576a9538f491
Author: Dmitriy Govorukhin 
Date:   2018-04-03T13:26:56Z

IGNITE-8078 index size + refactoring and improvements

commit fd3d2dcc49f978afc3e09a54bc80b4ba9edcb7a5
Author: Dmitriy Govorukhin 
Date:   2018-04-03T13:55:02Z

IGNITE-8078 refactoring

commit 252453e5782479b6e7808f6c609e4d62271c4006
Author: Dmitriy Govorukhin 
Date:   2018-04-03T13:56:25Z

IGNITE-8078 code cleanup

commit 20251dde60c80069193642f335d0e10b797383a8
Author: Dmitriy Govorukhin 
Date:   2018-04-04T10:29:20Z

IGNITE-8078 wip

commit 736ded6a2330592ecf8961656e1f3761c453ec02
Author: Dmitriy Govorukhin 
Date:   2018-04-04T12:52:03Z

IGNITE-8078 wip

commit 6cce1440dd47cb46762a8608793b01342a655689
Author: Dmitriy Govorukhin 
Date:   2018-04-04T15:01:44Z

IGNITE-8078 wip

commit d311f8a3053424d9d8b3f0801d9110ad0dd76280
Author: Dmitriy Govorukhin 
Date:   2018-04-04T15:10:37Z

IGNITE-8078 minor fix

commit 17b97210cf69e0abac4c0c9f9d0dd1dd25680ca9
Author: Dmitriy Govorukhin 
Date:   2018-04-04T15:36:24Z

IGNITE-8078 minor

commit 4be410803a1a006afe9a7119a7a94e64147f5b94
Author: Dmitriy Govorukhin 
Date:   2018-04-05T09:26:57Z

IGNITE-8078 ccfg size count in internal metric

commit 4982a026e03b4630b06d48151f3d0f838c5bb6c9
Author: Dmitriy Govorukhin 
Date:   2018-04-05T12:31:22Z

IGNITE-8078 refactoring

commit 8baba8caccde7a6136d1cb20c766c4fba0ef62ff
Author: Dmitriy Govorukhin 
Date:   2018-04-05T12:59:36Z

IGNITE-8078 minor refactoring

commit 122676edfdafffbda5ed602e542ced4c88a78438
Author: Dmitriy Govorukhin 
Date:   2018-04-05T14:45:26Z

IGNITE-8078 minor refactoring

commit 1849ed23e68f67867ce55d3317390437636a03e2
Author: Dmitriy Govorukhin 
Date:   2018-04-05T14:48:15Z

IGNITE-8078 minor refactoring


[jira] [Resolved] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Vitaliy Biryukov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Biryukov resolved IGNITE-3464.
--
Resolution: Duplicate

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, and the partition release future is 
> completed. Both the finish request and the exchange message are sent to node B.
>  - Node B processes the exchange message first and completes the exchange.
>  - Node C starts rebalancing from node B and acquires a stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes the finish request and commits the transaction.
> As a result, nodes B and C have different values stored in the cache.





[jira] [Commented] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Pavel Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430878#comment-16430878
 ] 

Pavel Kovalenko commented on IGNITE-3464:
-

[~avinogradov] Yes, it should be closed as duplicate.

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, and the partition release future is 
> completed. Both the finish request and the exchange message are sent to node B.
>  - Node B processes the exchange message first and completes the exchange.
>  - Node C starts rebalancing from node B and acquires a stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes the finish request and commits the transaction.
> As a result, nodes B and C have different values stored in the cache.





[jira] [Created] (IGNITE-8196) ZookeeperDiscoverySpi should be documented on readme.io

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8196:
---

 Summary: ZookeeperDiscoverySpi should be documented on readme.io
 Key: IGNITE-8196
 URL: https://issues.apache.org/jira/browse/IGNITE-8196
 Project: Ignite
  Issue Type: Task
Reporter: Sergey Chugunov


ZookeeperDiscoverySpi should be documented on readme.io like 
[TcpDiscoverySpi|https://apacheignite.readme.io/docs/cluster-config#multicast-and-static-ip-based-discovery]





[jira] [Commented] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430863#comment-16430863
 ] 

Anton Vinogradov commented on IGNITE-3464:
--

[~Jokser]
Should we close this as a duplicate?

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, and the partition release future is 
> completed. Both the finish request and the exchange message are sent to node B.
>  - Node B processes the exchange message first and completes the exchange.
>  - Node C starts rebalancing from node B and acquires a stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes the finish request and commits the transaction.
> As a result, nodes B and C have different values stored in the cache.





[jira] [Created] (IGNITE-8195) ZookeeperDiscoverySpi should be properly documented

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8195:
---

 Summary: ZookeeperDiscoverySpi should be properly documented
 Key: IGNITE-8195
 URL: https://issues.apache.org/jira/browse/IGNITE-8195
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Sergey Chugunov


ZookeeperDiscoverySpi, as part of the public API, should be documented with the 
same level of detail as TcpDiscoverySpi.





[jira] [Created] (IGNITE-8194) Coordinator may need to delete acks for event if the ack with the same id is already published

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8194:
---

 Summary: Coordinator may need to delete acks for event if the ack 
with the same id is already published
 Key: IGNITE-8194
 URL: https://issues.apache.org/jira/browse/IGNITE-8194
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov


During coordinator failure scenarios, the new coordinator may try to publish 
acknowledgements for events that were already acknowledged.

This situation should be considered comprehensively with respect to any possible 
corner cases; if it is safer from the discovery protocol's point of view to delete 
duplicate acknowledgements and publish them again, this should be implemented.





[jira] [Comment Edited] (IGNITE-7366) Affinity assignment exception in service processor during multiple nodes join

2018-04-09 Thread Ilya Kasnacheev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430611#comment-16430611
 ] 

Ilya Kasnacheev edited comment on IGNITE-7366 at 4/9/18 4:29 PM:
-

[~agoncharuk] can you or one of the core contributors take a look? My understanding 
of the Exchange process is not enough here.


was (Author: ilyak):
[~agoncharuk] can somebody from your team look? My understanding of Exchange 
process is not enough here.

> Affinity assignment exception in service processor during multiple nodes join
> -
>
> Key: IGNITE-7366
> URL: https://issues.apache.org/jira/browse/IGNITE-7366
> Project: Ignite
>  Issue Type: Bug
>  Components: compute
>Affects Versions: 2.3
>Reporter: Ilya Kasnacheev
>Assignee: Pavel Pereslegin
>Priority: Major
>
> When two nodes which are deploying services join at the same time, and 
> exception is observed:
> {code}
> SEVERE: Error when executing service: null
> java.lang.IllegalStateException: Getting affinity for topology version 
> earlier than affinity is calculated [locNode=TcpDiscoveryNode 
> [id=245d4bec-0384-4808-b66d-d2340930207f..., discPort=37500, order=2, 
> intOrder=2, lastExchangeTime=1515394551283, loc=true, 
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], grp=ignite-sys-cache, 
> topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], 
> head=AffinityTopologyVersion [topVer=4, minorTopVer=0], 
> history=[AffinityTopologyVersion [topVer=2, minorTopVer=0], 
> AffinityTopologyVersion [topVer=2, minorTopVer=1], AffinityTopologyVersion 
> [topVer=4, minorTopVer=0]]]
> at 
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:514)
> at 
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:419)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodesByPartition(GridCacheAffinityManager.java:220)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByPartition(GridCacheAffinityManager.java:256)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:247)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:271)
> at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1771)
> at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1958)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {code}
> This may be caused by exchange merges. There are 4 nodes joining topology. 
> When nodes 3 and 4 join at the same time, exchanges for [3, 0] and [4, 0] are 
> merged. But, TopologyListener in service processor is notified about topVer 
> [3, 0], for which there is no affinity because exchange has already moved 
> forward to [4, 0].





[jira] [Commented] (IGNITE-6439) IgnitePersistentStoreSchemaLoadTest is broken

2018-04-09 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430810#comment-16430810
 ] 

Dmitriy Pavlov commented on IGNITE-6439:


Yes, it seems the test is not failing anymore:

https://ci.ignite.apache.org/project.html?tab=testDetails_IgniteTests24Java8=%3Cdefault%3E=IgniteTests24Java8=5005976717438263276=2

Please feel free to close this as 'Cannot Reproduce'.

> IgnitePersistentStoreSchemaLoadTest is broken
> -
>
> Key: IGNITE-6439
> URL: https://issues.apache.org/jira/browse/IGNITE-6439
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> After the nodes start, the cluster must be activated explicitly.





[jira] [Commented] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Pavel Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430804#comment-16430804
 ] 

Pavel Kovalenko commented on IGNITE-3464:
-

[~agoncharuk] This is one of the corner cases that is generally solved in 
IGNITE-7871. In the general case node A can be a client, and waitPartitionRelease 
is not invoked on such nodes. The reproducer presented in the PR passes in the 
corresponding branch.
[~VitaliyB] Thank you for your work! I will add your reproducer to the 
IGNITE-7871 ticket.

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, and the partition release future is 
> completed. Both the finish request and the exchange message are sent to node B.
>  - Node B processes the exchange message first and completes the exchange.
>  - Node C starts rebalancing from node B and acquires a stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes the finish request and commits the transaction.
> As a result, nodes B and C have different values stored in the cache.





[jira] [Created] (IGNITE-8193) Joining node data should be cleaned in some cases

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8193:
---

 Summary: Joining node data should be cleaned in some cases
 Key: IGNITE-8193
 URL: https://issues.apache.org/jira/browse/IGNITE-8193
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov


The ZookeeperDiscoveryImpl#startJoin method implementation creates two zk nodes: 
one with the joining node's discovery data and another with the joining node's id.

If the joining node fails in between, its join data will be kept by ZooKeeper 
until explicitly removed.
For now there is no mechanism implementing such removal, but one should be 
implemented to cover this corner case.
It may be implemented as a timer that removes joining node data after some 
significant timeout, to avoid deleting the join data of alive nodes that are 
still in the middle of the join procedure (and, for instance, got frozen before 
creating their alive zk node).
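The timer-based removal can be sketched as follows. The ZooKeeper delete call and the alive-znode check are abstracted into callbacks here, so every name in this sketch is hypothetical rather than the real ZookeeperDiscoveryImpl API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Illustrative cleanup for stale join data left behind by failed joiners.
public class JoinDataCleaner {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    /** Pure decision: join data is stale only if no alive znode appeared. */
    public static boolean isStale(boolean aliveZnodeExists) {
        return !aliveZnodeExists;
    }

    /**
     * Schedules removal of a joining node's data after a significant timeout,
     * skipping the delete if the node created its alive znode in the meantime
     * (i.e. it was merely slow, not dead).
     */
    public ScheduledFuture<?> scheduleCleanup(Runnable deleteJoinData,
                                              BooleanSupplier aliveZnodeExists,
                                              long timeoutMs) {
        return timer.schedule(() -> {
            if (isStale(aliveZnodeExists.getAsBoolean()))
                deleteJoinData.run();
        }, timeoutMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        timer.shutdownNow();
    }
}
```

The long timeout is what protects slow-but-alive joiners: their alive znode appears before the timer fires, and the re-check skips the delete.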





[jira] [Updated] (IGNITE-8192) Print out information on how many nodes left until auto-activation

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8192:
-
Fix Version/s: 2.5

> Print out information on how many nodes left until auto-activation
> --
>
> Key: IGNITE-8192
> URL: https://issues.apache.org/jira/browse/IGNITE-8192
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> We should print out a message (probably on each topology change event) about 
> the baseline topology and how many nodes are left to be started until 
> auto-activation. Also, when the number of nodes is not too large, print out 
> their consistent IDs.
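The hint can be computed by diffing the baseline against the online nodes. In this sketch the consistent IDs are plain strings and the class name and "not too large" cutoff are assumptions, not Ignite's actual API:

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative builder for the message printed on each topology change event.
public class BaselineHint {
    /** Assumed cutoff for "not too large" when deciding to list IDs. */
    static final int MAX_IDS_TO_PRINT = 8;

    public static String message(Set<String> baselineIds, Set<String> onlineIds) {
        // Consistent IDs present in the baseline but not yet online.
        Set<String> missing = new TreeSet<>(baselineIds);
        missing.removeAll(onlineIds);

        if (missing.isEmpty())
            return "All baseline nodes are online; auto-activation will proceed.";

        StringBuilder sb = new StringBuilder(
            missing.size() + " more node(s) must join for auto-activation");

        // Only list consistent IDs when the remaining set is small.
        if (missing.size() <= MAX_IDS_TO_PRINT)
            sb.append("; waiting for consistent IDs: ").append(missing);

        return sb.toString();
    }
}
```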





[jira] [Updated] (IGNITE-8192) Print out information on how many nodes left until auto-activation

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8192:
-
Description: We should print out a message (probably on each topology 
change event) about the baseline topology and how many nodes are left to be 
started until auto-activation. Also, when the number of nodes is not too large, 
print out their consistent IDs.

> Print out information on how many nodes left until auto-activation
> --
>
> Key: IGNITE-8192
> URL: https://issues.apache.org/jira/browse/IGNITE-8192
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
>
> We should print out a message (probably on each topology change event) about 
> the baseline topology and how many nodes are left to be started until 
> auto-activation. Also, when the number of nodes is not too large, print out 
> their consistent IDs.





[jira] [Created] (IGNITE-8192) Print out information on how many nodes left until auto-activation

2018-04-09 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8192:


 Summary: Print out information on how many nodes left until 
auto-activation
 Key: IGNITE-8192
 URL: https://issues.apache.org/jira/browse/IGNITE-8192
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk








[jira] [Updated] (IGNITE-8191) Print out information when cluster is not activated

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8191:
-
Fix Version/s: 2.5

> Print out information when cluster is not activated
> ---
>
> Key: IGNITE-8191
> URL: https://issues.apache.org/jira/browse/IGNITE-8191
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> We should add additional information to local node statistics when a cluster 
> is not activated and add a hint on how activation is performed.





[jira] [Updated] (IGNITE-8191) Print out information when cluster is not activated

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8191:
-
Description: We should add additional information to local node statistics 
when a cluster is not activated and add a hint on how activation is performed.

> Print out information when cluster is not activated
> ---
>
> Key: IGNITE-8191
> URL: https://issues.apache.org/jira/browse/IGNITE-8191
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> We should add additional information to local node statistics when a cluster 
> is not activated and add a hint on how activation is performed.





[jira] [Created] (IGNITE-8191) Print out information when cluster is not activated

2018-04-09 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8191:


 Summary: Print out information when cluster is not activated
 Key: IGNITE-8191
 URL: https://issues.apache.org/jira/browse/IGNITE-8191
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk








[jira] [Updated] (IGNITE-8190) Print out an information message when local node is not in baseline

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8190:
-
Description: When a node joins the cluster but is not part of the baseline 
topology, we should print an informational message explaining how this affects 
the local node and what the user should do to add the node to the baseline.

> Print out an information message when local node is not in baseline
> ---
>
> Key: IGNITE-8190
> URL: https://issues.apache.org/jira/browse/IGNITE-8190
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>
> When a node joins the cluster but is not part of the baseline topology, we 
> should print an informational message explaining how this affects the local 
> node and what the user should do to add the node to the baseline.





[jira] [Updated] (IGNITE-8190) Print out an information message when local node is not in baseline

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-8190:
-
Fix Version/s: 2.5

> Print out an information message when local node is not in baseline
> ---
>
> Key: IGNITE-8190
> URL: https://issues.apache.org/jira/browse/IGNITE-8190
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.5
>
>






[jira] [Created] (IGNITE-8190) Print out an information message when local node is not in baseline

2018-04-09 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-8190:


 Summary: Print out an information message when local node is not 
in baseline
 Key: IGNITE-8190
 URL: https://issues.apache.org/jira/browse/IGNITE-8190
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Goncharuk








[jira] [Commented] (IGNITE-6699) Optimize client-side data streamer performance

2018-04-09 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430784#comment-16430784
 ] 

Anton Vinogradov commented on IGNITE-6699:
--

[~dpavlov]
Done, thanks for the hint.

> Optimize client-side data streamer performance
> --
>
> Key: IGNITE-6699
> URL: https://issues.apache.org/jira/browse/IGNITE-6699
> Project: Ignite
>  Issue Type: Task
>  Components: streaming
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-1, performance
> Fix For: 2.5
>
>
> Currently, if a user has several server nodes and a single client node with 
> a single thread pushing data to the streamer, they will not be able to load 
> data at maximum speed. On the other hand, if they start several data-loading 
> threads, throughput increases. 
> One of the root causes is the data streamer's design. The method 
> {{IgniteDataStreamer.addData(K, V)}} returns a new future for every 
> operation, which is too fine-grained an approach. It also generates a lot of 
> garbage and causes contention on the streamer internals. 
> Proposed implementation flow:
> 1) Compare the performance of {{addData(K, V)}} vs {{addData(Collection)}} 
> called from one thread in a distributed environment. The latter should show 
> considerably higher throughput.
> 2) Users should receive per-batch futures rather than per-key ones. 
> 3) Try caching thread data in some collection until it is large enough, to 
> avoid contention and unnecessary allocations.
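Step 3 of the proposed flow (buffering per-thread data until it is large enough) can be sketched as follows. The class and its flush semantics are hypothetical and stand in for the streamer's internals; in the real streamer the flush would call `addData(Collection)` and complete one future per batch.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of per-thread batching: accumulate entries and
// flush them as one collection, one future per batch instead of per key.
class BatchingBuffer<K, V> {
    private final int batchSize;
    private final List<Map.Entry<K, V>> buf = new ArrayList<>();
    private int flushes; // number of batch flushes performed

    BatchingBuffer(int batchSize) { this.batchSize = batchSize; }

    // Returns true when adding this entry triggered a batch flush.
    boolean add(K key, V val) {
        buf.add(new SimpleEntry<>(key, val));
        if (buf.size() >= batchSize) {
            flush();
            return true;
        }
        return false;
    }

    // In the real streamer this would be a single addData(Collection) call.
    void flush() {
        if (!buf.isEmpty()) {
            buf.clear();
            flushes++;
        }
    }

    int flushes() { return flushes; }
}
```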





[jira] [Commented] (IGNITE-4756) Print info about partition distribution to log

2018-04-09 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430768#comment-16430768
 ] 

Dmitriy Pavlov commented on IGNITE-4756:


TC looks good. [~avinogradov], could you please finalize the review?

> Print info about partition distribution to log 
> ---
>
> Key: IGNITE-4756
> URL: https://issues.apache.org/jira/browse/IGNITE-4756
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Taras Ledkov
>Assignee: Vyacheslav Daradur
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5
>
>
> Summarizing the discussions:
> Add a log message when the partition distribution is not close to an even 
> distribution:
>  # Add a system property IGNITE_PART_DISTRIBUTION_WARN_THRESHOLD with default 
> value 0.1 to print the warning only when the node's partition count differs 
> from the even distribution by more than the threshold;
>  # The statistic is calculated and printed only for the local node;
>  # The statistic is collected in {{GridAffinityAssignmentCache#calculate}} and 
> calculated for the new {{idealAssignment}}.
>  # The message format is
> {noformat}
> Local node affinity assignment distribution is not ideal [cache=, 
> expectedPrimary=, 
> expectedBackups=, 
> primary=, backups=].
> {noformat}
> e.g. for a cache named "test" with 2 backups, 4 partitions, and 3 nodes:
> {noformat}
> Local node affinity assignment distribution is not ideal [cache=test, 
> expectedPrimary=1.33 (33.3%), expectedBackups=2.66 (66.66%), primary=1 (25%), 
> backups=3 (75%)].
> {noformat}
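The threshold check behind the warning can be sketched like this. The helper is hypothetical (not the actual GridAffinityAssignmentCache code); it only illustrates comparing a node's partition count against the ideal even share.

```java
// Hypothetical sketch of the deviation check: warn when the node's
// actual partition count deviates from the ideal even share by more
// than the configured threshold (e.g. 0.1 = 10%).
class DistributionCheck {
    static boolean exceedsThreshold(int actualParts, int totalParts, int nodes, double threshold) {
        double expected = (double) totalParts / nodes; // ideal even share
        return Math.abs(actualParts - expected) / expected > threshold;
    }

    public static void main(String[] args) {
        // The ticket's example: 4 partitions on 3 nodes -> expected 1.33
        // primaries per node; owning only 1 deviates by ~25%.
        System.out.println(exceedsThreshold(1, 4, 3, 0.1)); // prints true
    }
}
```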





[jira] [Created] (IGNITE-8189) Improve ZkDistributedCollectDataFuture#deleteFutureData implementation

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8189:
---

 Summary: Improve ZkDistributedCollectDataFuture#deleteFutureData 
implementation
 Key: IGNITE-8189
 URL: https://issues.apache.org/jira/browse/IGNITE-8189
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov


Three aspects of the implementation need improvement:
* the two remaining deleteIfExists calls inside *deleteFutureData* should be 
included in the batching *deleteAll* operation;
* if the request exceeds the ZooKeeper max request size, the fallback to 
one-by-one deletion should be used (related ticket IGNITE-8188);
* ZookeeperClient#deleteAll may throw NoNodeException when a concurrent 
operation removes the same nodes; in this case the fallback to one-by-one 
deletion should be used as well.
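The fallback on a failed batch delete can be sketched as follows. The functional interfaces here are hypothetical stand-ins for the real ZookeeperClient calls; the point is only the shape of the logic: try one batched round trip, and on failure (e.g. a concurrent NoNodeException) delete paths one by one, ignoring those already gone.

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: batched delete with one-by-one fallback.
class DeleteWithFallback {
    // Returns the number of delete round trips performed.
    static int deleteAll(List<String> paths,
                         Consumer<List<String>> batchDelete,
                         Consumer<String> singleDelete) {
        try {
            batchDelete.accept(paths);
            return 1; // the whole batch went in one request
        } catch (RuntimeException e) {
            // Fallback: delete one by one, tolerating already-removed nodes.
            for (String p : paths) {
                try { singleDelete.accept(p); }
                catch (RuntimeException ignored) { /* node already gone */ }
            }
            return paths.size();
        }
    }
}
```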





[jira] [Commented] (IGNITE-8172) Update Apache Ignite's release scripts to match new RPM build and deploy architecture

2018-04-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430730#comment-16430730
 ] 

ASF GitHub Bot commented on IGNITE-8172:


GitHub user vveider opened a pull request:

https://github.com/apache/ignite-release/pull/1

IGNITE-8172



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vveider/ignite-release ignite-8172

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite-release/pull/1.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1


commit 31dc71092239cfa637eee151dde27c835f816ece
Author: Ivanov Petr 
Date:   2018-04-09T08:10:03Z

IGNITE-8172 Update Apache Ignite's release scripts to match new RPM build 
and deploy architecture
 * updated RPM packages build procedure: they now honor split RPM 
architecture
 * updated release procedure: RPM packages are now deployed to Bintray
 * updated corresponding comments and result messages
 * overall minimal refactoring of affected code

commit bf101cf52e2fe0a9a85bb927d770be630692c4f1
Author: Ivanov Petr 
Date:   2018-04-09T15:33:22Z

IGNITE-8172 Update Apache Ignite's release scripts to match new RPM build 
and deploy architecture
 * added Bintray authentication process
 * improved progress output
 * fixed minor defects and typos




> Update Apache Ignite's release scripts to match new RPM build and deploy 
> architecture
> -
>
> Key: IGNITE-8172
> URL: https://issues.apache.org/jira/browse/IGNITE-8172
> Project: Ignite
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.4
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
> Fix For: 2.5
>
>
> # Implement new multi-package build scheme for RPM packages.
> # Update release process: deploy RPM packages to {{ignite-rpm}} Bintray 
> repository (with removal from Apache's Development Distribution SVN) instead 
> of moving to ASF's release.





[jira] [Commented] (IGNITE-4958) Make data pages recyclable into index/meta/etc pages and vice versa

2018-04-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430722#comment-16430722
 ] 

ASF GitHub Bot commented on IGNITE-4958:


GitHub user x-kreator opened a pull request:

https://github.com/apache/ignite/pull/3780

IGNITE-4958: Make data pages recyclable into index/meta/etc pages and…

… vice versa.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/x-kreator/ignite ignite-4958

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3780.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3780


commit d7204ca83cd321cec02ac76b0cb24d4238b33fa1
Author: Dmitriy Sorokin 
Date:   2018-01-30T11:12:39Z

IGNITE-4958: Make data pages recyclable into index/meta/etc pages and vice 
versa.




> Make data pages recyclable into index/meta/etc pages and vice versa
> ---
>
> Key: IGNITE-4958
> URL: https://issues.apache.org/jira/browse/IGNITE-4958
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.0
>Reporter: Ivan Rakov
>Assignee: Dmitriy Sorokin
>Priority: Major
> Fix For: 2.5
>
>
> Recycling for data pages is disabled for now. Empty data pages are 
> accumulated in FreeListImpl#emptyDataPagesBucket, and can be reused only as 
> data pages again. What has to be done:
> * Empty data pages should be recycled into reuse bucket
> * We should check reuse bucket first before allocating a new data page
> * MemoryPolicyConfiguration#emptyPagesPoolSize should be removed





[jira] [Updated] (IGNITE-8075) .NET: setRollbackOnTopologyChangeTimeout, withLabel, localActiveTransactions

2018-04-09 Thread Alexei Scherbakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-8075:
--
Description: 
Need to add new methods to the .NET API:

org.apache.ignite.configuration.TransactionConfiguration#txTimeoutOnPartitionMapExchange
 - timeout for automatic rollback on exchange.
org.apache.ignite.IgniteTransactions#withLabel - tx label
org.apache.ignite.IgniteTransactions#localActiveTransactions - list of local 
active transactions.

The Java implementation is currently available in branch [1]

[1] https://github.com/gridgain/apache-ignite/tree/ignite-6827-2

  was:
Need to add the two methods described below to the .NET API.

org.apache.ignite.configuration.TransactionConfiguration#setRollbackOnTopologyChangeTimeout
 - timeout for automatic rollback on exchange.
org.apache.ignite.IgniteTransactions#withLabel - tx label
org.apache.ignite.IgniteTransactions#localActiveTransactions - list of local 
active transactions.

Java implementation is currently available in branch [1]

[1] https://github.com/gridgain/apache-ignite/tree/ignite-6827-2


> .NET: setRollbackOnTopologyChangeTimeout, withLabel, localActiveTransactions
> 
>
> Key: IGNITE-8075
> URL: https://issues.apache.org/jira/browse/IGNITE-8075
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.4
>Reporter: Alexei Scherbakov
>Priority: Major
> Fix For: 2.5
>
>
> Need to add new methods to the .NET API:
> org.apache.ignite.configuration.TransactionConfiguration#txTimeoutOnPartitionMapExchange
>  - timeout for automatic rollback on exchange.
> org.apache.ignite.IgniteTransactions#withLabel - tx label
> org.apache.ignite.IgniteTransactions#localActiveTransactions - list of local 
> active transactions.
> The Java implementation is currently available in branch [1]
> [1] https://github.com/gridgain/apache-ignite/tree/ignite-6827-2





[jira] [Commented] (IGNITE-7918) Huge memory leak when data streamer used together with local cache

2018-04-09 Thread Andrey Aleksandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430718#comment-16430718
 ] 

Andrey Aleksandrov commented on IGNITE-7918:


[~ilantukh] Could you please also take a look at this PR?

> Huge memory leak when data streamer used together with local cache
> --
>
> Key: IGNITE-7918
> URL: https://issues.apache.org/jira/browse/IGNITE-7918
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Zbyszek B
>Assignee: Andrey Aleksandrov
>Priority: Blocker
> Fix For: 2.5
>
> Attachments: Demo.java, MemLeak-Ignite.png, MemLeak-Ignite.txt
>
>
> Dear Igniters,
> We observe a huge memory leak when the data streamer is used together with a 
> local cache.
> In the attached demo, the producer creates a local cache with a single binary 
> object and passes it to a queue. The consumer picks the cache up from the 
> queue, constructs a different binary object from it, adds that to a global 
> partitioned cache, and destroys the local cache.
> This design causes a significant leak: the whole heap is exhausted within 
> minutes (no matter whether it is 4G or 24G).
>  
>  





[jira] [Updated] (IGNITE-8188) Batching operations should perform check for ZooKeeper request max size

2018-04-09 Thread Sergey Chugunov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-8188:

Description: 
As the ZooKeeper documentation 
[says|https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable)],
 the batching *multi* operation has a limit on the size of a single request.

ZookeeperClient batching methods *createAll* and *deleteAll* should check this 
limit and fall back to executing operations one by one.

  was:
As ZooKeeper documentation 
[says|https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable)]
 batching *multi* operation has a limit for size of a single request.

ZookeeperClient batching methods *createAll* and *deleteAll* should check this 
limit and split to multiple requests if necessary.


> Batching operations should perform check for ZooKeeper request max size
> ---
>
> Key: IGNITE-8188
> URL: https://issues.apache.org/jira/browse/IGNITE-8188
> Project: Ignite
>  Issue Type: Improvement
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Priority: Major
>
> As the ZooKeeper documentation 
> [says|https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable)],
>  the batching *multi* operation has a limit on the size of a single request.
> ZookeeperClient batching methods *createAll* and *deleteAll* should check 
> this limit and fall back to executing operations one by one.
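The size check before attempting a batched multi can be sketched like this. The 1 MB constant corresponds to ZooKeeper's default `jute.maxbuffer`; the per-op overhead figure and the byte-array representation of operations are simplifying assumptions, not the real client's serialization.

```java
import java.util.List;

// Hypothetical sketch: estimate the serialized size of a multi request
// and decide whether it fits under the ZooKeeper request size limit.
class MultiSizeCheck {
    static final int MAX_REQUEST_BYTES = 1024 * 1024; // default jute.maxbuffer

    static boolean fitsInSingleRequest(List<byte[]> ops, int perOpOverhead) {
        long total = 0;
        for (byte[] op : ops)
            total += op.length + perOpOverhead; // payload plus framing overhead
        return total <= MAX_REQUEST_BYTES;
    }
}
```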





[jira] [Commented] (IGNITE-6827) Configurable rollback for long running transactions before partition exchange

2018-04-09 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430719#comment-16430719
 ] 

Alexei Scherbakov commented on IGNITE-6827:
---

[~avinogradov]

Thanks for comment.

1. This is how Upsource works; I have no idea how to fix it. Use [1] to review 
the changes I actually made.

2. The "Label" field was added to the PR before opening the discussion, as part 
of another related task [2]. There is currently no point in removing it from 
the PR; moreover, it will make it easier to finish the .NET part (no commits 
spread across several branches).

[1] https://reviews.ignite.apache.org/ignite/branch/PR%203624
[2] https://issues.apache.org/jira/browse/IGNITE-7910

> Configurable rollback for long running transactions before partition exchange
> -
>
> Key: IGNITE-6827
> URL: https://issues.apache.org/jira/browse/IGNITE-6827
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.5
>
>
> Currently, long-running or buggy user transactions force the partition 
> exchange to block waiting for 
> org.apache.ignite.internal.processors.cache.GridCacheSharedContext#partitionReleaseFuture,
>  preventing any grid progress.
> I suggest introducing a new global flag in TransactionConfiguration, such as 
> {{txRollbackTimeoutOnTopologyChange}},
> which would roll back exchange-blocking transactions after the timeout.
> We still need to think about what to do with other topology-locking 
> activities.
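The timeout decision itself can be sketched as a simple deadline check. The class and method are hypothetical (Ignite's transaction manager is far more involved); the sketch only shows when a transaction would be selected for rollback rather than allowed to block the exchange, with a zero timeout meaning the feature is disabled.

```java
// Hypothetical sketch: decide whether a transaction should be rolled
// back instead of blocking the partition exchange.
class TxRollbackOnExchange {
    // true when the tx started more than timeoutMs before the exchange
    // began; timeoutMs <= 0 disables the rollback entirely.
    static boolean shouldRollback(long txStartMs, long exchangeStartMs, long timeoutMs) {
        return timeoutMs > 0 && exchangeStartMs - txStartMs > timeoutMs;
    }
}
```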





[jira] [Created] (IGNITE-8188) Batching operations should perform check for ZooKeeper request max size

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8188:
---

 Summary: Batching operations should perform check for ZooKeeper 
request max size
 Key: IGNITE-8188
 URL: https://issues.apache.org/jira/browse/IGNITE-8188
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov


As the ZooKeeper documentation 
[says|https://zookeeper.apache.org/doc/r3.4.3/api/org/apache/zookeeper/ZooKeeper.html#multi(java.lang.Iterable)],
 the batching *multi* operation has a limit on the size of a single request.

ZookeeperClient batching methods *createAll* and *deleteAll* should check this 
limit and split into multiple requests if necessary.





[jira] [Commented] (IGNITE-7743) JDBC driver allows to connect to non existent schema

2018-04-09 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430651#comment-16430651
 ] 

Taras Ledkov commented on IGNITE-7743:
--

[~pkouznet], the patch is OK with me.

> JDBC driver allows to connect to non existent schema
> 
>
> Key: IGNITE-7743
> URL: https://issues.apache.org/jira/browse/IGNITE-7743
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, sql
>Affects Versions: 2.3
>Reporter: Valentin Kulichenko
>Assignee: Pavel Kuznetsov
>Priority: Major
>  Labels: usability
> Fix For: 2.5
>
>
> Currently, if one creates a cache without DDL (via {{QueryEntity}} or 
> {{indexedTypes}}), a separate schema for this cache is created. The schema 
> name is case-sensitive, so to connect to it with the JDBC driver, the name 
> must be provided in quotes. Here is how it looks in SqlLine:
> {noformat}
> ./bin/sqlline.sh -u jdbc:ignite:thin://127.0.0.1/\"CacheQueryExamplePersons\"
> {noformat}
> However, if the name is provided without quotes, the driver still connects, 
> but then fails with a very unclear exception when a query is executed:
> {noformat}
> ./bin/sqlline.sh -u 
> jdbc:ignite:thin://127.0.0.1/CacheQueryExamplePersons{noformat}
> This is a huge usability issue. We should disallow connections to a schema 
> that does not exist and throw an exception in this case. The exception should 
> explain how to connect properly.
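The quoting rule above can be captured in a tiny helper. The class is hypothetical and only builds the URL string shown in the ticket; the URL scheme `jdbc:ignite:thin://` is the thin driver's documented prefix.

```java
// Hypothetical sketch: build a thin-driver URL with the schema name
// quoted, as required for case-sensitive schemas created via
// QueryEntity/indexedTypes.
class SchemaUrl {
    static String thinUrl(String host, String schema) {
        // Quotes preserve the schema's case; without them the name is
        // upper-cased by SQL identifier rules and the schema is not found.
        return "jdbc:ignite:thin://" + host + "/\"" + schema + "\"";
    }

    public static void main(String[] args) {
        System.out.println(thinUrl("127.0.0.1", "CacheQueryExamplePersons"));
        // jdbc:ignite:thin://127.0.0.1/"CacheQueryExamplePersons"
    }
}
```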





[jira] [Created] (IGNITE-8187) Additional parameters like ACL lists should be taken into account when calculating request overhead

2018-04-09 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-8187:
---

 Summary: Additional parameters like ACL lists should be taken into 
account when calculating request overhead
 Key: IGNITE-8187
 URL: https://issues.apache.org/jira/browse/IGNITE-8187
 Project: Ignite
  Issue Type: Improvement
  Components: zookeeper
Reporter: Sergey Chugunov


Requests to ZooKeeper have a size limit, so all operations of the ZK-based 
discovery calculate the request overhead before sending anything to ZooKeeper.

For now only the path length is used in the calculation; other factors, such 
as configured ACL lists, may break this logic.
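An overhead estimate that accounts for ACLs as well as the path could look like the sketch below. The sizing model (strings for ACL entries, a fixed per-entry framing constant) is a simplifying assumption, not the real discovery code or ZooKeeper's jute serialization.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical sketch: estimate per-znode request overhead from the
// path plus the configured ACL entries, instead of the path alone.
class RequestOverhead {
    static int estimate(String path, List<String> aclEntries) {
        int size = path.getBytes(StandardCharsets.UTF_8).length;
        for (String acl : aclEntries)
            size += acl.getBytes(StandardCharsets.UTF_8).length + 8; // id + perms framing
        return size;
    }
}
```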





[jira] [Created] (IGNITE-8186) SQL: Create test base to cover sql by features with flexible configuration

2018-04-09 Thread Pavel Kuznetsov (JIRA)
Pavel Kuznetsov created IGNITE-8186:
---

 Summary: SQL: Create test base to cover sql by features with 
flexible configuration
 Key: IGNITE-8186
 URL: https://issues.apache.org/jira/browse/IGNITE-8186
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Pavel Kuznetsov
Assignee: Pavel Kuznetsov


We need to cover SQL feature by feature, and we need to be able to run the 
same test cases with different configurations.

Configurations currently in scope:
1) In-memory/persistence
2) Distributed joins: on/off 
3) Cache mode: PARTITIONED/REPLICATED

Features in scope:
1) Simple SELECT
2) JOIN (distributed and local)
3) GROUP BY

Data model:
Employee (1000)
Department (50-100)

The distributed-joins setting affects the affinity key of the data model.

The test cluster should contain 1 client and 2 server nodes.





[jira] [Updated] (IGNITE-7909) Java code examples are needed for Spark Data Frames.

2018-04-09 Thread Akmal Chaudhri (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akmal Chaudhri updated IGNITE-7909:
---
Fix Version/s: 2.5

> Java code examples are needed for Spark Data Frames.
> 
>
> Key: IGNITE-7909
> URL: https://issues.apache.org/jira/browse/IGNITE-7909
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.5
>Reporter: Akmal Chaudhri
>Assignee: Akmal Chaudhri
>Priority: Major
> Fix For: 2.5
>
> Attachments: JavaIgniteCatalogExample.java, 
> JavaIgniteDataFrameExample.java, JavaIgniteDataFrameWriteExample.java
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Existing Scala code examples have been developed to illustrate Ignite support 
> for Spark Data Frames. But Java code examples are also required. Some Java 
> code has already been developed but requires further testing.





[jira] [Updated] (IGNITE-7654) Geospatial queries does not work for JDBC/ODBC

2018-04-09 Thread Ivan Daschinskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinskiy updated IGNITE-7654:
-
Fix Version/s: (was: 2.5)

> Geospatial queries does not work for JDBC/ODBC
> --
>
> Key: IGNITE-7654
> URL: https://issues.apache.org/jira/browse/IGNITE-7654
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc, odbc, sql, thin client
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Ivan Daschinskiy
>Priority: Major
>
> Geospatial queries do not work for JDBC/ODBC.
> I can create a table with GEOMETRY from sqlline, like this:
> {code:java}
>  CREATE TABLE GEO_TABLE(GID INTEGER PRIMARY KEY, THE_GEOM GEOMETRY);{code}
>  I can add rows:
> {code:java}
>  INSERT INTO GEO_TABLE(GID, THE_GEOM) VALUES (2, 'POINT(500 505)');{code}
> but there's no way to select GEOMETRY objects:
> {code:java}
> SELECT THE_GEOM FROM GEO_TABLE;{code}
>  sqlline throws the following exception: 
> {noformat}
> Error: class org.apache.ignite.binary.BinaryObjectException: Custom objects 
> are not supported (state=5,code=0)
> java.sql.SQLException: class org.apache.ignite.binary.BinaryObjectException: 
> Custom objects are not supported
> at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
> at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
> at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265){noformat}
>  





[jira] [Assigned] (IGNITE-7909) Java code examples are needed for Spark Data Frames.

2018-04-09 Thread Akmal Chaudhri (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akmal Chaudhri reassigned IGNITE-7909:
--

Assignee: Akmal Chaudhri

> Java code examples are needed for Spark Data Frames.
> 
>
> Key: IGNITE-7909
> URL: https://issues.apache.org/jira/browse/IGNITE-7909
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.5
>Reporter: Akmal Chaudhri
>Assignee: Akmal Chaudhri
>Priority: Major
> Attachments: JavaIgniteCatalogExample.java, 
> JavaIgniteDataFrameExample.java, JavaIgniteDataFrameWriteExample.java
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Existing Scala code examples have been developed to illustrate Ignite support 
> for Spark Data Frames. But Java code examples are also required. Some Java 
> code has already been developed but requires further testing.





[jira] [Updated] (IGNITE-8120) Improve test coverage of rebalance failing

2018-04-09 Thread Ivan Daschinskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinskiy updated IGNITE-8120:
-
Fix Version/s: (was: 2.5)
   2.6

> Improve test coverage of rebalance failing
> --
>
> Key: IGNITE-8120
> URL: https://issues.apache.org/jira/browse/IGNITE-8120
> Project: Ignite
>  Issue Type: Test
>  Components: general
>Affects Versions: 2.4
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Minor
>  Labels: test
> Fix For: 2.6
>
>
> Need to cover the situation where some archived WAL segments that are not 
> reserved by IgniteWriteAheadLogManager are deleted during rebalancing, or 
> were deleted before it. However, rebalancing from WAL is currently broken; 
> once the fix for [IGNITE-8116|https://issues.apache.org/jira/browse/IGNITE-8116] 
> is available, this test will be implemented.





[jira] [Commented] (IGNITE-6803) UriDeploymentSpi affects execution of other tasks, including Ignite internals

2018-04-09 Thread Ilya Kasnacheev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430629#comment-16430629
 ] 

Ilya Kasnacheev commented on IGNITE-6803:
-

[~agoncharuk] [~zstan] Maybe you can merge something and forget about this 
problem?

> UriDeploymentSpi affects execution of other tasks, including Ignite internals
> -
>
> Key: IGNITE-6803
> URL: https://issues.apache.org/jira/browse/IGNITE-6803
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
> Attachments: tc.png
>
>
> From the maillist:
> http://apache-ignite-users.70518.x6.nabble.com/Code-deployment-throught-UriDeploumentSpi-tt17807.html
> In our project we need to deploy custom compute tasks into the cluster 
> without cluster restart and P2P class loading.  
> I tried to use org.apache.ignite.spi.deployment.uri.UriDeploymentSpi for that 
> purpose, but I have a problem.
> I have a simple Ignite service and an Ignite compute task which uses it 
> through @ServiceResource.
> This ComputeTask is located in a .gar file which was deployed via 
> UriDeploymentSpi.
> If I have a service implementation on each node (node singleton service), it 
> works great. 
> But if I deploy the service as a cluster singleton, the task executes 
> correctly only on the node hosting the service. 
> On other nodes @ServiceResource returns a ServiceProxy that throws an 
> exception on remote service method invocation (a lambda with the service 
> call cannot be deployed):
> {code}
> SEVERE: Failed to execute job 
> [jobId=68a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86, 
> ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=task-one, 
> dep=GridDeployment [ts=1509275650885, depMode=SHARED, 
> clsLdr=GridUriDeploymentClassLoader 
> [urls=[file:/C:/IdeaProjects/dmp_code_deployment/test/out/deployment/gg.uri.deployment.tmp/428ec712-e6d0-4eab-97f9-ce58d7b3e0f5/dirzip_task-one6814855127293591501.gar/]],
>  clsLdrId=7eb15d76f51-428ec712-e6d0-4eab-97f9-ce58d7b3e0f5, userVer=0, 
> loc=true, sampleClsName=com.gridfore.tfedyanin.deploy.Task1, 
> pendingUndeploy=false, undeployed=false, usage=1], 
> taskClsName=com.gridfore.tfedyanin.deploy.Task1, 
> sesId=38a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86, 
> startTime=1509275650601, endTime=9223372036854775807, 
> taskNodeId=7919c34c-9a48-4068-bcd6-70dad5595e86, 
> clsLdr=GridUriDeploymentClassLoader 
> [urls=[file:/C:/IdeaProjects/dmp_code_deployment/test/out/deployment/gg.uri.deployment.tmp/428ec712-e6d0-4eab-97f9-ce58d7b3e0f5/dirzip_task-one6814855127293591501.gar/]],
>  closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, 
> fullSup=false, internal=false, subjId=7919c34c-9a48-4068-bcd6-70dad5595e86, 
> mapFut=IgniteFuture [orig=GridFutureAdapter [ignoreInterrupts=false, 
> state=INIT, res=null, hash=1254296516]], execName=null], 
> jobId=68a96d76f51-7919c34c-9a48-4068-bcd6-70dad5595e86]]
> class org.apache.ignite.IgniteDeploymentException: Failed to auto-deploy task 
> (was task (re|un)deployed?): class 
> org.apache.ignite.internal.processors.service.GridServiceProcessor$ServiceTopologyCallable
> {code}
> The problem unfolds as follows:
> - Ignite has to determine, by name, which node has deployed the service.
> - Ignite has to send a ServiceTopologyCallable task.
> - Ignite tries to deploy the ServiceTopologyCallable task using 
> UriDeploymentSpi.
> - UriDeploymentSpi obviously doesn't have it, but it also tries to fall back 
> to "CLASS" loading from the local ClassLoader
> - which fails, because it is told that ServiceTopologyCallable comes from 
> its own classloader and not from the local one!
> So I'm at a loss as to where this should be fixed properly. It is also 
> unfortunate that we run Ignite-internal tasks through the whole deployment 
> pipeline, but there obviously are non-internal local tasks which might be 
> affected by the same problem.





[jira] [Commented] (IGNITE-7366) Affinity assignment exception in service processor during multiple nodes join

2018-04-09 Thread Ilya Kasnacheev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430611#comment-16430611
 ] 

Ilya Kasnacheev commented on IGNITE-7366:
-

[~agoncharuk] Could somebody from your team take a look? My understanding of 
the exchange process is not sufficient here.

> Affinity assignment exception in service processor during multiple nodes join
> -
>
> Key: IGNITE-7366
> URL: https://issues.apache.org/jira/browse/IGNITE-7366
> Project: Ignite
>  Issue Type: Bug
>  Components: compute
>Affects Versions: 2.3
>Reporter: Ilya Kasnacheev
>Assignee: Pavel Pereslegin
>Priority: Major
>
> When two nodes which are deploying services join at the same time, an 
> exception is observed:
> {code}
> SEVERE: Error when executing service: null
> java.lang.IllegalStateException: Getting affinity for topology version 
> earlier than affinity is calculated [locNode=TcpDiscoveryNode 
> [id=245d4bec-0384-4808-b66d-d2340930207f..., discPort=37500, order=2, 
> intOrder=2, lastExchangeTime=1515394551283, loc=true, 
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], grp=ignite-sys-cache, 
> topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], 
> head=AffinityTopologyVersion [topVer=4, minorTopVer=0], 
> history=[AffinityTopologyVersion [topVer=2, minorTopVer=0], 
> AffinityTopologyVersion [topVer=2, minorTopVer=1], AffinityTopologyVersion 
> [topVer=4, minorTopVer=0]]]
> at 
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:514)
> at 
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:419)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodesByPartition(GridCacheAffinityManager.java:220)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByPartition(GridCacheAffinityManager.java:256)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:247)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:271)
> at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1771)
> at 
> org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:1958)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {code}
> This may be caused by exchange merges. There are 4 nodes joining topology. 
> When nodes 3 and 4 join at the same time, exchanges for [3, 0] and [4, 0] are 
> merged. But, TopologyListener in service processor is notified about topVer 
> [3, 0], for which there is no affinity because exchange has already moved 
> forward to [4, 0].
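The merge scenario above can be modeled in a few lines. This is an illustrative sketch only (the map, names, and fallback strategy are hypothetical, not Ignite internals): when exchanges for [3, 0] and [4, 0] merge, only versions 2 and 4 exist in the affinity history, so a listener asking for exactly version 3 fails, while resolving to the next version present in history would still find the merged result.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class AffinityHistorySketch {
    // Hypothetical model: affinity assignments keyed by major topology version.
    static final NavigableMap<Integer, String> history = new TreeMap<>();

    /** Resolve affinity for topVer, falling forward to the next version
     *  actually present in history (i.e. the merged exchange result). */
    static String affinityFor(int topVer) {
        Map.Entry<Integer, String> e = history.ceilingEntry(topVer);
        if (e == null)
            throw new IllegalStateException(
                "Getting affinity for topology version earlier than affinity is calculated");
        return e.getValue();
    }

    public static void main(String[] args) {
        // Exchanges for [3, 0] and [4, 0] were merged: only 2 and 4 exist.
        history.put(2, "assignment@2");
        history.put(4, "assignment@4");
        // A listener notified about topVer 3 can still resolve the merged result:
        System.out.println(affinityFor(3)); // assignment@4
    }
}
```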



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (IGNITE-8155) Warning in log on opening of cluster tab in advanced mode

2018-04-09 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov closed IGNITE-8155.


> Warning in log on opening of cluster tab in advanced mode
> -
>
> Key: IGNITE-8155
> URL: https://issues.apache.org/jira/browse/IGNITE-8155
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vasiliy Sisko
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.5
>
>
> # Create new cluster
> # Save cluster
> # Open cluster tab in advanced mode
> The following message is shown in the browser console:
> {code}
> The specified value "value" is not a valid number. The value must match to 
> the following regular expression: -?(\d+|\d+\.\d+|\.\d+)([eE][-+]?\d+)?
> {code}
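The warning means a non-numeric string was written into a numeric input. A minimal check against the exact regular expression quoted in the warning (the class and method names below are illustrative):

```java
import java.util.regex.Pattern;

public class NumberInputCheck {
    // The regular expression quoted in the browser warning above.
    static final Pattern NUM =
        Pattern.compile("-?(\\d+|\\d+\\.\\d+|\\.\\d+)([eE][-+]?\\d+)?");

    static boolean isValidNumber(String s) {
        return NUM.matcher(s).matches();
    }

    public static void main(String[] args) {
        // The literal string "value" is not a valid number, hence the warning.
        System.out.println(isValidNumber("value"));  // false
        System.out.println(isValidNumber("-1.5e3")); // true
    }
}
```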



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8155) Warning in log on opening of cluster tab in advanced mode

2018-04-09 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8155:
-
Fix Version/s: 2.5

> Warning in log on opening of cluster tab in advanced mode
> -
>
> Key: IGNITE-8155
> URL: https://issues.apache.org/jira/browse/IGNITE-8155
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vasiliy Sisko
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.5
>
>
> # Create new cluster
> # Save cluster
> # Open cluster tab in advanced mode
> The following message is shown in the browser console:
> {code}
> The specified value "value" is not a valid number. The value must match to 
> the following regular expression: -?(\d+|\d+\.\d+|\.\d+)([eE][-+]?\d+)?
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7330) When client connects during cluster activation process it hangs on obtaining cache proxy

2018-04-09 Thread Sergey Chugunov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-7330:

Fix Version/s: (was: 2.5)
   2.6

> When client connects during cluster activation process it hangs on obtaining 
> cache proxy
> 
>
> Key: IGNITE-7330
> URL: https://issues.apache.org/jira/browse/IGNITE-7330
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Critical
>  Labels: IEP-4
> Fix For: 2.6
>
>
> The test below reproduces the issue:
> {noformat}
> public void testClientJoinWhenActivationInProgress() throws Exception {
> Ignite srv = startGrids(5);
> srv.active(true);
> srv.createCaches(Arrays.asList(cacheConfigurations1()));
> Map<Integer, Integer> cacheData = new LinkedHashMap<>();
> for (int i = 1; i <= 100; i++) {
> for (CacheConfiguration ccfg : cacheConfigurations1()) {
> srv.cache(ccfg.getName()).put(-i, i);
> cacheData.put(-i, i);
> }
> }
> stopAllGrids();
> srv = startGrids(5);
> final CountDownLatch clientStartLatch = new CountDownLatch(1);
> IgniteInternalFuture clStartFut = GridTestUtils.runAsync(new 
> Runnable() {
> @Override public void run() {
> try {
> clientStartLatch.await();
> Thread.sleep(10);
> client = true;
> Ignite cl = startGrid("client0");
> IgniteCache<Integer, Integer> atomicCache = 
> cl.cache(CACHE_NAME_PREFIX + '0');
> IgniteCache<Integer, Integer> txCache = 
> cl.cache(CACHE_NAME_PREFIX + '1');
> assertEquals(100, atomicCache.size());
> assertEquals(100, txCache.size());
> }
> catch (Exception e) {
> log.error("Error occurred", e);
> }
> }
> }, "client-starter-thread");
> clientStartLatch.countDown();
> srv.active(true);
> clStartFut.get();
> }
> {noformat}
> Expected behavior: test finishes successfully.
> Actual behavior: test hangs waiting for the client start future to complete, 
> while the "client-starter-thread" hangs on obtaining a reference to the 
> first cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5302) Empty LOST partition may be used as OWNING after resetting lost partitions

2018-04-09 Thread Sergey Chugunov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-5302:

Fix Version/s: (was: 2.5)
   2.6

> Empty LOST partition may be used as OWNING after resetting lost partitions
> --
>
> Key: IGNITE-5302
> URL: https://issues.apache.org/jira/browse/IGNITE-5302
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Blocker
>  Labels: MakeTeamcityGreenAgain, Muted_test, test-fail
> Fix For: 2.6
>
>
> h2. Notes
> Test *testPartitionLossAndRecover* reproducing the issue can be found in 
> ignite-5267 branch with PDS functionality.
> h2. Steps to reproduce
> # Four nodes are started, some key is added to partitioned cache
> # Primary and backup nodes for the key are stopped, key's partition is 
> declared LOST on remaining nodes
> # Primary and backup nodes are started again, cache's lost partitions are 
> reset
> # Key is requested from cache
> h2. Expected behavior
> Correct value is returned from primary for this partition
> h2. Actual behavior
> Request for value is sent to node where partition is empty (not to primary 
> node), null is returned
> h2. Latest findings
> # The main problem with the scenario is that request for key gets mapped not 
> only to P/B nodes with real value but also to the node where that partition 
> existed only in LOST state after P/B shutdown on step #2
> # It was found that on step #3 after primary and backup are joined partition 
> counter is increased for empty partition in LOST state which looks wrong



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-6439) IgnitePersistentStoreSchemaLoadTest is broken

2018-04-09 Thread Denis Garus (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430587#comment-16430587
 ] 

Denis Garus commented on IGNITE-6439:
-

[~dpavlov], I propose to close this ticket because the issue reproduces neither 
on a local machine nor on TC

https://ci.ignite.apache.org/viewLog.html?buildId=1185886=IgniteTests24Java8_RunAll=testsInfo

> IgnitePersistentStoreSchemaLoadTest is broken
> -
>
> Key: IGNITE-6439
> URL: https://issues.apache.org/jira/browse/IGNITE-6439
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Govorukhin
>Assignee: Denis Garus
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> After start nodes, cluster must be activated explicit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7809) Ignite PDS 2 & PDS 2 Direct IO: stable failures of IgniteWalFlushDefaultSelfTest

2018-04-09 Thread Ilya Lantukh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430586#comment-16430586
 ] 

Ilya Lantukh commented on IGNITE-7809:
--

[~agoncharuk],

I've finalized the patch, but it doesn't stop these tests from failing.

> Ignite PDS 2 & PDS 2 Direct IO: stable failures of 
> IgniteWalFlushDefaultSelfTest
> 
>
> Key: IGNITE-7809
> URL: https://issues.apache.org/jira/browse/IGNITE-7809
> Project: Ignite
>  Issue Type: Task
>  Components: persistence
>Affects Versions: 2.4
>Reporter: Dmitriy Pavlov
>Assignee: Ilya Lantukh
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> Probably after last WAL default changes 'IGNITE-7594 Fixed performance drop 
> after WAL optimization for FSYNC' 2 tests in 2 build configs began to fail
>Ignite PDS 2 (Direct IO) [ tests 2 ]  
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailAfterStart (fail rate 13,0%) 
>  IgnitePdsNativeIoTestSuite2: 
> IgniteWalFlushDefaultSelfTest.testFailWhileStart (fail rate 13,0%) 
>Ignite PDS 2 [ tests 2 ]  
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailAfterStart 
> (fail rate 8,4%) 
>  IgnitePdsTestSuite2: IgniteWalFlushDefaultSelfTest.testFailWhileStart 
> (fail rate 8,4%) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7624) Cluster Activation from Client Node hangs up in specific configuration

2018-04-09 Thread Sergey Kosarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-7624:
---
Fix Version/s: (was: 2.5)

> Cluster Activation from Client Node hangs up in specific configuration
> --
>
> Key: IGNITE-7624
> URL: https://issues.apache.org/jira/browse/IGNITE-7624
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.3
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: testStartInactiveAndActivateFromClient.patch
>
>
> if we start the cluster in inactive state, GridTaskProcessor is not fully 
> initialized:
> {code:java}
> @Override public void onKernalStart(boolean active) throws 
> IgniteCheckedException {
> if (!active)
> return;
> tasksMetaCache = ctx.security().enabled() && !ctx.isDaemon() ?
> ctx.cache().utilityCache() : null;
> startLatch.countDown();
> }{code}
>  
> and that startLatch is never released!
>  
> Later on, if we try to activate the cluster from a client node, an async task 
> is invoked
> (see 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessorImpl#sendComputeChangeGlobalState)
> and if ctx.security().enabled() == true then the task can't start, as it 
> waits indefinitely for that startLatch in
> org.apache.ignite.internal.processors.task.GridTaskProcessor#taskMetaCache
> {code:java}
> private IgniteInternalCache taskMetaCache() {
> assert ctx.security().enabled();
> if (tasksMetaCache == null)
> U.awaitQuiet(startLatch);
> return tasksMetaCache;
> }{code}
>  
> stacktrace of the waiting thread:
> {code:java}
> "async-runnable-runner-1@3141" prio=5 tid=0x68 nid=NA waiting
> java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at 
> org.apache.ignite.internal.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7491)
> at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.taskMetaCache(GridTaskProcessor.java:269)
> at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.saveTaskMetadata(GridTaskProcessor.java:845)
> at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:703)
> at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:448)
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor.runAsync(GridClosureProcessor.java:244)
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor.runAsync(GridClosureProcessor.java:216)
> at 
> org.apache.ignite.internal.IgniteComputeImpl.runAsync0(IgniteComputeImpl.java:704)
> at 
> org.apache.ignite.internal.IgniteComputeImpl.runAsync(IgniteComputeImpl.java:689)
> at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessorImpl.sendComputeChangeGlobalState(GridClusterStateProcessorImpl.java:837)
> at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessorImpl.changeGlobalState0(GridClusterStateProcessorImpl.java:684)
> at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessorImpl.changeGlobalState(GridClusterStateProcessorImpl.java:618)
> at 
> org.apache.ignite.internal.cluster.IgniteClusterImpl.active(IgniteClusterImpl.java:306)
> at org.apache.ignite.internal.IgniteKernal.active(IgniteKernal.java:3541)
> at 
> org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest$7.run(IgniteClusterActivateDeactivateTest.java:628)
> at org.apache.ignite.testframework.GridTestUtils$6.run(GridTestUtils.java:892)
> at 
> org.apache.ignite.testframework.GridTestUtils$9.call(GridTestUtils.java:1237)
> at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
> {code}
>  
>  
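The hang above is the classic pattern of awaiting a CountDownLatch whose countDown() is skipped on an early-return path. A minimal self-contained sketch of just that pattern (class and method names are hypothetical, mirroring the quoted onKernalStart):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class StartLatchSketch {
    static final CountDownLatch startLatch = new CountDownLatch(1);

    /** Mirrors onKernalStart(active): the early return skips countDown(). */
    static void onKernalStart(boolean active) {
        if (!active)
            return; // latch is never released on an inactive cluster
        startLatch.countDown();
    }

    public static void main(String[] args) throws InterruptedException {
        onKernalStart(false); // cluster was started inactive
        // Any later caller that awaits the latch blocks forever; here we
        // await with a timeout only to demonstrate that it never opens.
        boolean released = startLatch.await(100, TimeUnit.MILLISECONDS);
        System.out.println(released); // false
    }
}
```

The stack trace in the issue is exactly such a caller: taskMetaCache() parks in U.awaitQuiet(startLatch) with no timeout.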



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-1553) Optimize transaction prepare step when store is enabled

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-1553:
-
Fix Version/s: (was: 2.6)

> Optimize transaction prepare step when store is enabled
> ---
>
> Key: IGNITE-1553
> URL: https://issues.apache.org/jira/browse/IGNITE-1553
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: important
>
> Currently entries are enlisted in a database transaction after grid 
> transaction is in PREPARED state. We can do this in parallel in the following 
> fashion (pseudo-code):
> {code:java}
> fut = tx.prepareAsync();
> db.write(tx.writes());
> fut.get();
> try {
> db.commit();
> 
> tx.commit();
> }
> catch (Exception e) {
> tx.rollback();
> }
> {code}
> If this approach is applied, we should be able to reduce latency for 
> transactions when write-through is enabled.
>  
> store prepare works on primary nodes only



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-1553) Optimize transaction prepare step when store is enabled

2018-04-09 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430580#comment-16430580
 ] 

Alexey Goncharuk commented on IGNITE-1553:
--

[~Alexey Kuznetsov], agree, let's postpone this ticket for now. We will need 
this ability to make transactional writes to third-party databases together 
with Ignite native persistence.

> Optimize transaction prepare step when store is enabled
> ---
>
> Key: IGNITE-1553
> URL: https://issues.apache.org/jira/browse/IGNITE-1553
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: important
>
> Currently entries are enlisted in a database transaction after grid 
> transaction is in PREPARED state. We can do this in parallel in the following 
> fashion (pseudo-code):
> {code:java}
> fut = tx.prepareAsync();
> db.write(tx.writes());
> fut.get();
> try {
> db.commit();
> 
> tx.commit();
> }
> catch (Exception e) {
> tx.rollback();
> }
> {code}
> If this approach is applied, we should be able to reduce latency for 
> transactions when write-through is enabled.
>  
> store prepare works on primary nodes only



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-1553) Optimize transaction prepare step when store is enabled

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-1553:
-
Fix Version/s: (was: 2.5)
   2.6

> Optimize transaction prepare step when store is enabled
> ---
>
> Key: IGNITE-1553
> URL: https://issues.apache.org/jira/browse/IGNITE-1553
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: important
> Fix For: 2.6
>
>
> Currently entries are enlisted in a database transaction after grid 
> transaction is in PREPARED state. We can do this in parallel in the following 
> fashion (pseudo-code):
> {code:java}
> fut = tx.prepareAsync();
> db.write(tx.writes());
> fut.get();
> try {
> db.commit();
> 
> tx.commit();
> }
> catch (Exception e) {
> tx.rollback();
> }
> {code}
> If this approach is applied, we should be able to reduce latency for 
> transactions when write-through is enabled.
>  
> store prepare works on primary nodes only



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430576#comment-16430576
 ] 

Alexey Goncharuk commented on IGNITE-3464:
--

[~Jokser] Can you please take a look at the provided PR, given your expertise 
in exchange counters validation? 

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, partition release future is 
> completed. Both finish request and exchange message are sent to the node B.
>  - Node B processes the exchange message first and completes exchange.
>  - Node C starts rebalancing from node B and acquires stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes finish request and commits the transaction.
> As a result, node B and C have different values stored in the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5078) Partition lost event is fired only on coordinator when partition loss policy is IGNORE

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5078:
---
Fix Version/s: (was: 2.5)

> Partition lost event is fired only on coordinator when partition loss policy 
> is IGNORE
> --
>
> Key: IGNITE-5078
> URL: https://issues.apache.org/jira/browse/IGNITE-5078
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Assignee: Dmitriy Pavlov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, test-fail
>
> Problem is demonstrated in test 
> org.apache.ignite.internal.processors.cache.distributed.IgniteCachePartitionLossPolicySelfTest#testIgnore()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5553) Ignite PDS 2: IgnitePersistentStoreDataStructuresTest testSet assertion error

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5553:
---
Fix Version/s: (was: 2.5)
   2.6

> Ignite PDS 2: IgnitePersistentStoreDataStructuresTest testSet assertion error
> -
>
> Key: IGNITE-5553
> URL: https://issues.apache.org/jira/browse/IGNITE-5553
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures, persistence
>Affects Versions: 2.1
>Reporter: Dmitriy Pavlov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test, test-fail
> Fix For: 2.6
>
>
> h2. Notes
> When IgniteSet is restored from persistence, size of set is always 0, [link 
> to test 
> history|http://ci.ignite.apache.org/project.html?projectId=Ignite20Tests=-7043871603266099589=testDetails].
> h2. Detailed description
> Unlike *IgniteQueue* which uses separate cache key to store its size 
> *IgniteSet* stores it in a field of some class.
> Test from the link above shows very clearly that after restoring memory state 
> from PDS all set values are restored correctly but size is lost.
> h2. Proposed solution
> One possible solution might be to do the same thing as *IgniteQueue* does: 
> the size of *IgniteSet* must be stored in a cache instead of in volatile 
> in-memory fields of random classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-3464) Possible race between partition exchange and prepare/finish requests

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-3464:
-
Fix Version/s: (was: 2.5)
   2.6

> Possible race between partition exchange and prepare/finish requests
> 
>
> Key: IGNITE-3464
> URL: https://issues.apache.org/jira/browse/IGNITE-3464
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexey Goncharuk
>Assignee: Vitaliy Biryukov
>Priority: Major
> Fix For: 2.6
>
>
> Consider the following scenario:
> Two nodes A (coordinator), B. Node C is joining the grid. Current topology 
> version is 2.
>  - Node A starts a transaction on version 2 and sends a prepare request to 
> node B
>  - Discovery event happens on node A. Exchange future is created, captures 
> the transaction and waits for this transaction to finish.
>  - Discovery event happens on node B. Exchange future is created, but since 
> there is no transaction on this node (the request has not been processed 
> yet), partition release future is completed and exchange waits for an ACK 
> from coordinator.
>  - Prepare request is processed on node B
>  - Node A commits the transaction locally, partition release future is 
> completed. Both finish request and exchange message are sent to the node B.
>  - Node B processes the exchange message first and completes exchange.
>  - Node C starts rebalancing from node B and acquires stale value of the key 
> which was supposed to be updated in the transaction.
>  - Node B processes finish request and commits the transaction.
> As a result, node B and C have different values stored in the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5555) Ignite PDS 1: JVM crash on teamcity (Rare)

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-:
---
Fix Version/s: (was: 2.5)
   2.6

> Ignite PDS 1: JVM crash on teamcity (Rare)
> --
>
> Key: IGNITE-
> URL: https://issues.apache.org/jira/browse/IGNITE-
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain, test-fail
> Fix For: 2.6
>
> Attachments: crash_report, hs_err_pid7100.log.txt, thread_dump
>
>
> Most recent crashes
> https://ci.ignite.apache.org/viewLog.html?buildId=1095007=IgniteTests24Java8_IgnitePds1=buildResultsDiv
> {noformat}
>Ignite PDS 1 [ tests 0 JVM CRASH ] 
>  BPlusTreeReuseListPageMemoryImplTest.testEmptyCursors (last started) 
> {noformat}
> https://ci.ignite.apache.org/viewLog.html?buildId=1086130=buildResultsDiv=IgniteTests24Java8_IgnitePds1
> {noformat}
>Ignite PDS 1 [ tests 0 JVM CRASH ] 
>  BPlusTreeReuseListPageMemoryImplTest.testEmptyCursors (last started) 
> {noformat}
> (older failure
> http://ci.ignite.apache.org/viewLog.html?buildId=675694=buildResultsDiv=Ignite20Tests_IgnitePds1#)
> Stacktrace indicates failure was in ignite code related to B+tree
> {noformat}J 34156 C2 
> org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.readLockPage(JLorg/apache/ignite/internal/pagemem/FullPageId;ZZ)J
>  (88 bytes) @ 0x7f98cfc24a5a [0x7f98cfc24540+0x51a]
> J 34634 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(JJJLorg/apache/ignite/internal/processors/cache/persistence/tree/io/BPlusIO;Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Get;I)Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Result;
>  (380 bytes) @ 0x7f98d32dd524 [0x7f98d32dd100+0x424]
> J 34633 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(Lorg/apache/ignite/internal/pagemem/PageMemory;IJJLorg/apache/ignite/internal/processors/cache/persistence/tree/util/PageLockListener;Lorg/apache/ignite/internal/processors/cache/persistence/tree/util/PageHandler;Ljava/lang/Object;ILjava/lang/Object;)Ljava/lang/Object;
>  (81 bytes) @ 0x7f98d2091c94 [0x7f98d2091a40+0x254]
> J 34888 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Invoke;JJJI)Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Result;
>  (561 bytes) @ 0x7f98d2ca146c [0x7f98d2ca1180+0x2ec]
> J 34888 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Invoke;JJJI)Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Result;
>  (561 bytes) @ 0x7f98d2ca17f8 [0x7f98d2ca1180+0x678]
> J 34888 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Invoke;JJJI)Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Result;
>  (561 bytes) @ 0x7f98d2ca17f8 [0x7f98d2ca1180+0x678]
> J 34888 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Invoke;JJJI)Lorg/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree$Result;
>  (561 bytes) @ 0x7f98d2ca17f8 [0x7f98d2ca1180+0x678]
> J 35053 C2 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(Ljava/lang/Object;Ljava/lang/Object;Lorg/apache/ignite/internal/util/Igni
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5759) IgniteCache 6 suite timed out by GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5759:
---
Description: 
Test history:
https://ci.ignite.apache.org/project.html?projectId=IgniteTests20=-7061194995222816963=testDetails

There is no 'Test has been timed out' message in logs.
Last 'Starting test:' message was 
GridCachePartitionEvictionDuringReadThroughSelfTest#testPartitionRent

The latest exception from the working test was as follows:
{noformat}
[23:19:11]W: [org.apache.ignite:ignite-core] [2017-07-14 
20:19:11,392][ERROR][tcp-comm-worker-#8980%distributed.GridCachePartitionEvictionDuringReadThroughSelfTest4%][TcpCommunicationSpi]
 TcpCommunicationSpi failed to establish connection to node, node will be 
dropped from cluster [rmtNode=TcpDiscoveryNode 
[id=a93fce57-6b2d-4947-8c23-8a677b93, addrs=[127.0.0.1], 
sockAddrs=[/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
lastExchangeTime=1500063443391, loc=false, ver=2.1.0#19700101-sha1:, 
isClient=false]]
[23:19:11]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node 
still alive?). Make sure that each ComputeTask and cache Transaction has a 
timeout set in order to prevent parties from waiting forever in case of network 
issues [nodeId=a93fce57-6b2d-4947-8c23-8a677b93, addrs=[/127.0.0.1:45273]]
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3173)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2757)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2649)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.access$5900(TcpCommunicationSpi.java:245)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.processDisconnect(TcpCommunicationSpi.java:4065)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.body(TcpCommunicationSpi.java:3891)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[23:19:11]W: [org.apache.ignite:ignite-core]Suppressed: 
class org.apache.ignite.IgniteCheckedException: Failed to connect to address 
[addr=/127.0.0.1:45273, err=Connection refused]
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3178)
[23:19:11]W: [org.apache.ignite:ignite-core]... 6 
more
[23:19:11]W: [org.apache.ignite:ignite-core]Caused by: 
java.net.ConnectException: Connection refused
[23:19:11]W: [org.apache.ignite:ignite-core]at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3024)
[23:19:11]W: [org.apache.ignite:ignite-core]... 6 
more
{noformat}

and then
{noformat}
[23:19:11]W: [org.apache.ignite:ignite-core] [2017-07-14 
20:19:11,895][WARN ][main][root] Interrupting threads started so far: 5
[23:19:11] : [Step 4/5] [2017-07-14 20:19:11,895][INFO ][main][root] >>> 
Stopping test class: GridCachePartitionEvictionDuringReadThroughSelfTest <<<
[23:19:11]W: [org.apache.ignite:ignite-core] [20:19:11] (err) 
Failed to execute compound future reducer: GridCompoundFuture 
[rdc=LongSumReducer [sum=0], initFlag=1, lsnrCalls=1, done=false, 
cancelled=false, err=null, futs=[true, true]]class 
org.apache.ignite.IgniteCheckedException: null
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7246)
[23:19:11]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
[23:19:11]W: [org.apache.ignite:ignite-core]at 

[jira] [Assigned] (IGNITE-4551) Reconsider cache key/value peer class loading

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk reassigned IGNITE-4551:


Assignee: Alexey Goncharuk

> Reconsider cache key/value peer class loading
> -
>
> Key: IGNITE-4551
> URL: https://issues.apache.org/jira/browse/IGNITE-4551
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.5
>
>
> In the new cache implementation, after an entry is written to offheap, the 
> information about key/value classloaders is lost (previously, classloader ids 
> were stored in swap/offheap; see GridCacheMapEntry.swap in 'master').
> We need to decide how this should work with the new architecture (maybe a 
> single type per cache can simplify the implementation).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-4551) Reconsider cache key/value peer class loading

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk resolved IGNITE-4551.
--
Resolution: Won't Fix

I do not think this is an issue because we moved towards binary marshaller

> Reconsider cache key/value peer class loading
> -
>
> Key: IGNITE-4551
> URL: https://issues.apache.org/jira/browse/IGNITE-4551
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
>  Labels: MakeTeamcityGreenAgain, Muted_test
> Fix For: 2.5
>
>
> In the new cache implementation, after an entry is written to offheap, the 
> information about key/value classloaders is lost (previously, classloader ids 
> were stored in swap/offheap; see GridCacheMapEntry.swap in 'master').
> We need to decide how this should work with the new architecture (maybe a 
> single type per cache can simplify the implementation).





[jira] [Updated] (IGNITE-7672) Fix Javadoc in Java 8

2018-04-09 Thread Peter Ivanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Ivanov updated IGNITE-7672:
-
Fix Version/s: (was: 2.5)
   2.6

> Fix Javadoc in Java 8
> -
>
> Key: IGNITE-7672
> URL: https://issues.apache.org/jira/browse/IGNITE-7672
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.4
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
> Fix For: 2.6
>
>
> Currently, Javadoc builds on TC with the {{-Xdoclint:none}} option enabled, 
> which turns off Java 8's new Javadoc checks. Without this option, we have 
> lots of warnings and errors in the master branch.
> # Create per-module tasks (linked to this one) to establish the scope of the 
> debt.
> # Create a corresponding build on TC for testing.





[jira] [Updated] (IGNITE-5759) IgniteCache 6 suite timed out by GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5759:
---
Summary: IgniteCache 6 suite timed out by 
GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent  (was: 
IgniteCache5 suite timed out by 
GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent)

> IgniteCache 6 suite timed out by 
> GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent
> --
>
> Key: IGNITE-5759
> URL: https://issues.apache.org/jira/browse/IGNITE-5759
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain, test-fail
> Fix For: 2.6
>
> Attachments: threadDumpFromLogs.log
>
>
> http://ci.ignite.apache.org/viewLog.html?buildId=727951=Ignite20Tests_IgniteCache5
> There is no 'Test has been timed out' message in logs.
> Last 'Starting test:' message was 
> GridCachePartitionEvictionDuringReadThroughSelfTest#testPartitionRent
> The latest exception from the running test was as follows:
> {noformat}
> [23:19:11]W:   [org.apache.ignite:ignite-core] [2017-07-14 
> 20:19:11,392][ERROR][tcp-comm-worker-#8980%distributed.GridCachePartitionEvictionDuringReadThroughSelfTest4%][TcpCommunicationSpi]
>  TcpCommunicationSpi failed to establish connection to node, node will be 
> dropped from cluster [rmtNode=TcpDiscoveryNode 
> [id=a93fce57-6b2d-4947-8c23-8a677b93, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
> lastExchangeTime=1500063443391, loc=false, ver=2.1.0#19700101-sha1:, 
> isClient=false]]
> [23:19:11]W:   [org.apache.ignite:ignite-core] class 
> org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node 
> still alive?). Make sure that each ComputeTask and cache Transaction has a 
> timeout set in order to prevent parties from waiting forever in case of 
> network issues [nodeId=a93fce57-6b2d-4947-8c23-8a677b93, 
> addrs=[/127.0.0.1:45273]]
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3173)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2757)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2649)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.access$5900(TcpCommunicationSpi.java:245)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.processDisconnect(TcpCommunicationSpi.java:4065)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.body(TcpCommunicationSpi.java:3891)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [23:19:11]W:   [org.apache.ignite:ignite-core]Suppressed: 
> class org.apache.ignite.IgniteCheckedException: Failed to connect to address 
> [addr=/127.0.0.1:45273, err=Connection refused]
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3178)
> [23:19:11]W:   [org.apache.ignite:ignite-core]... 6 
> more
> [23:19:11]W:   [org.apache.ignite:ignite-core]Caused by: 
> java.net.ConnectException: Connection refused
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3024)
> [23:19:11]W:   [org.apache.ignite:ignite-core]... 6 
> more
> {noformat}
> and then
> {noformat}
> [23:19:11]W:   [org.apache.ignite:ignite-core] [2017-07-14 
> 20:19:11,895][WARN ][main][root] Interrupting 

[jira] [Updated] (IGNITE-5579) Make sure identical binary metadata updates from the same node do not happen twice

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5579:
-
Fix Version/s: (was: 2.5)
   2.6

> Make sure identical binary metadata updates from the same node do not happen 
> twice
> --
>
> Key: IGNITE-5579
> URL: https://issues.apache.org/jira/browse/IGNITE-5579
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> It is possible that multiple user threads attempt identical binary metadata 
> update concurrently. In this case, a node will just increase contention on 
> the cache key, but once the lock is acquired, nothing will change. 
> On large topologies, this may lead to nodes waiting for the metadata update 
> for hours (!).
> We should work out a way to identify identical metadata updates and allow 
> only one thread to proceed.
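The coalescing pattern the description asks for (let one thread perform an update while identical concurrent requests wait on the same result) can be sketched as below. This is a minimal, hypothetical illustration; the class and method names are invented for the example and are not Ignite's actual metadata API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch (not Ignite's actual API): coalesce identical
 * concurrent metadata updates so only the first requester performs the
 * real update while the rest wait on the same shared future.
 */
public class MetadataUpdateCoalescer {
    private final Map<String, CompletableFuture<Void>> inFlight = new ConcurrentHashMap<>();
    private final AtomicInteger realUpdates = new AtomicInteger();

    /** Request an update; identical concurrent requests share one future. */
    public CompletableFuture<Void> requestUpdate(String typeName) {
        CompletableFuture<Void> created = new CompletableFuture<>();
        CompletableFuture<Void> existing = inFlight.putIfAbsent(typeName, created);
        if (existing != null)
            return existing; // another thread already started this exact update
        realUpdates.incrementAndGet(); // this thread owns the real update
        return created;
    }

    /** Call when the distributed update for the given type is acknowledged. */
    public void complete(String typeName) {
        CompletableFuture<Void> fut = inFlight.remove(typeName);
        if (fut != null)
            fut.complete(null); // wakes every coalesced waiter at once
    }

    /** How many updates actually ran (for the demonstration below). */
    public int realUpdates() { return realUpdates.get(); }

    public static void main(String[] args) {
        MetadataUpdateCoalescer c = new MetadataUpdateCoalescer();
        CompletableFuture<Void> f1 = c.requestUpdate("PersonType");
        CompletableFuture<Void> f2 = c.requestUpdate("PersonType"); // coalesced
        System.out.println((f1 == f2) + " " + c.realUpdates()); // prints: true 1
        c.complete("PersonType");
        System.out.println(f1.isDone()); // prints: true
    }
}
```

With this shape, on a large topology only one thread per metadata key contends for the distributed lock; the rest block on a local future instead of the cache key.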





[jira] [Updated] (IGNITE-5110) Topology version should be included in synchronous continuous message

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5110:
-
Fix Version/s: (was: 2.5)
   2.6

> Topology version should be included in synchronous continuous message
> -
>
> Key: IGNITE-5110
> URL: https://issues.apache.org/jira/browse/IGNITE-5110
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> I've observed the following situation on TC:
> {code}
> "sys-stripe-11-#12316%continuous.GridCacheContinuousQueryConcurrentTest1%" 
> prio=10 tid=0x7f8e1888f800 nid=0x44ad waiting on condition 
> [0x7f91050fa000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:315)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:176)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:930)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:817)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.access$800(CacheContinuousQueryHandler.java:92)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$1.onEntryUpdated(CacheContinuousQueryHandler.java:420)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:346)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerSet(GridCacheMapEntry.java:1029)
>   - locked <0x000766998a50> (a 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCacheEntry)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:663)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:772)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:580)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:466)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:514)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finishDhtLocal(IgniteTxHandler.java:841)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:720)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxFinishRequest(IgniteTxHandler.java:676)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$200(IgniteTxHandler.java:95)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:153)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:151)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:863)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:386)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:308)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:253)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1257)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:885)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:114)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:802)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:483)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> 

[jira] [Updated] (IGNITE-5759) IgniteCache5 suite timed out by GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5759:
---
Fix Version/s: (was: 2.5)
   2.6

> IgniteCache5 suite timed out by 
> GridCachePartitionEvictionDuringReadThroughSelfTest.testPartitionRent
> -
>
> Key: IGNITE-5759
> URL: https://issues.apache.org/jira/browse/IGNITE-5759
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitriy Pavlov
>Assignee: Dmitriy Pavlov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain, test-fail
> Fix For: 2.6
>
> Attachments: threadDumpFromLogs.log
>
>
> http://ci.ignite.apache.org/viewLog.html?buildId=727951=Ignite20Tests_IgniteCache5
> There is no 'Test has been timed out' message in logs.
> Last 'Starting test:' message was 
> GridCachePartitionEvictionDuringReadThroughSelfTest#testPartitionRent
> The latest exception from the running test was as follows:
> {noformat}
> [23:19:11]W:   [org.apache.ignite:ignite-core] [2017-07-14 
> 20:19:11,392][ERROR][tcp-comm-worker-#8980%distributed.GridCachePartitionEvictionDuringReadThroughSelfTest4%][TcpCommunicationSpi]
>  TcpCommunicationSpi failed to establish connection to node, node will be 
> dropped from cluster [rmtNode=TcpDiscoveryNode 
> [id=a93fce57-6b2d-4947-8c23-8a677b93, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, 
> lastExchangeTime=1500063443391, loc=false, ver=2.1.0#19700101-sha1:, 
> isClient=false]]
> [23:19:11]W:   [org.apache.ignite:ignite-core] class 
> org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node 
> still alive?). Make sure that each ComputeTask and cache Transaction has a 
> timeout set in order to prevent parties from waiting forever in case of 
> network issues [nodeId=a93fce57-6b2d-4947-8c23-8a677b93, 
> addrs=[/127.0.0.1:45273]]
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3173)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2757)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2649)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.access$5900(TcpCommunicationSpi.java:245)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.processDisconnect(TcpCommunicationSpi.java:4065)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.body(TcpCommunicationSpi.java:3891)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [23:19:11]W:   [org.apache.ignite:ignite-core]Suppressed: 
> class org.apache.ignite.IgniteCheckedException: Failed to connect to address 
> [addr=/127.0.0.1:45273, err=Connection refused]
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3178)
> [23:19:11]W:   [org.apache.ignite:ignite-core]... 6 
> more
> [23:19:11]W:   [org.apache.ignite:ignite-core]Caused by: 
> java.net.ConnectException: Connection refused
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:117)
> [23:19:11]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3024)
> [23:19:11]W:   [org.apache.ignite:ignite-core]... 6 
> more
> {noformat}
> and then
> {noformat}
> [23:19:11]W:   [org.apache.ignite:ignite-core] [2017-07-14 
> 20:19:11,895][WARN ][main][root] Interrupting threads started so far: 5
> [23:19:11] :   [Step 4/5] [2017-07-14 20:19:11,895][INFO ][main][root] >>> 
> Stopping test class: GridCachePartitionEvictionDuringReadThroughSelfTest 

[jira] [Updated] (IGNITE-5286) Reconsider deferredDelete implementation

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5286:
-
Fix Version/s: (was: 2.5)
   2.6

> Reconsider deferredDelete implementation
> 
>
> Key: IGNITE-5286
> URL: https://issues.apache.org/jira/browse/IGNITE-5286
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> In the current implementation, entries marked for 'deferredDelete' must be 
> stored on heap (otherwise the information about the remove is lost). This can 
> potentially be an issue if there are a lot of removes.
> (note: in current 'deferredDelete' implementation in Ignite there is a bug - 
> https://issues.apache.org/jira/browse/IGNITE-3299).





[jira] [Updated] (IGNITE-5580) Improve node failure cause information

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5580:
-
Fix Version/s: (was: 2.5)
   2.6

> Improve node failure cause information
> --
>
> Key: IGNITE-5580
> URL: https://issues.apache.org/jira/browse/IGNITE-5580
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Assignee: Ryabov Dmitrii
>Priority: Major
>  Labels: observability
> Fix For: 2.6
>
>
> When a node fails, we do not print out any information about the root cause 
> of this failure. This makes it extremely hard to investigate the failure 
> causes - I need to find a previous node for the failed node and check the 
> logs on the previous node.
> I suggest that we add extensive information about the reason of the node 
> failure and the sequence of events that led to this, e.g.:
> [time] [NODE] Sending a message to next node - failed _because_ - write 
> timeout, read timeout, ...?
> [time] [NODE] Connection check - failed - why? Connection refused, handshake 
> timed out, ...?
> ...
> [time] [NODE] Decided to drop the node because of the sequence above
> Maybe we do not need to print out this information always, but we do need 
> this when troubleshooting logger is enabled.
> Also, DiscoverySpi should collect a set of latest important events and dump 
> these events in case of local node segmentation. This will allow users to 
> match the events in the cluster and events on local node and get to the 
> bottom of the failure.





[jira] [Updated] (IGNITE-5560) A failed service must be redeployed when possible

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5560:
-
Fix Version/s: (was: 2.5)
   2.6

> A failed service must be redeployed when possible
> -
>
> Key: IGNITE-5560
> URL: https://issues.apache.org/jira/browse/IGNITE-5560
> Project: Ignite
>  Issue Type: Improvement
>  Components: managed services
>Affects Versions: 1.7
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> I observed the following behavior: when a service deployment (or run) fails 
> with an unexpected exception, Ignite just outputs a warning to the console 
> and makes no attempt to recover. 
> In our deployment, we rely on a cluster singleton to be always present in a 
> cluster. 
> If a service fails, Ignite should attempt to failover this service to some 
> other node, if possible





[jira] [Updated] (IGNITE-5473) Create ignite troubleshooting logger

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5473:
-
Fix Version/s: (was: 2.5)
   2.6

> Create ignite troubleshooting logger
> 
>
> Key: IGNITE-5473
> URL: https://issues.apache.org/jira/browse/IGNITE-5473
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Critical
>  Labels: important, observability
> Fix For: 2.6
>
>
> Currently, we have two extremes of logging - either INFO, which logs almost 
> nothing, or DEBUG, which pollutes the logs with overly verbose messages.
> We should create a 'troubleshooting' logger, which should be easily enabled 
> (via a system property, for example) and log all stability-critical node and 
> cluster events:
>  * Connection events (both communication and discovery), handshake status
>  * ALL ignored messages and skipped actions (even those we assume are safe to 
> ignore)
>  * Partition exchange stages and timings
>  * Verbose discovery state changes (this should make it easy to understand 
> the reason for 'Node has not been connected to the topology')
>  * Transaction failover stages and actions
>  * All unlogged exceptions
>  * Responses that took more than N milliseconds when in normal they should 
> return right away
>  * Long discovery SPI messages processing times
>  * Managed service deployment stages
>  * Marshaller mappings registration and notification
>  * Binary metadata registration and notification
>  * Continuous query registration / notification
> (add more)
> The amount of logging should be chosen carefully so that it is safe to 
> enable this logger in production clusters.





[jira] [Updated] (IGNITE-7326) Fix ignitevisorcmd | sqlline scripts to be able to run from /usr/bin installed as symbolic links

2018-04-09 Thread Peter Ivanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Ivanov updated IGNITE-7326:
-
Fix Version/s: (was: 2.5)
   2.6

> Fix ignitevisorcmd | sqlline scripts to be able to run from /usr/bin 
> installed as symbolic links
> 
>
> Key: IGNITE-7326
> URL: https://issues.apache.org/jira/browse/IGNITE-7326
> Project: Ignite
>  Issue Type: Bug
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Major
> Fix For: 2.6
>
>
> Currently, {{ignitevisorcmd.sh}} and {{sqlline.sh}} installed into 
> {{/usr/bin}} will fail to run because:
> * they are unaware of their real location;
> * they need to write to {{$\{IGNITE_HOME}/work}}, which can have different 
> permissions and owner (in packages, for example).
> It is required to rewrite these scripts so that they can run from anywhere 
> via their symbolic links, with some temporary dir ({{/tmp}}, for example) as 
> the workdir.





[jira] [Updated] (IGNITE-5821) Implement fuzzy checkpoints

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5821:
-
Fix Version/s: (was: 2.5)

> Implement fuzzy checkpoints
> ---
>
> Key: IGNITE-5821
> URL: https://issues.apache.org/jira/browse/IGNITE-5821
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Priority: Major
>
> Currently, we are able to run only sharp checkpoints (all committed 
> transactions are in the checkpoint, all non-committed are not included, all 
> data structures are fully consistent).
> This has the following disadvantages:
> 1) All transactions are blocked for the markCheckpointBegin call
> 2) We have an additional overhead for checkpoint COW buffer
> 3) If checkpoint buffer is exhausted, we block all transactions and 
> synchronously wait for the checkpoint to be finished.
> There is a technique called fuzzy checkpoints:
> 1) We keep a WAL LSN in every dirty page
> 2) When a page is being flushed to disk, we sync WAL up to the LSN
> 3) We maintain checkpoint low watermark so that WAL does not grow indefinitely
> 4) WAL logging is changed in a way that does not allow data structures 
> updates to be mixed in WAL
> 5) The recovery procedure is changed to apply all physical deltas up to the 
> end of WAL and have consistent memory state, then logical records revert all 
> non-committed transactions
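The core invariant behind steps 1-3 above (WAL-ahead flushing: a page may not reach disk before its log records are durable) can be sketched in a few lines. This is a toy, in-memory illustration under assumed names (`Wal`, `Page`, `flush`); it is not Ignite's persistence implementation.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of the WAL-ahead invariant behind fuzzy checkpoints:
 * every dirty page remembers the WAL LSN of its last update, and the log
 * is synced at least up to that LSN before the page is written to disk.
 * Names are illustrative, not Ignite's actual persistence API.
 */
public class FuzzyCheckpointSketch {
    /** Minimal in-memory stand-in for a write-ahead log. */
    static class Wal {
        long nextLsn = 1;
        long syncedUpTo; // durable prefix of the log

        long append(String record) { return nextLsn++; }

        void syncUpTo(long lsn) { if (lsn > syncedUpTo) syncedUpTo = lsn; }
    }

    /** A page tagged with the LSN of the last record that touched it. */
    static class Page {
        long pageLsn;
        boolean dirty;

        void update(Wal wal, String record) {
            pageLsn = wal.append(record); // step 1: keep an LSN in the dirty page
            dirty = true;
        }
    }

    static final Wal wal = new Wal();
    static final List<Page> flushedPages = new ArrayList<>();

    /** Flush one page, enforcing the WAL-ahead rule first. */
    static void flush(Page p) {
        wal.syncUpTo(p.pageLsn); // step 2: sync WAL up to the page's LSN
        // ... the page bytes would be written to the data file here ...
        p.dirty = false;
        flushedPages.add(p);
    }

    public static void main(String[] args) {
        Page p = new Page();
        p.update(wal, "put k1=v1");
        p.update(wal, "put k1=v2");
        flush(p);
        // the invariant: no page reaches disk ahead of its log records
        System.out.println(wal.syncedUpTo >= p.pageLsn); // prints: true
    }
}
```

Because pages can then be flushed one by one without a globally consistent snapshot, transactions no longer need to block on a markCheckpointBegin-style barrier; recovery replays physical deltas to the end of the WAL and reverts non-committed transactions logically.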





[jira] [Updated] (IGNITE-6114) Replace standard java maps for partition counters in continuous queries

2018-04-09 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6114:
-
Fix Version/s: (was: 2.5)
   2.6

> Replace standard java maps for partition counters in continuous queries
> ---
>
> Key: IGNITE-6114
> URL: https://issues.apache.org/jira/browse/IGNITE-6114
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.6
>
>
> This is a continuation of IGNITE-5872.
> We need to replace standard java maps in StartRoutineDiscoveryMessage with 
> the maps introduced in IGNITE-5872.





[jira] [Updated] (IGNITE-7190) Docker Hub official repository deployment

2018-04-09 Thread Peter Ivanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Ivanov updated IGNITE-7190:
-
Fix Version/s: (was: 2.5)
   2.6

> Docker Hub official repository deployment
> -
>
> Key: IGNITE-7190
> URL: https://issues.apache.org/jira/browse/IGNITE-7190
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Major
> Fix For: 2.6
>
>
> Research possibility to integrate into Official Docker Hub Repositories.
> Use 
> https://docs.docker.com/docker-hub/official_repos/#how-do-i-create-a-new-official-repository
>  as start.
> https://docs.docker.com/docker-hub/official_repos
> Official Docker Hub Git Hub repository:
> * https://github.com/docker-library/official-images
> * https://github.com/docker-library/docs
> Examples (Apache Geode):
> * [Bashbrew 
> file|https://github.com/docker-library/official-images/pull/3685/commits/0e34248d4b2a0ed10029577f2f50427469d6b2f4]
> * [Docs|https://github.com/docker-library/docs/pull/1062]





[jira] [Updated] (IGNITE-5945) Flaky failure in IgniteCache 5: IgniteCacheAtomicProtocolTest.testPutReaderUpdate2

2018-04-09 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5945:
---
Fix Version/s: (was: 2.5)
   2.6

> Flaky failure in IgniteCache 5: 
> IgniteCacheAtomicProtocolTest.testPutReaderUpdate2
> --
>
> Key: IGNITE-5945
> URL: https://issues.apache.org/jira/browse/IGNITE-5945
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Dmitriy Pavlov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest#testPutReaderUpdate2
> {noformat}
> junit.framework.AssertionFailedError
>   at junit.framework.Assert.fail(Assert.java:55)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertFalse(Assert.java:39)
>   at junit.framework.Assert.assertFalse(Assert.java:47)
>   at junit.framework.TestCase.assertFalse(TestCase.java:219)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest.readerUpdateDhtFails(IgniteCacheAtomicProtocolTest.java:865)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest.testPutReaderUpdate2(IgniteCacheAtomicProtocolTest.java:765)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The failure is reproducible locally 2 times per 20 runs.
> On TeamCity, the test success rate is 88.2%.




