[jira] [Commented] (IGNITE-8286) ScanQuery ignore setLocal with non local partition

2018-04-25 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453516#comment-16453516
 ] 

Roman Shtykh commented on IGNITE-8286:
--

[~sboikov], can I ask for your review, please? I see you reviewed IGNITE-2921.

I think that checking for node emptiness suffices. 
_GridCacheQueryAdapter.nodes(...)_ picks the proper nodes from the cluster 
projection, taking the provided partition into account. If a local scan is not 
specified explicitly, it will scan with fallbacks or remotely. Am I missing 
anything?
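
The check described above can be sketched as a toy model. The class and method names below are illustrative assumptions, not the actual GridCacheQueryAdapter code: the idea is just that if the projection yields no nodes for the requested partition, a local query should produce an empty result rather than fall back to a remote node.

```java
import java.util.Collections;
import java.util.List;

// Toy model of the node-selection logic discussed above (illustrative
// names, not the real Ignite internals): a local scan over a partition
// the local node does not own should yield no eligible nodes, i.e. an
// empty result, instead of silently running on a remote node.
public class LocalScanCheck {
    /** Nodes eligible to serve the scan query. */
    static List<String> eligibleNodes(List<String> partitionOwners,
                                      String localNode,
                                      boolean localQuery) {
        if (localQuery) {
            // setLocal(true): only the local node may serve the query,
            // and only if it actually owns the requested partition.
            return partitionOwners.contains(localNode)
                ? List.of(localNode)
                : Collections.emptyList();
        }
        // Non-local query: any owner may serve it (fallbacks allowed).
        return partitionOwners;
    }

    public static void main(String[] args) {
        List<String> owners = List.of("nodeB", "nodeC");
        System.out.println(eligibleNodes(owners, "nodeA", true));  // []
        System.out.println(eligibleNodes(owners, "nodeA", false)); // [nodeB, nodeC]
    }
}
```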

I would also like to rename _CacheScanPartitionQueryFallbackSelfTest_ -> 
_CacheScanPartitionQuerySelfTest_. What do you think?

> ScanQuery ignore setLocal with non local partition
> --
>
> Key: IGNITE-8286
> URL: https://issues.apache.org/jira/browse/IGNITE-8286
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Alexander Belyak
>Assignee: Roman Shtykh
>Priority: Major
> Fix For: 2.6
>
>
> 1) Create a partitioned cache on a cluster of 2+ nodes
> 2) Select some partition N; the local node should not be the OWNER of partition N
> 3) execute: cache.query(new ScanQuery<>().setLocal(true).setPartition(N))
> Expected result:
> empty result (probably with logging something like "Trying to execute local query 
>  with non local partition N") or even throw an exception
> Actual result:
> executing the query (with ScanQueryFallbackClosableIterator) on a remote node.
> The problem is that we execute a local query on a remote node.
> The same behaviour can be achieved if we get an empty node list from 
> GridCacheQueryAdapter.node() for any reason, for example if we run a "local" 
> query from a non-data node of the given cache (see 
> GridDiscoveryManager.cacheAffinityNode(ClusterNode node, String cacheName) in 
> GridCacheQueryAdapter.executeScanQuery()).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8286) ScanQuery ignore setLocal with non local partition

2018-04-25 Thread Roman Shtykh (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shtykh updated IGNITE-8286:
-
Fix Version/s: 2.6

> ScanQuery ignore setLocal with non local partition
> --
>
> Key: IGNITE-8286
> URL: https://issues.apache.org/jira/browse/IGNITE-8286
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Alexander Belyak
>Assignee: Roman Shtykh
>Priority: Major
> Fix For: 2.6
>
>
> 1) Create a partitioned cache on a cluster of 2+ nodes
> 2) Select some partition N; the local node should not be the OWNER of partition N
> 3) execute: cache.query(new ScanQuery<>().setLocal(true).setPartition(N))
> Expected result:
> empty result (probably with logging something like "Trying to execute local query 
>  with non local partition N") or even throw an exception
> Actual result:
> executing the query (with ScanQueryFallbackClosableIterator) on a remote node.
> The problem is that we execute a local query on a remote node.
> The same behaviour can be achieved if we get an empty node list from 
> GridCacheQueryAdapter.node() for any reason, for example if we run a "local" 
> query from a non-data node of the given cache (see 
> GridDiscoveryManager.cacheAffinityNode(ClusterNode node, String cacheName) in 
> GridCacheQueryAdapter.executeScanQuery()).





[jira] [Commented] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453514#comment-16453514
 ] 

ASF GitHub Bot commented on IGNITE-7592:


GitHub user Mmuzaf opened a pull request:

https://github.com/apache/ignite/pull/3918

IGNITE-7592: enforce future return boolean



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mmuzaf/ignite ignite-7592

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3918.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3918


commit 86c1b7d7eb5ed24bd1fdbb7bf0edb6bf28e785e8
Author: Maxim Muzafarov 
Date:   2018-04-26T05:15:00Z

IGNITE-7592: enforce future return boolean




> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.6
>
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though 
> the probability that it will happen is rather low (<10%).





[jira] [Assigned] (IGNITE-8267) Web console: cluster client connector configuration fields are too narrow

2018-04-25 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8267:
--

Resolution: Fixed
  Assignee: Alexey Kuznetsov  (was: Pavel Konstantinov)

> Web console: cluster client connector configuration fields are too narrow
> -
>
> Key: IGNITE-8267
> URL: https://issues.apache.org/jira/browse/IGNITE-8267
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Ilya Borisov
>Assignee: Alexey Kuznetsov
>Priority: Minor
> Attachments: image-2018-04-16-11-31-15-136.png, screenshot-1.png
>
>
> *How to reproduce:*
> 1. Go to advanced cluster configuration.
> 2. Open "client connector configuration" section.
> *What happens:*
> "Socket send buffer size" and "Socket receive buffer size" field labels wrap 
> to second line.
>  !image-2018-04-16-11-31-15-136.png! 
> *What should happen:*
> "Socket send buffer size" and "Socket receive buffer size" field labels 
> should be on a single line.





[jira] [Commented] (IGNITE-8267) Web console: cluster client connector configuration fields are too narrow

2018-04-25 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453463#comment-16453463
 ] 

Pavel Konstantinov commented on IGNITE-8267:


Tested on the branch

> Web console: cluster client connector configuration fields are too narrow
> -
>
> Key: IGNITE-8267
> URL: https://issues.apache.org/jira/browse/IGNITE-8267
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Reporter: Ilya Borisov
>Assignee: Pavel Konstantinov
>Priority: Minor
> Attachments: image-2018-04-16-11-31-15-136.png, screenshot-1.png
>
>
> *How to reproduce:*
> 1. Go to advanced cluster configuration.
> 2. Open "client connector configuration" section.
> *What happens:*
> "Socket send buffer size" and "Socket receive buffer size" field labels wrap 
> to second line.
>  !image-2018-04-16-11-31-15-136.png! 
> *What should happen:*
> "Socket send buffer size" and "Socket receive buffer size" field labels 
> should be on a single line.





[jira] [Commented] (IGNITE-8214) Web console: add validation for Persistent + Swap file for data region configuration

2018-04-25 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453452#comment-16453452
 ] 

Pavel Konstantinov commented on IGNITE-8214:


Tested on the branch

> Web console: add validation for Persistent + Swap file for data region 
> configuration
> 
>
> Key: IGNITE-8214
> URL: https://issues.apache.org/jira/browse/IGNITE-8214
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Major
> Fix For: 2.6
>
>
> The 'Swap file' option can be set only if 'Persistent Enable' is OFF.
> Please add corresponding validation.
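
The requested rule can be expressed as a small predicate. This is a sketch under the assumption that the relevant inputs are the persistence flag and the swap file path; the class and method names are hypothetical, and the actual Web Console validation code is not reproduced here.

```java
// Sketch of the validation rule: a swap file path is only allowed
// when persistence is disabled for the data region. Method and class
// names are illustrative, not the Web Console's actual code.
public class DataRegionValidation {
    static boolean isValid(boolean persistenceEnabled, String swapPath) {
        boolean swapSet = swapPath != null && !swapPath.isEmpty();
        return !(swapSet && persistenceEnabled); // invalid only if both are set
    }

    public static void main(String[] args) {
        System.out.println(isValid(true, "/tmp/swap"));  // false
        System.out.println(isValid(false, "/tmp/swap")); // true
        System.out.println(isValid(true, null));         // true
    }
}
```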





[jira] [Assigned] (IGNITE-8214) Web console: add validation for Persistent + Swap file for data region configuration

2018-04-25 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-8214:
--

Assignee: Alexey Kuznetsov  (was: Pavel Konstantinov)

> Web console: add validation for Persistent + Swap file for data region 
> configuration
> 
>
> Key: IGNITE-8214
> URL: https://issues.apache.org/jira/browse/IGNITE-8214
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
>
> The 'Swap file' option can be set only if 'Persistent Enable' is OFF.
> Please add corresponding validation.





[jira] [Commented] (IGNITE-7993) Striped pool can't be disabled

2018-04-25 Thread Roman Guseinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453447#comment-16453447
 ] 

Roman Guseinov commented on IGNITE-7993:


[~dpavlov], I've created a new PR [https://github.com/apache/ignite/pull/3860] 
after getting comments from [~yzhdanov] and [~amashenkov]. Could you please review 
it?

TC results 1: 
[https://ci.ignite.apache.org/viewLog.html?buildId=1220198=queuedBuildOverviewTab]

TC results 2 (update javadoc + rebase): 
[https://ci.ignite.apache.org/viewQueued.html?itemId=1245472=queuedBuildOverviewTab]

It seems the newly appeared test failures (PDS (Direct IO) 2, PDS (Indexing), 
ZooKeeper, etc.) aren't related to my changes.

It looks like only flaky tests failed.

Please let me know if you have any questions.

Thanks.

> Striped pool can't be disabled
> --
>
> Key: IGNITE-7993
> URL: https://issues.apache.org/jira/browse/IGNITE-7993
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Valentin Kulichenko
>Assignee: Roman Guseinov
>Priority: Major
> Fix For: 2.6
>
>
> Javadoc for {{IgniteConfiguration#setStripedPoolSize}} states that the striped 
> pool can be disabled by providing a value less than or equal to zero:
> {noformat}
> If set to non-positive value then requests get processed in system pool.
> {noformat}
> However, doing that prevents the node from starting up; it fails with the following 
> exception:
> {noformat}
> Caused by: class org.apache.ignite.IgniteCheckedException: Invalid 
> stripedPool thread pool size (must be greater than 0), actual value: 0
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.validateThreadPoolSize(IgnitionEx.java:2061)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1716)
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1144)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:664)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
>   at org.apache.ignite.Ignition.start(Ignition.java:322)
>   ... 7 more
> {noformat}
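
The mismatch can be reproduced in miniature. The method name below mirrors the stack trace, but the code is an illustrative sketch, not the Ignite source: the documented "disable" value of 0 is exactly what the validation rejects.

```java
// Toy reproduction of the conflict: the javadoc says a non-positive
// size should route requests to the system pool, but the startup
// validation rejects any size <= 0. Illustrative code only.
public class StripedPoolCheck {
    static void validateThreadPoolSize(int size, String name) {
        if (size <= 0)
            throw new IllegalArgumentException("Invalid " + name
                + " thread pool size (must be greater than 0), actual value: " + size);
    }

    public static void main(String[] args) {
        try {
            validateThreadPoolSize(0, "stripedPool"); // documented "disabled" value
            System.out.println("node would start");
        } catch (IllegalArgumentException e) {
            System.out.println("startup fails: " + e.getMessage());
        }
    }
}
```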





[jira] [Commented] (IGNITE-7993) Striped pool can't be disabled

2018-04-25 Thread Roman Guseinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453432#comment-16453432
 ] 

Roman Guseinov commented on IGNITE-7993:


[~amashenkov], thank you.

> Striped pool can't be disabled
> --
>
> Key: IGNITE-7993
> URL: https://issues.apache.org/jira/browse/IGNITE-7993
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Valentin Kulichenko
>Assignee: Roman Guseinov
>Priority: Major
> Fix For: 2.6
>
>
> Javadoc for {{IgniteConfiguration#setStripedPoolSize}} states that the striped 
> pool can be disabled by providing a value less than or equal to zero:
> {noformat}
> If set to non-positive value then requests get processed in system pool.
> {noformat}
> However, doing that prevents the node from starting up; it fails with the following 
> exception:
> {noformat}
> Caused by: class org.apache.ignite.IgniteCheckedException: Invalid 
> stripedPool thread pool size (must be greater than 0), actual value: 0
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.validateThreadPoolSize(IgnitionEx.java:2061)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1716)
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1144)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:664)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
>   at org.apache.ignite.Ignition.start(Ignition.java:322)
>   ... 7 more
> {noformat}





[jira] [Updated] (IGNITE-7131) Document Web Console deployment in Kubernetes

2018-04-25 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-7131:

Fix Version/s: 2.6

> Document Web Console deployment in Kubernetes
> -
>
> Key: IGNITE-7131
> URL: https://issues.apache.org/jira/browse/IGNITE-7131
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.5
>Reporter: Denis Magda
>Assignee: Alexey Kuznetsov
>Priority: Major
> Fix For: 2.6
>
>
> The ticket is inspired by the following topic:
> http://apache-ignite-users.70518.x6.nabble.com/Web-Console-on-Kubernetes-Cluster-td18591.html
> It will be great to put together a documentation about Web Console deployment 
> on Kubernetes.





[jira] [Updated] (IGNITE-5151) Add some warning when offheap eviction occurs

2018-04-25 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-5151:
---
Fix Version/s: 2.6

> Add some warning when offheap eviction occurs
> -
>
> Key: IGNITE-5151
> URL: https://issues.apache.org/jira/browse/IGNITE-5151
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Ksenia Rybakova
>Assignee: Wuwei Lin
>Priority: Major
> Fix For: 2.6
>
>
> Currently if offheap eviction occurs we are silently losing data. It would 
> be helpful to have some warning in the log, as it's done for onheap eviction.





[jira] [Comment Edited] (IGNITE-5151) Add some warning when offheap eviction occurs

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452824#comment-16452824
 ] 

Dmitriy Pavlov edited comment on IGNITE-5151 at 4/25/18 6:24 PM:
-

[~ivan.glukos], I've already merged the change, but I've noticed the warning contains 
a reference to an outdated term:

" 'maxSize' on page memory policy".

Could we replace it with "data region"?


was (Author: dpavlov):
[~ivan.glukos], I've already merged change but I've noticed warning contains 
reference to outdated term 'maxSize' on page memory policy. Could we replace to 
data region?

> Add some warning when offheap eviction occurs
> -
>
> Key: IGNITE-5151
> URL: https://issues.apache.org/jira/browse/IGNITE-5151
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Ksenia Rybakova
>Assignee: Wuwei Lin
>Priority: Major
>
> Currently if offheap eviction occurs we are silently losing data. It would 
> be helpful to have some warning in the log, as it's done for onheap eviction.





[jira] [Commented] (IGNITE-5151) Add some warning when offheap eviction occurs

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452824#comment-16452824
 ] 

Dmitriy Pavlov commented on IGNITE-5151:


[~ivan.glukos], I've already merged the change, but I've noticed the warning contains 
a reference to the outdated term 'maxSize' on page memory policy. Could we replace it 
with "data region"?

> Add some warning when offheap eviction occurs
> -
>
> Key: IGNITE-5151
> URL: https://issues.apache.org/jira/browse/IGNITE-5151
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Ksenia Rybakova
>Assignee: Wuwei Lin
>Priority: Major
>
> Currently if offheap eviction occurs we are silently losing data. It would 
> be helpful to have some warning in the log, as it's done for onheap eviction.





[jira] [Commented] (IGNITE-5151) Add some warning when offheap eviction occurs

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452822#comment-16452822
 ] 

ASF GitHub Bot commented on IGNITE-5151:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/1921


> Add some warning when offheap eviction occurs
> -
>
> Key: IGNITE-5151
> URL: https://issues.apache.org/jira/browse/IGNITE-5151
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Ksenia Rybakova
>Assignee: Wuwei Lin
>Priority: Major
>
> Currently if offheap eviction occurs we are silently losing data. It would 
> be helpful to have some warning in the log, as it's done for onheap eviction.





[jira] [Created] (IGNITE-8392) Removing WAL history directory leads to JVM crash on that node.

2018-04-25 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8392:
---

 Summary: Removing WAL history directory leads to JVM crash on that 
node.
 Key: IGNITE-8392
 URL: https://issues.apache.org/jira/browse/IGNITE-8392
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
 Environment: Ubuntu 17.10
Oracle JVM Server (1.8.0_151-b12)
Reporter: Pavel Kovalenko
 Fix For: 2.6


Problem:
1) Start a node, load some data, deactivate the cluster.
2) Remove the WAL history directory.
3) Activate the cluster.

Cluster activation will fail due to a JVM crash like this:

{noformat}
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x7feda1052526, pid=29331, tid=0x7fed193d7700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_151-b12) (build 
1.8.0_151-b12)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# v  ~StubRoutines::jshort_disjoint_arraycopy
#
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

---  T H R E A D  ---

Current thread (0x7fec8b202800):  JavaThread 
"db-checkpoint-thread-#243%wal.IgniteWalRebalanceTest0%" [_thread_in_Java, 
id=29655, stack(0x7fed192d7000,0x7fed193d8000)]

siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 
0x7fed198ee0b2

Registers:
RAX=0x0007710a9f28, RBX=0x000120b2, RCX=0x0800, 
RDX=0xfe08
RSP=0x7fed193d5c60, RBP=0x7fed193d5c60, RSI=0x7fed198ef0aa, 
RDI=0x0007710a9f20
R8 =0x1000, R9 =0x000120b2, R10=0x7feda1052da0, 
R11=0x1004
R12=0x, R13=0x0007710a9f28, R14=0x1000, 
R15=0x7fec8b202800
RIP=0x7feda1052526, EFLAGS=0x00010282, CSGSFS=0x002b0033, 
ERR=0x0006
  TRAPNO=0x000e

Top of Stack: (sp=0x7fed193d5c60)
0x7fed193d5c60:   0007710a9f28 7feda1be314f
0x7fed193d5c70:   00010002 7feda17747fd
0x7fed193d5c80:   a8008c96 7feda11cfb3e
0x7fed193d5c90:    
0x7fed193d5ca0:    
0x7fed193d5cb0:    
0x7fed193d5cc0:   0007710a9f28 7feda1fb37e0
0x7fed193d5cd0:   0007710a8ef0 00076fa5f5c0
0x7fed193d5ce0:   0007710a9f28 0007710a8ef0
0x7fed193d5cf0:   0007710a8ef0 7fed193d5d18
0x7fed193d5d00:   7fedb8428c76 
0x7fed193d5d10:   1014 00076fa5f650
0x7fed193d5d20:   f8043261 7feda1ee597c
0x7fed193d5d30:   00076fa5f5a8 0007710a9f28
0x7fed193d5d40:   0007710a8ef0 000120a2
0x7fed193d5d50:   00012095 1021
0x7fed193d5d60:   edf4bec3 0001209e
0x7fed193d5d70:   0007710a9f28 00076fa5f650
0x7fed193d5d80:   7fed193d5da8 1014
0x7fed193d5d90:   0007710a8ef0 7fed198dc000
0x7fed193d5da0:   00076fa5f650 7feda1b7a040
0x7fed193d5db0:   0007710a9f28 00076fa700d0
0x7fed193d5dc0:   0007710a9f68 ee2153e5f8043261
0x7fed193d5dd0:   0007710a8ef0 0007710a9f98
0x7fed193d5de0:   00012095 0007710a9f28
0x7fed193d5df0:    1fa0
0x7fed193d5e00:    
0x7fed193d5e10:   0007710a8ef0 7feda2001530
0x7fed193d5e20:   0007710a8ef0 00076f7c05e8
0x7fed193d5e30:   edef80bd 
0x7fed193d5e40:    
0x7fed193d5e50:   7fedb2266000 7feda1cb1f8c 

Instructions: (pc=0x7feda1052526)
0x7feda1052506:   00 00 74 08 66 8b 47 08 66 89 46 08 48 33 c0 c9
0x7feda1052516:   c3 66 0f 1f 84 00 00 00 00 00 c5 fe 6f 44 d7 c8
0x7feda1052526:   c5 fe 7f 44 d6 c8 c5 fe 6f 4c d7 e8 c5 fe 7f 4c
0x7feda1052536:   d6 e8 48 83 c2 08 7e e2 48 83 ea 04 7f 10 c5 fe 

Register to memory mapping:

RAX=0x0007710a9f28 is an oop
java.nio.DirectByteBuffer 
 - klass: 'java/nio/DirectByteBuffer'
RBX=0x000120b2 is an unknown value
RCX=0x0800 is an unknown value
RDX=0xfe08 is an unknown value
RSP=0x7fed193d5c60 is pointing into the stack for thread: 0x7fec8b202800
RBP=0x7fed193d5c60 is pointing into the stack for thread: 0x7fec8b202800
RSI=0x7fed198ef0aa is an unknown value
RDI=0x0007710a9f20 is an oop
{noformat}






[jira] [Created] (IGNITE-8391) Removing some WAL history segments leads to WAL rebalance hanging

2018-04-25 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8391:
---

 Summary: Removing some WAL history segments leads to WAL rebalance 
hanging
 Key: IGNITE-8391
 URL: https://issues.apache.org/jira/browse/IGNITE-8391
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Kovalenko
 Fix For: 2.6


Problem:
1) Start 2 nodes and load some data into them.
2) Stop node 2, then load some data into the cache.
3) Remove an archived WAL segment which doesn't contain the checkpoint record needed 
to find the start point for WAL rebalance, but does contain data necessary for 
rebalancing.
4) Start node 2; this node will start rebalancing data from node 1 using WAL.

Rebalance will hang with the following assertion:

{noformat}
java.lang.AssertionError: Partitions after rebalance should be either done or 
missing: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:417)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
 
This happens because we never reach the necessary data and update counters 
contained in the removed WAL segment.

To resolve such problems we should introduce a fallback strategy for when 
rebalance by WAL fails. An example fallback strategy is to re-run full 
rebalance for the partitions that could not be properly rebalanced using WAL.
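
The proposed fallback could look roughly like this sketch. `partitionsForFullRebalance` is a hypothetical helper, not part of Ignite; it only illustrates the idea of collecting the partitions left unfinished by WAL rebalance and re-submitting them for full rebalance.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the fallback strategy: collect the partitions that WAL
// (historical) rebalance failed to finish and re-submit them for a
// full rebalance instead of letting the exchange hang.
public class RebalanceFallback {
    static Set<Integer> partitionsForFullRebalance(Map<Integer, Boolean> walRebalanceDone) {
        Set<Integer> retry = new TreeSet<>();
        for (Map.Entry<Integer, Boolean> e : walRebalanceDone.entrySet())
            if (!e.getValue())
                retry.add(e.getKey()); // could not be rebalanced via WAL
        return retry;
    }

    public static void main(String[] args) {
        Map<Integer, Boolean> done = Map.of(0, true, 1, false, 2, false, 3, true);
        System.out.println(partitionsForFullRebalance(done)); // [1, 2]
    }
}
```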





[jira] [Commented] (IGNITE-5151) Add some warning when offheap eviction occurs

2018-04-25 Thread Ivan Rakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452796#comment-16452796
 ] 

Ivan Rakov commented on IGNITE-5151:


[~dpavlov], please help with merge.

> Add some warning when offheap eviction occurs
> -
>
> Key: IGNITE-5151
> URL: https://issues.apache.org/jira/browse/IGNITE-5151
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.0
>Reporter: Ksenia Rybakova
>Assignee: Wuwei Lin
>Priority: Major
>
> Currently if offheap eviction occurs we are silently losing data. It whould 
> be helpful to have some warning in log as it's done for onheap eviction.





[jira] [Commented] (IGNITE-8372) Cluster metrics are reported incorrectly on joining node with ZK-based discovery

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452795#comment-16452795
 ] 

ASF GitHub Bot commented on IGNITE-8372:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3907


> Cluster metrics are reported incorrectly on joining node with ZK-based 
> discovery
> 
>
> Key: IGNITE-8372
> URL: https://issues.apache.org/jira/browse/IGNITE-8372
> Project: Ignite
>  Issue Type: Bug
>  Components: zookeeper
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Blocker
> Fix For: 2.5
>
>
> When a new node joins with ZK discovery, it sometimes reports a negative number of 
> CPUs and an incorrect heap size.
> The message in the log looks like this:
> {noformat}
> [myid:] - INFO  [disco-event-worker-#61:Log4JLogger@495] - Topology snapshot 
> [ver=100, servers=100, clients=0, CPUs=-6, heap=0.5GB]
> {noformat}
> There is a race, though, between this report and ClusterMetricsUpdateMessage: 
> if the node receives and processes that message first (which happens in a 
> separate thread), correct values are printed to the log.





[jira] [Updated] (IGNITE-8066) Reset wal segment idx

2018-04-25 Thread Dmitriy Pavlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Pavlov updated IGNITE-8066:
---
Description: 
1) On activation, the grid reads the checkpoint status with segment idx=7742:
{noformat}
2018-03-21 02:34:04.465[INFO 
]exchange-worker-#152%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture]
 Successfully activated caches [nodeId=9c0c2e76-fb7f-46df-8b0b-3379d0c91db9, 
clie
nt=false, topVer=AffinityTopologyVersion [topVer=161, minorTopVer=1]]
2018-03-21 02:34:04.479[INFO 
]exchange-worker-#152%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture]
 Finished waiting for partition release future [topVer=AffinityTopologyVersion 
[t
opVer=161, minorTopVer=1], waitTime=0ms, futInfo=NA]
2018-03-21 02:34:04.487[INFO 
]exchange-worker-#152%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.p.GridCacheDatabaseSharedManager]
 Read checkpoint status 
[startMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/15215870
60132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-START.bin, 
endMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/1521587060132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-END.bin]
2018-03-21 02:34:04.488[INFO 
]exchange-worker-#152%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.p.GridCacheDatabaseSharedManager]
 Applying lost cache updates since last checkpoint record 
[lastMarked=FileWALPointer [
idx=7742, fileOff=1041057120, len=1470746], 
lastCheckpointId=aafbf88b-f783-40e8-8e3c-ef60cd383e21]
{noformat}
2) But right after that (with only two metrics messages in the log in between) it 
writes a checkpoint with WAL segment idx=0:
 {noformat}
2018-03-21 02:35:21.875[INFO 
]exchange-worker-#152%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.p.GridCacheDatabaseSharedManager]
 Finished applying WAL changes [updatesApplied=0, time=77388ms]
2018-03-21 02:35:22.386[INFO 
]db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.p.GridCacheDatabaseSharedManager]
 Checkpoint started [checkpointId=8cf946e6-a718-4388-8bef-c76bf79d93cd, 
startPtr=
FileWALPointer [idx=0, fileOff=77196029, len=450864], checkpointLockWait=0ms, 
checkpointLockHoldTime=422ms, pages=16379, reason='node started']
2018-03-21 02:35:25.934[INFO 
]db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%[o.a.i.i.p.c.p.GridCacheDatabaseSharedManager]
 Checkpoint finished [cpId=8cf946e6-a718-4388-8bef-c76bf79d93cd, pages=16379, 
mar
kPos=FileWALPointer [idx=0, fileOff=77196029, len=450864], 
walSegmentsCleared=0, markDuration=508ms, pagesWrite=155ms, fsync=3391ms, 
total=4054ms] 
{noformat}
Then we get an AssertionError while trying to archive WAL segment 0 when 
lastArchivedIdx=7742.
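
The invariant behind that AssertionError can be stated as a one-line check. This is illustrative only; the real guard lives inside the Ignite WAL manager. The point is that the segment being archived must directly continue the archived sequence, which a reset to idx=0 after lastArchivedIdx=7742 violates.

```java
// Illustration of the archiver's ordering invariant: each archived
// segment index must follow the previous one. A checkpoint restarting
// at idx=0 while lastArchivedIdx=7742 breaks this. Names are
// illustrative, not the actual Ignite WAL manager code.
public class WalArchiveCheck {
    static boolean canArchive(long segmentIdx, long lastArchivedIdx) {
        return segmentIdx == lastArchivedIdx + 1;
    }

    public static void main(String[] args) {
        System.out.println(canArchive(7743, 7742)); // true: sequence continues
        System.out.println(canArchive(0, 7742));    // false: reset index
    }
}
```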

  was:
1) On activation grid read checkpoint status with segment idx=7742:

2018-03-21 02:34:04.465[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture] Successfully activated caches [nodeId=9c0c2e76-fb7f-46df-8b0b-3379d0c91db9, client=false, topVer=AffinityTopologyVersion [topVer=161, minorTopVer=1]]
2018-03-21 02:34:04.479[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture] Finished waiting for partition release future [topVer=AffinityTopologyVersion [topVer=161, minorTopVer=1], waitTime=0ms, futInfo=NA]
2018-03-21 02:34:04.487[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Read checkpoint status [startMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/1521587060132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-START.bin, endMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/1521587060132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-END.bin]
2018-03-21 02:34:04.488[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=7742, fileOff=1041057120, len=1470746], lastCheckpointId=aafbf88b-f783-40e8-8e3c-ef60cd383e21]

2) But right after that (with only two metrics messages in the log in between), 
a checkpoint is written with WAL segment idx=0:

2018-03-21 02:35:21.875[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Finished applying WAL changes [updatesApplied=0, time=77388ms]
2018-03-21 02:35:22.386[INFO ][db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Checkpoint started [checkpointId=8cf946e6-a718-4388-8bef-c76bf79d93cd, startPtr=FileWALPointer [idx=0, fileOff=77196029, len=450864], checkpointLockWait=0ms, checkpointLockHoldTime=422ms, pages=16379, reason='node started']
2018-03-21 02:35:25.934[INFO ][db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Checkpoint finished [cpId=8cf946e6-a718-4388-8bef-c76bf79d93cd, pages=16379, markPos=FileWALPointer [idx=0, fileOff=77196029, len=450864], walSegmentsCleared=0, markDuration=508ms, pagesWrite=155ms, fsync=3391ms, total=4054ms]

[jira] [Commented] (IGNITE-8066) Reset wal segment idx

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452762#comment-16452762
 ] 

ASF GitHub Bot commented on IGNITE-8066:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3837


> Reset wal segment idx
> -
>
> Key: IGNITE-8066
> URL: https://issues.apache.org/jira/browse/IGNITE-8066
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Alexander Belyak
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.5
>
> Attachments: tc.png
>
>
> 1) On activation the grid reads the checkpoint status with segment idx=7742:
> 2018-03-21 02:34:04.465[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture] Successfully activated caches [nodeId=9c0c2e76-fb7f-46df-8b0b-3379d0c91db9, client=false, topVer=AffinityTopologyVersion [topVer=161, minorTopVer=1]]
> 2018-03-21 02:34:04.479[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture] Finished waiting for partition release future [topVer=AffinityTopologyVersion [topVer=161, minorTopVer=1], waitTime=0ms, futInfo=NA]
> 2018-03-21 02:34:04.487[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Read checkpoint status [startMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/1521587060132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-START.bin, endMarker=/gridgain/ssd/data/10_126_1_172_47500/cp/1521587060132-aafbf88b-f783-40e8-8e3c-ef60cd383e21-END.bin]
> 2018-03-21 02:34:04.488[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=7742, fileOff=1041057120, len=1470746], lastCheckpointId=aafbf88b-f783-40e8-8e3c-ef60cd383e21]
> 2) But right after that (with only two metrics messages in the log in 
> between), a checkpoint is written with WAL segment idx=0:
> 2018-03-21 02:35:21.875[INFO ][exchange-worker-#152%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Finished applying WAL changes [updatesApplied=0, time=77388ms]
> 2018-03-21 02:35:22.386[INFO ][db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Checkpoint started [checkpointId=8cf946e6-a718-4388-8bef-c76bf79d93cd, startPtr=FileWALPointer [idx=0, fileOff=77196029, len=450864], checkpointLockWait=0ms, checkpointLockHoldTime=422ms, pages=16379, reason='node started']
> 2018-03-21 02:35:25.934[INFO ][db-checkpoint-thread-#243%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.GridCacheDatabaseSharedManager] Checkpoint finished [cpId=8cf946e6-a718-4388-8bef-c76bf79d93cd, pages=16379, markPos=FileWALPointer [idx=0, fileOff=77196029, len=450864], walSegmentsCleared=0, markDuration=508ms, pagesWrite=155ms, fsync=3391ms, total=4054ms]
> Then we get an AssertionError while trying to archive WAL segment 0 when 
> lastArchivedIdx=7742.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8357) Recreated atomic sequence produces "Sequence was removed from cache"

2018-04-25 Thread Pavel Vinokurov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Vinokurov reassigned IGNITE-8357:
---

Assignee: Pavel Vinokurov

> Recreated atomic sequence produces "Sequence was removed from cache"
> 
>
> Key: IGNITE-8357
> URL: https://issues.apache.org/jira/browse/IGNITE-8357
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Assignee: Pavel Vinokurov
>Priority: Major
> Attachments: RecreatingAtomicSequence.java
>
>
> If a cluster has two or more nodes, recreated atomic sequence produces error 
> on incrementAndGet operation. 
> The reproducer is attached.





[jira] [Updated] (IGNITE-7108) Apache Ignite 2.5 RPM and DEB packages

2018-04-25 Thread Andrey Gura (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura updated IGNITE-7108:

Component/s: (was: binary)

> Apache Ignite 2.5 RPM and DEB packages
> --
>
> Key: IGNITE-7108
> URL: https://issues.apache.org/jira/browse/IGNITE-7108
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
>  Labels: important
> Fix For: 2.5
>
>
> # (/) Update RPM build process to unify with DEB build.
> # (/) Prepare build of DEB package (using architecture and layout from RPM 
> package).





[jira] [Commented] (IGNITE-7821) Unify and improve Apache Ignite and Web Console Dockerfiles

2018-04-25 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452728#comment-16452728
 ] 

Andrey Gura commented on IGNITE-7821:
-

Merged to master and ignite-2.5 branches. Thanks for contribution!

> Unify and improve Apache Ignite and Web Console Dockerfiles
> ---
>
> Key: IGNITE-7821
> URL: https://issues.apache.org/jira/browse/IGNITE-7821
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
> Fix For: 2.5
>
>
> # Unify approach to docker build -- add instructions about how to build 
> specific docker images from binaries .
> # Change Apache Ignite's Dockerfile to get binaries from local build.





[jira] [Commented] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452724#comment-16452724
 ] 

ASF GitHub Bot commented on IGNITE-8390:


GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3917

IGNITE-8390 Fixed incorrect assertion during WAL historical rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8390

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3917.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3917


commit 20ba426898971ecbd0e41a4640bee07e704a9b58
Author: Pavel Kovalenko 
Date:   2018-04-25T17:29:27Z

IGNITE-8390 Fixed incorrect assertion during WAL historical rebalance.

commit b38434870ed865c4320520821ccb3290e0054fe2
Author: Pavel Kovalenko 
Date:   2018-04-25T17:34:14Z

IGNITE-8390 Corrected tests.




> WAL historical rebalance is not able to process cache.remove() updates
> --
>
> Key: IGNITE-8390
> URL: https://issues.apache.org/jira/browse/IGNITE-8390
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Blocker
> Fix For: 2.5
>
>
> WAL historical rebalance fails on the supplier when processing an entry 
> remove, with the following assertion:
> {noformat}
> java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl 
> [part=-1, val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, 
> expireTime=0, ver=GridCacheVersion [topVer=136155335, order=1524675346187, 
> nodeOrder=1], isNew=false, deleted=false]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Obviously this assertion works correctly only for a full rebalance. We 
> should either soften the assertion for the historical rebalance case or 
> disable it.
> With the assertion disabled, everything works well and the rebalance 
> finishes properly.
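The suggested softening can be sketched as follows (a self-contained illustration with hypothetical names, not the actual Ignite patch): keep the null-value check for full rebalance, where every supplied entry must carry a value, but skip it when the rebalance is historical and a WAL-recorded remove legitimately has val=null.

```java
// Sketch of a softened supply-message check; names are hypothetical and this
// is only an illustration of the proposed idea, not the actual Ignite fix.
public class SupplySketch {
    static final class EntryInfo {
        final Object val; // null when the WAL record is a remove

        EntryInfo(Object val) { this.val = val; }
    }

    /**
     * Adds an entry to the supply message. A full rebalance iterates live
     * data, so a null value there is a bug; a historical rebalance replays
     * WAL records, where removes (null values) are expected.
     */
    static void addEntry(EntryInfo info, boolean historical) {
        if (!historical && info.val == null)
            throw new AssertionError("Null value in full rebalance: " + info);

        // ... serialize the entry into the supply message ...
    }

    public static void main(String[] args) {
        addEntry(new EntryInfo(42), false);  // full rebalance, value present: OK
        addEntry(new EntryInfo(null), true); // historical remove: now allowed
        System.out.println("ok");
    }
}
```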





[jira] [Commented] (IGNITE-8382) Problem with ignite-spring-data and Spring Boot 2

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452698#comment-16452698
 ] 

Dmitriy Pavlov commented on IGNITE-8382:


Please see https://issues.apache.org/jira/browse/IGNITE-6879; that issue is 
intended to add Spring Data 2.0 support.

 

Would it solve this issue?

> Problem with ignite-spring-data and Spring Boot 2
> -
>
> Key: IGNITE-8382
> URL: https://issues.apache.org/jira/browse/IGNITE-8382
> Project: Ignite
>  Issue Type: Bug
>  Components: spring
>Affects Versions: 2.4
>Reporter: Patrice R
>Priority: Major
>
> Hi,
> I've tried to update to Spring Boot 2 using an IgniteRepository (from 
> ignite-spring-data) and I got the following exception during the start.
> The same code with Spring Boot 1.5.9 is working.
>  
> _***_
> _APPLICATION FAILED TO START_
> _***_
> _Description:_
> _Parameter 0 of constructor in 
> org.apache.ignite.springdata.repository.support.IgniteRepositoryImpl required 
> a bean of type 'org.apache.ignite.IgniteCache' that could not be found._
> _Action:_
> _Consider defining a bean of type 'org.apache.ignite.IgniteCache' in your 
> configuration._





[jira] [Updated] (IGNITE-8181) Broken javadoc in GA Grid

2018-04-25 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak updated IGNITE-8181:
---
Fix Version/s: (was: 2.6)
   2.5

> Broken javadoc in GA Grid
> -
>
> Key: IGNITE-8181
> URL: https://issues.apache.org/jira/browse/IGNITE-8181
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Yury Babak
>Assignee: Yury Babak
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> [05:34:26][WARNING] Javadoc Warnings
> [05:34:26][WARNING] 
> /data/teamcity/work/8241162b5ce21231/modules/ml/src/main/java/org/apache/ignite/ml/genetic/TruncateSelectionTask.java:60:
>  warning - @param argument "config" is not a parameter name.
> [05:34:26][WARNING] 
> /data/teamcity/work/8241162b5ce21231/modules/ml/src/main/java/org/apache/ignite/ml/genetic/functions/GAGridFunction.java:41:
>  warning - @param argument "config" is not a parameter name.
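The warning means a @param tag names something that is not a declared parameter of the documented method. A minimal sketch of the mismatch and its fix (the method and parameter names are hypothetical, not the actual GA Grid code):

```java
// Hypothetical sketch of the javadoc mismatch behind the warning; not the
// actual GA Grid code.
public class JavadocParamSketch {
    /**
     * Fixed version: the @param tag matches the declared parameter name.
     * The broken variant documented "@param config" while the parameter was
     * named "cfg", which produces: '@param argument "config" is not a
     * parameter name'.
     *
     * @param cfg Selection configuration.
     */
    void truncateSelection(Object cfg) {
        // No-op: only the javadoc comment matters for this sketch.
    }

    public static void main(String[] args) {
        new JavadocParamSketch().truncateSelection(new Object());
        System.out.println("ok");
    }
}
```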





[jira] [Updated] (IGNITE-7877) Improve code style in GA part

2018-04-25 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak updated IGNITE-7877:
---
Fix Version/s: (was: 2.6)
   2.5

> Improve code style in GA part
> -
>
> Key: IGNITE-7877
> URL: https://issues.apache.org/jira/browse/IGNITE-7877
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Yury Babak
>Assignee: Turik Campbell
>Priority: Minor
> Fix For: 2.5
>
>
> Not all of the code located in the genetic package follows the project code 
> style. That should be fixed.





[jira] [Updated] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread Andrey Gura (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Gura updated IGNITE-8390:

Fix Version/s: 2.5

> WAL historical rebalance is not able to process cache.remove() updates
> --
>
> Key: IGNITE-8390
> URL: https://issues.apache.org/jira/browse/IGNITE-8390
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Blocker
> Fix For: 2.5
>
>
> WAL historical rebalance fails on the supplier when processing an entry 
> remove, with the following assertion:
> {noformat}
> java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl 
> [part=-1, val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, 
> expireTime=0, ver=GridCacheVersion [topVer=136155335, order=1524675346187, 
> nodeOrder=1], isNew=false, deleted=false]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Obviously this assertion works correctly only for a full rebalance. We 
> should either soften the assertion for the historical rebalance case or 
> disable it.
> With the assertion disabled, everything works well and the rebalance 
> finishes properly.





[jira] [Commented] (IGNITE-8242) Remove method GAGridUtils.getGenesForChromosome() as problematic when Chromosome contains duplicate genes.

2018-04-25 Thread Yury Babak (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452674#comment-16452674
 ] 

Yury Babak commented on IGNITE-8242:


Hi [~netmille], this ticket was included in the 2.5 branch.

> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> --
>
> Key: IGNITE-8242
> URL: https://issues.apache.org/jira/browse/IGNITE-8242
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Turik Campbell
>Assignee: Turik Campbell
>Priority: Minor
> Fix For: 2.5
>
>
> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> GAGridUtils.getGenesInOrderForChromosome() will be used instead.





[jira] [Updated] (IGNITE-8242) Remove method GAGridUtils.getGenesForChromosome() as problematic when Chromosome contains duplicate genes.

2018-04-25 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak updated IGNITE-8242:
---
Fix Version/s: (was: 2.6)
   2.5

> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> --
>
> Key: IGNITE-8242
> URL: https://issues.apache.org/jira/browse/IGNITE-8242
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Turik Campbell
>Assignee: Turik Campbell
>Priority: Minor
> Fix For: 2.5
>
>
> Remove method GAGridUtils.getGenesForChromosome() as problematic when 
> Chromosome contains duplicate genes.
> GAGridUtils.getGenesInOrderForChromosome() will be used instead.





[jira] [Commented] (IGNITE-7108) Apache Ignite 2.5 RPM and DEB packages

2018-04-25 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452662#comment-16452662
 ] 

Andrey Gura commented on IGNITE-7108:
-

LGTM! Merged to master and ignite-2.5 branch. Thanks!

> Apache Ignite 2.5 RPM and DEB packages
> --
>
> Key: IGNITE-7108
> URL: https://issues.apache.org/jira/browse/IGNITE-7108
> Project: Ignite
>  Issue Type: New Feature
>  Components: binary
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
>  Labels: important
> Fix For: 2.5
>
>
> # (/) Update RPM build process to unify with DEB build.
> # (/) Prepare build of DEB package (using architecture and layout from RPM 
> package).





[jira] [Updated] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-8390:

Fix Version/s: (was: 2.5)

> WAL historical rebalance is not able to process cache.remove() updates
> --
>
> Key: IGNITE-8390
> URL: https://issues.apache.org/jira/browse/IGNITE-8390
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Blocker
>
> WAL historical rebalance fails on the supplier when processing an entry 
> remove, with the following assertion:
> {noformat}
> java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl 
> [part=-1, val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, 
> expireTime=0, ver=GridCacheVersion [topVer=136155335, order=1524675346187, 
> nodeOrder=1], isNew=false, deleted=false]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Obviously this assertion works correctly only for a full rebalance. We 
> should either soften the assertion for the historical rebalance case or 
> disable it.
> With the assertion disabled, everything works well and the rebalance 
> finishes properly.





[jira] [Commented] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452657#comment-16452657
 ] 

Dmitriy Pavlov commented on IGNITE-8389:


[~Alexey Kuznetsov], sure. Thank you.

JIRA sends the issue content to the dev list immediately after issue creation.

> Get rid of thread ID in MVCC candidate
> --
>
> Key: IGNITE-8389
> URL: https://issues.apache.org/jira/browse/IGNITE-8389
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: transactions
>
> After implementing support for suspend/resume operations for pessimistic 
> transactions ([ticket|https://issues.apache.org/jira/browse/IGNITE-5714]), 
> the thread ID still exists in the MVCC candidate, but it is unused on remote 
> nodes (the xid is used instead), and this leads to hard-to-catch bugs.
> In this ticket we should remove the thread ID from the MVCC candidate and 
> use another mechanism instead.
> Currently, the MVCC candidate makes use of the thread ID in the following 
> scenarios:
> 1) Consider the code:
> cache.lock(key1).lock();
> cache.put(key1, 1); // implicit transaction is started here
> The implicit transaction checks whether the key is locked explicitly by the 
> current thread (the thread ID is used here), see 
> GridNearTxLocal#updateExplicitVersion. This allows the transaction to reuse 
> the cache lock instead of acquiring a lock on the tx entry.
> 2) The thread ID is used by an explicit transaction to check whether the key 
> is locally locked (and to throw an exception if so), see 
> GridNearTxLocal#enlistWriteEntry.
> 3) Also, the thread ID is used to mark a candidate as a reentry, etc.
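The fragility can be illustrated with a self-contained sketch (hypothetical class, not Ignite code): a candidate that identifies its owner by thread ID loses track of ownership once a suspended transaction resumes on another thread, while identifying the owner by xid stays stable for the whole transaction lifetime.

```java
import java.util.UUID;

// Self-contained illustration of why keying a lock candidate by thread ID is
// fragile under suspend/resume; a hypothetical sketch, not Ignite code.
public class CandidateSketch {
    static final class Candidate {
        final long threadId; // changes if the tx resumes on another thread
        final UUID xid;      // stable for the whole transaction lifetime

        Candidate(long threadId, UUID xid) {
            this.threadId = threadId;
            this.xid = xid;
        }

        boolean ownedByThread(long tid) { return threadId == tid; }

        boolean ownedByTx(UUID id) { return xid.equals(id); }
    }

    public static void main(String[] args) {
        UUID xid = UUID.randomUUID();
        Candidate cand = new Candidate(1L, xid); // lock acquired on thread 1

        // Transaction suspended, then resumed on thread 2:
        long resumedThreadId = 2L;

        System.out.println(cand.ownedByThread(resumedThreadId)); // ownership "lost"
        System.out.println(cand.ownedByTx(xid));                 // ownership kept
    }
}
```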





[jira] [Updated] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-8390:

Fix Version/s: 2.5

> WAL historical rebalance is not able to process cache.remove() updates
> --
>
> Key: IGNITE-8390
> URL: https://issues.apache.org/jira/browse/IGNITE-8390
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Blocker
> Fix For: 2.5
>
>
> WAL historical rebalance fails on the supplier when processing an entry 
> remove, with the following assertion:
> {noformat}
> java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl 
> [part=-1, val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, 
> expireTime=0, ver=GridCacheVersion [topVer=136155335, order=1524675346187, 
> nodeOrder=1], isNew=false, deleted=false]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Obviously this assertion works correctly only for a full rebalance. We 
> should either soften the assertion for the historical rebalance case or 
> disable it.
> With the assertion disabled, everything works well and the rebalance 
> finishes properly.





[jira] [Updated] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-8390:

Priority: Blocker  (was: Critical)

> WAL historical rebalance is not able to process cache.remove() updates
> --
>
> Key: IGNITE-8390
> URL: https://issues.apache.org/jira/browse/IGNITE-8390
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Blocker
> Fix For: 2.5
>
>
> WAL historical rebalance fails on the supplier when processing an entry 
> remove, with the following assertion:
> {noformat}
> java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl 
> [part=-1, val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, 
> expireTime=0, ver=GridCacheVersion [topVer=136155335, order=1524675346187, 
> nodeOrder=1], isNew=false, deleted=false]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Obviously, this assertion works correctly only for full rebalance. We should 
> either soften the assertion for the historical rebalance case or disable it.
> With the assertion disabled, everything works well and rebalance finishes 
> properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Alexey Kuznetsov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452643#comment-16452643
 ] 

Alexey Kuznetsov commented on IGNITE-8389:
--

[~dpavlov] I was in the middle of filling in the description )

> Get rid of thread ID in MVCC candidate
> --
>
> Key: IGNITE-8389
> URL: https://issues.apache.org/jira/browse/IGNITE-8389
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: transactions
>
> After implementing support for suspend/resume operations for pessimistic txs 
> ([ticket|https://issues.apache.org/jira/browse/IGNITE-5714]),
> the thread ID still exists in the MVCC candidate, but it is unused on remote 
> nodes (the xid is used instead), and it leads to hard-to-catch bugs.
> In this ticket we should remove the thread ID from the MVCC candidate and use 
> another mechanism instead.
> Currently, the MVCC candidate makes use of the thread ID in the following scenarios:
> 1) Look at the code:
> cache.lock(key1).lock();
> cache.put(key1, 1); // implicit transaction is started here
> An implicit transaction checks whether the key is locked explicitly by the 
> current thread (the thread ID is used here); see GridNearTxLocal#updateExplicitVersion. 
> This allows the transaction not to acquire a lock on the tx entry, but to reuse the cache lock.
> 2) The thread ID is used by an explicit transaction to check whether the key is 
> locally locked (and throw an exception); see GridNearTxLocal#enlistWriteEntry.
> 3) Also, the thread ID is used to mark a candidate as a reentry, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-7883) Cluster can have inconsistent affinity configuration

2018-04-25 Thread Alexand Polyakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexand Polyakov reassigned IGNITE-7883:


Assignee: Alexand Polyakov

> Cluster can have inconsistent affinity configuration 
> -
>
> Key: IGNITE-7883
> URL: https://issues.apache.org/jira/browse/IGNITE-7883
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Alexand Polyakov
>Priority: Major
> Fix For: 2.6
>
>
> A cluster can have an inconsistent affinity configuration if you create two 
> nodes, one with an affinity key configuration and the other without it (in 
> IgniteConfiguration or CacheConfiguration). Both nodes will work fine with no 
> exceptions, but at the same time they will apply different affinity rules to keys:
>  
> {code:java}
> package affinity;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheAtomicityMode;
> import org.apache.ignite.cache.CacheKeyConfiguration;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.affinity.Affinity;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import java.util.Arrays;
> public class Test {
> private static int id = 0;
> public static void main(String[] args) {
> Ignite ignite = Ignition.start(getConfiguration(true, false));
> Ignite ignite2 = Ignition.start(getConfiguration(false, false));
> Affinity affinity = ignite.affinity("TEST");
> Affinity affinity2 = ignite2.affinity("TEST");
> for (int i = 0; i < 1_000_000; i++) {
> AKey key = new AKey(i);
> if(affinity.partition(key) != affinity2.partition(key))
> System.out.println("FAILED for: " + key);
> }
> System.out.println("DONE");
> }
> private static IgniteConfiguration getConfiguration(boolean 
> withAffinityCfg, boolean client) {
> IgniteConfiguration cfg = new IgniteConfiguration();
> TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder(true);
> finder.setAddresses(Arrays.asList("localhost:47500..47600"));
> cfg.setClientMode(client);
> cfg.setIgniteInstanceName("test" + id++);
> CacheConfiguration cacheCfg = new CacheConfiguration("TEST");
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> if(withAffinityCfg) {
> cacheCfg.setKeyConfiguration(new 
> CacheKeyConfiguration("affinity.AKey", "a"));
> }
> cfg.setCacheConfiguration(cacheCfg);
> cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(finder));
> return cfg;
> }
> }
> class AKey {
> int a;
> public AKey(int a) {
> this.a = a;
> }
> @Override public String toString() {
> return "AKey{" +
> "a=" + a +
> '}';
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8389:
-
Labels: transactions  (was: )

> Get rid of thread ID in MVCC candidate
> --
>
> Key: IGNITE-8389
> URL: https://issues.apache.org/jira/browse/IGNITE-8389
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
>  Labels: transactions
>
> After implementing support for suspend/resume operations for pessimistic txs 
> ([ticket|https://issues.apache.org/jira/browse/IGNITE-5714]),
> the thread ID still exists in the MVCC candidate, but it is unused on remote 
> nodes (the xid is used instead), and it leads to hard-to-catch bugs.
> In this ticket we should remove the thread ID from the MVCC candidate and use 
> another mechanism instead.
> Currently, the MVCC candidate makes use of the thread ID in the following scenarios:
> 1) Look at the code:
> cache.lock(key1).lock();
> cache.put(key1, 1); // implicit transaction is started here
> An implicit transaction checks whether the key is locked explicitly by the 
> current thread (the thread ID is used here); see GridNearTxLocal#updateExplicitVersion. 
> This allows the transaction not to acquire a lock on the tx entry, but to reuse the cache lock.
> 2) The thread ID is used by an explicit transaction to check whether the key is 
> locally locked (and throw an exception); see GridNearTxLocal#enlistWriteEntry.
> 3) Also, the thread ID is used to mark a candidate as a reentry, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8390) WAL historical rebalance is not able to process cache.remove() updates

2018-04-25 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-8390:
---

 Summary: WAL historical rebalance is not able to process 
cache.remove() updates
 Key: IGNITE-8390
 URL: https://issues.apache.org/jira/browse/IGNITE-8390
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.4
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko


WAL historical rebalance fails on the supplier when processing an entry remove 
with the following assertion:

{noformat}
java.lang.AssertionError: GridCacheEntryInfo [key=KeyCacheObjectImpl [part=-1, 
val=2, hasValBytes=true], cacheId=94416770, val=null, ttl=0, expireTime=0, 
ver=GridCacheVersion [topVer=136155335, order=1524675346187, nodeOrder=1], 
isNew=false, deleted=false]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplyMessage.addEntry0(GridDhtPartitionSupplyMessage.java:220)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:381)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:364)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:379)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1603)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}

Obviously, this assertion works correctly only for full rebalance. We should 
either soften the assertion for the historical rebalance case or disable it.
With the assertion disabled, everything works well and rebalance finishes 
properly.
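The softened check can be sketched as follows. This is a simplified, self-contained illustration, not Ignite's actual `GridDhtPartitionSupplyMessage` code: the `EntryInfo` and `SupplyMessageCheck` classes and the `historical` flag are assumptions introduced for the example.

```java
// A removed entry legitimately arrives with a null value (a tombstone) during
// historical (WAL-based) rebalance, so the non-null assertion must only apply
// to full rebalance, where live entries are transferred.
final class EntryInfo {
    final Object key;
    final Object val; // null for a cache.remove() tombstone

    EntryInfo(Object key, Object val) {
        this.key = key;
        this.val = val;
    }
}

final class SupplyMessageCheck {
    /** Returns true if the entry may be added to the supply message. */
    static boolean entryAllowed(EntryInfo info, boolean historical) {
        // Full rebalance: a null value indicates a bug on the supplier.
        // Historical rebalance: WAL replay includes removes, so null is valid.
        return info.val != null || historical;
    }
}
```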



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-8389:
-
Description: 
After implementing support for suspend/resume operations for pessimistic txs 
([ticket|https://issues.apache.org/jira/browse/IGNITE-5714]),
the thread ID still exists in the MVCC candidate, but it is unused on remote 
nodes (the xid is used instead), and it leads to hard-to-catch bugs.

In this ticket we should remove the thread ID from the MVCC candidate and use 
another mechanism instead.

Currently, the MVCC candidate makes use of the thread ID in the following scenarios:
1) Look at the code:

cache.lock(key1).lock();

cache.put(key1, 1); // implicit transaction is started here
An implicit transaction checks whether the key is locked explicitly by the 
current thread (the thread ID is used here); see GridNearTxLocal#updateExplicitVersion. 
This allows the transaction not to acquire a lock on the tx entry, but to reuse the cache lock.

2) The thread ID is used by an explicit transaction to check whether the key is 
locally locked (and throw an exception); see GridNearTxLocal#enlistWriteEntry.

3) Also, the thread ID is used to mark a candidate as a reentry, etc.
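The core problem with thread-ID matching can be sketched with a toy example. This is a simplified model, not Ignite's actual MVCC candidate class; `LockCandidate` and its fields are assumptions made for illustration.

```java
import java.util.UUID;

// Why matching lock candidates by thread ID breaks once transactions can be
// suspended on one thread and resumed on another: the xid survives the thread
// change, the thread ID does not.
final class LockCandidate {
    final long ownerThreadId; // what the thread-ID-based check compares
    final UUID ownerXid;      // what remote nodes already compare

    LockCandidate(long threadId, UUID xid) {
        this.ownerThreadId = threadId;
        this.ownerXid = xid;
    }

    boolean ownedByThread(long threadId) {
        return ownerThreadId == threadId;
    }

    boolean ownedByTx(UUID xid) {
        return ownerXid.equals(xid);
    }
}

public class MvccCandidateSketch {
    public static void main(String[] args) {
        UUID xid = UUID.randomUUID();

        // Candidate acquired by a transaction running on thread 1.
        LockCandidate cand = new LockCandidate(1L, xid);

        // The transaction is suspended and later resumed on thread 2.
        long resumedThread = 2L;

        // Thread-ID matching wrongly denies ownership; xid matching does not.
        System.out.println("thread-ID match: " + cand.ownedByThread(resumedThread)); // false
        System.out.println("xid match: " + cand.ownedByTx(xid));                     // true
    }
}
```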



> Get rid of thread ID in MVCC candidate
> --
>
> Key: IGNITE-8389
> URL: https://issues.apache.org/jira/browse/IGNITE-8389
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
>
> After implementing support for suspend/resume operations for pessimistic txs 
> ([ticket|https://issues.apache.org/jira/browse/IGNITE-5714]),
> the thread ID still exists in the MVCC candidate, but it is unused on remote 
> nodes (the xid is used instead), and it leads to hard-to-catch bugs.
> In this ticket we should remove the thread ID from the MVCC candidate and use 
> another mechanism instead.
> Currently, the MVCC candidate makes use of the thread ID in the following scenarios:
> 1) Look at the code:
> cache.lock(key1).lock();
> cache.put(key1, 1); // implicit transaction is started here
> An implicit transaction checks whether the key is locked explicitly by the 
> current thread (the thread ID is used here); see GridNearTxLocal#updateExplicitVersion. 
> This allows the transaction not to acquire a lock on the tx entry, but to reuse the cache lock.
> 2) The thread ID is used by an explicit transaction to check whether the key is 
> locally locked (and throw an exception); see GridNearTxLocal#enlistWriteEntry.
> 3) Also, the thread ID is used to mark a candidate as a reentry, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-8343) InetSocketAddress.getAddress() returns null, should check it in TcpCommunicationSpi

2018-04-25 Thread Ilya Kasnacheev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-8343:
---

Assignee: Ilya Kasnacheev

> InetSocketAddress.getAddress() returns null, should check it in 
> TcpCommunicationSpi
> ---
>
> Key: IGNITE-8343
> URL: https://issues.apache.org/jira/browse/IGNITE-8343
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> This is especially notorious in the following scenario:
> {code}
> // -Djava.net.preferIPv4Stack=true
> System.err.println(new InetSocketAddress("0:0:0:0:0:0:0:1%lo", 
> 12345).getAddress()); // null
> {code}
> Yes, we already warn if different nodes have differing preferIPv4Stack 
> settings, but this is a warning, not an error, and there may be other cases 
> where getAddress() returns null. We should add a check.
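A minimal sketch of the null case and the kind of guard the ticket asks for. The helper name `resolvedOrNull` is illustrative, not an Ignite API.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class AddrCheck {
    /** Returns the resolved address, or null if resolution failed. */
    static InetAddress resolvedOrNull(InetSocketAddress sockAddr) {
        // getAddress() returns null for an unresolved socket address, e.g. an
        // IPv6 literal when -Djava.net.preferIPv4Stack=true is set.
        return sockAddr.isUnresolved() ? null : sockAddr.getAddress();
    }

    public static void main(String[] args) {
        // createUnresolved() skips name resolution, so getAddress() is null.
        InetSocketAddress unresolved =
            InetSocketAddress.createUnresolved("example.invalid", 12345);

        if (resolvedOrNull(unresolved) == null)
            System.out.println("skipping unresolved address: " + unresolved);
    }
}
```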



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452614#comment-16452614
 ] 

Dmitriy Pavlov commented on IGNITE-8389:


Please fill in the description. It is quite unclear why we should remove the 
thread ID and what the alternatives could be.

 

Please show the reasoning behind this change so that community members can 
clearly understand why it should be done.

> Get rid of thread ID in MVCC candidate
> --
>
> Key: IGNITE-8389
> URL: https://issues.apache.org/jira/browse/IGNITE-8389
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Kuznetsov
>Assignee: Alexey Kuznetsov
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7628) SqlQuery hangs indefinitely with additional not registered in baseline node.

2018-04-25 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452613#comment-16452613
 ] 

Eduard Shangareev commented on IGNITE-7628:
---

Review - https://reviews.ignite.apache.org/ignite/review/IGNT-CR-588

> SqlQuery hangs indefinitely with additional not registered in baseline node.
> 
>
> Key: IGNITE-7628
> URL: https://issues.apache.org/jira/browse/IGNITE-7628
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Stanilovsky Evgeny
>Assignee: Eduard Shangareev
>Priority: Major
> Fix For: 2.6
>
> Attachments: 
> IgniteChangingBaselineCacheQueryAdditionalNodeSelfTest.java
>
>
> SqlQuery hangs indefinitely while an additional node is registered in the 
> topology but still not in the baseline.
> A reproducer is attached. Apparently the problem is in the 
> GridH2IndexRangeResponse#awaitForResponse function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6433) We need to handle possible eviction when we should own a partition because we had lost it

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-6433:

Priority: Major  (was: Critical)

> We need to handle possible eviction when we should own a partition because we 
> had lost it
> -
>
> Key: IGNITE-6433
> URL: https://issues.apache.org/jira/browse/IGNITE-6433
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Priority: Major
> Attachments: 6433-thread-dump.txt
>
>
> If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
> would belong to us because of the affinity assignment, and its state was 
> RENTING, then we just ignore such a partition and don't move it to the LOST state.
> We should either wait for the eviction asynchronously or cancel the eviction 
> for such a partition and move its state to LOST.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6433) We need to handle possible eviction when we should own a partition because we had lost it

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-6433:

Component/s: cache

> We need to handle possible eviction when we should own a partition because we 
> had lost it
> -
>
> Key: IGNITE-6433
> URL: https://issues.apache.org/jira/browse/IGNITE-6433
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Priority: Major
> Attachments: 6433-thread-dump.txt
>
>
> If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
> would belong to us because of the affinity assignment, and its state was 
> RENTING, then we just ignore such a partition and don't move it to the LOST state.
> We should either wait for the eviction asynchronously or cancel the eviction 
> for such a partition and move its state to LOST.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6433) We need to handle possible eviction when we should own a partition because we had lost it

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-6433:

Description: 
If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
would belong to us because of the affinity assignment, and its state was 
RENTING, then we just ignore such a partition and don't move it to the LOST state.

We should either wait for the eviction asynchronously or cancel the eviction 
for such a partition and move its state to LOST.

  was:
If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
would belong to us because of the affinity assignment, and its state was 
RENTING, then we would wait for its eviction to complete, which would hang the 
cluster (the exchange time would significantly increase).

Instead of waiting, we should simply cancel the eviction.


> We need to handle possible eviction when we should own a partition because we 
> had lost it
> -
>
> Key: IGNITE-6433
> URL: https://issues.apache.org/jira/browse/IGNITE-6433
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Priority: Critical
> Attachments: 6433-thread-dump.txt
>
>
> If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
> would belong to us because of the affinity assignment, and its state was 
> RENTING, then we just ignore such a partition and don't move it to the LOST state.
> We should either wait for the eviction asynchronously or cancel the eviction 
> for such a partition and move its state to LOST.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7628) SqlQuery hangs indefinitely with additional not registered in baseline node.

2018-04-25 Thread Eduard Shangareev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452599#comment-16452599
 ] 

Eduard Shangareev commented on IGNITE-7628:
---

PR - https://github.com/apache/ignite/pull/3916
TC - https://ci.ignite.apache.org/viewQueued.html?itemId=1246558

> SqlQuery hangs indefinitely with additional not registered in baseline node.
> 
>
> Key: IGNITE-7628
> URL: https://issues.apache.org/jira/browse/IGNITE-7628
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Stanilovsky Evgeny
>Assignee: Eduard Shangareev
>Priority: Major
> Fix For: 2.6
>
> Attachments: 
> IgniteChangingBaselineCacheQueryAdditionalNodeSelfTest.java
>
>
> SqlQuery hangs indefinitely while an additional node is registered in the 
> topology but still not in the baseline.
> A reproducer is attached. Apparently the problem is in the 
> GridH2IndexRangeResponse#awaitForResponse function.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6433) We need to handle possible eviction when we should own a partition because we had lost it

2018-04-25 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-6433:

Summary: We need to handle possible eviction when we should own a partition 
because we had lost it  (was: We need to cancel eviction instead of waiting it 
when we should own a partition because we had lost it)

> We need to handle possible eviction when we should own a partition because we 
> had lost it
> -
>
> Key: IGNITE-6433
> URL: https://issues.apache.org/jira/browse/IGNITE-6433
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Eduard Shangareev
>Priority: Critical
> Attachments: 6433-thread-dump.txt
>
>
> If PartitionLossPolicy.IGNORE is used and we have lost some partition which 
> would belong to us because of the affinity assignment, and its state was 
> RENTING, then we would wait for its eviction to complete, which would hang the 
> cluster (the exchange time would significantly increase).
> Instead of waiting, we should simply cancel the eviction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8389) Get rid of thread ID in MVCC candidate

2018-04-25 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-8389:


 Summary: Get rid of thread ID in MVCC candidate
 Key: IGNITE-8389
 URL: https://issues.apache.org/jira/browse/IGNITE-8389
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Kuznetsov
Assignee: Alexey Kuznetsov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8388) Improve SQL functionality test coverage

2018-04-25 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-8388:
-

 Summary: Improve SQL functionality test coverage 
 Key: IGNITE-8388
 URL: https://issues.apache.org/jira/browse/IGNITE-8388
 Project: Ignite
  Issue Type: Improvement
Reporter: Eduard Shangareev


IGNITE-7628 shows a lack of test coverage.

We should add tests with basic SQL scenarios and BLT:
1. With a node in the cluster which is not in the BLT.
2. With an offline node in the BLT.
3. 1 + 2.

I believe we should start by extending the following test:
org.apache.ignite.internal.processors.cache.IgniteCacheAbstractFieldsQuerySelfTest




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8357) Recreated atomic sequence produces "Sequence was removed from cache"

2018-04-25 Thread Pavel Vinokurov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452578#comment-16452578
 ] 

Pavel Vinokurov commented on IGNITE-8357:
-

[~dpavlov], the pull request is not completed yet; [~amashenkov] is reviewing 
it, and most probably it will require further discussion.

> Recreated atomic sequence produces "Sequence was removed from cache"
> 
>
> Key: IGNITE-8357
> URL: https://issues.apache.org/jira/browse/IGNITE-8357
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
> Attachments: RecreatingAtomicSequence.java
>
>
> If a cluster has two or more nodes, a recreated atomic sequence produces an 
> error on the incrementAndGet operation.
> The reproducer is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-4958) Make data pages recyclable into index/meta/etc pages and vice versa

2018-04-25 Thread Dmitriy Sorokin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452571#comment-16452571
 ] 

Dmitriy Sorokin commented on IGNITE-4958:
-

[~agura], [~ivan.glukos], please review my patch; the test results look good 
to me.

> Make data pages recyclable into index/meta/etc pages and vice versa
> ---
>
> Key: IGNITE-4958
> URL: https://issues.apache.org/jira/browse/IGNITE-4958
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Affects Versions: 2.0
>Reporter: Ivan Rakov
>Assignee: Dmitriy Sorokin
>Priority: Major
> Fix For: 2.6
>
>
> Recycling for data pages is disabled for now. Empty data pages are 
> accumulated in FreeListImpl#emptyDataPagesBucket, and can be reused only as 
> data pages again. What has to be done:
> * Empty data pages should be recycled into reuse bucket
> * We should check reuse bucket first before allocating a new data page
> * MemoryPolicyConfiguration#emptyPagesPoolSize should be removed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-25 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452572#comment-16452572
 ] 

Anton Vinogradov commented on IGNITE-7592:
--

have no objections

> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.6
>
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though 
> the probability that it will happen is rather low (<10%).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7108) Apache Ignite 2.5 RPM and DEB packages

2018-04-25 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452566#comment-16452566
 ] 

Andrey Gura commented on IGNITE-7108:
-

Please fix the {{DEVNOTES.txt}} file because it contains two identical headers: 
"Apache Ignite RPM Package Build Instructions".

> Apache Ignite 2.5 RPM and DEB packages
> --
>
> Key: IGNITE-7108
> URL: https://issues.apache.org/jira/browse/IGNITE-7108
> Project: Ignite
>  Issue Type: New Feature
>  Components: binary
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Critical
>  Labels: important
> Fix For: 2.5
>
>
> # (/) Update RPM build process to unify with DEB build.
> # (/) Prepare build of DEB package (using architecture and layout from RPM 
> package).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8277) Add utilities to check and display cache info

2018-04-25 Thread Ivan Rakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452554#comment-16452554
 ] 

Ivan Rakov commented on IGNITE-8277:


Pull request: https://github.com/apache/ignite/pull/3915/files
TC run: https://ci.ignite.apache.org/viewQueued.html?itemId=1246368

> Add utilities to check and display cache info
> -
>
> Key: IGNITE-8277
> URL: https://issues.apache.org/jira/browse/IGNITE-8277
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Ivan Rakov
>Priority: Major
>
> It will be useful to add some utilities to the control.sh script to control 
> cluster state in production environments:
> 1) A utility which checks partition consistency on primary and backup nodes. 
> This utility should work on an idle cluster and check only owning partitions. 
> Also, there should be a way to run a per-key comparison of a partition on two 
> selected nodes in the grid.
> 2) A utility to display cache info, such as the list of caches with their IDs, 
> the list of cache groups, current partition owners, and the number of currently 
> owning, moving, and renting partitions in the grid.
> 3) A utility to display contended keys in caches.
> 4) A utility to check the validity of all indexes. Essentially, it will take 
> an iterator over a partition and check that the given entry is reachable via 
> all defined indexes.
> I suggest adding the given commands to the {{./bin/control.sh}} script under a 
> new {{--cache}} subcommand.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-25 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452553#comment-16452553
 ] 

Alexey Goncharuk commented on IGNITE-7592:
--

Guys, let's propagate the result of the rebalance future - this is not a big 
change, but it fixes the issues that confused Ilya.

> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.6
>
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though 
> the probability that it will happen is rather low (<10%).





[jira] [Commented] (IGNITE-8374) Test IgnitePdsCorruptedStoreTest.testCacheMetaCorruption hangs during node start

2018-04-25 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452514#comment-16452514
 ] 

Andrey Gura commented on IGNITE-8374:
-

Looks good. Merged to master branch. Thanks!

> Test IgnitePdsCorruptedStoreTest.testCacheMetaCorruption hangs during node 
> start
> 
>
> Key: IGNITE-8374
> URL: https://issues.apache.org/jira/browse/IGNITE-8374
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> Call to cluster().active() in IgniteKernal.ackStart() synchronously waits for 
> the state transition to complete, but due to an error during the activation 
> process this transition will never end.





[jira] [Updated] (IGNITE-6587) Ignite watchdog service

2018-04-25 Thread Andrey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kuznetsov updated IGNITE-6587:
-
Description: 
As described in [1], each Ignite node has a number of system-critical threads. 
We should implement a periodic check that calls the failure handler when one of 
the following conditions has been detected:
# Critical thread is not alive anymore.
# Critical thread remains in BLOCKED state for a long time. 

The actual list of system-critical threads can be found at [1].

[1] 
https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling

  was:
We need to come up with a 'watchdog service' to monitor Ignite node local 
health and kill the process under some critical conditions.
For example, if one of the mission-critical Ignite threads dies, the Ignite 
node must be stopped.
At the first glance, the list of critical threads is:
disco-event-worker
tcp-disco-sock-reader
tcp-disco-srvr
tcp-disco-msg-worker
tcp-comm-worker
grid-nio-worker-tcp-comm
exchange-worker
sys-stripe
grid-timeout-worker
db-checkpoint-thread
wal-file-archiver
ttl-cleanup-worker
nio-acceptor

The mechanism should support pluggable components so that self-check can be 
extended via plugins.
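The two failure conditions above can be sketched with the plain Thread state API; the class name and the threshold below are assumptions for illustration, not Ignite code. A monitor thread would evaluate this periodically for each registered critical thread and invoke the failure handler when it returns true.

```java
/**
 * Minimal sketch of the periodic liveness check. Class name and threshold
 * are assumptions, not Ignite API.
 */
public class CriticalThreadCheck {
    /** Assumed threshold for how long a critical thread may stay BLOCKED. */
    static final long MAX_BLOCKED_MS = 10_000;

    /**
     * Returns true when one of the two failure conditions holds:
     * the thread is not alive anymore, or it has been BLOCKED too long.
     */
    static boolean failed(Thread t, long blockedSinceMs, long nowMs) {
        if (!t.isAlive())
            return true; // condition 1: critical thread died

        return t.getState() == Thread.State.BLOCKED
            && nowMs - blockedSinceMs > MAX_BLOCKED_MS; // condition 2
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> { });
        worker.start();
        worker.join(); // let the worker terminate

        System.out.println(failed(worker, 0, 0));                      // true: thread finished
        System.out.println(failed(Thread.currentThread(), 0, 60_000)); // false: RUNNABLE
    }
}
```

Tracking when a thread entered the BLOCKED state (the blockedSinceMs argument) is the monitor's job; one way is to record a timestamp whenever the observed state changes between scans.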


> Ignite watchdog service
> ---
>
> Key: IGNITE-6587
> URL: https://issues.apache.org/jira/browse/IGNITE-6587
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.2
>Reporter: Alexey Goncharuk
>Assignee: Andrey Gura
>Priority: Major
>  Labels: IEP-5
> Fix For: 2.6
>
> Attachments: watchdog.sh
>
>
> As described in [1], each Ignite node has a number of system-critical 
> threads. We should implement a periodic check that calls the failure handler 
> when one of the following conditions has been detected:
> # Critical thread is not alive anymore.
> # Critical thread remains in BLOCKED state for a long time. 
> The actual list of system-critical threads can be found at [1].
> [1] 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling





[jira] [Commented] (IGNITE-8086) Flaky test timeouts in Activate/Deactivate Cluster suite

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452425#comment-16452425
 ] 

Dmitriy Pavlov commented on IGNITE-8086:


[~avinogradov] [~Mmuzaf], thank you for continuing to improve the stability of 
Ignite tests.

> Flaky test timeouts in Activate/Deactivate Cluster suite
> 
>
> Key: IGNITE-8086
> URL: https://issues.apache.org/jira/browse/IGNITE-8086
> Project: Ignite
>  Issue Type: Test
>Reporter: Dmitriy Pavlov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.6
>
>
> # Activate | Deactivate Cluster 
>  IgniteStandByClusterSuite: 
> CacheBaselineTopologyTest.testPrimaryLeftAndClusterRestart (master fail rate 
> 37,1%) 
>  
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=6798733272445954906=%3Cdefault%3E=testDetails]
>  # IgniteStandByClusterSuite: 
> CacheBaselineTopologyTest.testBaselineTopologyChangesFromClient (master fail 
> rate 24,9%) 
>  
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-9217764610687235146=%3Cdefault%3E=testDetails]
>  #  IgniteStandByClusterSuite: 
> CacheBaselineTopologyTest.testBaselineTopologyChangesFromServer (master fail 
> rate 19,8%)
>  
> [https://ci.ignite.apache.org/viewLog.html?buildId=1199624=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster#testNameId-4432469336264773506]





[jira] [Commented] (IGNITE-8357) Recreated atomic sequence produces "Sequence was removed from cache"

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452405#comment-16452405
 ] 

Dmitriy Pavlov commented on IGNITE-8357:


[~pvinokurov], please provide a link to a TC Run All

> Recreated atomic sequence produces "Sequence was removed from cache"
> 
>
> Key: IGNITE-8357
> URL: https://issues.apache.org/jira/browse/IGNITE-8357
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
> Attachments: RecreatingAtomicSequence.java
>
>
> If a cluster has two or more nodes, recreated atomic sequence produces error 
> on incrementAndGet operation. 
> The reproducer is attached.





[jira] [Commented] (IGNITE-8357) Recreated atomic sequence produces "Sequence was removed from cache"

2018-04-25 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452401#comment-16452401
 ] 

Dmitriy Pavlov commented on IGNITE-8357:


Hi [~pvinokurov], please assign the ticket to yourself if you are working on it.

> Recreated atomic sequence produces "Sequence was removed from cache"
> 
>
> Key: IGNITE-8357
> URL: https://issues.apache.org/jira/browse/IGNITE-8357
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.4
>Reporter: Pavel Vinokurov
>Priority: Major
> Attachments: RecreatingAtomicSequence.java
>
>
> If a cluster has two or more nodes, recreated atomic sequence produces error 
> on incrementAndGet operation. 
> The reproducer is attached.





[jira] [Commented] (IGNITE-8324) Ignite Cache Restarts 1 suite hangs with assertion error

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452388#comment-16452388
 ] 

ASF GitHub Bot commented on IGNITE-8324:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3880


> Ignite Cache Restarts 1 suite hangs with assertion error
> 
>
> Key: IGNITE-8324
> URL: https://issues.apache.org/jira/browse/IGNITE-8324
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> {noformat}
> [ERROR][exchange-worker-#620749%replicated.GridCacheReplicatedNodeRestartSelfTest0%][GridDhtPartitionsExchangeFuture]
>  Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2@6dd7cc93
> java.lang.AssertionError: Invalid topology version [grp=ignite-sys-cache, 
> topVer=AffinityTopologyVersion [topVer=323, minorTopVer=0], 
> exchTopVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> discoCacheVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> exchDiscoCacheVer=AffinityTopologyVersion [topVer=323, minorTopVer=0], 
> fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent 
> [evtNode=TcpDiscoveryNode [id=48a5d243-7f63-4069-aba1-868c6895, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, 
> intOrder=163, lastExchangeTime=1524043684082, loc=false, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], topVer=322, 
> nodeId8=b51b3893, msg=Node joined: TcpDiscoveryNode 
> [id=48a5d243-7f63-4069-aba1-868c6895, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, intOrder=163, 
> lastExchangeTime=1524043684082, loc=false, ver=2.5.0#20180417-sha1:56be24b9, 
> isClient=false], type=NODE_JOINED, tstamp=1524043684166], 
> crd=TcpDiscoveryNode [id=b51b3893-377a-465f-88ea-316a6560, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, 
> intOrder=1, lastExchangeTime=1524043633288, loc=true, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], 
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion 
> [topVer=322, minorTopVer=0], discoEvt=DiscoveryEvent 
> [evtNode=TcpDiscoveryNode [id=48a5d243-7f63-4069-aba1-868c6895, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, 
> intOrder=163, lastExchangeTime=1524043684082, loc=false, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], topVer=322, 
> nodeId8=b51b3893, msg=Node joined: TcpDiscoveryNode 
> [id=48a5d243-7f63-4069-aba1-868c6895, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, intOrder=163, 
> lastExchangeTime=1524043684082, loc=false, ver=2.5.0#20180417-sha1:56be24b9, 
> isClient=false], type=NODE_JOINED, tstamp=1524043684166], nodeId=48a5d243, 
> evt=NODE_JOINED], added=true, initFut=GridFutureAdapter 
> [ignoreInterrupts=false, state=DONE, res=true, hash=527135060], init=true, 
> lastVer=GridCacheVersion [topVer=135523955, order=1524043694535, 
> nodeOrder=3], partReleaseFut=PartitionReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion 
> [topVer=322, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=322, 
> minorTopVer=0], futures=[]], LocalTxReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=322, 
> minorTopVer=0], futures=[RemoteTxReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> exchActions=null, affChangeMsg=null, initTs=1524043684166, 
> centralizedAff=false, forceAffReassignment=false, changeGlobalStateE=null, 
> done=false, state=CRD, evtLatch=0, remaining=[], super=GridFutureAdapter 
> [ignoreInterrupts=false, state=INIT, res=null, hash=1570781250]]]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.updateTopologyVersion(GridDhtPartitionTopologyImpl.java:257)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.updateTopologies(GridDhtPartitionsExchangeFuture.java:845)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2461)
>   at 
> 

[jira] [Updated] (IGNITE-6565) Use long type for size and keySize in cache metrics

2018-04-25 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6565:
-
Fix Version/s: 2.6

> Use long type for size and keySize in cache metrics
> ---
>
> Key: IGNITE-6565
> URL: https://issues.apache.org/jira/browse/IGNITE-6565
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Alexander Menshikov
>Priority: Major
>  Labels: easyfix
> Fix For: 2.6
>
>
> Currently it's an int, so for large caches there's no way to convey the 
> correct value.
> We should introduce getSizeLong() and getKeySizeLong().
> Also introduce the same in .NET and make sure that compatibility is not 
> broken when passing OP_LOCAL_METRICS and OP_GLOBAL_METRICS.
> BTW do we need keySize at all? What's it for?





[jira] [Updated] (IGNITE-6699) Optimize client-side data streamer performance

2018-04-25 Thread Anton Vinogradov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-6699:
-
Fix Version/s: (was: 2.5)
   2.6

> Optimize client-side data streamer performance
> --
>
> Key: IGNITE-6699
> URL: https://issues.apache.org/jira/browse/IGNITE-6699
> Project: Ignite
>  Issue Type: Task
>  Components: streaming
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: performance
> Fix For: 2.6
>
>
> Currently, if a user has several server nodes and a single client node with a 
> single thread pushing data to the streamer, he will not be able to load data 
> at maximum speed. On the other hand, if he starts several data loading 
> threads, throughput will increase. 
> One of the root causes of this is bad data streamer design. The method 
> {{IgniteDataStreamer.addData(K, V)}} returns a new future for every 
> operation; this is too fine-grained an approach. It also generates a lot of 
> garbage and causes contention on streamer internals. 
> Proposed implementation flow:
> 1) Compare performance of the {{addData(K, V)}} vs {{addData(Collection)}} 
> methods from one thread in a distributed environment. The latter should show 
> considerably higher throughput.
> 2) Users should receive per-batch futures, rather than per-key. 
> 3) Try caching thread data in some collection until it is large enough, to 
> avoid contention and unnecessary allocations.
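Point 3 above can be sketched as a small per-thread buffer that forwards entries to a bulk sink only when full. All names here are illustrative, not Ignite API; in a real streamer the sink would be an addData(Collection)-style call, and a future could then be created per batch inside flush() instead of per key.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Hypothetical sketch: buffer entries, flush in batches to cut per-key overhead. */
public class BatchBuffer<T> {
    private final int batchSize;
    private final Consumer<List<T>> sink; // stands in for a bulk addData call
    private final List<T> buf = new ArrayList<>();
    private int batchesFlushed;

    public BatchBuffer(int batchSize, Consumer<List<T>> sink) {
        this.batchSize = batchSize;
        this.sink = sink;
    }

    public void add(T item) {
        buf.add(item);

        if (buf.size() >= batchSize)
            flush();
    }

    public void flush() {
        if (buf.isEmpty())
            return;

        sink.accept(new ArrayList<>(buf)); // one bulk call instead of N per-key calls
        buf.clear();
        batchesFlushed++;
    }

    public int batchesFlushed() {
        return batchesFlushed;
    }

    public static void main(String[] args) {
        BatchBuffer<Integer> buf = new BatchBuffer<>(100, batch -> { });

        for (int i = 0; i < 250; i++)
            buf.add(i);

        buf.flush(); // drain the tail

        System.out.println(buf.batchesFlushed()); // 3: two full batches plus a tail of 50
    }
}
```

With a batch size of 100, loading 250 entries results in three bulk calls instead of 250 per-key calls, which is exactly the contention and garbage reduction the proposal aims at.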





[jira] [Commented] (IGNITE-8324) Ignite Cache Restarts 1 suite hangs with assertion error

2018-04-25 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452376#comment-16452376
 ] 

Alexey Goncharuk commented on IGNITE-8324:
--

I got why these calls are not needed, will merge the fix shortly.

> Ignite Cache Restarts 1 suite hangs with assertion error
> 
>
> Key: IGNITE-8324
> URL: https://issues.apache.org/jira/browse/IGNITE-8324
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.4
>Reporter: Pavel Kovalenko
>Assignee: Pavel Kovalenko
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.5
>
>
> {noformat}
> [ERROR][exchange-worker-#620749%replicated.GridCacheReplicatedNodeRestartSelfTest0%][GridDhtPartitionsExchangeFuture]
>  Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2@6dd7cc93
> java.lang.AssertionError: Invalid topology version [grp=ignite-sys-cache, 
> topVer=AffinityTopologyVersion [topVer=323, minorTopVer=0], 
> exchTopVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> discoCacheVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> exchDiscoCacheVer=AffinityTopologyVersion [topVer=323, minorTopVer=0], 
> fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent 
> [evtNode=TcpDiscoveryNode [id=48a5d243-7f63-4069-aba1-868c6895, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, 
> intOrder=163, lastExchangeTime=1524043684082, loc=false, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], topVer=322, 
> nodeId8=b51b3893, msg=Node joined: TcpDiscoveryNode 
> [id=48a5d243-7f63-4069-aba1-868c6895, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, intOrder=163, 
> lastExchangeTime=1524043684082, loc=false, ver=2.5.0#20180417-sha1:56be24b9, 
> isClient=false], type=NODE_JOINED, tstamp=1524043684166], 
> crd=TcpDiscoveryNode [id=b51b3893-377a-465f-88ea-316a6560, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47500], discPort=47500, order=1, 
> intOrder=1, lastExchangeTime=1524043633288, loc=true, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], 
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion 
> [topVer=322, minorTopVer=0], discoEvt=DiscoveryEvent 
> [evtNode=TcpDiscoveryNode [id=48a5d243-7f63-4069-aba1-868c6895, 
> addrs=[127.0.0.1], sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, 
> intOrder=163, lastExchangeTime=1524043684082, loc=false, 
> ver=2.5.0#20180417-sha1:56be24b9, isClient=false], topVer=322, 
> nodeId8=b51b3893, msg=Node joined: TcpDiscoveryNode 
> [id=48a5d243-7f63-4069-aba1-868c6895, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47503], discPort=47503, order=322, intOrder=163, 
> lastExchangeTime=1524043684082, loc=false, ver=2.5.0#20180417-sha1:56be24b9, 
> isClient=false], type=NODE_JOINED, tstamp=1524043684166], nodeId=48a5d243, 
> evt=NODE_JOINED], added=true, initFut=GridFutureAdapter 
> [ignoreInterrupts=false, state=DONE, res=true, hash=527135060], init=true, 
> lastVer=GridCacheVersion [topVer=135523955, order=1524043694535, 
> nodeOrder=3], partReleaseFut=PartitionReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], 
> futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion 
> [topVer=322, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=322, 
> minorTopVer=0], futures=[]], LocalTxReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=322, 
> minorTopVer=0], futures=[RemoteTxReleaseFuture 
> [topVer=AffinityTopologyVersion [topVer=322, minorTopVer=0], futures=[]], 
> exchActions=null, affChangeMsg=null, initTs=1524043684166, 
> centralizedAff=false, forceAffReassignment=false, changeGlobalStateE=null, 
> done=false, state=CRD, evtLatch=0, remaining=[], super=GridFutureAdapter 
> [ignoreInterrupts=false, state=INIT, res=null, hash=1570781250]]]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.updateTopologyVersion(GridDhtPartitionTopologyImpl.java:257)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.updateTopologies(GridDhtPartitionsExchangeFuture.java:845)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2461)
>   at 
> 

[jira] [Commented] (IGNITE-7592) Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity assignment even after explicit rebalance is called on every node

2018-04-25 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452374#comment-16452374
 ] 

Anton Vinogradov commented on IGNITE-7592:
--

[~ilantukh],

Agree with your proposal, but ... 

Do we really need manual rebalancing? 
I see no production case for calling rebalancing manually, especially taking 
into account that we have BLT now.

My proposal is to deprecate manual rebalancing as an odd feature and remove it 
in 3.0.

> Dynamic cache with rebalanceDelay == -1 doesn't trigger late affinity 
> assignment even after explicit rebalance is called on every node
> --
>
> Key: IGNITE-7592
> URL: https://issues.apache.org/jira/browse/IGNITE-7592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Ilya Lantukh
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.6
>
>
> Reproducer:
> {noformat}
> startGrids(NODE_COUNT);
> IgniteEx ig = grid(0);
> ig.cluster().active(true);
> awaitPartitionMapExchange();
> IgniteCache cache =
> ig.createCache(
> new CacheConfiguration()
> .setName(CACHE_NAME)
> .setCacheMode(PARTITIONED)
> .setBackups(1)
> .setPartitionLossPolicy(READ_ONLY_SAFE)
> .setReadFromBackup(true)
> .setWriteSynchronizationMode(FULL_SYNC)
> .setRebalanceDelay(-1)
> );
> for (int i = 0; i < NODE_COUNT; i++)
> grid(i).cache(CACHE_NAME).rebalance().get();
> awaitPartitionMapExchange();
> {noformat}
> Sometimes this code will hang on the last awaitPartitionMapExchange(), though 
> the probability that it will happen is rather low (<10%).





[jira] [Updated] (IGNITE-8380) Affinity node calculation doesn't take into account BLT

2018-04-25 Thread Andrey Aleksandrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Aleksandrov updated IGNITE-8380:
---
Attachment: ReproducerTest.java

> Affinity node calculation doesn't take into account BLT
> ---
>
> Key: IGNITE-8380
> URL: https://issues.apache.org/jira/browse/IGNITE-8380
> Project: Ignite
>  Issue Type: Bug
>Reporter: Eduard Shangareev
>Assignee: Eduard Shangareev
>Priority: Major
> Attachments: ReproducerTest.java
>
>
> It is source of many issues like:
> https://issues.apache.org/jira/browse/IGNITE-8173
> https://issues.apache.org/jira/browse/IGNITE-7628





[jira] [Commented] (IGNITE-8358) Deadlock in IgnitePdsAtomicCacheRebalancingTest

2018-04-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452371#comment-16452371
 ] 

ASF GitHub Bot commented on IGNITE-8358:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3911


> Deadlock in IgnitePdsAtomicCacheRebalancingTest
> ---
>
> Key: IGNITE-8358
> URL: https://issues.apache.org/jira/browse/IGNITE-8358
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Alexei Scherbakov
>Assignee: Pavel Kovalenko
>Priority: Blocker
> Fix For: 2.5
>
> Attachments: Ignite_Tests_2.4_Java_8_PDS_Indexing_141.log.zip
>
>
> Deadlocked threads are:
> {noformat}
> [14:21:46] : [Step 3/4] # DEADLOCKED Thread 
> [name="sys-#22788%persistence.IgnitePdsAtomicCacheRebalancingTest2%", 
> id=25953, state=WAITING, blockCnt=0, waitCnt=2]
> [14:21:46] : [Step 3/4] Lock 
> [object=java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@adcfad9,
>  
> ownerName=exchange-worker-#22778%persistence.IgnitePdsAtomicCacheRebalancingTest2%,
>  ownerId=25941]
> [14:21:46] : [Step 3/4] at sun.misc.Unsafe.park(Native Method)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.localPartitionMap(GridDhtPartitionTopologyImpl.java:1000)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager.createPartitionsSingleMessage(GridCachePartitionExchangeManager.java:1250)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager.sendLocalPartitions(GridCachePartitionExchangeManager.java:1205)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager.refreshPartitions(GridCachePartitionExchangeManager.java:1036)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager$ResendTimeoutObject$1.run(GridCachePartitionExchangeManager.java:2663)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6751)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:827)
> [14:21:46] : [Step 3/4] at 
> o.a.i.i.util.worker.GridWorker.run(GridWorker.java:110)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [14:21:46] : [Step 3/4] at java.lang.Thread.run(Thread.java:745)
> [14:21:46] : [Step 3/4]
> [14:21:46] : [Step 3/4] Locked synchronizers:
> [14:21:46] : [Step 3/4] 
> java.util.concurrent.ThreadPoolExecutor$Worker@469d36ed
> [14:21:46] : [Step 3/4] # DEADLOCKED Thread 
> [name="sys-#22787%persistence.IgnitePdsAtomicCacheRebalancingTest2%", 
> id=25952, state=WAITING, blockCnt=0, waitCnt=3]
> [14:21:46] : [Step 3/4] Lock 
> [object=java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@3a2e9f5b,
>  
> ownerName=exchange-worker-#22778%persistence.IgnitePdsAtomicCacheRebalancingTest2%,
>  ownerId=25941]
> [14:21:46] : [Step 3/4] at sun.misc.Unsafe.park(Native Method)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> [14:21:46] : [Step 3/4] at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
> [14:21:46] : 

[jira] [Updated] (IGNITE-6905) Print a consistent ID into a log file

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6905:
-
Labels:   (was: IEP-4)

> Print a consistent ID into a log file
> -
>
> Key: IGNITE-6905
> URL: https://issues.apache.org/jira/browse/IGNITE-6905
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
> Fix For: 2.4
>
>
> A BLT allows joining nodes by consistent ID. It's necessary to provide 
> information about the consistent ID in the log file.





[jira] [Resolved] (IGNITE-6905) Print a consistent ID into a log file

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk resolved IGNITE-6905.
--
   Resolution: Fixed
Fix Version/s: (was: 2.6)
   2.4

This has been fixed in 2.4

> Print a consistent ID into a log file
> -
>
> Key: IGNITE-6905
> URL: https://issues.apache.org/jira/browse/IGNITE-6905
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4
> Fix For: 2.4
>
>
> A BLT allows joining nodes by consistent ID. It's necessary to provide 
> information about the consistent ID in the log file.





[jira] [Updated] (IGNITE-6650) Introduce effective storage format for baseline topology

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6650:
-
Labels: IEP-4 Phase-3  (was: IEP-4)

> Introduce effective storage format for baseline topology
> 
>
> Key: IGNITE-6650
> URL: https://issues.apache.org/jira/browse/IGNITE-6650
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Alexey Goncharuk
>Priority: Major
>  Labels: IEP-4, Phase-3
>
> We need to design and implement an effective baseline topology format for the 
> metastore so that the metastore does not grow too fast.





[jira] [Updated] (IGNITE-6651) Baseline should include only attributes required by affinity function

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6651:
-
Description: Currently we store all the user attributes in baseline, which 
has significant overhead. We should only store the attributes specified by 
affinity function.  (was: Baseline should save attributes for AF to metasore)

> Baseline should include only attributes required by affinity function
> -
>
> Key: IGNITE-6651
> URL: https://issues.apache.org/jira/browse/IGNITE-6651
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4
>
> Currently we store all the user attributes in baseline, which has significant 
> overhead. We should only store the attributes specified by affinity function.





[jira] [Updated] (IGNITE-6695) Validation of joining node data consistency WRT the same data in grid

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6695:
-
Labels: IEP-4 Phase-2  (was: IEP-4)

> Validation of joining node data consistency WRT the same data in grid
> -
>
> Key: IGNITE-6695
> URL: https://issues.apache.org/jira/browse/IGNITE-6695
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: IEP-4, Phase-2
>
> h2. Scenario
> Consider the following simple scenario (persistence is active):
> # Start nodes A and B, activate, add (K1, V1) to cache.
> # Stop A; update K1 to (K1, V2) (only B is aware of the update). Stop B.
> # Start A, activate, update K1 to (K1, V3).
> After that, node B joining the cluster will lead to ambiguity of the K1 value.
> Also, even having BaselineTopology track the history of cluster node 
> activations won't help here, as after #3 node B's history is compatible with 
> node A's history.
> h2. Description
> When there is a load of data updates and the user turns off nodes one by one, 
> it is important to start the nodes back in the opposite order. The node 
> turned off last must be started first, and so on.
> If that is not the case, situations like the one described above may happen.
> A mechanism to detect these scenarios and refuse to join nodes with 
> potentially conflicting data is needed.





[jira] [Updated] (IGNITE-6651) Baseline should include only attributes required by affinity function

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6651:
-
Summary: Baseline should include only attributes required by affinity 
function  (was: Baseline includes attributes required for affinity function)

> Baseline should include only attributes required by affinity function
> -
>
> Key: IGNITE-6651
> URL: https://issues.apache.org/jira/browse/IGNITE-6651
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4
>
> Baseline should save the attributes required by the affinity function to the 
> metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6653) Check equality configuration between metasore and XML

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6653:
-
Labels: IEP-4 Phase-2  (was: IEP-4)

> Check equality configuration between metasore and XML
> -
>
> Key: IGNITE-6653
> URL: https://issues.apache.org/jira/browse/IGNITE-6653
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.3
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4, Phase-2
>
> Introduce another point of configuration validation between the cluster 
> configuration (XML, Java code) and the metastore. 
> Provide detailed information and refuse to start the cluster if they differ.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-6909) Create an API for branching pointing

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk resolved IGNITE-6909.
--
   Resolution: Fixed
Fix Version/s: (was: 2.6)
   2.4

> Create an API for branching pointing
> 
>
> Key: IGNITE-6909
> URL: https://issues.apache.org/jira/browse/IGNITE-6909
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4, Phase-1
> Fix For: 2.4
>
>
> To prevent offline split-brain, a branching-point API should be provided. 
> With BLT, two parts of a cluster may be activated separately. Joining such 
> parts afterwards must be prevented. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6909) Create an API for branching pointing

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6909:
-
Labels: IEP-4 Phase-1  (was: IEP-4)

> Create an API for branching pointing
> 
>
> Key: IGNITE-6909
> URL: https://issues.apache.org/jira/browse/IGNITE-6909
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4, Phase-1
> Fix For: 2.4
>
>
> To prevent offline split-brain, a branching-point API should be provided. 
> With BLT, two parts of a cluster may be activated separately. Joining such 
> parts afterwards must be prevented. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-6908) Check PartitionLossPolicy during a cluster activation

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk resolved IGNITE-6908.
--
Resolution: Won't Fix

Closing as not relevant

> Check PartitionLossPolicy during a cluster activation 
> --
>
> Key: IGNITE-6908
> URL: https://issues.apache.org/jira/browse/IGNITE-6908
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4
> Fix For: 2.6
>
>
> During a cluster activation from an incomplete BLT, we should check that the 
> partition loss policy is respected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6910) Introduce a force join parameter to clear PDS after branching pointing

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6910:
-
Labels: IEP-4 Phase-2  (was: IEP-4)

> Introduce a force join parameter to clear PDS after branching pointing
> --
>
> Key: IGNITE-6910
> URL: https://issues.apache.org/jira/browse/IGNITE-6910
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Major
>  Labels: IEP-4, Phase-2
> Fix For: 2.6
>
>
> Need to give users an opportunity to clean the PDS on a node that is trying 
> to join after a branching point. It may be a "force-join" or a "force-clean" 
> parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7330) When client connects during cluster activation process it hangs on obtaining cache proxy

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-7330:
-
Labels: IEP-4 Phase-2  (was: IEP-4)

> When client connects during cluster activation process it hangs on obtaining 
> cache proxy
> 
>
> Key: IGNITE-7330
> URL: https://issues.apache.org/jira/browse/IGNITE-7330
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Critical
>  Labels: IEP-4, Phase-2
> Fix For: 2.6
>
>
> The test below reproduces the issue:
> {noformat}
> public void testClientJoinWhenActivationInProgress() throws Exception {
>     Ignite srv = startGrids(5);
>     srv.active(true);
>     srv.createCaches(Arrays.asList(cacheConfigurations1()));
>     Map cacheData = new LinkedHashMap<>();
>     for (int i = 1; i <= 100; i++) {
>         for (CacheConfiguration ccfg : cacheConfigurations1()) {
>             srv.cache(ccfg.getName()).put(-i, i);
>             cacheData.put(-i, i);
>         }
>     }
>     stopAllGrids();
>     srv = startGrids(5);
>     final CountDownLatch clientStartLatch = new CountDownLatch(1);
>     IgniteInternalFuture clStartFut = GridTestUtils.runAsync(new Runnable() {
>         @Override public void run() {
>             try {
>                 clientStartLatch.await();
>                 Thread.sleep(10);
>                 client = true;
>                 Ignite cl = startGrid("client0");
>                 IgniteCache atomicCache = cl.cache(CACHE_NAME_PREFIX + '0');
>                 IgniteCache txCache = cl.cache(CACHE_NAME_PREFIX + '1');
>                 assertEquals(100, atomicCache.size());
>                 assertEquals(100, txCache.size());
>             }
>             catch (Exception e) {
>                 log.error("Error occurred", e);
>             }
>         }
>     }, "client-starter-thread");
>     clientStartLatch.countDown();
>     srv.active(true);
>     clStartFut.get();
> }
> {noformat}
> Expected behavior: the test finishes successfully.
> Actual behavior: the test hangs waiting for the client start future to 
> complete, while "client-starter-thread" hangs on obtaining a reference to the 
> first cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7027) SQL: Single primary index instead of mulitple per-partition indexes

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7027:

Summary: SQL: Single primary index instead of mulitple per-partition 
indexes  (was: Single primary index instead of mulitple per-partition indexes)

> SQL: Single primary index instead of mulitple per-partition indexes
> ---
>
> Key: IGNITE-7027
> URL: https://issues.apache.org/jira/browse/IGNITE-7027
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently we have per-partition primary index. This gives us easy and 
> effective rebalance/recovery capabilities and efficient lookup in key-value 
> mode. 
> However, this doesn't work well for the SQL case. We cannot use this index 
> for range scans. Nor can we use it for PK lookups (it is possible to 
> implement, but would be less than optimal due to the necessity of building 
> the whole key object).
> The following change is suggested as optional storage mode:
> 1) Single index data structure for all partitions
> 2) Only single key type is allowed (i.e. no mess in the cache and no cache 
> groups)
> 3) Additional SQL PK index will not be needed in this case
> Advantage:
> - No overhead on the second PK index
> Disadvantage:
> - Less efficient rebalance and recovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8385) SQL: Allow variable-length values in index leafs

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8385:

Summary: SQL: Allow variable-length values in index leafs  (was: SQL: allow 
variable-length values in index leafs)

> SQL: Allow variable-length values in index leafs
> 
>
> Key: IGNITE-8385
> URL: https://issues.apache.org/jira/browse/IGNITE-8385
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> Currently we have a restriction that every entry inside a BTree leaf should 
> be of fixed size. This restriction is artificial and prevents efficient index 
> usage because we have to choose so-called {{inline size}} for every index 
> manually. This is OK for fixed-size numeric types. But this could be a 
> problem for varlen types such as {{VARCHAR}}, because in some cases we cannot 
> fit the whole value and have to fall back to a data page lookup. In other 
> cases we may pick an overly pessimistic inline size and index space would be 
> wasted. 
> What we need to do is to allow arbitrary item size in index pages. In this 
> case we would be able to inline all necessary values into index pages in most 
> cases. 
> Please pay attention that we may still met page overflow in case too long 
> data types are used. To mitigate this we should:
> 1) Implement IGNITE-6055 first so that we can distinguish between limited and 
> unlimited data types.
> 2) Unlimited data types should be inlined only partially
> 3) We need to have special handling for too long rows (probably just re-use 
> existing logic with minimal adjustments)
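The "inline only partially" idea in point 2 can be illustrated with a toy comparator (plain Java, not Ignite code; the 4-byte inline size and the method names are invented for illustration): when the inlined prefixes match but a value was truncated, the comparison is inconclusive and the data page must be consulted.

```java
import java.nio.charset.StandardCharsets;

// Toy sketch of comparing index entries through a fixed inline size.
public class InlineCompare {
    static final int INLINE = 4; // bytes kept inside the index item (assumed)

    // Returns -1/0/1 when the inlined bytes decide the order,
    // or null meaning "fall back to a data page lookup".
    static Integer cmpInline(String a, String b) {
        byte[] x = a.getBytes(StandardCharsets.UTF_8);
        byte[] y = b.getBytes(StandardCharsets.UTF_8);
        int n = Math.min(INLINE, Math.min(x.length, y.length));
        for (int i = 0; i < n; i++)
            if (x[i] != y[i])
                return Integer.compare(x[i], y[i]); // decided by the prefix
        boolean truncated = x.length > INLINE || y.length > INLINE;
        if (truncated)
            return null;                 // prefix equal, tail unknown
        return Integer.compare(x.length, y.length); // both fully inlined
    }

    public static void main(String[] args) {
        System.out.println(cmpInline("abc", "abd"));       // decided inline
        System.out.println(cmpInline("abcdEF", "abcdXY")); // needs page lookup
    }
}
```

With arbitrary item sizes, the second case would disappear for most values because the whole value could be inlined.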



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6843) SQL: Optionally do not use WAL when executing CREATE INDEX

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6843:

Summary: SQL: Optionally do not use WAL when executing CREATE INDEX  (was: 
SQL: optionally do not use WAL when executing CREATE INDEX)

> SQL: Optionally do not use WAL when executing CREATE INDEX
> --
>
> Key: IGNITE-6843
> URL: https://issues.apache.org/jira/browse/IGNITE-6843
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Inspired by Oracle {{NOLOGGING}} option [1].
> When an index is being created through the {{CREATE INDEX}} command, every 
> single index update is written to the WAL. Let's introduce a special mode 
> where updates are not written to the WAL:
> 1) Index updates during an index-create operation are not written to the WAL
> 2) When the index is ready, force a checkpoint and wait for it to happen
> 3) Purge the index data if the node crashed before the checkpoint
> Alternatively, we may not trigger a checkpoint at all, hoping that the node 
> will not crash before the nearest checkpoint finishes. If the node crashes 
> during this time window, the index should be marked as "invalid" and not 
> used for queries. Then the user should either re-create or rebuild it.
> [1] 
> https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5010.htm#i2182589
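The three-step flow above can be sketched as a toy state machine (plain Java, not Ignite code; the class and field names are invented, and "checkpoint" and "crash" are just flags here):

```java
import java.util.*;

// Toy sketch of a WAL-less index build: entries go into the index with no
// per-update WAL record; a forced checkpoint makes the result durable, and
// a crash before the checkpoint purges the index and marks it invalid.
public class NoWalIndexBuild {
    List<String> wal = new ArrayList<>();   // stays empty during the build
    List<Integer> index = new ArrayList<>();
    boolean valid;

    void build(List<Integer> keys, boolean crashBeforeCheckpoint) {
        for (int k : keys)
            index.add(k);                   // step 1: no WAL record written
        if (crashBeforeCheckpoint) {
            index.clear();                  // step 3: purge on recovery
            valid = false;                  // must be rebuilt before use
            return;
        }
        valid = true;                       // step 2: checkpoint completed
    }

    public static void main(String[] args) {
        NoWalIndexBuild ok = new NoWalIndexBuild();
        ok.build(List.of(1, 2, 3), false);
        System.out.println(ok.valid + " " + ok.wal.size());
        NoWalIndexBuild crashed = new NoWalIndexBuild();
        crashed.build(List.of(1, 2, 3), true);
        System.out.println(crashed.valid);
    }
}
```

The alternative from the description corresponds to skipping the forced checkpoint and keeping only the invalid-marking path.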



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7015) SQL: Index should be updated only when relevant values changed

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7015:

Summary: SQL: Index should be updated only when relevant values changed  
(was: SQL: index should be updated only when relevant values changed)

> SQL: Index should be updated only when relevant values changed
> --
>
> Key: IGNITE-7015
> URL: https://issues.apache.org/jira/browse/IGNITE-7015
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> See {{GridH2Table.update}} method. Whenever value is updated, we propagate it 
> to all indexes. Consider the following case:
> 1) Old row is not null, so this is "update", not "create".
> 2) Link hasn't changed
> 3) Indexed fields haven't changed
> If all conditions are met, we can skip the index update completely, as the 
> state before and after will be the same. This is especially important when 
> persistence is enabled, because currently we generate unnecessary dirty 
> pages, which increases IO pressure.
> Suggested fix:
> 1) Iterate over index columns, skipping key and affinity columns (as they are 
> guaranteed to be the same);
> 2) Compare relevant index columns of both old and new rows
> 3) If all columns are equal, do nothing.
> Fields should be read through {{GridH2KeyValueRowOnheap#getValue}}, because 
> in this case we will re-use value cache transparently.
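The three conditions can be sketched as a toy predicate (plain Java, not Ignite code; rows are maps, links are longs, and the column list is a simplified stand-in for the index metadata):

```java
import java.util.*;

// Toy sketch of the skip check: the index must be touched only when the
// row is new, its link changed, or one of its indexed columns changed.
public class IndexUpdateSkip {
    static boolean needsIndexUpdate(Map<String, Object> oldRow,
                                    Map<String, Object> newRow,
                                    long oldLink, long newLink,
                                    List<String> indexedCols) {
        if (oldRow == null)
            return true;                 // condition 1 fails: "create"
        if (oldLink != newLink)
            return true;                 // condition 2 fails: link changed
        for (String col : indexedCols)   // condition 3: indexed fields only
            if (!Objects.equals(oldRow.get(col), newRow.get(col)))
                return true;
        return false;                    // identical index state: skip
    }

    public static void main(String[] args) {
        Map<String, Object> oldRow = Map.of("name", "a", "age", 1);
        Map<String, Object> newRow = Map.of("name", "a", "age", 2);
        List<String> idxCols = List.of("name"); // "age" is not indexed
        System.out.println(needsIndexUpdate(oldRow, newRow, 1L, 1L, idxCols));
        System.out.println(needsIndexUpdate(oldRow, newRow, 1L, 2L, idxCols));
    }
}
```

Note that a change to a non-indexed column ("age" above) does not force an index update; only the indexed columns and the link matter.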



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6983) SQL: Optimize CREATE INDEX and BPlusTree interaction

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6983:

Summary: SQL: Optimize CREATE INDEX and BPlusTree interaction  (was: SQL: 
optimize CREATE INDEX and BPlusTree interaction)

> SQL: Optimize CREATE INDEX and BPlusTree interaction
> 
>
> Key: IGNITE-6983
> URL: https://issues.apache.org/jira/browse/IGNITE-6983
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently the index is built as follows:
> 1) Get the next entry from the partition's tree
> 2) Read its key (copy to heap)
> 3) Acquire a lock on {{GridCacheMapEntry}}
> 4) Look up the same key in the tree from the top
> 5) Read its value (copy to heap)
> 6) Add it to the index.
> This is a very complex flow. We can optimize two things, the tree lookup and 
> the value deserialization, as follows:
> 1) Every data page will have update counter, which is incremented every time 
> anything is changed.
> 2) When lock on {{GridCacheMapEntry}} is acquired, we will acquire lock on 
> the data page and re-check update counter. 
> 3) If the page was changed between the iterator read and the lock 
> acquisition, use the old flow. 
> 4) Otherwise, take a read lock on the page, read the value as an *offheap* 
> object, and apply it to the index.
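The counter re-check can be sketched as a toy optimistic read (plain Java, not Ignite code; a synchronized block stands in for the page lock and the field names are invented):

```java
// Toy sketch of the fast path: remember the page's update counter at
// iterator time, then lock and re-check it; fall back to the old
// root-to-leaf lookup only if the page changed in between.
public class PageRecheck {
    static class Page { long updCntr; String value = "v1"; }

    // Returns the value read via the fast path, or null meaning
    // "page changed, caller must use the old flow".
    static String tryFastRead(Page page, long seenCntr) {
        synchronized (page) {            // stands in for the page lock
            if (page.updCntr != seenCntr)
                return null;             // changed since the iterator read
            return page.value;           // safe to read in place
        }
    }

    public static void main(String[] args) {
        Page p = new Page();
        long seen = p.updCntr;           // iterator observed the page
        System.out.println(tryFastRead(p, seen)); // unchanged: fast path
        p.updCntr++;                     // concurrent modification
        System.out.println(tryFastRead(p, seen)); // changed: old flow
    }
}
```

The point of the counter is that the common, uncontended case avoids both the second tree lookup and the on-heap value copy.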



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8386) SQL: Make sure PK index do not use wrapped object

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8386:

Summary: SQL: Make sure PK index do not use wrapped object  (was: SQL: make 
sure PK index do not use wrapped objects)

> SQL: Make sure PK index do not use wrapped object
> -
>
> Key: IGNITE-8386
> URL: https://issues.apache.org/jira/browse/IGNITE-8386
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> Currently PK may be built over the whole {{_KEY}} column, i.e. the whole 
> binary object. This could happen in two cases:
> 1) Composite PK
> 2) Plain PK but with {{WRAP_KEY}} option.
> This is a critical performance issue for two reasons:
> 1) This index is effectively useless and cannot be used in any sensible 
> queries; it just wastes space and makes updates slower
> 2) A binary object typically has common header bytes, which may lead to an 
> excessive number of comparisons during index update.
> To mitigate the problem we need to ensure that the index is *never* built 
> over {{_KEY}}. Instead, we must always extract the target columns and build a 
> normal index over them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5853) Provide a way to determine which user attributes are used in affinity calculation

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5853:
-
Labels: IEP-4 Phase-2  (was: IEP-4 Phase-1 Phase-2)

> Provide a way to determine which user attributes are used in affinity 
> calculation
> -
>
> Key: IGNITE-5853
> URL: https://issues.apache.org/jira/browse/IGNITE-5853
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Priority: Major
>  Labels: IEP-4, Phase-2
> Fix For: 2.6
>
>
> Since an affinity function may use user attributes to calculate affinity 
> distribution, we need to save these attributes to the metastore. However, 
> storing all the attributes is not very effective, so we need to have a way to 
> determine which attributes should be stored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6843) SQL: optionally do not use WAL when executing CREATE INDEX

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6843:

Labels: iep-19 performance  (was: performance)

> SQL: optionally do not use WAL when executing CREATE INDEX
> --
>
> Key: IGNITE-6843
> URL: https://issues.apache.org/jira/browse/IGNITE-6843
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Inspired by Oracle {{NOLOGGING}} option [1].
> When an index is being created through the {{CREATE INDEX}} command, every 
> single index update is written to the WAL. Let's introduce a special mode 
> where updates are not written to the WAL:
> 1) Index updates during an index-create operation are not written to the WAL
> 2) When the index is ready, force a checkpoint and wait for it to happen
> 3) Purge the index data if the node crashed before the checkpoint
> Alternatively, we may not trigger a checkpoint at all, hoping that the node 
> will not crash before the nearest checkpoint finishes. If the node crashes 
> during this time window, the index should be marked as "invalid" and not 
> used for queries. Then the user should either re-create or rebuild it.
> [1] 
> https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5010.htm#i2182589



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6779) Recreate clients caches after a node join BLT

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6779:
-
Labels:   (was: IEP-4 Phase-1)

> Recreate clients caches after a node join BLT
> -
>
> Key: IGNITE-6779
> URL: https://issues.apache.org/jira/browse/IGNITE-6779
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Critical
>
> If a node already has caches and is going to join a BLT, the current client 
> caches should be destroyed and re-created as server caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-6779) Recreate clients caches after a node join BLT

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk resolved IGNITE-6779.
--
Resolution: Won't Fix

This is no longer relevant.

> Recreate clients caches after a node join BLT
> -
>
> Key: IGNITE-6779
> URL: https://issues.apache.org/jira/browse/IGNITE-6779
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Sergey Puchnin
>Priority: Critical
>
> If a node already has caches and is going to join a BLT, the current client 
> caches should be destroyed and re-created as server caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6408) SQL: CREATE INDEX should fill pages in batches

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6408:

Labels: iep-19 performance  (was: performance)

> SQL: CREATE INDEX should fill pages in batches
> --
>
> Key: IGNITE-6408
> URL: https://issues.apache.org/jira/browse/IGNITE-6408
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently during execution of {{CREATE INDEX}} we add entries to the index 
> one by one. Every addition has to 
> 1) Walk down the BTree from the root
> 2) Perform binary search inside index pages over and over again
> Instead, we can try filling the index in batches, roughly {{Index.add(Map}}. 
> In this case we will not have to perform index searches from the root over 
> and over again. Instead, we will effectively walk in left-to-right direction 
> and add entries from the batch to appropriate places. This could save a lot 
> of comparisons and thus improve index build performance.
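The left-to-right walk can be illustrated with a toy model where the index is a sorted list (plain Java, not Ignite code; a real BTree would merge into leaf pages, not a list): the batch is sorted once and merged in a single pass instead of one root-to-leaf search per entry.

```java
import java.util.*;

// Toy sketch of batched index filling: sort the batch, then merge it
// into the already sorted index in one left-to-right pass.
public class BatchedIndexFill {
    static List<Integer> addBatch(List<Integer> index, Collection<Integer> batch) {
        List<Integer> sorted = new ArrayList<>(batch);
        Collections.sort(sorted);        // one sort instead of N searches
        List<Integer> out = new ArrayList<>(index.size() + sorted.size());
        int i = 0, j = 0;
        while (i < index.size() || j < sorted.size()) { // single merge pass
            if (j >= sorted.size()
                || (i < index.size() && index.get(i) <= sorted.get(j)))
                out.add(index.get(i++));
            else
                out.add(sorted.get(j++));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> index = new ArrayList<>(List.of(1, 4, 9));
        System.out.println(addBatch(index, List.of(5, 2, 7)));
    }
}
```

The merge is O(n + m) comparisons for an index of n entries and a batch of m, versus O(m log n) tree descents for one-by-one insertion.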



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-5853) Provide a way to determine which user attributes are used in affinity calculation

2018-04-25 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5853:
-
Labels: IEP-4 Phase-1 Phase-2  (was: IEP-4 Phase-1)

> Provide a way to determine which user attributes are used in affinity 
> calculation
> -
>
> Key: IGNITE-5853
> URL: https://issues.apache.org/jira/browse/IGNITE-5853
> Project: Ignite
>  Issue Type: New Feature
>  Components: persistence
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Priority: Major
>  Labels: IEP-4, Phase-2
> Fix For: 2.6
>
>
> Since an affinity function may use user attributes to calculate affinity 
> distribution, we need to save these attributes to the metastore. However, 
> storing all the attributes is not very effective, so we need to have a way to 
> determine which attributes should be stored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-8386) SQL: make sure PK index do not use wrapped objects

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8386:

Labels: iep-19 performance  (was: performance)

> SQL: make sure PK index do not use wrapped objects
> --
>
> Key: IGNITE-8386
> URL: https://issues.apache.org/jira/browse/IGNITE-8386
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> Currently PK may be built over the whole {{_KEY}} column, i.e. the whole 
> binary object. This could happen in two cases:
> 1) Composite PK
> 2) Plain PK but with {{WRAP_KEY}} option.
> This is a critical performance issue for two reasons:
> 1) This index is effectively useless and cannot be used in any sensible 
> queries; it just wastes space and makes updates slower
> 2) A binary object typically has common header bytes, which may lead to an 
> excessive number of comparisons during index update.
> To mitigate the problem we need to ensure that the index is *never* built 
> over {{_KEY}}. Instead, we must always extract the target columns and build a 
> normal index over them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-6407) SQL: CREATE INDEX command should build index bottom-up

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6407:

Labels: iep-19 performance  (was: performance)

> SQL: CREATE INDEX command should build index bottom-up
> --
>
> Key: IGNITE-6407
> URL: https://issues.apache.org/jira/browse/IGNITE-6407
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently when the {{CREATE INDEX}} command is executed, entries are added to 
> the index one by one. This leads to high index build times. 
> Instead, we can build the index as follows:
> 1) Iterate over the whole data set and sort it according to the index rules
> 2) Build leaf pages 
> 3) Build middle pages
> 4) Build the root page
> This approach is used by many vendors. The main difficulty is that the whole 
> data set may not fit in memory. For this reason we will need to implement a 
> kind of disk spilling. 
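Steps 1-4 can be sketched as a toy in-memory build (plain Java, not Ignite code; pages are lists of keys, the fanout is invented, and disk spilling is ignored): sort once, slice into leaf pages, then derive each upper level from the first key of every page below until one root remains.

```java
import java.util.*;

// Toy bottom-up index build: sort, build leaves, then middle levels,
// then the root, without any per-entry top-down insertion.
public class BottomUpIndexBuild {
    // Slice a sorted key list into pages of at most 'fanout' keys.
    static List<List<Integer>> pages(List<Integer> keys, int fanout) {
        List<List<Integer>> level = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += fanout)
            level.add(keys.subList(i, Math.min(i + fanout, keys.size())));
        return level;
    }

    static int buildLevels(List<Integer> data, int fanout) {
        List<Integer> keys = new ArrayList<>(data);
        Collections.sort(keys);                          // step 1: sort
        List<List<Integer>> level = pages(keys, fanout); // step 2: leaves
        int levels = 1;
        while (level.size() > 1) {                       // steps 3-4
            List<Integer> firstKeys = new ArrayList<>();
            for (List<Integer> p : level)
                firstKeys.add(p.get(0));                 // separator keys
            level = pages(firstKeys, fanout);
            levels++;
        }
        return levels;                                   // root reached
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 100; i > 0; i--)
            data.add(i);
        // 100 keys with fanout 10: 10 leaf pages plus 1 root page.
        System.out.println(buildLevels(data, 10));
    }
}
```

A real implementation would spill sorted runs to disk and merge them when the data set exceeds memory, but the level-by-level construction is the same.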



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7015) SQL: index should be updated only when relevant values changed

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7015:

Labels: iep-19 performance  (was: performance)

> SQL: index should be updated only when relevant values changed
> --
>
> Key: IGNITE-7015
> URL: https://issues.apache.org/jira/browse/IGNITE-7015
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> See {{GridH2Table.update}} method. Whenever value is updated, we propagate it 
> to all indexes. Consider the following case:
> 1) Old row is not null, so this is "update", not "create".
> 2) Link hasn't changed
> 3) Indexed fields haven't changed
> If all conditions are met, we can skip the index update completely, as the 
> state before and after will be the same. This is especially important when 
> persistence is enabled, because currently we generate unnecessary dirty 
> pages, which increases IO pressure.
> Suggested fix:
> 1) Iterate over index columns, skipping key and affinity columns (as they are 
> guaranteed to be the same);
> 2) Compare relevant index columns of both old and new rows
> 3) If all columns are equal, do nothing.
> Fields should be read through {{GridH2KeyValueRowOnheap#getValue}}, because 
> in this case we will re-use value cache transparently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7027) Single primary index instead of mulitple per-partition indexes

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7027:

Labels: iep-19 performance  (was: iep-10 iep-19 performance)

> Single primary index instead of mulitple per-partition indexes
> --
>
> Key: IGNITE-7027
> URL: https://issues.apache.org/jira/browse/IGNITE-7027
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently we have per-partition primary index. This gives us easy and 
> effective rebalance/recovery capabilities and efficient lookup in key-value 
> mode. 
> However, this doesn't work well for the SQL case. We cannot use this index 
> for range scans. Nor can we use it for PK lookups (it is possible to 
> implement, but would be less than optimal due to the necessity of building 
> the whole key object).
> The following change is suggested as optional storage mode:
> 1) Single index data structure for all partitions
> 2) Only single key type is allowed (i.e. no mess in the cache and no cache 
> groups)
> 3) Additional SQL PK index will not be needed in this case
> Advantage:
> - No overhead on the second PK index
> Disadvantage:
> - Less efficient rebalance and recovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7027) Single primary index instead of mulitple per-partition indexes

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7027:

Labels: iep-10 iep-19 performance  (was: iep-10)

> Single primary index instead of mulitple per-partition indexes
> --
>
> Key: IGNITE-7027
> URL: https://issues.apache.org/jira/browse/IGNITE-7027
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
>
> Currently we have per-partition primary index. This gives us easy and 
> effective rebalance/recovery capabilities and efficient lookup in key-value 
> mode. 
> However, this doesn't work well for the SQL case. We cannot use this index 
> for range scans. Nor can we use it for PK lookups (it is possible to 
> implement, but would be less than optimal due to the necessity of building 
> the whole key object).
> The following change is suggested as optional storage mode:
> 1) Single index data structure for all partitions
> 2) Only single key type is allowed (i.e. no mess in the cache and no cache 
> groups)
> 3) Additional SQL PK index will not be needed in this case
> Advantage:
> - No overhead on the second PK index
> Disadvantage:
> - Less efficient rebalance and recovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-7038) SQL: nested tables support

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7038:

Labels:   (was: iep-10)

> SQL: nested tables support
> --
>
> Key: IGNITE-7038
> URL: https://issues.apache.org/jira/browse/IGNITE-7038
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Vladimir Ozerov
>Priority: Major
>
> Many commercial databases support a kind of nested table, which is 
> essentially a parent-child relation with a special storage format. With this 
> approach child data can be located efficiently without joins. 
> Syntax example:
> {code}
> CREATE TYPE address_t AS OBJECT (
>cityVARCHAR2(20),
>street  VARCHAR2(30)
> );
> CREATE TYPE address_tab IS TABLE OF address_t;
> CREATE TABLE customers (
>custid  NUMBER,
>address address_tab )
> NESTED TABLE address STORE AS customer_addresses;
> INSERT INTO customers VALUES (
> 1,
> address_tab(
> address_t('Redwood Shores', '101 First'),
> address_t('Mill Valley', '123 Maple')
> )
> );
> {code}
> Several storage formats should be considered. First, data can be embedded 
> into the parent data row directly, or through a forward reference to a chain 
> of dedicated blocks (similar to LOB data types). This is how conventional 
> RDBMS systems work. 
> Second, child rows could be stored in the same PK index as the parent row. 
> This is how Spanner works. In this case parent and child rows are different 
> rows but stored in the same data structures. This allows for sequential 
> access to both parent and child data in case of joins, which could be 
> extremely valuable in OLAP cases.





[jira] [Updated] (IGNITE-7026) Index-organized data storage format

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-7026:

Labels:   (was: iep-10)

> Index-organized data storage format
> ---
>
> Key: IGNITE-7026
> URL: https://issues.apache.org/jira/browse/IGNITE-7026
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>Priority: Major
>
> In SQL, an *index-organized* table is a table format where rows are stored 
> as the leaves of the primary key index (sometimes called a "clustered 
> index"). In this format, data within a single page is sorted according to 
> the PK index, and the leaves themselves are always sorted as well. 
> The other common table format is *heap*. Data is put into any page with 
> enough space, and free space is tracked using either free lists or 
> allocation maps. The primary key index is organized in the same way as a 
> secondary index: leaf pages contain a kind of row pointer. This is how 
> Ignite currently works. 
> This ticket aims to implement an index-organized storage format, which will 
> give us the following advantages:
> 1) Fast scans over the PK index due to a decreased number of page reads and 
> page locks, which is especially important for JOINs and OLAP cases;
> 2) Faster inserts in OLTP workloads due to a smaller number of page updates.
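The difference between the two formats can be sketched with a toy model (hypothetical Python, not Ignite internals). In the heap layout a PK lookup dereferences an index pointer to a separate data page; in the index-organized layout the PK leaf carries the row itself, so a key-ordered scan touches only the index.

```python
# Hypothetical sketch: heap layout vs index-organized layout.

# Heap: rows land on any page with free space; the PK index maps key -> pointer.
heap_pages = {0: {'slot0': ('k2', 'row2')},
              1: {'slot0': ('k1', 'row1')}}
pk_index_heap = {'k1': (1, 'slot0'), 'k2': (0, 'slot0')}  # key -> (page, slot)

def heap_lookup(key):
    page, slot = pk_index_heap[key]   # first read: index leaf with the pointer
    return heap_pages[page][slot][1]  # second read: the data page itself

# Index-organized: the PK index leaf stores the full row, so a lookup is one
# read and a PK-ordered scan needs no extra data-page accesses.
iot_leaves = {'k1': 'row1', 'k2': 'row2'}

def iot_scan():
    return [iot_leaves[k] for k in sorted(iot_leaves)]
```

The halved number of page reads (and the page locks that go with them) on PK scans is exactly the first advantage listed above.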





[jira] [Updated] (IGNITE-8384) SQL: Secondary indexes should sort entries by links rather than keys

2018-04-25 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-8384:

Labels: iep-19 performance  (was: performance)

> SQL: Secondary indexes should sort entries by links rather than keys
> 
>
> Key: IGNITE-8384
> URL: https://issues.apache.org/jira/browse/IGNITE-8384
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-19, performance
> Fix For: 2.6
>
>
> Currently we sort entries in secondary indexes as {{(idx_cols, KEY)}}. In 
> the general case the key itself is not stored in the index, which means we 
> need to perform a lookup to the data page to find the correct insertion 
> point for an index entry.
> This could be fixed easily by sorting entries a bit differently: 
> {{(idx_cols, link)}}. This is all we need.
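The proposed ordering can be sketched like this (hypothetical Python, not Ignite code): each secondary-index entry is a pair of the indexed column value and the row's link (physical address), and because both values live in the entry itself, the comparator used to find an insertion point never needs to dereference the link and read a data page.

```python
import bisect

# Hypothetical sketch: a secondary index ordered by (idx_col_value, link).
# The tuple comparison uses only data present in the index entry, so no
# data-page lookup is needed to position a new entry.
index = []  # kept sorted by (idx_col_value, link)

def insert_entry(idx_val, link):
    bisect.insort(index, (idx_val, link))

insert_entry(10, 0xBEEF)
insert_entry(10, 0xCAFE)
insert_entry(5, 0xF00D)
```

Entries with equal indexed values are tie-broken by link alone, which is all the uniqueness the index structure requires.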




