[jira] [Commented] (IGNITE-6939) Exclude false owners from the execution plan based on query response

2017-11-16 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256608#comment-16256608
 ] 

Vladimir Ozerov commented on IGNITE-6939:
-

[~agoncharuk], p.2 should be fairly easy to implement.

> Exclude false owners from the execution plan based on query response
> 
>
> Key: IGNITE-6939
> URL: https://issues.apache.org/jira/browse/IGNITE-6939
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Reporter: Alexey Goncharuk
> Fix For: 2.4
>
>
> This is related to IGNITE-6858, the fix in the ticket can be improved.
> The scenario leading to the issue is as follows:
> 1) Node A has partition 1 as owning
> 2) Node B has local partition map which has partition 1 on node A as owning
> 3) Topology change is triggered which would move partition 1 from A to 
> another node, topology version is X
> 4) A transaction is started on node B on topology X
> 5) Partition is rebalanced and node A moves partition 1 to RENTING and then 
> to EVICTED state, node A updates its local partition map.
> 6) A new topology change is triggered
> 7) Node A sends partition map (transitively) to the node B, but since there 
> is a pending exchange, node B ignores the updated map and still thinks that A 
> owns partition 1 [1]
> 8) transaction attempts to execute an SQL query against partition 1 on node A 
> and retries infinitely
> [1] The related code is in 
> GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
> GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
> AffinityTopologyVersion)
> {code}
> if (stopping || !lastTopChangeVer.initialized() ||
> // Ignore message not-related to exchange if exchange is in progress.
> (exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
> return false;
> {code}
> There are two possibilities to fix this:
> 1) Make all updates to partition map in a single thread, then we will not 
> need update sequences and then we can update local partition map even when 
> there is a pending exchange (this is a relatively big, but useful change)
> 2) Make a change in SQL query execution so that if a node cannot reserve a 
> partition, do not map the partition to this node on the same topology version 
> anymore (a quick fix)
> This will remove the need to throw an exception from SQL query inside 
> transaction when there is a pending exchange.
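
For reference, a minimal sketch of the bookkeeping that option 2 implies on the node that maps the query; the class, method names and types below are purely illustrative, not Ignite's actual internals:

{code}
import java.util.*;

/**
 * Illustrative only: remembers which nodes failed to reserve a partition on the
 * current topology version, so the query is remapped without them instead of
 * retrying the same false owner forever.
 */
class ReservationExclusions {
    /** Partition -> nodes that reported a failed reservation. */
    private final Map<Integer, Set<UUID>> excluded = new HashMap<>();

    /** Called when a map node replies that it could not reserve the partition. */
    void onReservationFailed(int part, UUID nodeId) {
        excluded.computeIfAbsent(part, p -> new HashSet<>()).add(nodeId);
    }

    /** Used while remapping the query: skip nodes known to be false owners. */
    boolean isExcluded(int part, UUID nodeId) {
        return excluded.getOrDefault(part, Collections.emptySet()).contains(nodeId);
    }
}
{code}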



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6939) Exclude false owners from the execution plan based on query response

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6939:

Component/s: sql
 cache

> Exclude false owners from the execution plan based on query response
> 
>
> Key: IGNITE-6939
> URL: https://issues.apache.org/jira/browse/IGNITE-6939
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Reporter: Alexey Goncharuk
> Fix For: 2.4
>
>
> This is related to IGNITE-6858, the fix in the ticket can be improved.
> The scenario leading to the issue is as follows:
> 1) Node A has partition 1 as owning
> 2) Node B has local partition map which has partition 1 on node A as owning
> 3) Topology change is triggered which would move partition 1 from A to 
> another node, topology version is X
> 4) A transaction is started on node B on topology X
> 5) Partition is rebalanced and node A moves partition 1 to RENTING and then 
> to EVICTED state, node A updates its local partition map.
> 6) A new topology change is triggered
> 7) Node A sends partition map (transitively) to the node B, but since there 
> is a pending exchange, node B ignores the updated map and still thinks that A 
> owns partition 1 [1]
> 8) transaction attempts to execute an SQL query against partition 1 on node A 
> and retries infinitely
> [1] The related code is in 
> GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
> GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
> AffinityTopologyVersion)
> {code}
> if (stopping || !lastTopChangeVer.initialized() ||
> // Ignore message not-related to exchange if exchange is in progress.
> (exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
> return false;
> {code}
> There are two possibilities to fix this:
> 1) Make all updates to partition map in a single thread, then we will not 
> need update sequences and then we can update local partition map even when 
> there is a pending exchange (this is a relatively big, but useful change)
> 2) Make a change in SQL query execution so that if a node cannot reserve a 
> partition, do not map the partition to this node on the same topology version 
> anymore (a quick fix)
> This will remove the need to throw an exception from SQL query inside 
> transaction when there is a pending exchange.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6939) Exclude false owners from the execution plan based on query response

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6939:

Fix Version/s: 2.4

> Exclude false owners from the execution plan based on query response
> 
>
> Key: IGNITE-6939
> URL: https://issues.apache.org/jira/browse/IGNITE-6939
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Reporter: Alexey Goncharuk
> Fix For: 2.4
>
>
> This is related to IGNITE-6858, the fix in the ticket can be improved.
> The scenario leading to the issue is as follows:
> 1) Node A has partition 1 as owning
> 2) Node B has local partition map which has partition 1 on node A as owning
> 3) Topology change is triggered which would move partition 1 from A to 
> another node, topology version is X
> 4) A transaction is started on node B on topology X
> 5) Partition is rebalanced and node A moves partition 1 to RENTING and then 
> to EVICTED state, node A updates its local partition map.
> 6) A new topology change is triggered
> 7) Node A sends partition map (transitively) to the node B, but since there 
> is a pending exchange, node B ignores the updated map and still thinks that A 
> owns partition 1 [1]
> 8) transaction attempts to execute an SQL query against partition 1 on node A 
> and retries infinitely
> [1] The related code is in 
> GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
> GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
> AffinityTopologyVersion)
> {code}
> if (stopping || !lastTopChangeVer.initialized() ||
> // Ignore message not-related to exchange if exchange is in progress.
> (exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
> return false;
> {code}
> There are two possibilities to fix this:
> 1) Make all updates to partition map in a single thread, then we will not 
> need update sequences and then we can update local partition map even when 
> there is a pending exchange (this is a relatively big, but useful change)
> 2) Make a change in SQL query execution so that if a node cannot reserve a 
> partition, do not map the partition to this node on the same topology version 
> anymore (a quick fix)
> This will remove the need to throw an exception from SQL query inside 
> transaction when there is a pending exchange.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6649) Add EvictionPolicy factory support in IgniteConfiguration.

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256582#comment-16256582
 ] 

ASF GitHub Bot commented on IGNITE-6649:


Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/2896


> Add EvictionPolicy factory support in IgniteConfiguration.
> --
>
> Key: IGNITE-6649
> URL: https://issues.apache.org/jira/browse/IGNITE-6649
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: cache
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
> Fix For: 2.4
>
> Attachments: EvictionPolicyTest.java
>
>
> For now, the only way to set an EvictionPolicy in IgniteConfiguration is to use 
> an EvictionPolicy instance. 
> That looks error-prone, as a user can easily share the instance between caches or 
> cache reincarnations and get unexpected results.
> E.g. it can cause an AssertionError if an EvictionPolicy is reused.
> Steps to reproduce (see the sketch after this description):
> 1. Create a CacheConfiguration object that will be reused.
> 2. Create and fill a cache.
> 3. Destroy the cache and create it again with the same CacheConfiguration object.
> 4. One of the next puts can fail with the stack trace below.
> The error is thrown when the EvictionPolicy tries to evict entries from a cache 
> that has just been destroyed.
> Also, the EvictionPolicy object can be implicitly held by some user objects 
> (together with IgniteConfiguration), which can cause a memory leak.
> java.lang.AssertionError
>   at 
> org.apache.ignite.internal.processors.cache.CacheEvictableEntryImpl.evict(CacheEvictableEntryImpl.java:71)
>   at 
> org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.shrink0(LruEvictionPolicy.java:275)
>   at 
> org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.shrink(LruEvictionPolicy.java:250)
>   at 
> org.apache.ignite.cache.eviction.lru.LruEvictionPolicy.onEntryAccessed(LruEvictionPolicy.java:161)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.notifyPolicy(GridCacheEvictionManager.java:1393)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.touch(GridCacheEvictionManager.java:825)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.unlockEntries(GridDhtAtomicCache.java:3058)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1952)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1730)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.mapSingle(GridNearAtomicAbstractUpdateFuture.java:264)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:494)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:436)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:209)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1245)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:680)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2328)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2305)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1379)
>  
> UPD: See discussion here [1].
> [1] 
> http://apache-ignite-developers.2346864.n4.nabble.com/CacheConfiguration-reusage-issues-with-EvictionPolicy-enabled-td23437.html
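
A minimal sketch of the reproduction described above, assuming an on-heap cache with an LRU policy; the factory setter is the API this ticket proposes and is shown commented out because it does not exist yet:

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class EvictionPolicyReuse {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Problematic pattern: the same CacheConfiguration (and therefore the same
            // LruEvictionPolicy instance) is reused after the cache is destroyed.
            CacheConfiguration<Integer, Integer> cfg = new CacheConfiguration<>("test");
            cfg.setOnheapCacheEnabled(true);
            cfg.setEvictionPolicy(new LruEvictionPolicy<>(100));

            ignite.createCache(cfg).put(1, 1);
            ignite.destroyCache("test");
            ignite.createCache(cfg).put(2, 2); // may hit the AssertionError below

            // What the ticket asks for: a factory, so every cache (re)start gets a
            // fresh policy instance (proposed API, not available at this point):
            // cfg.setEvictionPolicyFactory(() -> new LruEvictionPolicy<Integer, Integer>(100));
        }
    }
}
{code}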



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5994) IgniteInternalCache.invokeAsync().get() can return null

2017-11-16 Thread Alexander Menshikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256549#comment-16256549
 ] 

Alexander Menshikov commented on IGNITE-5994:
-

[~shia] You have formed the patch in the wrong way. You need to create a PR on 
GitHub and create an Upsource review. Please see the HowToContribute page: 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#HowtoContribute-Workflow

Sorry for the slow answer. Also, I have to say I don't have merge permission, so 
you will have to ask about the merge on the dev list.

> IgniteInternalCache.invokeAsync().get() can return null
> ---
>
> Key: IGNITE-5994
> URL: https://issues.apache.org/jira/browse/IGNITE-5994
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Alexander Menshikov
>Assignee: Zhang Yuan
>Priority: Minor
>  Labels: newbie
> Attachments: IgniteCacheSelfTest.java, 
> master_8629b50d6f_ignite-5994.patch
>
>
> IgniteInternalCache.invoke() always returns an EntryProcessorResult, but 
> IgniteInternalCache.invokeAsync().get() can return null in the case when 
> the EntryProcessor has returned null.
> Code from reproducer:
> {noformat}
> final EntryProcessor<Object, Object, Object> ep = new EntryProcessor<Object, Object, Object>() {
> @Override
> public Object process(MutableEntry<Object, Object> entry,
> Object... objects) throws EntryProcessorException {
> return null;
> }
> };
> EntryProcessorResult result = utilCache.invoke("test", ep);
> assertNotNull(result);
> assertNull(result.get());
> result = utilCache.invokeAsync("test", ep).get();
> // Assert here!!!
> assertNotNull(result);
> assertNull(result.get());
> {noformat}
> It can be an optimization. Nevertheless, the results of invoke() must be equal to 
> the results of invokeAsync().get(). So there are two options:
> 1) Make invokeAsync(key, ep).get() return null too, for the sake of the 
> optimization.
> 2) Or make invoke(key, ep) return an EntryProcessorResult, for logical 
> consistency.
> NOTE: Don't confuse with IgniteCache.invoke.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-4996) Queries: show Node selection modal in case if user execute SCAN on local cache

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256433#comment-16256433
 ] 

Pavel Konstantinov edited comment on IGNITE-4996 at 11/17/17 4:49 AM:
--

The issue still exists:
# start a cluster with a local cache (for 2.x you have to disable persistence)
# open Query tab, add Scan, select local cache, click Scan button (not 'Scan on 
selected node')

expected: Node selection modal should show up
actual: Node selection modal doesn't show


was (Author: pkonstantinov):
The issue still exists:
# start cluster version 1.9
# open Query tab, add Scan, select local cache, click Scan button (not 'Scan on 
selected node')

expected: Node selection modal should show up
actual: Node selection modal doesn't show

> Queries: show Node selection modal in case if user execute SCAN on local cache
> --
>
> Key: IGNITE-4996
> URL: https://issues.apache.org/jira/browse/IGNITE-4996
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Minor
>
> Also we need to filter nodes list and show only nodes where selected cache is 
> configured



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-4996) Queries: show Node selection modal in case if user execute SCAN on local cache

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256433#comment-16256433
 ] 

Pavel Konstantinov edited comment on IGNITE-4996 at 11/17/17 4:48 AM:
--

The issue still exists:
# start cluster version 1.9
# open Query tab, add Scan, select local cache, click Scan button (not 'Scan on 
selected node')

expected: Node selection modal should show up
actual: Node selection modal doesn't show


was (Author: pkonstantinov):
The issue still exists:
# start cluster version 7.9
# open Query tab, add Scan, select local cache, click Scan button (not 'Scan on 
selected node')

expected: Node selection modal should show up
actual: Node selection modal doesn't show

> Queries: show Node selection modal in case if user execute SCAN on local cache
> --
>
> Key: IGNITE-4996
> URL: https://issues.apache.org/jira/browse/IGNITE-4996
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Minor
>
> Also we need to filter nodes list and show only nodes where selected cache is 
> configured



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4996) Queries: show Node selection modal in case if user execute SCAN on local cache

2017-11-16 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-4996:
--

Assignee: Alexey Kuznetsov  (was: Pavel Konstantinov)

> Queries: show Node selection modal in case if user execute SCAN on local cache
> --
>
> Key: IGNITE-4996
> URL: https://issues.apache.org/jira/browse/IGNITE-4996
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Minor
>
> Also we need to filter nodes list and show only nodes where selected cache is 
> configured



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-4996) Queries: show Node selection modal in case if user execute SCAN on local cache

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256433#comment-16256433
 ] 

Pavel Konstantinov commented on IGNITE-4996:


The issue still exists:
# start cluster version 7.9
# open Query tab, add Scan, select local cache, click Scan button (not 'Scan on 
selected node')

expected: Node selection modal should show up
actual: Node selection modal doesn't show

> Queries: show Node selection modal in case if user execute SCAN on local cache
> --
>
> Key: IGNITE-4996
> URL: https://issues.apache.org/jira/browse/IGNITE-4996
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Minor
>
> Also we need to filter nodes list and show only nodes where selected cache is 
> configured



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6944) Fail to execute task with immutable list string

2017-11-16 Thread Edmond Tsang (JIRA)
Edmond Tsang created IGNITE-6944:


 Summary: Fail to execute task with immutable list string
 Key: IGNITE-6944
 URL: https://issues.apache.org/jira/browse/IGNITE-6944
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: binary
Affects Versions: 2.3
Reporter: Edmond Tsang


An exception occurs when executing a task with an immutable list of strings, because 
the readResolve/writeReplace method cannot be found for the binary object.

It appears this is caused by a side effect of 
https://issues.apache.org/jira/browse/IGNITE-6485
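
The exact task shape is not given in the report, so the following is an assumed minimal scenario: a compute job that captures an immutable list of strings, which then has to be marshalled as a binary object:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class ImmutableListTask {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Immutable list of strings captured by the job closure.
            List<String> values = Collections.unmodifiableList(Arrays.asList("a", "b", "c"));

            IgniteRunnable job = () -> System.out.println("size=" + values.size());

            // Marshalling the closure (and the captured list) is where the
            // readResolve/writeReplace lookup reported above would be involved.
            ignite.compute().broadcast(job);
        }
    }
}
{code}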



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6943) Web console: LoadCaches method should activate cluster if persistent is configured, otherwise method doesn't work due to cluster is inactive

2017-11-16 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov updated IGNITE-6943:
-
Description: 
Web Console can generate a sample project.
Now we have a persistence feature.
A cluster with persistence starts inactive, so the generated LoadCaches method will 
not work.

> Web console: LoadCaches method should activate cluster if persistent is 
> configured, otherwise method doesn't work due to cluster is inactive
> 
>
> Key: IGNITE-6943
> URL: https://issues.apache.org/jira/browse/IGNITE-6943
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: wizards
>Affects Versions: 2.3
>Reporter: Pavel Konstantinov
> Fix For: 2.4
>
>
> Web Console can generate a sample project.
> Now we have a persistence feature.
> A cluster with persistence starts inactive, so the generated LoadCaches method 
> will not work.
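
A minimal sketch of the activation the generated LoadCaches method could perform; the cache name and configuration path are placeholders, and IgniteCluster.active(boolean) is the existing activation API:

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class LoadCaches {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config/cluster.xml")) {
            // With native persistence the cluster starts inactive, so activate it
            // before touching caches; otherwise the calls below would fail.
            if (!ignite.cluster().active())
                ignite.cluster().active(true);

            ignite.cache("MyCache").loadCache(null);
        }
    }
}
{code}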



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (IGNITE-4180) Web console: Query paragraph is not extended from collapsed state on move by "Scroll to query" link

2017-11-16 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov closed IGNITE-4180.
--

> Web console: Query paragraph is not extended from collapsed state on move by 
> "Scroll to query" link
> ---
>
> Key: IGNITE-4180
> URL: https://issues.apache.org/jira/browse/IGNITE-4180
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 1.8
>Reporter: Vasiliy Sisko
>Assignee: Pavel Konstantinov
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-4180) Web console: Query paragraph is not extended from collapsed state on move by "Scroll to query" link

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256411#comment-16256411
 ] 

Pavel Konstantinov commented on IGNITE-4180:


Re-tested. Cannot reproduce.

> Web console: Query paragraph is not extended from collapsed state on move by 
> "Scroll to query" link
> ---
>
> Key: IGNITE-4180
> URL: https://issues.apache.org/jira/browse/IGNITE-4180
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 1.8
>Reporter: Vasiliy Sisko
>Assignee: Pavel Konstantinov
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6920) Web console: Prepare Web Console package with simple deploy.

2017-11-16 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov reassigned IGNITE-6920:
--

Assignee: Andrey Novikov  (was: Pavel Konstantinov)

> Web console: Prepare Web Console package with simple deploy.
> 
>
> Key: IGNITE-6920
> URL: https://issues.apache.org/jira/browse/IGNITE-6920
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: wizards
>Affects Versions: 2.0
>Reporter: Andrey Novikov
>Assignee: Andrey Novikov
>Priority: Minor
> Fix For: 2.4
>
>
> * Package the Web Console backend into an executable that can be run even on 
> devices without Node.js installed.
> * Let the Web Console backend be used to serve static files (the compiled Web 
> Console frontend).
> * Let the Web Console backend download and run MongoDB as a child process if 
> MongoDB is not installed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6920) Web console: Prepare Web Console package with simple deploy.

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256409#comment-16256409
 ] 

Pavel Konstantinov commented on IGNITE-6920:


Successfully tested in branch

> Web console: Prepare Web Console package with simple deploy.
> 
>
> Key: IGNITE-6920
> URL: https://issues.apache.org/jira/browse/IGNITE-6920
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: wizards
>Affects Versions: 2.0
>Reporter: Andrey Novikov
>Assignee: Pavel Konstantinov
>Priority: Minor
> Fix For: 2.4
>
>
> * Package the Web Console backend into an executable that can be run even on 
> devices without Node.js installed.
> * Let the Web Console backend be used to serve static files (the compiled Web 
> Console frontend).
> * Let the Web Console backend download and run MongoDB as a child process if 
> MongoDB is not installed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6943) Web console: LoadCaches method should activate cluster if persistent is configured, otherwise method doesn't work due to cluster is inactive

2017-11-16 Thread Pavel Konstantinov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Konstantinov updated IGNITE-6943:
---
Summary: Web console: LoadCaches method should activate cluster if 
persistent is configured, otherwise method doesn't work due to cluster is 
inactive  (was: Web console: LoadCahces method should activate cluster if 
persistent is configured, otherwise method doesn't work due to cluster is 
inactive)

> Web console: LoadCaches method should activate cluster if persistent is 
> configured, otherwise method doesn't work due to cluster is inactive
> 
>
> Key: IGNITE-6943
> URL: https://issues.apache.org/jira/browse/IGNITE-6943
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: wizards
>Affects Versions: 2.3
>Reporter: Pavel Konstantinov
> Fix For: 2.4
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6943) Web console: LoadCahces method should activate cluster if persistent is configured, otherwise method doesn't work due to cluster is inactive

2017-11-16 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-6943:
--

 Summary: Web console: LoadCahces method should activate cluster if 
persistent is configured, otherwise method doesn't work due to cluster is 
inactive
 Key: IGNITE-6943
 URL: https://issues.apache.org/jira/browse/IGNITE-6943
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: wizards
Affects Versions: 2.3
Reporter: Pavel Konstantinov
 Fix For: 2.4






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6942) Auto re-connect to other node in case of failure of current

2017-11-16 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-6942:
-

 Summary: Auto re-connect to other node in case of failure of 
current
 Key: IGNITE-6942
 URL: https://issues.apache.org/jira/browse/IGNITE-6942
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
  Components: sql
Reporter: Mikhail Cherkasov
 Fix For: 2.4


It would be great to have a re-connect feature for the thin driver: in case of 
server failure, it should choose another server node from a list of server nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6869) Implement new JMX metric for jobs monitoring

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255568#comment-16255568
 ] 

ASF GitHub Bot commented on IGNITE-6869:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3053

IGNITE-6869 Added new JMX metric: Total jobs execution time. 

Added new JMX metric: Total jobs execution time. 
Added MBean to monitor aggregated cluster metrics.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-6869

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3053.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3053


commit 46384430e99135ba3aeba5a64d43c0c8c097ad8b
Author: Aleksey Plekhanov 
Date:   2017-11-16T16:31:16Z

IGNITE-6869 Added new JMX metric: Total jobs execution time. Added MBean to 
monitor aggregated cluster metrics.




> Implement new JMX metric for jobs monitoring
> 
>
> Key: IGNITE-6869
> URL: https://issues.apache.org/jira/browse/IGNITE-6869
> Project: Ignite
>  Issue Type: New Feature
>  Security Level: Public(Viewable by anyone) 
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>  Labels: iep-6, jmx
>
> There are now some metrics in ClusterLocalNodeMetricsMXBean for monitoring 
> jobs' execution statistics (min/max/avg execution time). But these metrics are 
> gathered since node start and can't be used to calculate the average execution 
> time between probes.
> To resolve this problem, a new metric, "Total jobs execution time", should be 
> implemented.
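
A sketch of how such a cumulative metric would be used; getTotalJobsExecutionTime() is the accessor this ticket proposes (hypothetical at this point), while getTotalExecutedJobs() already exists on ClusterMetrics:

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterMetrics;

class JobTimeProbe {
    /** Average job duration (ms) between two probes taken probeIntervalMs apart. */
    static double avgJobTime(Ignite ignite, long probeIntervalMs) throws InterruptedException {
        ClusterMetrics before = ignite.cluster().metrics();
        Thread.sleep(probeIntervalMs);
        ClusterMetrics after = ignite.cluster().metrics();

        long jobs = after.getTotalExecutedJobs() - before.getTotalExecutedJobs();

        // Proposed metric: cumulative execution time of all finished jobs.
        long time = after.getTotalJobsExecutionTime() - before.getTotalJobsExecutionTime();

        return jobs == 0 ? 0 : (double) time / jobs;
    }
}
{code}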



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6941) Add JDBC Thin Client security support.

2017-11-16 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-6941:


 Summary: Add JDBC Thin Client security support.
 Key: IGNITE-6941
 URL: https://issues.apache.org/jira/browse/IGNITE-6941
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
  Components: jdbc
Reporter: Andrew Mashenkov


For now, the JDBC thin client doesn't support passing a username/password to the 
security plugin.
We should fix this together with IGNITE-6856.

http://apache-ignite-users.70518.x6.nabble.com/How-can-I-get-Ignite-security-plugin-to-work-with-JDBC-thin-client-tt17740.html
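
A sketch of what the fix could enable from the client side, using only standard JDBC; the property names and credentials are assumptions, not an existing contract of the thin driver:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ThinJdbcAuth {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "ignite");      // assumed property name
        props.setProperty("password", "secret");  // assumed property name

        // Credentials would be forwarded to the server-side security plugin.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1", props)) {
            conn.createStatement().execute("SELECT 1");
        }
    }
}
{code}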



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6373) Create example for local and distributed k-means algorithm

2017-11-16 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak reassigned IGNITE-6373:
--

Assignee: Oleg Ignatenko

> Create example for local and distributed k-means algorithm
> --
>
> Key: IGNITE-6373
> URL: https://issues.apache.org/jira/browse/IGNITE-6373
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Yury Babak
>Assignee: Oleg Ignatenko
>  Labels: examples
>
> Currently we have no examples for either version of k-means. So we need at least 
> two examples: one for local k-means and one for distributed k-means.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5827) Benchmarks refactoring

2017-11-16 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak updated IGNITE-5827:
---
Fix Version/s: 2.4

> Benchmarks refactoring
> --
>
> Key: IGNITE-5827
> URL: https://issues.apache.org/jira/browse/IGNITE-5827
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Yury Babak
>Assignee: Yury Babak
> Fix For: 2.4
>
>
> See MathBenchmark.java and VectorBenchmarkTest.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-5827) Benchmarks refactoring

2017-11-16 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak resolved IGNITE-5827.

Resolution: Duplicate
  Assignee: Yury Babak  (was: Oleg Ignatenko)

> Benchmarks refactoring
> --
>
> Key: IGNITE-5827
> URL: https://issues.apache.org/jira/browse/IGNITE-5827
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Yury Babak
>Assignee: Yury Babak
>
> See MathBenchmark.java and VectorBenchmarkTest.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5827) Benchmarks refactoring

2017-11-16 Thread Yury Babak (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Babak reassigned IGNITE-5827:
--

Assignee: Oleg Ignatenko

> Benchmarks refactoring
> --
>
> Key: IGNITE-5827
> URL: https://issues.apache.org/jira/browse/IGNITE-5827
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Reporter: Yury Babak
>Assignee: Oleg Ignatenko
>
> See MathBenchmark.java and VectorBenchmarkTest.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6940) Provide metric to monitor Thread Starvation

2017-11-16 Thread Andrey Kuznetsov (JIRA)
Andrey Kuznetsov created IGNITE-6940:


 Summary: Provide metric to monitor Thread Starvation
 Key: IGNITE-6940
 URL: https://issues.apache.org/jira/browse/IGNITE-6940
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
Reporter: Andrey Kuznetsov


There is existing code in {{IgniteKernal.start()}} that logs warnings when it 
detects starvation. It should be improved to support more thread pools and to 
update some metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6931) Simplify index rebuild

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255539#comment-16255539
 ] 

ASF GitHub Bot commented on IGNITE-6931:


GitHub user gvvinblade opened a pull request:

https://github.com/apache/ignite/pull/3052

IGNITE-6931



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6931

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3052.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3052


commit f6caa5695c1bd87b30dc442f641d81dca3df9a42
Author: Igor Seliverstov 
Date:   2017-11-16T16:14:15Z

IGNITE-6931 Simplify index rebuild




> Simplify index rebuild
> --
>
> Key: IGNITE-6931
> URL: https://issues.apache.org/jira/browse/IGNITE-6931
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Affects Versions: 2.3
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
> Fix For: 2.4
>
>
> There are two quite similar operations, CREATE INDEX and rebuildIndexFromHash, but 
> the approaches used are absolutely different. We need to generalize the used 
> approach.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-6872) Linear regression should implement Model API

2017-11-16 Thread Oleg Ignatenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253526#comment-16253526
 ] 

Oleg Ignatenko edited comment on IGNITE-6872 at 11/16/17 4:15 PM:
--

draft implementation (including exporter), unit tests and example are completed 
in branch 
[ignite-6872|https://github.com/gridgain/apache-ignite/tree/ignite-6872] - 
[~chief], [~amalykh] please feel free to take a look. The next thing I plan is to 
accommodate the changes done to OLS per IGNITE-5846.


was (Author: oignatenko):
drafted implementation, unit test and example in branch 
[ignite-6872|https://github.com/gridgain/apache-ignite/tree/ignite-6872] - 
[~chief], [~amalykh] please feel free to take a look. The next thing I plan is to 
add an exporter like the one we've got for KMeans - please note this may impact the 
current implementation - specifically, I may reconsider making the regression 
implement Model instead of the current "composition" way if it turns out to be 
more convenient for export matters.

> Linear regression should implement Model API
> 
>
> Key: IGNITE-6872
> URL: https://issues.apache.org/jira/browse/IGNITE-6872
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: ml
>Reporter: Oleg Ignatenko
>Assignee: Oleg Ignatenko
> Fix For: 2.4
>
>
> When linear regression was originally implemented per IGNITE-5012 we had no 
> Model API.
> Now that this API is available (merged into master with IGNITE-5218) lin 
> regression needs to adapt to implement it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6870) Implement new JMX metric for cache topology validation monitoring

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255532#comment-16255532
 ] 

ASF GitHub Bot commented on IGNITE-6870:


GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3051

IGNITE-6870 Added new JMX metric for cache topology validation monitoring



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-6870

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3051.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3051


commit 2f18a7283898e1b78bae7b1b182cb5892fe20c82
Author: Aleksey Plekhanov 
Date:   2017-11-16T16:03:06Z

IGNITE-6870 Added new JMX metric for cache topology validation monitoring.




> Implement new JMX metric for cache topology validation monitoring
> -
>
> Key: IGNITE-6870
> URL: https://issues.apache.org/jira/browse/IGNITE-6870
> Project: Ignite
>  Issue Type: New Feature
>  Security Level: Public(Viewable by anyone) 
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>  Labels: iep-6, jmx
>
> There is currently no way to determine from outside the grid that a cache is in 
> read-only state after the TopologyValidator's "validate" method returns "false" 
> for the topology. 
> Implement a new JMX metric in CacheMetricsMXBean to show the cache's topology 
> validation status.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-6932) GridQueryProcessor.querySqlFieldsNoCache should check for cluster active/inactive state

2017-11-16 Thread Taras Ledkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255518#comment-16255518
 ] 

Taras Ledkov edited comment on IGNITE-6932 at 11/16/17 4:02 PM:


[~vozerov], please review the patch.


was (Author: tledkov-gridgain):
[~vozerov], please review the pathc.

> GridQueryProcessor.querySqlFieldsNoCache should check for cluster 
> active/inactive state
> ---
>
> Key: IGNITE-6932
> URL: https://issues.apache.org/jira/browse/IGNITE-6932
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
> Fix For: 2.4
>
>
> Currently we do not check for this, so the operation may hang silently. Let's add 
> a check for this before starting query execution. See 
> {{IgniteKernal.checkClusterState}} and its usages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6931) Simplify index rebuild

2017-11-16 Thread Igor Seliverstov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Seliverstov updated IGNITE-6931:
-
Summary: Simplify index rebuild  (was: Avoid extra deserialization on index 
rebuild)

> Simplify index rebuild
> --
>
> Key: IGNITE-6931
> URL: https://issues.apache.org/jira/browse/IGNITE-6931
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Affects Versions: 2.3
>Reporter: Igor Seliverstov
>Assignee: Igor Seliverstov
> Fix For: 2.4
>
>
> Currently on index rebuild we open a cursor over cache keys and for each key 
> get a full row instead of using the already extracted one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6907) Check LINQ join tests for errors

2017-11-16 Thread Alexey Popov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255503#comment-16255503
 ] 

Alexey Popov commented on IGNITE-6907:
--

Changes:
1. Minor changes in tests 
2. Found & fixed issue with "multiple from" query
3. Added more tests for "multiple from" query

[~ptupitsyn], please review

> Check LINQ join tests for errors
> 
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please check join tests with LINQ.
> 2) Fix any issues you can find )



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6907) Check LINQ join tests for errors

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255502#comment-16255502
 ] 

ASF GitHub Bot commented on IGNITE-6907:


GitHub user apopovgg opened a pull request:

https://github.com/apache/ignite/pull/3049

IGNITE-6907 Check LINQ join tests for errors



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6907

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3049.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3049


commit f09b03788611cd17570dfd1f240e2ffbbb5f3b06
Author: apopov 
Date:   2017-11-16T15:44:56Z

IGNITE-6907 Check LINQ join tests for errors




> Check LINQ join tests for errors
> 
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please check join tests with LINQ.
> 2) Fix any issues you can find )



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6939) Exclude false owners from the execution plan based on query response

2017-11-16 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-6939:
-
Description: 
This is related to IGNITE-6858, the fix in the ticket can be improved.

The scenario leading to the issue is as follows:
1) Node A has partition 1 as owning
2) Node B has local partition map which has partition 1 on node A as owning
3) Topology change is triggered which would move partition 1 from A to another 
node, topology version is X
4) A transaction is started on node B on topology X
5) Partition is rebalanced and node A moves partition 1 to RENTING and then to 
EVICTED state, node A updates its local partition map.
6) A new topology change is triggered
7) Node A sends partition map (transitively) to the node B, but since there is 
a pending exchange, node B ignores the updated map and still thinks that A owns 
partition 1 [1]
8) transaction attempts to execute an SQL query against partition 1 on node A 
and retries infinitely

[1] The related code is in 
GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
AffinityTopologyVersion)
{code}
if (stopping || !lastTopChangeVer.initialized() ||
// Ignore message not-related to exchange if exchange is in progress.
(exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
return false;
{code}

There are two possibilities to fix this:
1) Make all updates to partition map in a single thread, then we will not need 
update sequences and then we can update local partition map even when there is 
a pending exchange (this is a relatively big, but useful change)
2) Make a change in SQL query execution so that if a node cannot reserve a 
partition, do not map the partition to this node on the same topology version 
anymore (a quick fix)

This will remove the need to throw an exception from SQL query inside 
transaction when there is a pending exchange.

  was:
This is related to IGNITE-6858, the fix in the ticket can be improved.

The scenario leading to the issue is as follows:
1) Node A has partition 1 as owning
2) Node B has local partition map which has partition 1 on node A as owning
3) Topology change is triggered which would move partition 1 from A to another 
node, topology version is X
4) A transaction is started on node B on topology X
5) Partition is rebalanced and node A moves partition 1 to RENTING and then to 
EVICTED state, node A updates its local partition map.
6) A new topology change is triggered
7) Node A sends partition map (transitively) to the node B, but since there is 
a pending exchange, node B ignores the updated map and still thinks that A owns 
partition 1 [1]
8) transaction attempts to execute an SQL query against partition 1 on node A 
and retries infinitely

[1] The related code is in 
GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
AffinityTopologyVersion)
{code}
if (stopping || !lastTopChangeVer.initialized() ||
// Ignore message not-related to exchange if exchange is in progress.
(exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
return false;
{code}

There are two possibilities to fix this:
1) Make all updates to partition map in a single thread, then we will not need 
update sequences and then we can update local partition map even when there is 
a pending exchange (this is a relatively big, but useful change)
2) Make a change in SQL query execution so that if a node cannot reserve a 
partition, do not map the partition to this node on the same topology version 
anymore (a quick fix)


> Exclude false owners from the execution plan based on query response
> 
>
> Key: IGNITE-6939
> URL: https://issues.apache.org/jira/browse/IGNITE-6939
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>Reporter: Alexey Goncharuk
>
> This is related to IGNITE-6858, the fix in the ticket can be improved.
> The scenario leading to the issue is as follows:
> 1) Node A has partition 1 as owning
> 2) Node B has local partition map which has partition 1 on node A as owning
> 3) Topology change is triggered which would move partition 1 from A to 
> another node, topology version is X
> 4) A transaction is started on node B on topology X
> 5) Partition is rebalanced and node A moves partition 1 to RENTING and then 
> to EVICTED state, node A updates its local partition map.
> 6) A new topology change is triggered
> 7) Node A sends partition map (transitively) to the node B, but since there 
> is a pending exchange, node B ignores the updated map and still thinks that A 
> owns partition 1 [1]
> 8) transaction attempts to execute an SQL query against partition 1 on node A 
> and retries infinitely

[jira] [Created] (IGNITE-6939) Exclude false owners from the execution plan based on query response

2017-11-16 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6939:


 Summary: Exclude false owners from the execution plan based on 
query response
 Key: IGNITE-6939
 URL: https://issues.apache.org/jira/browse/IGNITE-6939
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
Reporter: Alexey Goncharuk


This is related to IGNITE-6858, the fix in the ticket can be improved.

The scenario leading to the issue is as follows:
1) Node A has partition 1 as owning
2) Node B has local partition map which has partition 1 on node A as owning
3) Topology change is triggered which would move partition 1 from A to another 
node, topology version is X
4) A transaction is started on node B on topology X
5) Partition is rebalanced and node A moves partition 1 to RENTING and then to 
EVICTED state, node A updates its local partition map.
6) A new topology change is triggered
7) Node A sends partition map (transitively) to the node B, but since there is 
a pending exchange, node B ignores the updated map and still thinks that A owns 
partition 1 [1]
8) transaction attempts to execute an SQL query against partition 1 on node A 
and retries infinitely

[1] The related code is in 
GridDhtPartitionTopologyImpl#update(AffinityTopologyVersion, 
GridDhtPartitionFullMap, CachePartitionFullCountersMap, Set, 
AffinityTopologyVersion)
{code}
if (stopping || !lastTopChangeVer.initialized() ||
// Ignore message not-related to exchange if exchange is in progress.
(exchangeVer == null && !lastTopChangeVer.equals(readyTopVer)))
return false;
{code}

There are two possibilities to fix this:
1) Make all updates to partition map in a single thread, then we will not need 
update sequences and then we can update local partition map even when there is 
a pending exchange (this is a relatively big, but useful change)
2) Make a change in SQL query execution so that if a node cannot reserve a 
partition, do not map the partition to this node on the same topology version 
anymore (a quick fix)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6907) Check LINQ join tests for errors

2017-11-16 Thread Alexey Popov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Popov updated IGNITE-6907:
-
Description: 
1) Please check join tests with LINQ.
2) Fix any issues you can find )



  was:
1) Please check tests with LINQ.
2) Fix any issues you can find )




> Check LINQ join tests for errors
> 
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please check join tests with LINQ.
> 2) Fix any issues you can find )



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6907) Check LINQ join tests for errors

2017-11-16 Thread Alexey Popov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Popov updated IGNITE-6907:
-
Summary: Check LINQ join tests for errors  (was: Check LINQ tests for 
errors)

> Check LINQ join tests for errors
> 
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please check tests with LINQ.
> 2) Fix any issues you can find )



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6907) Check LINQ tests for errors

2017-11-16 Thread Alexey Popov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Popov updated IGNITE-6907:
-
Summary: Check LINQ tests for errors  (was: Add LINQ many-to-many test and 
LINQ with transactions test)

> Check LINQ tests for errors
> ---
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please add more many-to-many tests with LINQ.
> 2) Please add test with LINQ & Transactions to show/test how it could be used 
> together
> Probably it could be put to some example instead of unit tests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6907) Check LINQ tests for errors

2017-11-16 Thread Alexey Popov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Popov updated IGNITE-6907:
-
Description: 
1) Please check tests with LINQ.
2) Fix any issues you can find )



  was:
1) Please add more many-to-many tests with LINQ.
2) Please add test with LINQ & Transactions to show/test how it could be used 
together

Probably it could be put to some example instead of unit tests



> Check LINQ tests for errors
> ---
>
> Key: IGNITE-6907
> URL: https://issues.apache.org/jira/browse/IGNITE-6907
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: platforms
>Affects Versions: 2.3
>Reporter: Alexey Popov
>Assignee: Alexey Popov
>
> 1) Please check tests with LINQ.
> 2) Fix any issues you can find )



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-2662) .NET Core support (run on Linux)

2017-11-16 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255462#comment-16255462
 ] 

Pavel Tupitsyn commented on IGNITE-2662:


Ignite.NET has been started on Linux under .NET Core today, hooray! 

There is still some work to be done. Main points:
- Dll loading: jvm.dll vs libjvm.so
- Windows-specific APIs (Registry).
- Case-sensitive paths (regarding IGNITE_HOME and classpath detection).
- Path delimiters. Do not use hardcoded slashes.


> .NET Core support (run on Linux)
> 
>
> Key: IGNITE-2662
> URL: https://issues.apache.org/jira/browse/IGNITE-2662
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Affects Versions: 1.1.4
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .net, xplat
> Fix For: 2.4
>
>
> Ignite.NET should target .NET Standard so it is available on the maximum number 
> of platforms, see
> https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introducing-net-standard/
> https://weblog.west-wind.com/posts/2016/Nov/23/NET-Standard-20-Making-Sense-of-NET-Again
> https://github.com/dotnet/core/blob/master/roadmap.md
> Make sure that all used APIs are supported on all platforms, see API Analyzer 
> tool:
> https://channel9.msdn.com/coding4fun/blog/Your-New-Virtual-API-Review-Assistant
> This will allow us to run on Windows, OSX, and Linux, and target .NET Core in 
> addition to good old regular .NET.
> Possible difficulties:
> * JNI interop. Core has dllImport and it works on linux, and our C++ client 
> works on linux, so it should be possible
> * Reflection. We use it a lot, and API has changed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6938) SQL TX: Reads should see own's previous writes

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6938:

Labels: iep-3  (was: )

> SQL TX: Reads should see own's previous writes
> --
>
> Key: IGNITE-6938
> URL: https://issues.apache.org/jira/browse/IGNITE-6938
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>  Labels: iep-3
> Fix For: 2.4
>
>
> If a transaction modified a row, subsequent {{SELECT}} statements in the same 
> TX must return the latest pending update instead of the last matching committed 
> version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6937) SQL TX: Support SELECT FOR UPDATE

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6937:

Labels: iep-3  (was: )

> SQL TX: Support SELECT FOR UPDATE
> -
>
> Key: IGNITE-6937
> URL: https://issues.apache.org/jira/browse/IGNITE-6937
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>  Labels: iep-3
> Fix For: 2.4
>
>
> Normally in the SQL world readers do not block writers. This is how our SELECT 
> operations should work by default. But we need to add support for the {{SELECT 
> ... FOR UPDATE}} read mode, where the reader obtains an exclusive lock on read. 
> In this mode we lock entries as usual, but then send data back to the caller. 
> The first page can be returned directly in our {{LockResponse}}. Next pages 
> should be requested in separate requests. With this approach {{SELECT ... FOR 
> UPDATE}} will require only a single round-trip to both lock and read data in 
> the case of small updates.
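
A sketch of the intended client-side usage once this mode exists, written against standard JDBC; the table and column names are placeholders, and transactional SQL is the feature under design here, not something available yet:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SelectForUpdateExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);

            // Rows returned here would stay locked by the current transaction until
            // commit/rollback; for small result sets the lock and the first page
            // come back in a single round trip.
            try (ResultSet rs = stmt.executeQuery(
                "SELECT id, balance FROM accounts WHERE id = 1 FOR UPDATE")) {
                while (rs.next())
                    System.out.println(rs.getLong("id") + " -> " + rs.getLong("balance"));
            }

            stmt.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
            conn.commit();
        }
    }
}
{code}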



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6935) SQL TX: Locking protocol for simple queries

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6935:

Labels: iep-3  (was: )

> SQL TX: Locking protocol for simple queries
> ---
>
> Key: IGNITE-6935
> URL: https://issues.apache.org/jira/browse/IGNITE-6935
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>  Labels: iep-3
> Fix For: 2.4
>
>
> We need to develop a locking protocol for SQL queries. Design considerations:
> 1) Use {{GridNearLockRequest|Response}} as a template for new messages.
> 2) Cover only queries which don't require a reduce stage (see server-side DML 
> optimization code, e.g. {{GridH2DmlRequest}}).
> 3) When the next entry is found, try locking it. If it is already locked, then 
> register the current TX as a candidate in the MVCC manager and go to the next 
> row. Other TXes will notify us when entries are released. Send a response when 
> all entries are locked.
> 4) Read the entry version before locking and after. If they don't match (i.e. 
> a concurrent modification occurred), then throw an exception.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6936) SQL TX: Implement commit protocol

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6936:

Labels: iep-3  (was: )

> SQL TX: Implement commit protocol
> -
>
> Key: IGNITE-6936
> URL: https://issues.apache.org/jira/browse/IGNITE-6936
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
>  Labels: iep-3
> Fix For: 2.4
>
>
> Once we are able to lock entries [1], the next step is to implement a 2PC 
> commit protocol. The cache protocol could be used as a template. The main 
> difference is that the near (client) node will not store entries. 
> [1] IGNITE-6935



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4193) SQL TX: ODBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4193:

Labels: iep-3  (was: )

> SQL TX: ODBC driver support
> ---
>
> Key: IGNITE-4193
> URL: https://issues.apache.org/jira/browse/IGNITE-4193
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Reporter: Denis Magda
>  Labels: iep-3
> Fix For: 2.4
>
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the ODBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6938) SQL TX: Reads should see own's previous writes

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6938:

Security: (was: Public)

> SQL TX: Reads should see own's previous writes
> --
>
> Key: IGNITE-6938
> URL: https://issues.apache.org/jira/browse/IGNITE-6938
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
> Fix For: 2.4
>
>
> If a transaction modified a row, subsequent {{SELECT}} statements in the same 
> TX must return the latest pending update instead of the last matching 
> committed version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4192) SQL TX: JDBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4192:

Fix Version/s: 2.4

> SQL TX: JDBC driver support
> ---
>
> Key: IGNITE-4192
> URL: https://issues.apache.org/jira/browse/IGNITE-4192
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc
>Reporter: Denis Magda
>  Labels: iep-3
> Fix For: 2.4
>
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the JDBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6936) SQL TX: Implement commit protocol

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6936:

Security: (was: Public)

> SQL TX: Implement commit protocol
> -
>
> Key: IGNITE-6936
> URL: https://issues.apache.org/jira/browse/IGNITE-6936
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
> Fix For: 2.4
>
>
> Once we are able to lock entries [1], the next step is to implement a 2PC 
> commit protocol. The cache protocol could be used as a template. The main 
> difference is that the near (client) node will not store entries. 
> [1] IGNITE-6935



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6937) SQL TX: Support SELECT FOR UPDATE

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6937:

Security: (was: Public)

> SQL TX: Support SELECT FOR UPDATE
> -
>
> Key: IGNITE-6937
> URL: https://issues.apache.org/jira/browse/IGNITE-6937
> Project: Ignite
>  Issue Type: Task
>  Components: cache, sql
>Reporter: Vladimir Ozerov
> Fix For: 2.4
>
>
> Normally in the SQL world readers do not block writers. This is how our SELECT 
> operations should work by default. But we need to add support for the {{SELECT 
> ... FOR UPDATE}} read mode, where the reader obtains an exclusive lock on read. 
> In this mode we lock entries as usual, but then send data back to the caller. 
> The first page can be returned directly in our {{LockResponse}}. Next pages 
> should be requested in separate requests. With this approach {{SELECT ... FOR 
> UPDATE}} will require only a single round trip to both lock and read data in 
> case of small updates.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4192) SQL TX: JDBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4192:

Labels: iep-3  (was: )

> SQL TX: JDBC driver support
> ---
>
> Key: IGNITE-4192
> URL: https://issues.apache.org/jira/browse/IGNITE-4192
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc
>Reporter: Denis Magda
>  Labels: iep-3
> Fix For: 2.4
>
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the JDBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4193) SQL TX: ODBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4193:

Fix Version/s: 2.4

> SQL TX: ODBC driver support
> ---
>
> Key: IGNITE-4193
> URL: https://issues.apache.org/jira/browse/IGNITE-4193
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Reporter: Denis Magda
>  Labels: iep-3
> Fix For: 2.4
>
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the ODBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4192) SQL TX: JDBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4192:

Component/s: jdbc

> SQL TX: JDBC driver support
> ---
>
> Key: IGNITE-4192
> URL: https://issues.apache.org/jira/browse/IGNITE-4192
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc
>Reporter: Denis Magda
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the JDBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4193) SQL TX: ODBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4193:

Component/s: (was: sql)

> SQL TX: ODBC driver support
> ---
>
> Key: IGNITE-4193
> URL: https://issues.apache.org/jira/browse/IGNITE-4193
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Reporter: Denis Magda
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the ODBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4193) SQL TX: ODBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4193:

Component/s: sql

> SQL TX: ODBC driver support
> ---
>
> Key: IGNITE-4193
> URL: https://issues.apache.org/jira/browse/IGNITE-4193
> Project: Ignite
>  Issue Type: Task
>  Components: odbc
>Reporter: Denis Magda
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the ODBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6937) SQL TX: Support SELECT FOR UPDATE

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6937:

Component/s: sql
 cache

> SQL TX: Support SELECT FOR UPDATE
> -
>
> Key: IGNITE-6937
> URL: https://issues.apache.org/jira/browse/IGNITE-6937
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache, sql
>Reporter: Vladimir Ozerov
> Fix For: 2.4
>
>
> Normally in the SQL world readers do not block writers. This is how our SELECT 
> operations should work by default. But we need to add support for the {{SELECT 
> ... FOR UPDATE}} read mode, where the reader obtains an exclusive lock on read. 
> In this mode we lock entries as usual, but then send data back to the caller. 
> The first page can be returned directly in our {{LockResponse}}. Next pages 
> should be requested in separate requests. With this approach {{SELECT ... FOR 
> UPDATE}} will require only a single round trip to both lock and read data in 
> case of small updates.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4192) SQL TX: JDBC driver support

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-4192:

Component/s: (was: sql)

> SQL TX: JDBC driver support
> ---
>
> Key: IGNITE-4192
> URL: https://issues.apache.org/jira/browse/IGNITE-4192
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc
>Reporter: Denis Magda
>
> To support execution of DML and SELECT statements inside a transaction 
> started from the JDBC driver side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6938) SQL TX: Reads should see own's previous writes

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6938:
---

 Summary: SQL TX: Reads should see own's previous writes
 Key: IGNITE-6938
 URL: https://issues.apache.org/jira/browse/IGNITE-6938
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: cache, sql
Reporter: Vladimir Ozerov
 Fix For: 2.4


If a transaction modified a row, subsequent {{SELECT}} statements in the same TX 
must return the latest pending update instead of the last matching committed version.
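
For illustration, a minimal sketch of the required semantics via plain JDBC (the 
table, values and thin-driver URL are assumptions, not part of this ticket):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustration only: an uncommitted update made earlier in the same transaction
// must be visible to a later SELECT in that transaction.
public class ReadOwnWritesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
            conn.setAutoCommit(false);

            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("UPDATE person SET name = 'Bob' WHERE id = 1");

                // Still inside the same TX: we must see the pending update, not the committed version.
                try (ResultSet rs = stmt.executeQuery("SELECT name FROM person WHERE id = 1")) {
                    rs.next();
                    assert "Bob".equals(rs.getString(1));
                }
            }

            conn.commit();
        }
    }
}
{code}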



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6937) SQL TX: Support SELECT FOR UPDATE

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6937:
---

 Summary: SQL TX: Support SELECT FOR UPDATE
 Key: IGNITE-6937
 URL: https://issues.apache.org/jira/browse/IGNITE-6937
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
Reporter: Vladimir Ozerov
 Fix For: 2.4


Normally in the SQL world readers do not block writers. This is how our SELECT 
operations should work by default. But we need to add support for the {{SELECT 
... FOR UPDATE}} read mode, where the reader obtains an exclusive lock on read. 

In this mode we lock entries as usual, but then send data back to the caller. 
The first page can be returned directly in our {{LockResponse}}. Next pages should 
be requested in separate requests. With this approach {{SELECT ... FOR UPDATE}} 
will require only a single round trip to both lock and read data in case of small 
updates.
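
For reference, the client-side usage this mode should enable (a hedged JDBC 
sketch; the table, column names and thin-driver URL are assumptions, not part of 
this ticket):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustration only: SELECT ... FOR UPDATE takes exclusive row locks inside a TX,
// so the subsequent updates cannot race with concurrent writers.
public class SelectForUpdateSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
            conn.setAutoCommit(false);

            try (Statement selStmt = conn.createStatement();
                 Statement updStmt = conn.createStatement();
                 // Matching rows stay locked until commit/rollback; per the proposal above,
                 // the first page of results would piggyback on the lock response.
                 ResultSet rs = selStmt.executeQuery(
                     "SELECT id FROM account WHERE amount < 0 FOR UPDATE")) {
                while (rs.next())
                    updStmt.executeUpdate("UPDATE account SET amount = 0 WHERE id = " + rs.getLong(1));
            }

            conn.commit();
        }
    }
}
{code}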



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6936) SQL TX: Implement commit protocol

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6936:
---

 Summary: SQL TX: Implement commit protocol
 Key: IGNITE-6936
 URL: https://issues.apache.org/jira/browse/IGNITE-6936
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: cache, sql
Reporter: Vladimir Ozerov
 Fix For: 2.4


Once we are able to lock entries [1], the next step is to implement a 2PC commit 
protocol. The cache protocol could be used as a template. The main difference is 
that the near (client) node will not store entries. 

[1] IGNITE-6935
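
For orientation only, a generic (non-Ignite) two-phase commit outline; the types 
and method names below are illustrative and do not reflect the actual cache 
protocol:

{code:java}
import java.util.List;

// Classic 2PC shape: the coordinator asks every participant to prepare, and commits
// only if all of them voted yes; otherwise it rolls everything back.
public class TwoPhaseCommitSketch {
    interface Participant {
        boolean prepare(long txId);   // acquire/validate locks, persist intent, vote yes or no
        void commit(long txId);
        void rollback(long txId);
    }

    static void run(long txId, List<Participant> participants) {
        boolean allPrepared = true;

        for (Participant p : participants) {      // phase 1: collect votes
            if (!p.prepare(txId)) {
                allPrepared = false;
                break;
            }
        }

        for (Participant p : participants) {      // phase 2: finish one way or the other
            if (allPrepared)
                p.commit(txId);
            else
                p.rollback(txId);
        }
    }
}
{code}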



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6935) SQL TX: Locking protocol for simple queries

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6935:
---

 Summary: SQL TX: Locking protocol for simple queries
 Key: IGNITE-6935
 URL: https://issues.apache.org/jira/browse/IGNITE-6935
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: cache, sql
Reporter: Vladimir Ozerov
 Fix For: 2.4


We need to develop a locking protocol for SQL queries. Design considerations:
1) Use {{GridNearLockRequest|Response}} as a template for new messages.
2) Cover only queries which don't require a reduce stage (see server-side DML 
optimization code, e.g. {{GridH2DmlRequest}}).
3) When the next entry is found, try locking it. If it is already locked, then 
register the current TX as a candidate in the MVCC manager and go to the next row. 
Other TXes will notify us when entries are released. Send a response when all 
entries are locked.
4) Read the entry version before locking and after. If they don't match (i.e. a 
concurrent modification occurred), then throw an exception.
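
To make points 3-4 a bit more concrete, a toy, self-contained model of the 
lock-or-register loop (the types below are illustrative and are not the real 
Ignite internals):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Try-lock each row; on conflict register the row as a lock candidate and move on
// (p.3), and validate the entry version around the lock acquisition (p.4).
public class LockLoopSketch {
    static class Row {
        final ReentrantLock lock = new ReentrantLock();
        volatile long ver;                           // entry version
    }

    /** Rows we could not lock; their current owners would notify us on release. */
    static final Queue<Row> lockCandidates = new ConcurrentLinkedQueue<>();

    static boolean tryLockAll(List<Row> rows) {
        List<Row> locked = new ArrayList<>();

        for (Row row : rows) {
            long verBefore = row.ver;                // p.4: version before locking

            if (!row.lock.tryLock()) {
                lockCandidates.add(row);             // p.3: register candidate, go to next row
                continue;
            }

            if (row.ver != verBefore)                // p.4: concurrent modification detected
                throw new IllegalStateException("Row was modified concurrently");

            locked.add(row);
        }

        return locked.size() == rows.size();         // send the lock response only when all are locked
    }
}
{code}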



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6934) SQL: evaluate performance of onheap row caching

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6934:

Description: 
Ignite has a so-called "on-heap cache" feature. When a cache entry is accessed, we 
copy it from offheap to heap and put it into a temporary concurrent hash map 
([1], [2]), where it resides while in use. When the operation is finished, the 
entry is evicted. This is the default behavior, which keeps GC pressure low even 
for large in-memory data sets.

The downside is that we lose time on copying from offheap to heap. To mitigate 
this problem the user can enable the on-heap cache through 
{{IgniteCache.onheapCacheEnabled}}. In this mode the entry is not evicted from the 
on-heap map, so it can be reused between different operations without 
additional copying. Eviction rules are managed through an eviction policy. 

Unfortunately, SQL cannot use this optimization. As a result, if the key or value is 
large enough, we lose a lot of time on memory copying. And we cannot use the 
current on-heap cache directly, because in SQL we operate on row links rather than 
on keys. So to apply this optimization to SQL we should either create an additional 
row cache, or hack the existing cache somehow.

As a first step I propose to evaluate the impact with a quick and dirty solution:
1) Just add another map from link to K-V pair in the same cache, putting all 
concurrency issues aside.
2) Use this cache from the SQL engine.
3) Measure the impact. 

[1] {{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl}}
[2] 
{{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMap.CacheMapHolder}}

  was:
Ignite has so-called "on heap cache" feature. When cache entry is accessed, we 
copy it from offheap to heap and put it into a temporal concurrent hash map 
[1], [2], where it resides during usage. When operation is finished, entry is 
evicted. This is default behavior which keeps GC pressure low even for large 
in-memory data sets.

The downside is that we loose time on copying from offheap to heap. To mitigate 
this problem user can enable on-heap cache through 
{{IgniteCache.onheapCacheEnabled}}. In this mode entry will not be evicted from 
on-heap map, so it can be reused between different operations without 
additional copying. Eviction rules are managed through eviction policy. 

Unfortunately, SQL cannot use this optimization. As a result if key or value is 
large enough, we loose a lot of time on memory copying. And we cannot use 
current on-heap cache directly, we in SQL operate on row links, rather than on 
keys. So to apply this optimization to SQL we should either create additional 
row cache, or hack existing cache somehow.

As a first step I propose to evaluate the impact with quick and dirty solution:
1) Just add another map from link to K-V pair in the same cache, putting all 
concurrency issues aside.
2) Use this cache from SQL engine.
3) Measure the impact. 

[1] {{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl}}
[2] 
{{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMap.CacheMapHolder}}


> SQL: evaluate performance of onheap row caching
> ---
>
> Key: IGNITE-6934
> URL: https://issues.apache.org/jira/browse/IGNITE-6934
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>  Labels: performance
> Fix For: 2.4
>
>
> Ignite has a so-called "on-heap cache" feature. When a cache entry is accessed, 
> we copy it from offheap to heap and put it into a temporary concurrent hash 
> map ([1], [2]), where it resides while in use. When the operation is finished, 
> the entry is evicted. This is the default behavior, which keeps GC pressure low 
> even for large in-memory data sets.
> The downside is that we lose time on copying from offheap to heap. To 
> mitigate this problem the user can enable the on-heap cache through 
> {{IgniteCache.onheapCacheEnabled}}. In this mode the entry is not evicted 
> from the on-heap map, so it can be reused between different operations without 
> additional copying. Eviction rules are managed through an eviction policy. 
> Unfortunately, SQL cannot use this optimization. As a result, if the key or value 
> is large enough, we lose a lot of time on memory copying. And we cannot use the 
> current on-heap cache directly, because in SQL we operate on row links rather 
> than on keys. So to apply this optimization to SQL we should either create an 
> additional row cache, or hack the existing cache somehow.
> As a first step I propose to evaluate the impact with a quick and dirty 
> solution:
> 1) Just add another map from link to K-V pair in the same cache, putting all 
> concurrency issues aside.
> 2) Use this cache from the SQL engine.
> 3) Measure the impact. 
> 

[jira] [Updated] (IGNITE-6934) SQL: evaluate performance of onheap row caching

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6934:

Labels: performance  (was: )

> SQL: evaluate performance of onheap row caching
> ---
>
> Key: IGNITE-6934
> URL: https://issues.apache.org/jira/browse/IGNITE-6934
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>  Labels: performance
> Fix For: 2.4
>
>
> Ignite has a so-called "on-heap cache" feature. When a cache entry is accessed, 
> we copy it from offheap to heap and put it into a temporary concurrent hash 
> map ([1], [2]), where it resides while in use. When the operation is finished, 
> the entry is evicted. This is the default behavior, which keeps GC pressure low 
> even for large in-memory data sets.
> The downside is that we lose time on copying from offheap to heap. To 
> mitigate this problem the user can enable the on-heap cache through 
> {{IgniteCache.onheapCacheEnabled}}. In this mode the entry is not evicted 
> from the on-heap map, so it can be reused between different operations without 
> additional copying. Eviction rules are managed through an eviction policy. 
> Unfortunately, SQL cannot use this optimization. As a result, if the key or value 
> is large enough, we lose a lot of time on memory copying. And we cannot use the 
> current on-heap cache directly, because in SQL we operate on row links rather 
> than on keys. So to apply this optimization to SQL we should either create an 
> additional row cache, or hack the existing cache somehow.
> As a first step I propose to evaluate the impact with a quick and dirty 
> solution:
> 1) Just add another map from link to K-V pair in the same cache, putting all 
> concurrency issues aside.
> 2) Use this cache from the SQL engine.
> 3) Measure the impact. 
> [1] {{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl}}
> [2] 
> {{org.apache.ignite.internal.processors.cache.GridCacheConcurrentMap.CacheMapHolder}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6934) SQL: evaluate performance of onheap row caching

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6934:
---

 Summary: SQL: evaluate performance of onheap row caching
 Key: IGNITE-6934
 URL: https://issues.apache.org/jira/browse/IGNITE-6934
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: sql
Reporter: Vladimir Ozerov
Assignee: Taras Ledkov
 Fix For: 2.4


Ignite has a so-called "on-heap cache" feature. When a cache entry is accessed, we 
copy it from offheap to heap and put it into a temporary concurrent hash map 
[1], [2], where it resides while in use. When the operation is finished, the entry 
is evicted. This is the default behavior, which keeps GC pressure low even for 
large in-memory data sets.

The downside is that we lose time on copying from offheap to heap. To mitigate 
this problem the user can enable the on-heap cache through 
{{IgniteCache.onheapCacheEnabled}}. In this mode the entry is not evicted from the 
on-heap map, so it can be reused between different operations without 
additional copying. Eviction rules are managed through an eviction policy. 

Unfortunately, SQL cannot use this optimization. As a result, if the key or value is 
large enough, we lose a lot of time on memory copying. And we cannot use the 
current on-heap cache directly, because in SQL we operate on row links rather than 
on keys. So to apply this optimization to SQL we should either create an additional 
row cache, or hack the existing cache somehow.

As a first step I propose to evaluate the impact with a quick and dirty solution:
1) Just add another map from link to K-V pair in the same cache, putting all 
concurrency issues aside.
2) Use this cache from the SQL engine.
3) Measure the impact. 

[1] org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl
[2] 
org.apache.ignite.internal.processors.cache.GridCacheConcurrentMap.CacheMapHolder
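
For a sense of what step 1 could look like, a deliberately naive, self-contained 
sketch (all names below are illustrative; concurrency and eviction are ignored on 
purpose, as the ticket suggests):

{code:java}
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Quick-and-dirty row cache: map a row link to the deserialized key-value pair
// so the SQL engine can skip the offheap-to-heap copy on repeated access.
public class RowCacheSketch {
    private final Map<Long, Map.Entry<Object, Object>> rows = new ConcurrentHashMap<>();

    /** Remember the pair by its link when the row is first materialized. */
    public void onRowRead(long link, Object key, Object val) {
        rows.put(link, new SimpleEntry<>(key, val));
    }

    /** The SQL engine looks the pair up by link instead of copying from offheap again. */
    public Map.Entry<Object, Object> row(long link) {
        return rows.get(link);
    }

    /** Drop the pair when the row is removed or relocated. */
    public void onRowRemoved(long link) {
        rows.remove(link);
    }
}
{code}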



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6932) GridQueryProcessor.querySqlFieldsNoCache should check for cluster active/inactive state

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255383#comment-16255383
 ] 

ASF GitHub Bot commented on IGNITE-6932:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/3048

IGNITE-6932 GridQueryProcessor.querySqlFieldsNoCache should check for…

… cluster active/inactive state

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6932

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3048.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3048


commit 0601c1ba628556c4f5be82063dae111658f57a46
Author: tledkov-gridgain 
Date:   2017-11-16T14:18:03Z

IGNITE-6932 GridQueryProcessor.querySqlFieldsNoCache should check for 
cluster active/inactive state




> GridQueryProcessor.querySqlFieldsNoCache should check for cluster 
> active/inactive state
> ---
>
> Key: IGNITE-6932
> URL: https://issues.apache.org/jira/browse/IGNITE-6932
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
> Fix For: 2.4
>
>
> Currently we do not check for this, so the operation may hang silently. Let's 
> add a check for this before starting query execution. See 
> {{IgniteKernal.checkClusterState}} and its usages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-6171) Native facility to control excessive GC pauses

2017-11-16 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255304#comment-16255304
 ] 

Vladimir Ozerov edited comment on IGNITE-6171 at 11/16/17 1:30 PM:
---

[~cyberdemon], the very problem with this approach is that we observe the GC 
pause _after_ it is finished. It is fine to log the max GC pause somewhere and 
show a kind of "red flag" to the user, but we cannot react to the pause in any 
way. In contrast, the solution with native threads will allow us to shut down an 
unresponsive node _during_ a GC pause. This was the original idea. 

But the question is - does the original idea make sense? Do we really want to 
shut down a node due to a long GC pause? This needs to be discussed separately.


was (Author: vozerov):
[~cyberdemon], the very problem with this approach is that we observe GC pause 
_after_ it finished. This is fine to log max GC pause somewhere and show a kind 
of "red flag" to the user. But we cannot react to this pause anyhow. To the 
constrast, solution with native threads will allow to shutdown unresponsive 
node _during_ GC pause. This was the original idea. 

But the question is - does original idea makes sense? Do we really want to 
shutdown the node due to long GC pause? This needs to be discussed separately.

> Native facility to control excessive GC pauses
> --
>
> Key: IGNITE-6171
> URL: https://issues.apache.org/jira/browse/IGNITE-6171
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Vladimir Ozerov
>Assignee: Dmitriy Sorokin
>  Labels: iep-7, usability
>
> Ignite is a Java-based application. If a node experiences long GC pauses it may 
> negatively affect other nodes. We need to find a way to detect long GC pauses 
> within the process and trigger some actions in response, e.g. a node stop. 
> This is a kind of Inception \[1\]: you need to realize that you are asleep 
> while you are sleeping. As all Java threads are blocked on a safepoint, we 
> cannot use a Java thread to detect Java's GC. Native threads should be used instead.
> Proposed solution:
> 1) Thread 1 should periodically call a dummy JNI method returning the current 
> time, and set this time to a shared variable;
> 2) Thread 2 should periodically check that variable. If it has not been 
> changed for some time, most likely we are in a GC pause. Once a certain 
> threshold is reached, trigger a compensating action, whether this is a 
> warning, a process kill, or whatever.
> Justification: crossing native -> Java boundaries involves safepoints. This 
> way Thread 1 will be trapped if an STW pause is in progress. The Java method 
> cannot be empty, as the JVM is smart enough to reduce it to a no-op. 
> \[1\] http://www.imdb.com/title/tt1375666/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6224) Node stoping does not wait all transactions completion

2017-11-16 Thread Vitaliy Biryukov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaliy Biryukov reassigned IGNITE-6224:


Assignee: Vitaliy Biryukov

> Node stoping does not wait all transactions completion
> --
>
> Key: IGNITE-6224
> URL: https://issues.apache.org/jira/browse/IGNITE-6224
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.1
>Reporter: Vladislav Pyatkov
>Assignee: Vitaliy Biryukov
> Attachments: TransactionBehindStopNodeTest.java
>
>
> I started a grid node and executed a transaction over some cache. Then I 
> stopped the node in the middle of the transaction execution and got a 
> transaction execution exception:
> {noformat}
> java.lang.IllegalStateException: class 
> org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to 
> perform cache operation (cache is stopped): cache
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.enter(GridCacheGateway.java:164)
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onEnter(GatewayProtectedCacheProxy.java:1656)
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:869)
>   at 
> org.apache.ignite.TransactionBehindStopNodeTest.testOneNode(TransactionBehindStopNodeTest.java:56)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> although I stopped the node with the _false_ {{cancel}} flag.
> {code}
> G.stop(getTestIgniteInstanceName(0), false);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6867) Implement new JMX metrics for topology monitoring

2017-11-16 Thread Pavel Pereslegin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255220#comment-16255220
 ] 

Pavel Pereslegin commented on IGNITE-6867:
--

[~alex_pl], [~avinogradov], 
please review this patch (pay attention to where the test is placed).

> Implement new JMX metrics for topology monitoring
> -
>
> Key: IGNITE-6867
> URL: https://issues.apache.org/jira/browse/IGNITE-6867
> Project: Ignite
>  Issue Type: New Feature
>  Security Level: Public(Viewable by anyone) 
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>  Labels: iep-6, jmx
>
> These additional metrics and methods should be implemented:
> * Current topology version
> * Total server nodes count
> * Total client nodes count
> * Method to count nodes filtered by some node attribute
> * Method to count nodes grouped by some node attribute
>  
> There is already a ticket to implement the first 2 metrics from this list 
> (IGNITE-6844)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-6171) Native facility to control excessive GC pauses

2017-11-16 Thread Dmitriy Sorokin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255216#comment-16255216
 ] 

Dmitriy Sorokin edited comment on IGNITE-6171 at 11/16/17 12:22 PM:


[~vozerov], [~avinogradov]
I think that we don't need to use a JNI method; we only need a standard thread 
that wakes up after a small fixed timeout (20 ms, for example), updates the time 
value with the current system time, and calculates the difference with the 
previous value.
If the difference differs significantly from the expected one, it means that our 
thread has been frozen for some time, and it does not matter whether it was an 
STW pause or some other cause of system response degradation.
Such a state, where our control thread is not running, cannot appear 
instantaneously, so we can detect system response degradation this way.


was (Author: cyberdemon):
I think that we don't need to use JNI method, we only need a standard thread 
that wakes up through a small fixed timeout (20 ms, for example) and updates 
the time value by current system time. with calculating the difference with the 
previous value.
If the difference with the previous value will differ significantly from the 
expected one, this will mean that our thread has been frozen some time, and it 
does not matter if it was a STW pause or other cause of the system response 
degradation.
The system state with our control thread non-running more can't happen 
instantaneously, so we can detect the fact of system response degradation by 
this way.

> Native facility to control excessive GC pauses
> --
>
> Key: IGNITE-6171
> URL: https://issues.apache.org/jira/browse/IGNITE-6171
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Vladimir Ozerov
>Assignee: Dmitriy Sorokin
>  Labels: iep-7, usability
>
> Ignite is a Java-based application. If a node experiences long GC pauses it may 
> negatively affect other nodes. We need to find a way to detect long GC pauses 
> within the process and trigger some actions in response, e.g. a node stop. 
> This is a kind of Inception \[1\]: you need to realize that you are asleep 
> while you are sleeping. As all Java threads are blocked on a safepoint, we 
> cannot use a Java thread to detect Java's GC. Native threads should be used instead.
> Proposed solution:
> 1) Thread 1 should periodically call a dummy JNI method returning the current 
> time, and set this time to a shared variable;
> 2) Thread 2 should periodically check that variable. If it has not been 
> changed for some time, most likely we are in a GC pause. Once a certain 
> threshold is reached, trigger a compensating action, whether this is a 
> warning, a process kill, or whatever.
> Justification: crossing native -> Java boundaries involves safepoints. This 
> way Thread 1 will be trapped if an STW pause is in progress. The Java method 
> cannot be empty, as the JVM is smart enough to reduce it to a no-op. 
> \[1\] http://www.imdb.com/title/tt1375666/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6171) Native facility to control excessive GC pauses

2017-11-16 Thread Dmitriy Sorokin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255216#comment-16255216
 ] 

Dmitriy Sorokin commented on IGNITE-6171:
-

I think that we don't need to use a JNI method; we only need a standard thread 
that wakes up after a small fixed timeout (20 ms, for example), updates the time 
value with the current system time, and calculates the difference with the 
previous value.
If the difference differs significantly from the expected one, it means that our 
thread has been frozen for some time, and it does not matter whether it was an 
STW pause or some other cause of system response degradation.
Such a state, where our control thread is not running, cannot appear 
instantaneously, so we can detect system response degradation this way.
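
For what it's worth, a self-contained sketch of that single-thread idea (the 20 ms 
period and 500 ms threshold are arbitrary values chosen for illustration, not a 
proposal):

{code:java}
// A single watchdog thread wakes up every ~20 ms and compares the actual gap between
// wake-ups with the expected one; a large gap means the whole JVM was frozen
// (an STW pause or some other stall), because this thread itself could not run.
public class PauseWatchdogSketch {
    public static void main(String[] args) {
        Thread watchdog = new Thread(() -> {
            long prev = System.nanoTime();

            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(20);
                }
                catch (InterruptedException e) {
                    return;
                }

                long now = System.nanoTime();
                long gapMs = (now - prev) / 1_000_000;

                if (gapMs > 500)  // expected ~20 ms; a much bigger gap means we were frozen
                    System.err.println("Possible STW/GC pause or system stall: " + gapMs + " ms");

                prev = now;
            }
        }, "pause-watchdog");

        watchdog.setDaemon(true);
        watchdog.start();

        // ... application work would go here ...
    }
}
{code}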

> Native facility to control excessive GC pauses
> --
>
> Key: IGNITE-6171
> URL: https://issues.apache.org/jira/browse/IGNITE-6171
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Vladimir Ozerov
>Assignee: Dmitriy Sorokin
>  Labels: iep-7, usability
>
> Ignite is a Java-based application. If a node experiences long GC pauses it may 
> negatively affect other nodes. We need to find a way to detect long GC pauses 
> within the process and trigger some actions in response, e.g. a node stop. 
> This is a kind of Inception \[1\]: you need to realize that you are asleep 
> while you are sleeping. As all Java threads are blocked on a safepoint, we 
> cannot use a Java thread to detect Java's GC. Native threads should be used instead.
> Proposed solution:
> 1) Thread 1 should periodically call a dummy JNI method returning the current 
> time, and set this time to a shared variable;
> 2) Thread 2 should periodically check that variable. If it has not been 
> changed for some time, most likely we are in a GC pause. Once a certain 
> threshold is reached, trigger a compensating action, whether this is a 
> warning, a process kill, or whatever.
> Justification: crossing native -> Java boundaries involves safepoints. This 
> way Thread 1 will be trapped if an STW pause is in progress. The Java method 
> cannot be empty, as the JVM is smart enough to reduce it to a no-op. 
> \[1\] http://www.imdb.com/title/tt1375666/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5755) Wrong msg: calculation of memory policy size

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255210#comment-16255210
 ] 

ASF GitHub Bot commented on IGNITE-5755:


GitHub user sergvolkov opened a pull request:

https://github.com/apache/ignite/pull/3044

IGNITE-5755 change calculation of memory policy size



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sergvolkov/ignite ignite-2.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3044.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3044


commit e593c768aa7dcbfed30f2592e5ea1e1ad35ded77
Author: SA.Volkov 
Date:   2017-11-16T12:10:22Z

IGNITE-5755 change calculation of memory policy size




> Wrong msg: calculation of memory policy size
> 
>
> Key: IGNITE-5755
> URL: https://issues.apache.org/jira/browse/IGNITE-5755
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Alexander Belyak
>Assignee: Sergey Volkov
>Priority: Trivial
>
> In PageMemoryNoStoreImpl:
> {noformat}
> throw new IgniteOutOfMemoryException("Not enough memory allocated 
> " +
> "(consider increasing memory policy size or enabling 
> evictions) " +
> "[policyName=" + memoryPolicyCfg.getName() +
> ", size=" + U.readableSize(memoryPolicyCfg.getMaxSize(), 
> true) + "]"
> {noformat}
> wrong usage of {{U.readableSize}} - we should use the non-SI multiplier (1024 
> instead of 1000). The right code is:
> {noformat}
> throw new IgniteOutOfMemoryException("Not enough memory allocated 
> " +
> "(consider increasing memory policy size or enabling 
> evictions) " +
> "[policyName=" + memoryPolicyCfg.getName() +
> ", size=" + U.readableSize(memoryPolicyCfg.getMaxSize(), 
> false) + "]"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6924) CacheStoreSessionListener#onSessionStart() is not called in case of 'WriteBehind' mode is enabled and 'writeCache' size exceeds critical size.

2017-11-16 Thread Vyacheslav Koptilin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255196#comment-16255196
 ] 

Vyacheslav Koptilin commented on IGNITE-6924:
-

pull-request: https://github.com/apache/ignite/pull/3040/
TeamCity looks good enough 
https://ci.ignite.apache.org/project.html?projectId=Ignite20Tests_Ignite20Tests=pull%2F3040%2Fhead

> CacheStoreSessionListener#onSessionStart() is not called in case of 
> 'WriteBehind' mode is enabled and 'writeCache' size exceeds critical size.
> --
>
> Key: IGNITE-6924
> URL: https://issues.apache.org/jira/browse/IGNITE-6924
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: cache
>Affects Versions: 2.4
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6930) Optionally to do not write free list updates to WAL

2017-11-16 Thread Sergey Puchnin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Puchnin updated IGNITE-6930:
---
Labels: IEP-8 iep-1  (was: iep-1)

> Optionally to do not write free list updates to WAL
> ---
>
> Key: IGNITE-6930
> URL: https://issues.apache.org/jira/browse/IGNITE-6930
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>  Labels: IEP-8, iep-1
> Fix For: 2.4
>
>
> When a cache entry is created, we need to update the free list. When an entry 
> is updated, we need to update the free list(s) several times. Currently the 
> free list is a persistent structure, so every update to it must be logged to be 
> able to recover after a crash. This may incur significant overhead, especially 
> for small entries.
> E.g. this is how the WAL for a single update looks. "D" - updates with real 
> data, "F" - free-list management:
> {code}
>  1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
> [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
> [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
> order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
> [size=0, chainSize=0, pos=null, type=DATA_RECORD]]
>  2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010006, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010006, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
> pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
> type=DATA_PAGE_INSERT_RECORD]]]
>  4. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010008, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
>  5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
>  6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010004, super=WALRecord [size=47, 
> chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]]
>  7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=30, 
> chainSize=0, pos=null, type=DATA_PAGE_REMOVE_RECORD]]]
>  8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010008, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010008, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> 10. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010006, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
> 11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> {code}
> If you sum up all the space required for the operation (the size in p.3 is 
> shown incorrectly here), you will see that the data update required ~300 bytes, 
> and so did the free list update! 
> *Proposed solution*
> 1) Optionally do not write free list updates to WAL.
> 2) In case of a node restart we start with empty free lists, so data inserts 
> will have to allocate new pages.
> 3) When an old data page is read, add it to the free list.
> 4) Start a background thread which will iterate over all old data pages and 
> re-create the free list, so that eventually all data pages are tracked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5846) Add support of distributed matrices for OLS regression.

2017-11-16 Thread Oleg Ignatenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253662#comment-16253662
 ] 

Oleg Ignatenko edited comment on IGNITE-5846 at 11/16/17 11:50 AM:
---

Reviewed pull request #3030.

{{DistributedRegressionExample}} runs fine and the changes made to it look good; in 
particular, renaming the {{nobs}} and {{nvars}} variables is appreciated.

All unit tests in the ML suite passed on my machine.

Changes made to the code in the ml module look good; I only noticed a few minor 
formal issues worth correcting (details are provided in the [pull request 
comments|https://github.com/apache/ignite/pull/3030#issuecomment-344899648]).

Overall the changes look very good; I strongly recommend merging them.


was (Author: oignatenko):
checking pull request #3030 (in progress). {{DistributedRegressionExample}} 
runs fine and changes done to it look good, in particular renaming the {{nobs}} 
and {{nvars}} variables is appreciated. All unit tests in ML suite passed on my 
machine.

> Add support of distributed matrices for OLS regression.
> ---
>
> Key: IGNITE-5846
> URL: https://issues.apache.org/jira/browse/IGNITE-5846
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Yury Babak
>Assignee: Aleksey Zinoviev
>
> Currently OLS regression works only with local matrices.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6930) Optionally to do not write free list updates to WAL

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6930:

Labels: iep-1  (was: )

> Optionally to do not write free list updates to WAL
> ---
>
> Key: IGNITE-6930
> URL: https://issues.apache.org/jira/browse/IGNITE-6930
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>  Labels: iep-1
> Fix For: 2.4
>
>
> When a cache entry is created, we need to update the free list. When an entry 
> is updated, we need to update the free list(s) several times. Currently the 
> free list is a persistent structure, so every update to it must be logged to be 
> able to recover after a crash. This may incur significant overhead, especially 
> for small entries.
> E.g. this is how the WAL for a single update looks. "D" - updates with real 
> data, "F" - free-list management:
> {code}
>  1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
> [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
> [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
> order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
> [size=0, chainSize=0, pos=null, type=DATA_RECORD]]
>  2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010006, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010006, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
> pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
> type=DATA_PAGE_INSERT_RECORD]]]
>  4. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010008, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
>  5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
>  6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010004, super=WALRecord [size=47, 
> chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]]
>  7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=30, 
> chainSize=0, pos=null, type=DATA_PAGE_REMOVE_RECORD]]]
>  8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010008, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010008, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> 10. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010006, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
> 11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> {code}
> If you sum up all the space required for the operation (the size in p.3 is 
> shown incorrectly here), you will see that the data update required ~300 bytes, 
> and so did the free list update! 
> *Proposed solution*
> 1) Optionally do not write free list updates to WAL.
> 2) In case of a node restart we start with empty free lists, so data inserts 
> will have to allocate new pages.
> 3) When an old data page is read, add it to the free list.
> 4) Start a background thread which will iterate over all old data pages and 
> re-create the free list, so that eventually all data pages are tracked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6933) Consider executing updates in-place when SQL indexes are present or object size is smaller than the old object size

2017-11-16 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6933:


 Summary: Consider executing updates in-place when SQL indexes are 
present or object size is smaller than the old object size
 Key: IGNITE-6933
 URL: https://issues.apache.org/jira/browse/IGNITE-6933
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
Reporter: Alexey Goncharuk


Currently, we are able to execute in-place updates only when the new object size 
exactly matches the old object size and SQL indexes are absent. Otherwise we pay a 
significant performance cost because of extra FreeList updates and extra index 
updates even when only one field has changed.

A few possibilities which should be carefully examined:
1) Allow in-place updates if a new object size is smaller than the old object 
size (allow some space leak into the data page) and defer FreeList update to 
some later time (introduce a threshold?)
2) Examine (or propagate) the list of changed fields and allow in-place updates 
if indexed fields did not change
3) Investigate if we can implement an in-place update even with indexed fields 
- first, remove the value from indexes, then update, then insert value to 
indexes (note that there is a window when the value is not accessible via 
indexes in this case, so this is not a safe option). Also, do not update 
indexes if the indexed value did not change



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6932) GridQueryProcessor.querySqlFieldsNoCache should check for cluster active/inactive state

2017-11-16 Thread Taras Ledkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov updated IGNITE-6932:
-
Summary: GridQueryProcessor.querySqlFieldsNoCache should check for cluster 
active/inactive state  (was: GridQueryProcessorюquerySqlFieldsNoCache should 
check for cluster active/inactive state)

> GridQueryProcessor.querySqlFieldsNoCache should check for cluster 
> active/inactive state
> ---
>
> Key: IGNITE-6932
> URL: https://issues.apache.org/jira/browse/IGNITE-6932
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
> Fix For: 2.4
>
>
> Currently we do not check for this, so the operation may hang silently. Let's 
> add a check for this before starting query execution. See 
> {{IgniteKernal.checkClusterState}} and its usages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6932) GridQueryProcessorюquerySqlFieldsNoCache should check for cluster active/inactive state

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6932:
---

 Summary: GridQueryProcessorюquerySqlFieldsNoCache should check for 
cluster active/inactive state
 Key: IGNITE-6932
 URL: https://issues.apache.org/jira/browse/IGNITE-6932
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: sql
Reporter: Vladimir Ozerov
Assignee: Taras Ledkov
 Fix For: 2.4


Currently we do not check for this, so the operation may hang silently. Let's add a 
check for this before starting query execution. See 
{{IgniteKernal.checkClusterState}} and its usages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6931) Avoid extra deserialization on index rebuild

2017-11-16 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-6931:


 Summary: Avoid extra deserialization on index rebuild
 Key: IGNITE-6931
 URL: https://issues.apache.org/jira/browse/IGNITE-6931
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
  Components: cache, sql
Affects Versions: 2.3
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov
 Fix For: 2.4


Currently on index rebuild we open a cursor over cache keys and for each key 
get the full row instead of using the already extracted one. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6085) SQL: JOIN with multiple conditions is extremely slow

2017-11-16 Thread Roman Kondakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255155#comment-16255155
 ] 

Roman Kondakov commented on IGNITE-6085:


This problem occurs only with the OR condition in the ON clause. If you change 
the condition to AND, the query works well. The problem could be in the method 

{code:java}
public void createIndexConditions(Session session, TableFilter filter) {
if (andOrType == AND) {
left.createIndexConditions(session, filter);
right.createIndexConditions(session, filter);
}
}
{code}

in the {{ConditionAndOr}} class. I asked a question about it in the H2 issue 
tracker: https://github.com/h2database/h2database/issues/670. Waiting for a 
response now.


> SQL: JOIN with multiple conditions is extremely slow
> 
>
> Key: IGNITE-6085
> URL: https://issues.apache.org/jira/browse/IGNITE-6085
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.1
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
>  Labels: performance
> Fix For: 2.4
>
>
> Consider the following query:
> {code}
> SELECT ... FROM A a
> INNER JOIN B b ON b.id = a.foreign_id1 OR b.id = a.foreign_id2
> {code}
> In this case H2 cannot use indexes on the {{foreign_id1}} or {{foreign_id2}} 
> columns and query execution takes an extraordinary amount of time. A known 
> workaround for the problem is to apply multiple JOINs, e.g.:
> {code}
> SELECT ... FROM A a
> LEFT OUTER JOIN B b1 ON b1.id = a.foreign_id1 
> LEFT OUTER JOIN B b2 ON b2.id = a.foreign_id2
> WHERE b1.id IS NOT NULL AND b2.id IS NOT NULL
> {code}
> In a single real-world scenario this improved execution time by a factor of 500 
> (from 4s to 80ms).
> Something is terribly wrong here. Probably H2 cannot perform the necessary query 
> rewrite, or cannot figure out how to use the index. Let's find a way to fix that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6930) Optionally to do not write free list updates to WAL

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6930:

Summary: Optionally to do not write free list updates to WAL  (was: 
Non-persistent free lists)

> Optionally to do not write free list updates to WAL
> ---
>
> Key: IGNITE-6930
> URL: https://issues.apache.org/jira/browse/IGNITE-6930
> Project: Ignite
>  Issue Type: Task
>  Security Level: Public(Viewable by anyone) 
>  Components: cache
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
> Fix For: 2.4
>
>
> When a cache entry is created, we need to update the free list. When an entry 
> is updated, we need to update the free list(s) several times. Currently the 
> free list is a persistent structure, so every update to it must be logged to 
> be able to recover after a crash. This may incur significant overhead, 
> especially for small entries.
> E.g. this is what the WAL for a single update looks like. "D" - updates with 
> real data, "F" - free-list management:
> {code}
>  1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
> [idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
> [cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
> order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
> [size=0, chainSize=0, pos=null, type=DATA_RECORD]]
>  2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010006, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010006, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
> pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
> type=DATA_PAGE_INSERT_RECORD]]]
>  4. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010008, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
>  5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
>  6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010004, super=WALRecord [size=47, 
> chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]]
>  7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=30, 
> chainSize=0, pos=null, type=DATA_PAGE_REMOVE_RECORD]]]
>  8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
> pageId=00010008, grpId=94416770, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010008, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=PAGES_LIST_REMOVE_PAGE]]]
>  9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord 
> [grpId=94416770, pageId=00010005, super=WALRecord [size=37, 
> chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> 10. [F] PagesListAddPageRecord [dataPageId=00010005, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010006, 
> super=WALRecord [size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
> 11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, 
> super=PageDeltaRecord [grpId=94416770, pageId=00010005, 
> super=WALRecord [size=37, chainSize=0, pos=null, 
> type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
> {code}
> If you sum all the space required for the operation (the size in p.3 is shown 
> incorrectly here), you will see that the data update required ~300 bytes, and 
> so did the free list update!
> *Proposed solution*
> 1) Optionally do not write free list updates to WAL
> 2) In case of node restart we start with empty free lists, so data inserts 
> will have to allocate new pages
> 3) When old data page is read, add it to the free list
> 4) Start a background thread which will iterate over all old data pages and 
> re-create the free list, so that eventually all data pages are tracked.
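
A purely conceptual sketch of step 4 of the proposed solution above; all types here 
({{DataPage}}, {{FreeList}}) are hypothetical stand-ins, not Ignite's real 
page-memory API. The idea is only that a background task scans the existing data 
pages and registers those with free space in an in-memory free list that is never 
logged to the WAL.

{code:java}
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Queue;

public class FreeListRebuildTask implements Runnable {
    /** Hypothetical view of a data page: its id and remaining free space. */
    interface DataPage {
        long id();
        int freeSpace();
    }

    /** Hypothetical in-memory free list that is rebuilt from scratch after a restart. */
    static class FreeList {
        private final Queue<Long> pageIds = new ConcurrentLinkedQueue<>();

        void add(DataPage page) {
            // Track only pages that can still accept data.
            if (page.freeSpace() > 0)
                pageIds.add(page.id());
        }
    }

    private final List<DataPage> allDataPages;
    private final FreeList freeList;

    FreeListRebuildTask(List<DataPage> allDataPages, FreeList freeList) {
        this.allDataPages = allDataPages;
        this.freeList = freeList;
    }

    /** Iterates over all old data pages so that eventually every page is tracked again. */
    @Override public void run() {
        for (DataPage page : allDataPages)
            freeList.add(page);
    }
}
{code}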



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6930) Non-persistent free lists

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-6930:

Description: 
When a cache entry is created, we need to update the free list. When an entry is 
updated, we need to update the free list(s) several times. Currently the free list 
is a persistent structure, so every update to it must be logged to be able to 
recover after a crash. This may incur significant overhead, especially for small 
entries.

E.g. this is what the WAL for a single update looks like. "D" - updates with real 
data, "F" - free-list management:
{code}
 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
[idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
[cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
[size=0, chainSize=0, pos=null, type=DATA_RECORD]]
 2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
pageId=00010006, grpId=94416770, super=PageDeltaRecord [grpId=94416770, 
pageId=00010006, super=WALRecord [size=37, chainSize=0, pos=null, 
type=PAGES_LIST_REMOVE_PAGE]]]
 3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
type=DATA_PAGE_INSERT_RECORD]]]
 4. [F] PagesListAddPageRecord [dataPageId=00010005, 
super=PageDeltaRecord [grpId=94416770, pageId=00010008, super=WALRecord 
[size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
 5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, 
super=PageDeltaRecord [grpId=94416770, pageId=00010005, super=WALRecord 
[size=37, chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
 6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord 
[grpId=94416770, pageId=00010004, super=WALRecord [size=47, 
chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]]
 7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord [grpId=94416770, 
pageId=00010005, super=WALRecord [size=30, chainSize=0, pos=null, 
type=DATA_PAGE_REMOVE_RECORD]]]
 8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
pageId=00010008, grpId=94416770, super=PageDeltaRecord [grpId=94416770, 
pageId=00010008, super=WALRecord [size=37, chainSize=0, pos=null, 
type=PAGES_LIST_REMOVE_PAGE]]]
 9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord 
[grpId=94416770, pageId=00010005, super=WALRecord [size=37, 
chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
10. [F] PagesListAddPageRecord [dataPageId=00010005, 
super=PageDeltaRecord [grpId=94416770, pageId=00010006, super=WALRecord 
[size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, 
super=PageDeltaRecord [grpId=94416770, pageId=00010005, super=WALRecord 
[size=37, chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
{code}

If you sum all the space required for the operation (the size in p.3 is shown 
incorrectly here), you will see that the data update required ~300 bytes, and so 
did the free list update!

*Proposed solution*
1) Optionally do not write free list updates to WAL
2) In case of node restart we start with empty free lists, so data inserts will 
have to allocate new pages
3) When old data page is read, add it to the free list
4) Start a background thread which will iterate over all old data pages and 
re-create the free list, so that eventually all data pages are tracked.

  was:
When a cache entry is created, we need to update the free list. When an entry is 
updated, we need to update the free list(s) several times. Currently the free list 
is a persistent structure, so every update to it must be logged to be able to 
recover after a crash. This may incur significant overhead, especially for small 
entries.

E.g. this is what the WAL for a single update looks like. "D" - updates with real 
data, "F" - free-list management:
{code}
 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
[idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
[cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
[size=0, chainSize=0, pos=null, type=DATA_RECORD]]
 2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
pageId=00010006, grpId=94416770, super=PageDeltaRecord [grpId=94416770, 
pageId=00010006, super=WALRecord [size=37, chainSize=0, pos=null, 
type=PAGES_LIST_REMOVE_PAGE]]]
 3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
type=DATA_PAGE_INSERT_RECORD]]]
 4. [F] PagesListAddPageRecord 

[jira] [Created] (IGNITE-6930) Non-persistent free lists

2017-11-16 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6930:
---

 Summary: Non-persistent free lists
 Key: IGNITE-6930
 URL: https://issues.apache.org/jira/browse/IGNITE-6930
 Project: Ignite
  Issue Type: Task
  Security Level: Public (Viewable by anyone)
  Components: cache
Reporter: Vladimir Ozerov
Assignee: Taras Ledkov
 Fix For: 2.4


When a cache entry is created, we need to update the free list. When an entry is 
updated, we need to update the free list(s) several times. Currently the free list 
is a persistent structure, so every update to it must be logged to be able to 
recover after a crash. This may incur significant overhead, especially for small 
entries.

E.g. this is what the WAL for a single update looks like. "D" - updates with real 
data, "F" - free-list management:
{code}
 1. [D] DataRecord [writeEntries=[UnwrapDataEntry[k = key, v = [ BinaryObject 
[idHash=2053299190, hash=1986931360, typeId=-1580729813]], super = [DataEntry 
[cacheId=94416770, op=UPDATE, writeVer=GridCacheVersion [topVer=122147562, 
order=1510667560607, nodeOrder=1], partId=0, partCnt=4, super=WALRecord 
[size=0, chainSize=0, pos=null, type=DATA_RECORD]]
 2. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
pageId=00010006, grpId=94416770, super=PageDeltaRecord [grpId=94416770, 
pageId=00010006, super=WALRecord [size=37, chainSize=0, pos=null, 
type=PAGES_LIST_REMOVE_PAGE]]]
 3. [D] DataPageInsertRecord [super=PageDeltaRecord [grpId=94416770, 
pageId=00010005, super=WALRecord [size=129, chainSize=0, pos=null, 
type=DATA_PAGE_INSERT_RECORD]]]
 4. [F] PagesListAddPageRecord [dataPageId=00010005, 
super=PageDeltaRecord [grpId=94416770, pageId=00010008, super=WALRecord 
[size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
 5. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710664, 
super=PageDeltaRecord [grpId=94416770, pageId=00010005, super=WALRecord 
[size=37, chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
 6. [D] ReplaceRecord [io=DataLeafIO[ver=1], idx=0, super=PageDeltaRecord 
[grpId=94416770, pageId=00010004, super=WALRecord [size=47, 
chainSize=0, pos=null, type=BTREE_PAGE_REPLACE]]]
 7. [F] DataPageRemoveRecord [itemId=0, super=PageDeltaRecord [grpId=94416770, 
pageId=00010005, super=WALRecord [size=30, chainSize=0, pos=null, 
type=DATA_PAGE_REMOVE_RECORD]]]
 8. [F] PagesListRemovePageRecord [rmvdPageId=00010005, 
pageId=00010008, grpId=94416770, super=PageDeltaRecord [grpId=94416770, 
pageId=00010008, super=WALRecord [size=37, chainSize=0, pos=null, 
type=PAGES_LIST_REMOVE_PAGE]]]
 9. [F] DataPageSetFreeListPageRecord [freeListPage=0, super=PageDeltaRecord 
[grpId=94416770, pageId=00010005, super=WALRecord [size=37, 
chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
10. [F] PagesListAddPageRecord [dataPageId=00010005, 
super=PageDeltaRecord [grpId=94416770, pageId=00010006, super=WALRecord 
[size=37, chainSize=0, pos=null, type=PAGES_LIST_ADD_PAGE]]]
11. [F] DataPageSetFreeListPageRecord [freeListPage=281474976710662, 
super=PageDeltaRecord [grpId=94416770, pageId=00010005, super=WALRecord 
[size=37, chainSize=0, pos=null, type=DATA_PAGE_SET_FREE_LIST_PAGE]]]
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6590) TCP port in bin/control.sh differs from default

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255072#comment-16255072
 ] 

ASF GitHub Bot commented on IGNITE-6590:


GitHub user MikeZhur opened a pull request:

https://github.com/apache/ignite/pull/3042

IGNITE-6590: TCP port in bin/control.sh differs from default



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MikeZhur/ignite ignite-2.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3042.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3042


commit 31b00d5edb48379aa7652ce3fcf88f3cda0a2821
Author: Mike Zhuravlev 
Date:   2017-11-15T11:17:00Z

IGNITE-6590: TCP port in bin/control.sh differs from default




> TCP port in bin/control.sh differs from default
> ---
>
> Key: IGNITE-6590
> URL: https://issues.apache.org/jira/browse/IGNITE-6590
> Project: Ignite
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Mike Zhuravlev
>
> {code}
> % bin/ignite.sh -v
> >>> Local ports: TCP:8081 TCP:10800 TCP:11211 TCP:47100 TCP:47500 
> % bin/control.sh --host 127.0.0.1
> Oct 10, 2017 3:01:26 PM org.apache.ignite.internal.client.impl.GridClientImpl 
> 
> WARNING: Failed to initialize topology on client start. Will retry in 
> background.
> Caused by: class 
> org.apache.ignite.internal.client.GridServerUnreachableException: Failed to 
> connect to any of the servers in list: [/127.0.0.1:11212]
> {code}
> 11212 != 11211, but it's very hard to spot visually. Please fix control.sh to 
> use the correct port by default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6618) Web console: Do not show client nodes in node selection modal

2017-11-16 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255028#comment-16255028
 ] 

Pavel Konstantinov commented on IGNITE-6618:


Re-tested

> Web console: Do not show client nodes in node selection modal
> -
>
> Key: IGNITE-6618
> URL: https://issues.apache.org/jira/browse/IGNITE-6618
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 2.1
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
> Fix For: 2.4
>
>
> I tried to 'Execute on Selected Node' the following query
> {code}
> SELECT c.id, d.id, p.id, p.salary 
> FROM "c_partitioned".City c
> inner join "c_partitioned".Department d
> on c.id=d.CTYID 
> inner join "c_partitioned".Person p
> on d.id=p.depID and p.salary > 5000
> inner join "c_partitioned".PersonBonus pb
> on p.id=pb.perID and pb.COUNT  < 5000
> where exists (select * from "c_partitioned".Person where rank > 0)
> {code}
> and selected a client node in the list of nodes
> and got exception
> {code}
> General error: "java.lang.NullPointerException"; SQL statement: SELECT c.id, 
> d.id, p.id, p.salary 
>  FROM "c_partitioned".City c
>  inner join "c_partitioned".Department d
>  on c.id=d.CTYID 
>  inner join "c_partitioned".Person p
>  on d.id=p.depID and p.salary > 5000
>  inner join "c_partitioned".PersonBonus pb
>  on p.id=pb.perID and pb.COUNT < 5000
>  where exists (select * from "c_partitioned".Person where rank > 0) 
> [5-195]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6893) Provide metric to monitor Java Deadlocks

2017-11-16 Thread Andrey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kuznetsov reassigned IGNITE-6893:


Assignee: Andrey Kuznetsov

> Provide metric to monitor Java Deadlocks 
> -
>
> Key: IGNITE-6893
> URL: https://issues.apache.org/jira/browse/IGNITE-6893
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>Reporter: Anton Vinogradov
>Assignee: Andrey Kuznetsov
>  Labels: iep-7
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6858) Wait for exchange inside GridReduceQueryExecutor.query which never finishes due to opened transaction

2017-11-16 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255006#comment-16255006
 ] 

Alexei Scherbakov commented on IGNITE-6858:
---

Latest tc result: 
https://ci.ignite.apache.org/viewLog.html?buildId=944130=buildResultsDiv=Ignite20Tests_RunAll

> Wait for exchange inside GridReduceQueryExecutor.query which never finishes 
> due to opened transaction
> -
>
> Key: IGNITE-6858
> URL: https://issues.apache.org/jira/browse/IGNITE-6858
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Affects Versions: 2.3
>Reporter: Alexandr Kuramshin
>Assignee: Alexei Scherbakov
> Fix For: 2.4
>
>
> Infinite waiting in loop
> {noformat}
> for (int attempt = 0;; attempt++) {
> if (attempt != 0) {
> try {
> Thread.sleep(attempt * 10); // Wait for exchange.
> }
> catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> throw new CacheException("Query was interrupted.", e);
> }
> }
> {noformat}
> because the exchange will wait for partition eviction while there is an open 
> transaction in a related thread
> {noformat}
> at java.lang.Thread.sleep(Native Method)
> at 
> o.a.i.i.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:546)
> at 
> o.a.i.i.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1236)
> at 
> o.a.i.i.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-6904) SQL: partition reservations are released too early in lazy mode

2017-11-16 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov reassigned IGNITE-6904:
---

Assignee: Roman Kondakov  (was: Kirill Shirokov)

> SQL: partition reservations are released too early in lazy mode
> ---
>
> Key: IGNITE-6904
> URL: https://issues.apache.org/jira/browse/IGNITE-6904
> Project: Ignite
>  Issue Type: Bug
>  Security Level: Public(Viewable by anyone) 
>  Components: sql
>Affects Versions: 2.3
>Reporter: Vladimir Ozerov
>Assignee: Roman Kondakov
> Fix For: 2.4
>
>
> In lazy mode we advance query execution as new page requests arrive. However, 
> method {{GridMapQueryExecutor#onQueryRequest0}} releases partition 
> reservations as soon as the very first page is processed:
> {code}
> finally {
> GridH2QueryContext.clearThreadLocal();
> if (distributedJoinMode == OFF)
> qctx.clearContext(false);
> }
> {code}
> It means that incorrect results may be returned on an unstable topology. We 
> need to release partitions only after the whole query is executed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-6928) Web console: use redesigned context menus everywhere

2017-11-16 Thread Vica Abramova (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vica Abramova updated IGNITE-6928:
--
Component/s: UI

> Web console: use redesigned context menus everywhere
> 
>
> Key: IGNITE-6928
> URL: https://issues.apache.org/jira/browse/IGNITE-6928
> Project: Ignite
>  Issue Type: Improvement
>  Security Level: Public(Viewable by anyone) 
>  Components: UI, wizards
>Reporter: Ilya Borisov
>Assignee: Ilya Borisov
>Priority: Minor
>
> At the moment the web console has two distinct context menu styles. 
> Eventually, all older context menus have to be replaced with the newer ones.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6496) Client node does not reconnect to server node when the latter is restarted.

2017-11-16 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254974#comment-16254974
 ] 

Alexey Goncharuk commented on IGNITE-6496:
--

I do not think that a silent semaphore release is a good option for client 
disconnect. If a client disconnects, the semaphores should throw something 
like SemaphoreBrokenException and force the user to re-create the semaphore. 
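
A hedged, user-side sketch of that behavior. {{SemaphoreBrokenException}} does not 
exist yet, so the generic {{IgniteException}} is used as a stand-in here; the point 
is only the re-create-on-failure pattern, not any real Ignite internals.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteException;
import org.apache.ignite.IgniteSemaphore;

public class SemaphoreReacquire {
    /** Acquires the named semaphore, re-creating it if the previous instance was broken by a disconnect. */
    static IgniteSemaphore acquireOrRecreate(Ignite ignite, String name) {
        IgniteSemaphore sem = ignite.semaphore(name, 1, true, true);

        try {
            sem.acquire();
        }
        catch (IgniteException e) {
            // Proposed behavior: after a client disconnect the old semaphore is "broken",
            // so drop it and acquire a freshly created instance once the client reconnects.
            sem = ignite.semaphore(name, 1, true, true);
            sem.acquire();
        }

        return sem;
    }
}
{code}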

> Client node does not reconnect to server node when the latter is restarted.
> ---
>
> Key: IGNITE-6496
> URL: https://issues.apache.org/jira/browse/IGNITE-6496
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 1.9
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
> Attachments: ExampleNodeStartup.java
>
>
> The following scenario may result in deadlock on the client node:
>  - start server node and one client
>  - client node invokes the IgniteQueue#take() method, which requires acquiring 
> both GridCacheGateway#readLock and GridCacheQueueAdapter#readSem.
>  - client node disconnects from the cluster for some reason (for example, 
> the server node was stopped)
> in that case, GridCacheQueueAdapter does not release readSem and, 
> therefore, GridCacheGateway#readLock is also not released.
>  - 'tcp-client-disco-msg-worker' hangs because it is unable to acquire 
> GridCacheGateway.writeLock
> "tcp-client-disco-msg-worker-#10%datastructures.GridCacheQueueClientReconnect1%"
>  #101 prio=5 os_prio=0 tid=0x21634000 nid=0x49a8 waiting on condition 
> [0x3a61e000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for  <0x00076fa988f8> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
> at 
> org.apache.ignite.internal.util.StripedCompositeReadWriteLock$WriteLock.lock0(StripedCompositeReadWriteLock.java:154)
> at 
> org.apache.ignite.internal.util.StripedCompositeReadWriteLock$WriteLock.lock(StripedCompositeReadWriteLock.java:123)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheGateway.writeLock(GridCacheGateway.java:278)
> at 
> org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:3873)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:768)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:573)
> locked <0x00076f9f4048> (a java.lang.Object)
> at 
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2415)
> at 
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.notifyDiscovery(ClientImpl.java:2394)
> at 
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1710)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6928) Web console: use redesigned context menus everywhere

2017-11-16 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-6928:


 Summary: Web console: use redesigned context menus everywhere
 Key: IGNITE-6928
 URL: https://issues.apache.org/jira/browse/IGNITE-6928
 Project: Ignite
  Issue Type: Improvement
  Security Level: Public (Viewable by anyone)
  Components: wizards
Reporter: Ilya Borisov
Assignee: Ilya Borisov
Priority: Minor


At the moment the web console has two distinct context menu styles. Eventually, 
all older context menus have to be replaced with the newer ones.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6337) .NET: Thin client: SQL queries

2017-11-16 Thread Vladimir Ozerov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254936#comment-16254936
 ] 

Vladimir Ozerov commented on IGNITE-6337:
-

[~ptupitsyn], I see one problem in 
{{ClientCacheSqlFieldsQueryRequest#process}}: if the cache doesn't exist, we ignore 
that fact and use the schema as is, which means that the query will be executed 
against the default schema silently - definitely not what the user expected. Also, 
we should distinguish the cases when the cache ID is set and when it is not.
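
A hedged sketch of only the decision logic being suggested; the helper and its 
names are hypothetical, not the real {{ClientCacheSqlFieldsQueryRequest}} code. A 
set-but-unknown cache ID should fail the request, and the schema should be derived 
from the cache only when the cache ID is actually set.

{code:java}
public class SchemaResolution {
    /** Resolves the SQL schema for a thin-client query request (hypothetical helper). */
    static String resolveSchema(boolean cacheIdSet, String existingCacheName, String explicitSchema) {
        if (cacheIdSet) {
            if (existingCacheName == null)
                throw new IllegalStateException("Cache does not exist for the given cache id.");

            // Simplified stand-in for deriving the schema from the cache configuration.
            return existingCacheName.toUpperCase();
        }

        // No cache id was sent: use the schema explicitly provided by the client.
        return explicitSchema;
    }
}
{code}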

> .NET: Thin client: SQL queries
> --
>
> Key: IGNITE-6337
> URL: https://issues.apache.org/jira/browse/IGNITE-6337
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.4
>
>
> SQL and Fields queries in thin client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6336) .NET: Thin client: Create cache

2017-11-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254932#comment-16254932
 ] 

ASF GitHub Bot commented on IGNITE-6336:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2935


> .NET: Thin client: Create cache
> ---
>
> Key: IGNITE-6336
> URL: https://issues.apache.org/jira/browse/IGNITE-6336
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.4
>
>
> Create, destroy and observe caches from the thin client (by name and from 
> {{CacheConfiguration}}).
> * {{IIgniteClient.CreateCache}}, {{GetOrCreateCache}} overloads
> * {{ICacheClient.GetConfiguration}}
> * {{IIgnite.GetCacheNames}}
> * {{IIgniteClient.DestroyCache}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-6336) .NET: Thin client: Create cache

2017-11-16 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254931#comment-16254931
 ] 

Pavel Tupitsyn commented on IGNITE-6336:


Merged to master: {{52b46c35fb4bf4f2b96bdc10673563843502fcbe}}.

> .NET: Thin client: Create cache
> ---
>
> Key: IGNITE-6336
> URL: https://issues.apache.org/jira/browse/IGNITE-6336
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.4
>
>
> Create, destroy and observe caches from the thin client (by name and from 
> {{CacheConfiguration}}).
> * {{IIgniteClient.CreateCache}}, {{GetOrCreateCache}} overloads
> * {{ICacheClient.GetConfiguration}}
> * {{IIgnite.GetCacheNames}}
> * {{IIgniteClient.DestroyCache}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4180) Web console: Query paragraph is not extended from collapsed state on move by "Scroll to query" link

2017-11-16 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-4180:


Assignee: Pavel Konstantinov

Could you retest?

> Web console: Query paragraph is not extended from collapsed state on move by 
> "Scroll to query" link
> ---
>
> Key: IGNITE-4180
> URL: https://issues.apache.org/jira/browse/IGNITE-4180
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Affects Versions: 1.8
>Reporter: Vasiliy Sisko
>Assignee: Pavel Konstantinov
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4398) Web console: incorrect authentication under IE 11 (windows 10)

2017-11-16 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-4398:


Assignee: Alexander Kalinin

> Web console: incorrect authentication under IE 11 (windows 10)
> --
>
> Key: IGNITE-4398
> URL: https://issues.apache.org/jira/browse/IGNITE-4398
> Project: Ignite
>  Issue Type: Bug
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexander Kalinin
>  Time Spent: 0.1m
>  Remaining Estimate: 0h
>
> To reproduce:
> 1) log in as User1
> 2) create some cluster and save
> 3) log out
> 4) log in as User2
> You will see the cluster from User1.
> Note: IE specific only



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (IGNITE-4703) Web Console: Correct the height of all the buttons.

2017-11-16 Thread Ilya Borisov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Borisov closed IGNITE-4703.


> Web Console: Correct the height of all the buttons.
> ---
>
> Key: IGNITE-4703
> URL: https://issues.apache.org/jira/browse/IGNITE-4703
> Project: Ignite
>  Issue Type: Bug
>  Components: UI, wizards
>Reporter: Vica Abramova
>Assignee: Alina Goncarko
> Attachments: btns.png
>
>
> See attached img.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (IGNITE-4700) Web Console: Font aren't Roboto Slab in the 'Project Structure' pop up window

2017-11-16 Thread Ilya Borisov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Borisov closed IGNITE-4700.


> Web Console: Font aren't Roboto Slab in the 'Project Structure' pop up window
> -
>
> Key: IGNITE-4700
> URL: https://issues.apache.org/jira/browse/IGNITE-4700
> Project: Ignite
>  Issue Type: Bug
>  Components: UI, wizards
>Reporter: Vica Abramova
> Attachments: structure.png
>
>
> See attached img. Need to change Verdana -> Roboto Slab (for all controls 
> like this).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (IGNITE-4698) Web Console: On the Configure Screen font of numbers in circles aren't Roboto Slab

2017-11-16 Thread Ilya Borisov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Borisov closed IGNITE-4698.


> Web Console: On the Configure Screen font of numbers in circles aren't Roboto 
> Slab
> --
>
> Key: IGNITE-4698
> URL: https://issues.apache.org/jira/browse/IGNITE-4698
> Project: Ignite
>  Issue Type: Bug
>  Components: UI, wizards
>Reporter: Vica Abramova
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4452) Web console: add execution time to results panel on Queries screen

2017-11-16 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-4452:


Assignee: Alexander Kalinin

> Web console: add execution time to results panel on Queries screen
> --
>
> Key: IGNITE-4452
> URL: https://issues.apache.org/jira/browse/IGNITE-4452
> Project: Ignite
>  Issue Type: Task
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexander Kalinin
>Priority: Trivial
>
> I think it may be useful to know the query's last execution time, especially 
> since we have a refresh rate option.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-4698) Web Console: On the Configure Screen font of numbers in circles aren't Roboto Slab

2017-11-16 Thread Ilya Borisov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Borisov resolved IGNITE-4698.
--
Resolution: Won't Do

The redesigned configuration screen won't have circled numbers in the navigation panel.

> Web Console: On the Configure Screen font of numbers in circles aren't Roboto 
> Slab
> --
>
> Key: IGNITE-4698
> URL: https://issues.apache.org/jira/browse/IGNITE-4698
> Project: Ignite
>  Issue Type: Bug
>  Components: UI, wizards
>Reporter: Vica Abramova
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4454) Web console: add information on query panel UI about node query was executed on

2017-11-16 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-4454:


Assignee: Alexander Kalinin  (was: Vica Abramova)

> Web console: add information on query panel  UI about node query was executed 
> on
> 
>
> Key: IGNITE-4454
> URL: https://issues.apache.org/jira/browse/IGNITE-4454
> Project: Ignite
>  Issue Type: Task
>  Components: wizards
>Reporter: Pavel Konstantinov
>Assignee: Alexander Kalinin
>Priority: Minor
>
> Currently we show only the query text and do not show the node in the case of 
> 'Execute on selected node'.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

