[jira] [Commented] (IGNITE-5439) JDBC thin: support query cancel

2018-12-25 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728924#comment-16728924
 ] 

Ignite TC Bot commented on IGNITE-5439:
---

{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=2645959&buildTypeId=IgniteTests24Java8_RunAll]

> JDBC thin: support query cancel
> ---
>
> Key: IGNITE-5439
> URL: https://issues.apache.org/jira/browse/IGNITE-5439
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc
>Affects Versions: 2.0
>Reporter: Taras Ledkov
>Assignee: Alexander Lapin
>Priority: Major
> Fix For: 2.8
>
>
> The JDBC {{Statement.cancel}} method must be supported.
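
For illustration, here is a minimal sketch of how a client could exercise
{{Statement.cancel}} against the thin driver once it is supported; the
connection URL, table and timing are assumptions, not taken from the ticket.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CancelSketch {
    public static void main(String[] args) throws Exception {
        // Thin-driver URL and table name are assumptions for illustration only.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Cancel the running statement from another thread after one second.
            new Thread(() -> {
                try {
                    Thread.sleep(1_000);
                    stmt.cancel(); // the call this ticket adds support for
                }
                catch (Exception ignored) {
                    // No-op for the sketch.
                }
            }).start();

            stmt.execute("SELECT COUNT(*) FROM SOME_LARGE_TABLE");
        }
    }
}
{code}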



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9607) Service Grid redesign - phase 1

2018-12-25 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728921#comment-16728921
 ] 

Ignite TC Bot commented on IGNITE-9607:
---

{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=2647608&buildTypeId=IgniteTests24Java8_RunAll]

> Service Grid redesign - phase 1
> ---
>
> Key: IGNITE-9607
> URL: https://issues.apache.org/jira/browse/IGNITE-9607
> Project: Ignite
>  Issue Type: Improvement
>  Components: managed services
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Major
> Fix For: 2.8
>
>
> This is an umbrella ticket for tasks which should be implemented atomically 
> in phase #1 of Service Grid redesign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-10819:
-
Description: 
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

It looks like the test started failing when IGNITE-10555 was merged to master ([two 
flaky 
failures|https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=-21180267941031641=8_IgniteTests24Java8=pull%2F5582%2Fhead]
 also occurred in the PR branch of that change).

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
 But sometimes the *client4* node manages to finish initialization of the 
exchange and hangs forever.

  was:
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

It looks like the test started failing when IGNITE-10555 was merged to master ([two 
flaky 
failures|https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=-21180267941031641=8_IgniteTests24Java8=pull%2F5582%2Fhead]
 also occurred in the PR branch of that change).

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
 But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.


> Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in 
> master recently
> --
>
> Key: IGNITE-10819
> URL: https://issues.apache.org/jira/browse/IGNITE-10819
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> As the [test 
> history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
>  in the master branch shows, the test has become flaky recently.
> It looks like the test started failing when IGNITE-10555 was merged to master 
> ([two flaky 
> failures|https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=-21180267941031641=8_IgniteTests24Java8=pull%2F5582%2Fhead]
>  also occurred in the PR branch of that change).
> The failure is a timeout: the *client4* node hangs waiting for PME to 
> complete. Communication failures are emulated in the test, and when all 
> clients fail to initialize an exchange on a specific affinity topology version 
> (major=7, minor=1) everything works fine.
>  But sometimes the *client4* node manages to finish initialization of the 
> exchange and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-10819:
-
Description: 
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

It looks like the test started failing when IGNITE-10555 was merged to master ([two 
flaky 
failures|https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=-21180267941031641=8_IgniteTests24Java8=pull%2F5582%2Fhead]
 also occurred in the PR branch of that change).

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
 But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.

  was:
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

The test started failing when IGNITE-10555 was merged to master.

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
 But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.


> Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in 
> master recently
> --
>
> Key: IGNITE-10819
> URL: https://issues.apache.org/jira/browse/IGNITE-10819
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> As the [test 
> history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
>  in the master branch shows, the test has become flaky recently.
> It looks like the test started failing when IGNITE-10555 was merged to master 
> ([two flaky 
> failures|https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=-21180267941031641=8_IgniteTests24Java8=pull%2F5582%2Fhead]
>  also occurred in the PR branch of that change).
> The failure is a timeout: the *client4* node hangs waiting for PME to 
> complete. Communication failures are emulated in the test, and when all 
> clients fail to initialize an exchange on a specific affinity topology version 
> (major=7, minor=1) everything works fine.
>  But sometimes the *client4* node manages to finish initializing the exchange 
> and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-10819:
-
Description: 
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

The test started failing when IGNITE-10555 was merged to master.

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
 But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.

  was:
As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

The test started failing when IGNITE-10555 was merged to master.

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.


> Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in 
> master recently
> --
>
> Key: IGNITE-10819
> URL: https://issues.apache.org/jira/browse/IGNITE-10819
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> As the [test 
> history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
>  in the master branch shows, the test has become flaky recently.
> The test started failing when IGNITE-10555 was merged to master.
> The failure is a timeout: the *client4* node hangs waiting for PME to 
> complete. Communication failures are emulated in the test, and when all 
> clients fail to initialize an exchange on a specific affinity topology version 
> (major=7, minor=1) everything works fine.
>  But sometimes the *client4* node manages to finish initializing the exchange 
> and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-25 Thread Ray Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Liu updated IGNITE-10314:
-
Description: 
When a user adds or removes a column via DDL, Spark gets the old (wrong) 
schema.

Analysis

Currently the Spark DataFrame API relies on QueryEntity to construct the 
schema, but the QueryEntity in QuerySchema is a local copy of the original 
QueryEntity, so the original QueryEntity is not updated when the table is 
modified.

Solution

Replace QueryEntity with GridQueryTypeDescriptor.

  was:
When a user adds or removes a column via DDL, Spark gets the old (wrong) 
schema.

Analysis

Currently the Spark DataFrame API relies on QueryEntity to construct the 
schema, but the QueryEntity in QuerySchema is a local copy of the original 
QueryEntity, so the original QueryEntity is not updated when the table is 
modified.

Solution

Get the latest schema using the JDBC thin driver's column metadata call, then 
update the fields in QueryEntity.


> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray Liu
>Assignee: Ray Liu
>Priority: Critical
> Fix For: 2.8
>
>
> When a user adds or removes a column via DDL, Spark gets the old (wrong) 
> schema.
>  
> Analysis
> Currently the Spark DataFrame API relies on QueryEntity to construct the 
> schema, but the QueryEntity in QuerySchema is a local copy of the original 
> QueryEntity, so the original QueryEntity is not updated when the table is 
> modified.
>  
> Solution
> Replace QueryEntity with GridQueryTypeDescriptor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-10819:


 Summary: Test 
IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master 
recently
 Key: IGNITE-10819
 URL: https://issues.apache.org/jira/browse/IGNITE-10819
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Chugunov
 Fix For: 2.8


As the [test 
history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
 in the master branch shows, the test has become flaky recently.

The test started failing when IGNITE-10555 was merged to master.

The failure is a timeout: the *client4* node hangs waiting for PME to 
complete. Communication failures are emulated in the test, and when all clients 
fail to initialize an exchange on a specific affinity topology version (major=7, 
minor=1) everything works fine.
But sometimes the *client4* node manages to finish initializing the exchange 
and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-10819:
-
Ignite Flags:   (was: Docs Required)

> Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in 
> master recently
> --
>
> Key: IGNITE-10819
> URL: https://issues.apache.org/jira/browse/IGNITE-10819
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> As the [test 
> history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
>  in the master branch shows, the test has become flaky recently.
> The test started failing when IGNITE-10555 was merged to master.
> The failure is a timeout: the *client4* node hangs waiting for PME to 
> complete. Communication failures are emulated in the test, and when all 
> clients fail to initialize an exchange on a specific affinity topology version 
> (major=7, minor=1) everything works fine.
> But sometimes the *client4* node manages to finish initializing the exchange 
> and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10819) Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in master recently

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-10819:
-
Labels: MakeTeamcityGreenAgain  (was: )

> Test IgniteClientRejoinTest.testClientsReconnectAfterStart became flaky in 
> master recently
> --
>
> Key: IGNITE-10819
> URL: https://issues.apache.org/jira/browse/IGNITE-10819
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> As the [test 
> history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-21180267941031641=testDetails_IgniteTests24Java8=%3Cdefault%3E]
>  in the master branch shows, the test has become flaky recently.
> The test started failing when IGNITE-10555 was merged to master.
> The failure is a timeout: the *client4* node hangs waiting for PME to 
> complete. Communication failures are emulated in the test, and when all 
> clients fail to initialize an exchange on a specific affinity topology version 
> (major=7, minor=1) everything works fine.
> But sometimes the *client4* node manages to finish initializing the exchange 
> and hangs forever.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10818) GridQueryTypeDescriptor should have cacheName and alias field

2018-12-25 Thread Ray Liu (JIRA)
Ray Liu created IGNITE-10818:


 Summary: GridQueryTypeDescriptor should have cacheName and alias 
field
 Key: IGNITE-10818
 URL: https://issues.apache.org/jira/browse/IGNITE-10818
 Project: Ignite
  Issue Type: Improvement
Reporter: Ray Liu
Assignee: Ray Liu


Currently, GridQueryTypeDescriptor doesn't have cacheName and alias fields.

We have to cast GridQueryTypeDescriptor to QueryTypeDescriptorImpl to get these 
two fields.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-25 Thread Ray Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728914#comment-16728914
 ] 

Ray Liu commented on IGNITE-10314:
--

Hello, [~NIzhikov]

I have implemented the fix; please review and comment.
The tests on TeamCity are all green.

Here's the link.

https://ci.ignite.apache.org/viewLog.html?buildId=2650060

> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray Liu
>Assignee: Ray Liu
>Priority: Critical
> Fix For: 2.8
>
>
> When a user adds or removes a column via DDL, Spark gets the old (wrong) 
> schema.
>  
> Analysis
> Currently the Spark DataFrame API relies on QueryEntity to construct the 
> schema, but the QueryEntity in QuerySchema is a local copy of the original 
> QueryEntity, so the original QueryEntity is not updated when the table is 
> modified.
>  
> Solution
> Get the latest schema using the JDBC thin driver's column metadata call, then 
> update the fields in QueryEntity.
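
For reference, a minimal sketch of the previously proposed approach quoted
above, i.e. reading the current column set through the JDBC thin driver's
metadata call; the connection URL, schema and table name are assumptions.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ColumnMetadataSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             // Column metadata reflects ALTER TABLE ADD/DROP COLUMN immediately,
             // unlike the cached QueryEntity the Spark integration reads today.
             ResultSet cols = conn.getMetaData().getColumns(null, "PUBLIC", "PERSON", null)) {
            while (cols.next())
                System.out.println(cols.getString("COLUMN_NAME") + " " + cols.getString("TYPE_NAME"));
        }
    }
}
{code}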



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10505) Flag IGNITE_DISABLE_WAL_DURING_REBALANCING should be turned on by default

2018-12-25 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728912#comment-16728912
 ] 

Sergey Chugunov commented on IGNITE-10505:
--

I fixed the test itself; it looks like the change doesn't break any other 
functionality.

[~DmitriyGovorukhin], as the author of the original test, could you take a 
look, please?

> Flag IGNITE_DISABLE_WAL_DURING_REBALANCING should be turned on by default
> -
>
> Key: IGNITE-10505
> URL: https://issues.apache.org/jira/browse/IGNITE-10505
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.8
>
>
> Until file-based rebalancing is implemented and rigorously tested, Ignite 
> still relies on key-based rebalancing, and the 
> IGNITE_DISABLE_WAL_DURING_REBALANCING flag speeds it up significantly.
> By default it is turned off, but since it brings a noticeable boost we need 
> to turn it on by default in the next minor release.
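
For illustration, a hedged sketch of enabling the flag explicitly until the
default changes; only the property name comes from this ticket, the rest is an
assumption. The same effect can be achieved with the JVM argument
-DIGNITE_DISABLE_WAL_DURING_REBALANCING=true.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class DisableWalDuringRebalancingSketch {
    public static void main(String[] args) {
        // Must be set before the node starts.
        System.setProperty("IGNITE_DISABLE_WAL_DURING_REBALANCING", "true");

        try (Ignite ignite = Ignition.start()) {
            // Rebalancing on this node now skips WAL logging for moving
            // partitions, which is the speed-up described above.
        }
    }
}
{code}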



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10505) Flag IGNITE_DISABLE_WAL_DURING_REBALANCING should be turned on by default

2018-12-25 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728909#comment-16728909
 ] 

Ignite TC Bot commented on IGNITE-10505:


{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=2635711&buildTypeId=IgniteTests24Java8_RunAll]

> Flag IGNITE_DISABLE_WAL_DURING_REBALANCING should be turned on by default
> -
>
> Key: IGNITE-10505
> URL: https://issues.apache.org/jira/browse/IGNITE-10505
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.8
>
>
> Until file-based rebalancing is implemented and rigorously tested, Ignite 
> still relies on key-based rebalancing, and the 
> IGNITE_DISABLE_WAL_DURING_REBALANCING flag speeds it up significantly.
> By default it is turned off, but since it brings a noticeable boost we need 
> to turn it on by default in the next minor release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10809) IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3 fails in master

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728908#comment-16728908
 ] 

ASF GitHub Bot commented on IGNITE-10809:
-

GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/5750

IGNITE-10809 testActiveFailover3 modified for persistent mode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10809

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5750.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5750


commit 1fc88375e9eaee7439b2951c096bdc43f9108277
Author: Sergey Chugunov 
Date:   2018-12-25T11:01:50Z

IGNITE-10809 testActiveFailover3 modified for persistent mode




> IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3 
> fails in master
> 
>
> Key: IGNITE-10809
> URL: https://issues.apache.org/jira/browse/IGNITE-10809
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> The test logic involves independently activating two sets of nodes and then 
> joining them into a single cluster.
> After the BaselineTopology concept was introduced in version 2.4, this action 
> became prohibited to enforce data integrity.
> The test should be refactored to take this into account.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10813) Run CheckpointReadLockFailureTest with JUnit4 runner

2018-12-25 Thread Andrey Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728877#comment-16728877
 ] 

Andrey Kuznetsov commented on IGNITE-10813:
---

[~agura], could you please merge this?

> Run CheckpointReadLockFailureTest with JUnit4 runner
> 
>
> Key: IGNITE-10813
> URL: https://issues.apache.org/jira/browse/IGNITE-10813
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Trivial
>
> The test fails on TeamCity. It should be run with the JUnit4 runner.
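
For illustration, a hedged sketch of what running the test with the JUnit4
runner typically looks like; the test method shown is hypothetical.

{code:java}
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

// Run the class with the JUnit4 runner instead of the legacy JUnit3-style execution.
@RunWith(JUnit4.class)
public class CheckpointReadLockFailureTest {
    @Test
    public void testFailureHandling() {
        // Test body omitted; only the runner annotation is relevant here.
    }
}
{code}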



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10813) Run CheckpointReadLockFailureTest with JUnit4 runner

2018-12-25 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728876#comment-16728876
 ] 

Ignite TC Bot commented on IGNITE-10813:


{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=2643368&buildTypeId=IgniteTests24Java8_RunAll]

> Run CheckpointReadLockFailureTest with JUnit4 runner
> 
>
> Key: IGNITE-10813
> URL: https://issues.apache.org/jira/browse/IGNITE-10813
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Trivial
>
> The test fails on TeamCity. It should be run with the JUnit4 runner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-3303) Apache Flink Integration - Flink source to run a continuous query against one or multiple caches

2018-12-25 Thread Saikat Maitra (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728874#comment-16728874
 ] 

Saikat Maitra commented on IGNITE-3303:
---

[~dpavlov] [~amashenkov] [~agoncharuk] [~avinogradov] Thank you so much for 
your help and feedback. We have been working on this feature for a long time 
and are very happy to see it merged into master.

Regards,

Saikat

 

> Apache Flink Integration - Flink source to run a continuous query against one 
> or multiple caches
> 
>
> Key: IGNITE-3303
> URL: https://issues.apache.org/jira/browse/IGNITE-3303
> Project: Ignite
>  Issue Type: New Feature
>  Components: streaming
>Reporter: Saikat Maitra
>Assignee: Saikat Maitra
>Priority: Major
> Fix For: 2.8
>
> Attachments: Screen Shot 2016-10-07 at 12.44.47 AM.png, 
> testFlinkIgniteSourceWithLargeBatch.log, win7.PNG
>
>
> Apache Flink integration 
> +++ *Ignite as a bidirectional Connector* +++
> As a Flink source => run a continuous query against one or multiple
> caches [4].
> Related discussion : 
> http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Flink-lt-gt-Apache-Ignite-integration-td8163.html
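
For illustration, a minimal standalone sketch of the Ignite continuous query
that such a Flink source wraps; this is not the connector code itself, and the
cache name and listener are assumptions.

{code:java}
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("words");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Called for every cache update; a Flink source would forward such
            // events into the Flink stream instead of printing them.
            qry.setLocalListener(evts ->
                evts.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue())));

            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                cache.put(1, "hello"); // triggers the listener
            }
        }
    }
}
{code}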



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728808#comment-16728808
 ] 

ASF GitHub Bot commented on IGNITE-10784:
-

GitHub user pavel-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/5749

IGNITE-10784: added TABLES view



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10784

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5749.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5749


commit 8aa0e1a89ad5ccd081aa863b3f68d8b7d768e01c
Author: Pavel Kuznetsov 
Date:   2018-12-24T17:19:25Z

ignite-10784: wip

commit ea4dd42b4497bd561a308d8ff0c6fba9d5d71f24
Author: Pavel Kuznetsov 
Date:   2018-12-24T23:33:56Z

ignite-10784: Implemented minimal view IGNITE.TABLES.

commit a80e224f45d0624217258f23e281f9dc6569fa14
Author: Pavel Kuznetsov 
Date:   2018-12-25T10:09:33Z

ignite-10784: Tests wip.

commit 354d0c152952cc039ba7c6a8859c4639d3237826
Author: Pavel Kuznetsov 
Date:   2018-12-25T12:34:29Z

ignite-10784: Fixed bug with filter. Updated code and tests;

commit 0bad909b65ccf5cdf8ee38fae9c442d3d8940164
Author: Pavel Kuznetsov 
Date:   2018-12-25T12:47:50Z

ignite-10784: Rename column name;

Renamed column name of the view according to ISO standard.

commit f367f97215033a7e7f4f7d702bcf6cffd39d93dd
Author: Pavel Kuznetsov 
Date:   2018-12-25T14:11:25Z

ignite-10784: added Affinity column.

commit 526cb3eb98623bbe50c694859a9ea898dbc4401a
Author: Pavel Kuznetsov 
Date:   2018-12-25T15:00:27Z

ignite-10784: reverted affinity mode info in the view.

AffinityMapper should be handled during IGNITE-10310

commit ca14ca844c484e437aef323082839143daa52bf7
Author: Pavel Kuznetsov 
Date:   2018-12-25T18:52:31Z

ignite-10784: Added key and field alias columns.

commit 3af040ba1d439228b5e00a615bdb8a97c6aee053
Author: Pavel Kuznetsov 
Date:   2018-12-25T19:19:18Z

ignite-10784: fixed duplicates.

commit 795486fea87b762f58bbde11f0ec6e242217000c
Author: Pavel Kuznetsov 
Date:   2018-12-25T19:23:17Z

ignite-10784: Got rid of unnecessary arrays allocations.

commit 6512c06e6bab6bbdcc3162d5a2be02a485806d2a
Author: Pavel Kuznetsov 
Date:   2018-12-25T19:40:27Z

ignite-10784: Added key/value type name columns to the TABLES view.




> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, PostgreSQL) and see if any other useful 
> information could be exposed (taking into account that a lot of engine 
> properties are already exposed through the {{CACHES}} view)
> Starting point: {{SqlSystemView}}
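
For illustration, a hedged sketch of querying the new view over JDBC once it
exists; SELECT * is used because the exact (ISO-style) column names are not
fixed by this ticket.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class TablesViewSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM IGNITE.TABLES")) {
            ResultSetMetaData md = rs.getMetaData();

            // Print every column of every row, e.g. schema, table and cache info.
            while (rs.next()) {
                StringBuilder row = new StringBuilder();

                for (int i = 1; i <= md.getColumnCount(); i++)
                    row.append(md.getColumnName(i)).append('=').append(rs.getString(i)).append(' ');

                System.out.println(row);
            }
        }
    }
}
{code}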



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9989) JDBC v2: getPrimaryKeys always returns constant COLUMN_NAME, KEY_SEQ, PK_NAME

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728806#comment-16728806
 ] 

Pavel Kuznetsov commented on IGNITE-9989:
-

As [~tledkov-gridgain] noticed, we should also keep the old metadata task and 
info classes, because old versions of the JDBC v2 driver are still able to 
connect to new nodes.
Added those classes.

> JDBC v2: getPrimaryKeys always returns constant COLUMN_NAME, KEY_SEQ, PK_NAME
> -
>
> Key: IGNITE-9989
> URL: https://issues.apache.org/jira/browse/IGNITE-9989
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Pavel Kuznetsov
>Assignee: Pavel Kuznetsov
>Priority: Major
>  Labels: jdbc
>
> The JDBC v2 driver has hardcoded values for the metadata attributes: 
> COLUMN_NAME = _KEY 
> KEY_SEQ = 1
> PK_NAME = _KEY
> But these values should be different for different tables.
> How to reproduce: 
> 1) Connect to the cluster using the JDBC v2 driver
> 2) CREATE TABLE TAB (ID LONG, SEC_ID LONG, VAL LONG, PRIMARY KEY(ID, SEC_ID))
> 3) Check the result of connection.getMetaData().getPrimaryKeys()
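
For illustration, a hedged sketch of the reproduction steps above as a small
program; the JDBC v2 (config-based) connection URL is an assumption.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrimaryKeysRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///path/to/ignite-config.xml");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE TAB (ID LONG, SEC_ID LONG, VAL LONG, PRIMARY KEY(ID, SEC_ID))");

            // Expected: one row per PK column (ID, SEC_ID) with a correct KEY_SEQ,
            // not the hardcoded _KEY / 1 / _KEY values described above.
            try (ResultSet rs = conn.getMetaData().getPrimaryKeys(null, null, "TAB")) {
                while (rs.next())
                    System.out.println(rs.getString("COLUMN_NAME") + " " + rs.getInt("KEY_SEQ") + " " + rs.getString("PK_NAME"));
            }
        }
    }
}
{code}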



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10815) NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728796#comment-16728796
 ] 

ASF GitHub Bot commented on IGNITE-10815:
-

GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/5746

IGNITE-10815 Fixed coordinator failover in case of exchanges merge and 
non-affinity nodes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10815

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5746.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5746


commit 97d4d22f12f0a24060ea1cd7253758065cf77023
Author: Pavel Kovalenko 
Date:   2018-12-25T17:25:57Z

IGNITE-10815 WIP

Signed-off-by: Pavel Kovalenko 

commit 141f40b8742d12681b3f41f7ee3dbc3ae2702380
Author: Pavel Kovalenko 
Date:   2018-12-25T18:51:11Z

IGNITE-10815 Fix and test.

Signed-off-by: Pavel Kovalenko 

commit 84b3fc09cca1f7133883bd44c64a1466d32d5b53
Author: Pavel Kovalenko 
Date:   2018-12-25T18:52:08Z

IGNITE-10815 Cleanup

Signed-off-by: Pavel Kovalenko 




> NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang
> -
>
> Key: IGNITE-10815
> URL: https://issues.apache.org/jira/browse/IGNITE-10815
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Pavel Kovalenko
>Priority: Critical
> Fix For: 2.8
>
>
> Possible scenario to reproduce:
> 1. Force few consecutive exchange merges and finish.
> 2. Trigger exchange.
> 3. Shutdown coordinator node before sending/receiving full partitions message.
>  
> Stacktrace:
> {code:java}
> 2018-12-24 15:54:02,664 sys-#48%gg% ERROR 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
>  - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77
> java.lang.NullPointerException: null
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_171]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_171]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10178) change tests that fail("Ignite JIRA ticket URL") to @Ignore("Ignite JIRA ticket URL")

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728789#comment-16728789
 ] 

ASF GitHub Bot commented on IGNITE-10178:
-

GitHub user ololo3000 opened a pull request:

https://github.com/apache/ignite/pull/5745

IGNITE-10178 Ignore annotations added



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ololo3000/ignite IGNITE-10178

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5745.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5745


commit aa2c91f0de000323e1ab8246ae5a1d55429f16d1
Author: Petrov 
Date:   2018-12-25T15:05:58Z

IGNITE-10178 Ignore annotations added

commit 8b859ace05cbe0e20a5f7a13be32fa0179591728
Author: Petrov 
Date:   2018-12-25T15:12:18Z

Merge branch 'master' into IGNITE-10178

commit 1a8cc8fe1251b1ced6e278816c7ad4ac1c4ecbe7
Author: Petrov 
Date:   2018-12-25T16:56:58Z

IGNITE-10178 minor fixes




> change tests that fail("Ignite JIRA ticket URL") to @Ignore("Ignite JIRA 
> ticket URL")
> -
>
> Key: IGNITE-10178
> URL: https://issues.apache.org/jira/browse/IGNITE-10178
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Oleg Ignatenko
>Assignee: PetrovMikhail
>Priority: Major
>
> Change tests that use {{fail("Ignite JIRA ticket URL")}} to {{@Ignore("Ignite 
> JIRA ticket URL")}}. Do the same change for tests that fail by 
> {{@IgniteIgnore("Ignite JIRA ticket URL")}}, like for example 
> [S3CheckpointSpiStartStopSelfTest.testStartStop|https://github.com/apache/ignite/blob/master/modules/aws/src/test/java/org/apache/ignite/spi/checkpoint/s3/S3CheckpointSpiStartStopSelfTest.java].
>  Also, use 
> [Ignore|http://junit.sourceforge.net/javadoc/org/junit/Ignore.html] to 
> annotate empty test classes in examples that were discovered and re-muted per 
> IGNITE-10174.
> If needed, refer parent task for more details.
> Note that this step should be coordinated with TeamCity and TC Bot 
> maintainers because it may substantially impact them.
> -
> Note that tests that are expected to be ignored depending on runtime 
> conditions should be rewritten to use {{Assume}} instead of {{fail}}. So that 
> old code...
> {code}if (someRuntimeCondition())
> fail("Ignite JIRA ticket URL");{code}
> ...will change to
> {code}Assume.assumeFalse("Ignite JIRA ticket URL", 
> someRuntimeCondition());{code}
> (this change can be "extracted" into separate JIRA task if it is more 
> convenient). Readers interested to find more details about how {{Assume}} 
> works can find more details and code snippet [in comments 
> here|https://issues.apache.org/jira/browse/IGNITE-10178?focusedCommentId=16723863=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16723863].
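
For illustration, a hedged before/after sketch of the mechanical change
described above; the test class, method and ticket URL are hypothetical.

{code:java}
import org.junit.Ignore;
import org.junit.Test;

public class ExampleSelfTest {
    // Before: the test body started with fail("https://issues.apache.org/jira/browse/IGNITE-XXXX");
    // After: the whole test is skipped, with the same ticket URL as the reason.
    @Ignore("https://issues.apache.org/jira/browse/IGNITE-XXXX")
    @Test
    public void testSomething() {
        // Original test body kept as is.
    }
}
{code}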



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-3303) Apache Flink Integration - Flink source to run a continuous query against one or multiple caches

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728768#comment-16728768
 ] 

ASF GitHub Bot commented on IGNITE-3303:


Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5020


> Apache Flink Integration - Flink source to run a continuous query against one 
> or multiple caches
> 
>
> Key: IGNITE-3303
> URL: https://issues.apache.org/jira/browse/IGNITE-3303
> Project: Ignite
>  Issue Type: New Feature
>  Components: streaming
>Reporter: Saikat Maitra
>Assignee: Saikat Maitra
>Priority: Major
> Fix For: 2.8
>
> Attachments: Screen Shot 2016-10-07 at 12.44.47 AM.png, 
> testFlinkIgniteSourceWithLargeBatch.log, win7.PNG
>
>
> Apache Flink integration 
> +++ *Ignite as a bidirectional Connector* +++
> As a Flink source => run a continuous query against one or multiple
> caches [4].
> Related discussion : 
> http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Flink-lt-gt-Apache-Ignite-integration-td8163.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7616) GridDataStreamExecutor and GridCallbackExecutor JMX beans return incorrect values due to invalid interface registration.

2018-12-25 Thread Eduard Shangareev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728759#comment-16728759
 ] 

Eduard Shangareev commented on IGNITE-7616:
---

[~syssoftsol], hi! I looked through it; it looks good, but there are no tests. 
And there are already conflicts with master :(

{code}
import javax.management.ObjectName;
import javax.management.Query;
import javax.management.StringValueExp;

// Find any registered instance of the MX bean by its interface class name.
ObjectName anyInstance = ignite.configuration().getMBeanServer().queryNames(null,
    Query.isInstanceOf(new StringValueExp("org.apache.YourMXBeanClassName"))).iterator().next();

// Invoke an operation on the found bean.
Object val = ignite.configuration().getMBeanServer().invoke(anyInstance, "yourMethod", null, null);
{code}

You could use this snippet for test coverage of the new MX bean.

> GridDataStreamExecutor and GridCallbackExecutor JMX beans return incorrect 
> values due to invalid interface registration.
> 
>
> Key: IGNITE-7616
> URL: https://issues.apache.org/jira/browse/IGNITE-7616
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Max Shonichev
>Assignee: David Harvey
>Priority: Major
>  Labels: jmx
> Fix For: 2.8
>
> Attachments: image-2018-10-03-10-23-24-676.png, 
> image-2018-10-03-10-24-12-459.png, master_1b3742f4d7_mxbeans_threads2.patch
>
>
> Two of the management beans newly added as a result of implementing feature 
> request https://issues.apache.org/jira/browse/IGNITE-7217 have bugs:
>  # GridDataStreamExecutor is registered as conforming to ThreadPoolMXBean 
> interface, though actually it is an incompatible StripedExecutor. 
>  # GridCallbackExecutor is registered as conforming to ThreadPoolMXBean 
> interface, though actually it is an incompatible 
> IgniteStripedThreadPoolExecutor.
>  # ThreadPoolMXBeanAdapter checks whether adapted instance is 
> ThreadPoolExecutor, and as interfaces are incompatible, most of the JMX 
> attributes of GridCallbackExecutor and GridDataStreamExecutor are returned as 
> -1 or null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9607) Service Grid redesign - phase 1

2018-12-25 Thread Vyacheslav Daradur (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728758#comment-16728758
 ] 

Vyacheslav Daradur commented on IGNITE-9607:


The {{onLocalJoin}} method has been moved to {{ServiceProcessorAdapter}}, and 
now we use 'instanceof' only where the deprecated, cache-based 
GridServiceProcessor must be used; that means these places will be removed 
together with {{GridServiceProcessor}} in future releases.

> Service Grid redesign - phase 1
> ---
>
> Key: IGNITE-9607
> URL: https://issues.apache.org/jira/browse/IGNITE-9607
> Project: Ignite
>  Issue Type: Improvement
>  Components: managed services
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Major
> Fix For: 2.8
>
>
> This is an umbrella ticket for tasks which should be implemented atomically 
> in phase #1 of Service Grid redesign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10543) [ML] Test/train sample generator

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728754#comment-16728754
 ] 

ASF GitHub Bot commented on IGNITE-10543:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5727


> [ML] Test/train sample generator
> 
>
> Key: IGNITE-10543
> URL: https://issues.apache.org/jira/browse/IGNITE-10543
> Project: Ignite
>  Issue Type: New Feature
>  Components: ml
>Reporter: Yury Babak
>Assignee: Alexey Platonov
>Priority: Major
> Fix For: 2.8
>
>
> We need to design and implement sample generators for standard distributions 
> and user-defined functions/points. This is useful for testing regressions and 
> for statistics package examples.
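
For illustration, a hedged sketch of the idea using plain java.util.Random
rather than the generator API this ticket introduces; the function and noise
level are arbitrary.

{code:java}
import java.util.Random;

public class SampleGeneratorSketch {
    public static void main(String[] args) {
        Random rnd = new Random(42);

        // Labeled points from the user-defined function y = 2x + 1 with Gaussian
        // noise; such samples are handy for testing regressions, as described above.
        for (int i = 0; i < 5; i++) {
            double x = rnd.nextDouble();
            double y = 2 * x + 1 + 0.1 * rnd.nextGaussian();

            System.out.printf("x=%.3f y=%.3f%n", x, y);
        }
    }
}
{code}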



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10718) [ML] Merge XGBoost and Ignite ML trees together

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728752#comment-16728752
 ] 

ASF GitHub Bot commented on IGNITE-10718:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5691


> [ML] Merge XGBoost and Ignite ML trees together
> ---
>
> Key: IGNITE-10718
> URL: https://issues.apache.org/jira/browse/IGNITE-10718
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.8
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.8
>
>
> Currently we have two similar hierarchies of trees: XGBoost trees and Ignite 
> ML trees. It would be great to merge them together.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-1436) C++: Port to MAC OS.

2018-12-25 Thread Dmitriy Pavlov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728755#comment-16728755
 ] 

Dmitriy Pavlov commented on IGNITE-1436:


[~isapego], how do you find this patch? Can we merge it?

Unfortunately, I'm not an expert in C/C++, so I guess only you can approve the PR.

> C++: Port to MAC OS.
> 
>
> Key: IGNITE-1436
> URL: https://issues.apache.org/jira/browse/IGNITE-1436
> Project: Ignite
>  Issue Type: Task
>  Components: platforms
>Affects Versions: 1.1.4
>Reporter: Vladimir Ozerov
>Assignee: Stephen Darlington
>Priority: Major
>  Labels: cpp
> Fix For: 2.8
>
>
> It will require minimal porting of "common" and "utils" stuff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-5565) Replace Cron4J with Quartz or Spring scheduler for ignite-schedule module.

2018-12-25 Thread Ilya Kasnacheev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728748#comment-16728748
 ] 

Ilya Kasnacheev commented on IGNITE-5565:
-

[~macrergate] I have asked for a number of changes in the GitHub PR.

> Replace Cron4J with Quartz or Spring scheduler for ignite-schedule module.
> --
>
> Key: IGNITE-5565
> URL: https://issues.apache.org/jira/browse/IGNITE-5565
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Reporter: Alexey Kuznetsov
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: newbie
>
> 1) Cron4J is very old:
>   Latest Cron4j 2.2.5 released: 28-Dec-2011 
>   Latest Quartz 2.3.0 released: 20-Apr-2017
> 2) Not very friendly license:
>   Cron4J is licensed under the GNU LESSER GENERAL PUBLIC LICENSE
>   Quartz is freely usable, licensed under the Apache 2.0 license.
> So, if we replace Cron4J with Quartz we can move the ignite-schedule module
>  from the lgpl profile to the main distribution.
> Also, Spring's scheduler could be considered as a Cron4J alternative.
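
For illustration, a hedged sketch of the equivalent cron-style scheduling with
Quartz that such a replacement would rely on; the job and cron expression are
assumptions.

{code:java}
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzCronSketch {
    public static class PrintJob implements Job {
        @Override public void execute(JobExecutionContext ctx) {
            System.out.println("Scheduled task fired.");
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(PrintJob.class).withIdentity("printJob").build();

        // Fire at second 0 of every minute, i.e. the cron-style scheduling Cron4J provides today.
        Trigger trigger = TriggerBuilder.newTrigger()
            .withSchedule(CronScheduleBuilder.cronSchedule("0 * * ? * *"))
            .build();

        scheduler.scheduleJob(job, trigger);
    }
}
{code}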



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9607) Service Grid redesign - phase 1

2018-12-25 Thread Vyacheslav Daradur (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728749#comment-16728749
 ] 

Vyacheslav Daradur commented on IGNITE-9607:


[~agoncharuk], thank you for the notes!
{quote}1) Not sure if it was discussed with other community members, but I 
think it may be better to get rid of various {{instanceof}} statements in the 
code and have a common interface for both service processor implementations 
(possibly, noop)
{quote}
Previously such an approach was used, but it was reworked during review. In 
the main code base (not in tests) we use 'instanceof' only in rare operations, 
mostly at startup of the kernal or its components and on the local join event; 
such checks are not called in heavily loaded places.
Also, we can't get rid of 'instanceof' completely, e.g. 'onActivate' and 
'onDeactive' should be called in different places depending on the service 
processor implementation. I'd like to leave it as is if you don't mind.
{quote}2) Should we narrow down the generic type of ServiceDeploymentFuture to 
Serializable?
{quote}
Makes perfect sense to me. Done.
{quote}3) In IgniteServiceProcessor#stopProcessor() we need to wrap 
fut.onDone(stopError) in a try-catch block. We recently discovered that 
onDone() call can re-throw the exception to the caller, which prevents correct 
node stop
{quote}
Done.
{quote}4) I think we should add (maybe in a separate ticket) some diagnostic 
mechanics to dump pending deployment futures. The best option is to use 
existing PME diagnostic mechanics
{quote}
I've filed task IGNITE-10817.

 

> Service Grid redesign - phase 1
> ---
>
> Key: IGNITE-9607
> URL: https://issues.apache.org/jira/browse/IGNITE-9607
> Project: Ignite
>  Issue Type: Improvement
>  Components: managed services
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Major
> Fix For: 2.8
>
>
> This is an umbrella ticket for tasks which should be implemented atomically 
> in phase #1 of Service Grid redesign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10558) MVCC: IgniteWalReader test failed.

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728746#comment-16728746
 ] 

ASF GitHub Bot commented on IGNITE-10558:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5583


> MVCC: IgniteWalReader test failed.
> --
>
> Key: IGNITE-10558
> URL: https://issues.apache.org/jira/browse/IGNITE-10558
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc, persistence
>Reporter: Andrew Mashenkov
>Assignee: Andrew Mashenkov
>Priority: Major
>  Labels: WAL, mvcc_stabilization_stage_1
> Fix For: 2.8
>
>
> Wal iterator doesn't handle Mvcc wal records.
>  This causes IgniteWalReader test failures in the Mvcc Pds2 suite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9171) Use lazy mode with results pre-fetch

2018-12-25 Thread Taras Ledkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728745#comment-16728745
 ] 

Taras Ledkov commented on IGNITE-9171:
--

[~vozerov],
1. The connection manager changes corresponded to the previous lazy 
implementation with lazy threads. Your suggestion makes sense.
But the current connection manager has a *potential connection leak*: a 
detached connection is removed from the {{threadConns : Thread -> 
H2ConnectionWrapper}} map and there is no way to close all connections on node 
stop.
2. Rolling back the changes in the connection manager highlighted the problem 
with the detached connection on the reducer. Looks like that was the cause of 
the error. I'll check it on TC.


> Use lazy mode with results pre-fetch
> 
>
> Key: IGNITE-9171
> URL: https://issues.apache.org/jira/browse/IGNITE-9171
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Blocker
>  Labels: sql-stability
> Fix For: 2.8
>
>
> The current implementation of the {{lazy}} mode always starts a separate 
> thread for {{MapQueryLazyWorker}}. This causes excessive overhead for 
> requests that produce small result sets.
> We have to begin executing the query in the {{QUERY_POOL}} thread pool and 
> fetch the first page of the results. If the result set is bigger than one 
> page, {{MapQueryLazyWorker}} is started and linked with {{MapNodeResults}} to 
> handle the next pages lazily.
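
For illustration, a hedged sketch of how lazy mode is requested from the public
API; the internal {{MapQueryLazyWorker}} changes described above are not shown,
and the cache and query are assumptions.

{code:java}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.FieldsQueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LazyQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<Integer, Integer>("ints")
                .setIndexedTypes(Integer.class, Integer.class);

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            // Lazy mode, whose map-side execution this ticket optimizes with a
            // results pre-fetch instead of always starting a dedicated thread.
            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT _KEY, _VAL FROM Integer").setLazy(true);

            try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
                cur.forEach(System.out::println);
            }
        }
    }
}
{code}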



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10817) Service Grid: Introduce diagnostic tool to dump pending deployment tasks

2018-12-25 Thread Vyacheslav Daradur (JIRA)
Vyacheslav Daradur created IGNITE-10817:
---

 Summary: Service Grid: Introduce diagnostic tool to dump pending 
deployment tasks
 Key: IGNITE-10817
 URL: https://issues.apache.org/jira/browse/IGNITE-10817
 Project: Ignite
  Issue Type: Task
  Components: managed services
Reporter: Vyacheslav Daradur
Assignee: Vyacheslav Daradur
 Fix For: 2.8


It's necessary to introduce some kind of diagnostic tool to dump service 
deployment tasks which are pending in the queue to the log.

If possible, existing PME diagnostic mechanics should be reused.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10058) resetLostPartitions() leaves an additional copy of a partition in the cluster

2018-12-25 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728744#comment-16728744
 ] 

Pavel Pereslegin commented on IGNITE-10058:
---

In my last commit, I prepared a draft solution that follows the suggestion by 
[~ilantukh] and assigns partition states only on the coordinator, but it seems 
these changes are too complex (see the PR).

To meet the requirement about zero update counters I divided 
resetLostPartitions into three steps:
1. When a non-coordinator node prepares to send local partition states to the 
coordinator, it resets counters if an owner is present (to start rebalancing 
after this exchange, see {{ResetLostPartitionTest}}).
2. The coordinator assigns partition states (including local ones) and resets 
local partition counters, if necessary.
3. When a non-coordinator node receives the full message from the coordinator, 
it changes the state of local partitions (LOST -> OWNING) if necessary.

I am too late with this task and will not be able to work on it for the next 2 
weeks, so feel free to assign this ticket to yourself.

The main case described is fixed by calling {{checkRebalanceState}} after 
resetting lost partitions, which allows setting the new affinity and evicting 
duplicate partitions. Added the 
{{IgniteCachePartitionLossPolicySelfTest.testReadWriteSafeRefreshDelay}} test 
to reproduce this problem.

> resetLostPartitions() leaves an additional copy of a partition in the cluster
> -
>
> Key: IGNITE-10058
> URL: https://issues.apache.org/jira/browse/IGNITE-10058
> Project: Ignite
>  Issue Type: Bug
>Reporter: Stanislav Lukyanov
>Assignee: Pavel Pereslegin
>Priority: Major
> Fix For: 2.8
>
>
> If there are several copies of a LOST partition, resetLostPartitions() will 
> leave all of them in the cluster as OWNING.
> Scenario:
> 1) Start 4 nodes, a cache with backups=0 and READ_WRITE_SAFE, fill the cache
> 2) Stop one node - some partitions are recreated on the remaining nodes as 
> LOST
> 3) Start one node - the LOST partitions are being rebalanced to the new node 
> from the existing ones
> 4) Wait for rebalance to complete
> 5) Call resetLostPartitions()
> After that the partitions that were LOST become OWNING on all nodes that had 
> them. Eviction of these partitions doesn't start.
> Need to correctly evict additional copies of LOST partitions either after 
> rebalance on step 4 or after resetLostPartitions() call on step 5.
> The current resetLostPartitions() implementation does call checkEvictions(), 
> but the ready affinity assignment contains several nodes per partition for 
> some reason.
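
For illustration, a hedged sketch of the public-API pieces of the scenario
above (no backups, READ_WRITE_SAFE, resetLostPartitions()); the node stop/start
and rebalance steps are only indicated in comments, and the cache name is an
assumption.

{code:java}
import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class ResetLostPartitionsSketch {
    public static void main(String[] args) {
        // Cache with no backups and READ_WRITE_SAFE policy, as in step 1 above.
        CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<Integer, Integer>("test")
            .setBackups(0)
            .setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);

            // ... stop one node, start a new one, wait for rebalance (steps 2-4) ...

            // Step 5: after this call no extra OWNING copies should remain.
            ignite.resetLostPartitions(Collections.singleton("test"));
        }
    }
}
{code}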



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-10058) resetLostPartitions() leaves an additional copy of a partition in the cluster

2018-12-25 Thread Pavel Pereslegin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin reassigned IGNITE-10058:
-

Assignee: (was: Pavel Pereslegin)

> resetLostPartitions() leaves an additional copy of a partition in the cluster
> -
>
> Key: IGNITE-10058
> URL: https://issues.apache.org/jira/browse/IGNITE-10058
> Project: Ignite
>  Issue Type: Bug
>Reporter: Stanislav Lukyanov
>Priority: Major
> Fix For: 2.8
>
>
> If there are several copies of a LOST partition, resetLostPartitions() will 
> leave all of them in the cluster as OWNING.
> Scenario:
> 1) Start 4 nodes, a cache with backups=0 and READ_WRITE_SAFE, fill the cache
> 2) Stop one node - some partitions are recreated on the remaining nodes as 
> LOST
> 3) Start one node - the LOST partitions are being rebalanced to the new node 
> from the existing ones
> 4) Wait for rebalance to complete
> 5) Call resetLostPartitions()
> After that the partitions that were LOST become OWNING on all nodes that had 
> them. Eviction of these partitions doesn't start.
> Need to correctly evict additional copies of LOST partitions either after 
> rebalance on step 4 or after resetLostPartitions() call on step 5.
> The current resetLostPartitions() implementation does call checkEvictions(), 
> but the ready affinity assignment contains several nodes per partition for 
> some reason.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10815) NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko updated IGNITE-10815:
-
Fix Version/s: 2.8

> NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang
> -
>
> Key: IGNITE-10815
> URL: https://issues.apache.org/jira/browse/IGNITE-10815
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Pavel Kovalenko
>Priority: Critical
> Fix For: 2.8
>
>
> Possible scenario to reproduce:
> 1. Force few consecutive exchange merges and finish.
> 2. Trigger exchange.
> 3. Shutdown coordinator node before sending/receiving full partitions message.
>  
> Stacktrace:
> {code:java}
> 2018-12-24 15:54:02,664 sys-#48%gg% ERROR 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
>  - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77
> java.lang.NullPointerException: null
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_171]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_171]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-10815) NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread Pavel Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko reassigned IGNITE-10815:


Assignee: Pavel Kovalenko

> NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang
> -
>
> Key: IGNITE-10815
> URL: https://issues.apache.org/jira/browse/IGNITE-10815
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Pavel Kovalenko
>Priority: Critical
> Fix For: 2.8
>
>
> Possible scenario to reproduce:
> 1. Force few consecutive exchange merges and finish.
> 2. Trigger exchange.
> 3. Shutdown coordinator node before sending/receiving full partitions message.
>  
> Stacktrace:
> {code:java}
> 2018-12-24 15:54:02,664 sys-#48%gg% ERROR 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
>  - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77
> java.lang.NullPointerException: null
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_171]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_171]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10580) H2 connection and statements are reused invalid for local sql queries

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728738#comment-16728738
 ] 

ASF GitHub Bot commented on IGNITE-10580:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5592


> H2 connection and statements are reused invalid for local sql queries
> -
>
> Key: IGNITE-10580
> URL: https://issues.apache.org/jira/browse/IGNITE-10580
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>
> The thread-local connection & statement cache is used incorrectly for local 
> queries.
> Steps to reproduce:
> # Open an iterator for local query {{Query0}};
> # In the same thread open one more iterator for {{Query1}} (the SQL statement 
> must be equal to {{Query0}} and must not contain query parameters);
> # Fetch from the first iterator.
> The exception {{The object is already closed [90007-197]}} will be thrown.
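
A minimal sketch of these steps through the public API (cache name, indexed types 
and the SQL text are assumptions; the point is two local queries with identical, 
parameterless SQL in one thread):

{code:java}
import java.util.Iterator;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalQueryReuseSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, Integer>("test")
                .setIndexedTypes(Integer.class, Integer.class));

        for (int i = 0; i < 100; i++)
            cache.put(i, i);

        String sql = "SELECT _key FROM Integer"; // Identical SQL, no parameters.

        // Step 1: open an iterator for the first local query.
        Iterator<List<?>> it0 = cache.query(new SqlFieldsQuery(sql).setLocal(true)).iterator();

        // Step 2: in the same thread, open one more iterator for the same query.
        Iterator<List<?>> it1 = cache.query(new SqlFieldsQuery(sql).setLocal(true)).iterator();

        // Step 3: fetching from the first iterator used to fail with
        // "The object is already closed [90007-197]".
        while (it0.hasNext())
            it0.next();

        while (it1.hasNext())
            it1.next();
    }
}
{code}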



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-10385) NPE in CachePartitionPartialCountersMap.toString

2018-12-25 Thread Anton Kurbanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kurbanov reassigned IGNITE-10385:
---

Assignee: Anton Kurbanov

> NPE in CachePartitionPartialCountersMap.toString
> 
>
> Key: IGNITE-10385
> URL: https://issues.apache.org/jira/browse/IGNITE-10385
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Assignee: Anton Kurbanov
>Priority: Blocker
>
> {noformat}
> Failed to reinitialize local partitions (preloading will be stopped)
> org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1032)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:868)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.managers.communication.GridIoMessage.toString(GridIoMessage.java:358)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_171]
> at java.lang.StringBuilder.append(StringBuilder.java:131) ~[?:1.8.0_171]
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2653)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2586)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1642)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1714)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1160)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendLocalPartitions(GridDhtPartitionsExchangeFuture.java:1399)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendPartitions(GridDhtPartitionsExchangeFuture.java:1506)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1139)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:703)
>  [ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2379)
>  [ignite-core-2.4.10.jar:2.4.10]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.4.10.jar:2.4.10]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> Caused by: org.apache.ignite.IgniteException
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1032)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:830)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:787)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:889)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsSingleMessage.toString(GridDhtPartitionsSingleMessage.java:551)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_171]
> at 
> org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:101)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.SBLimitedLength.a(SBLimitedLength.java:88)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:943)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1009)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> ... 16 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.CachePartitionPartialCountersMap.toString(CachePartitionPartialCountersMap.java:231)
>  ~[ignite-core-2.4.10.jar:2.4.10]
> at 

[jira] [Updated] (IGNITE-10580) H2 connection and statements are reused invalid for local sql queries

2018-12-25 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov updated IGNITE-10580:
-
Ignite Flags:   (was: Docs Required)

> H2 connection and statements are reused invalid for local sql queries
> -
>
> Key: IGNITE-10580
> URL: https://issues.apache.org/jira/browse/IGNITE-10580
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>
> The thread-local connection & statement cache is used incorrectly for local 
> queries.
> Steps to reproduce:
> # Open an iterator for local query {{Query0}};
> # In the same thread open one more iterator for {{Query1}} (the SQL statement 
> must be equal to {{Query0}} and must not contain query parameters);
> # Fetch from the first iterator.
> The exception {{The object is already closed [90007-197]}} will be thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9607) Service Grid redesign - phase 1

2018-12-25 Thread Alexey Goncharuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728735#comment-16728735
 ] 

Alexey Goncharuk commented on IGNITE-9607:
--

[~daradurvs], a few minor comments:
1) Not sure if it was discussed with other community members, but I think it 
may be better to get rid of the various {{instanceof}} statements in the code and 
have a common interface for both service processor implementations (possibly a 
no-op one).
2) Should we narrow down the generic type of {{ServiceDeploymentFuture}} to 
{{Serializable}}?
3) In {{IgniteServiceProcessor#stopProcessor()}} we need to wrap 
{{fut.onDone(stopError)}} in a try-catch block (see the sketch below). We recently 
discovered that an {{onDone()}} call can re-throw the exception to the caller, 
which prevents correct node stop.
4) I think we should add (maybe in a separate ticket) some diagnostic mechanics 
to dump pending deployment futures. The best option is to reuse the existing PME 
diagnostic mechanics.
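
A minimal sketch of the wrapping suggested in point 3; {{fut}}, {{stopError}} and 
the logger are placeholders standing in for the processor's own fields, not the 
actual implementation:

{code:java}
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.internal.util.future.GridFutureAdapter;

class StopHelperSketch {
    /** Completes a pending deployment future on stop without letting onDone() break node stop. */
    static void completeOnStop(GridFutureAdapter<?> fut, Throwable stopError, IgniteLogger log) {
        try {
            fut.onDone(stopError);
        }
        catch (Exception e) {
            // onDone() may re-throw the completion error to the caller; log it and keep stopping.
            log.warning("Failed to complete deployment future on node stop: " + fut, e);
        }
    }
}
{code}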

> Service Grid redesign - phase 1
> ---
>
> Key: IGNITE-9607
> URL: https://issues.apache.org/jira/browse/IGNITE-9607
> Project: Ignite
>  Issue Type: Improvement
>  Components: managed services
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Major
> Fix For: 2.8
>
>
> This is an umbrella ticket for tasks which should be implemented atomically 
> in phase #1 of Service Grid redesign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10797) Replace unused methods from IgniteCacheSnapshotManager.

2018-12-25 Thread Stanilovsky Evgeny (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-10797:

Description: 
Remove unused methods:
IgniteCacheSnapshotManager#flushDirtyPageHandler
IgniteCacheSnapshotManager#onPageWrite

  was:
Replace unused methods:
IgniteCacheSnapshotManager#flushDirtyPageHandler
IgniteCacheSnapshotManager#onPageWrite


> Replace unused methods from IgniteCacheSnapshotManager.
> ---
>
> Key: IGNITE-10797
> URL: https://issues.apache.org/jira/browse/IGNITE-10797
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>
> Remove unused methods:
> IgniteCacheSnapshotManager#flushDirtyPageHandler
> IgniteCacheSnapshotManager#onPageWrite



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (IGNITE-7240) Upload ignite-dev-utils module in maven

2018-12-25 Thread Oleg Ostanin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin closed IGNITE-7240.


> Upload ignite-dev-utils module in maven
> ---
>
> Key: IGNITE-7240
> URL: https://issues.apache.org/jira/browse/IGNITE-7240
> Project: Ignite
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.1
>Reporter: Ilya Suntsov
>Assignee: Oleg Ostanin
>Priority: Major
>
> The ignite-dev-utils module allows us to parse WAL.
> We should upload it to Maven.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10474) MVCC: IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery fails.

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728734#comment-16728734
 ] 

ASF GitHub Bot commented on IGNITE-10474:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5698


> MVCC: IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery 
> fails.
> --
>
> Key: IGNITE-10474
> URL: https://issues.apache.org/jira/browse/IGNITE-10474
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Andrew Mashenkov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: Hanging, mvcc_stabilization_stage_1
> Fix For: 2.8
>
>
> IgniteCacheConnectionRecovery10ConnectionsTest.testConnectionRecovery fails 
> due to hanging.
> We have to investigate and fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10310) SQL: Create TABLEs system view with affinity information

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728731#comment-16728731
 ] 

Pavel Kuznetsov commented on IGNITE-10310:
--

Linked a ticket about a TABLES view without affinity-specific information.

> SQL: Create TABLEs system view with affinity information
> 
>
> Key: IGNITE-10310
> URL: https://issues.apache.org/jira/browse/IGNITE-10310
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-24
> Fix For: 2.8
>
>
> Let's add a system view with our tables. At the very least it should include:
> # table name
> # cache name
> # schema name
> # affinity column 
> # affinity key (if IGNITE-10309 is implemented)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9171) Use lazy mode with results pre-fetch

2018-12-25 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728729#comment-16728729
 ] 

Vladimir Ozerov commented on IGNITE-9171:
-

[~tledkov-gridgain], I have two comments at the moment:
# Can we simplify or extract connection management into a separate ticket with 
tests showing why the change is needed? At the moment, as we decided to remove 
lazy threads, it is not obvious why all these changes are needed
# 
{{IgniteQueryTableLockAndConnectionPoolLazyModeOnTest.testSingleNodeWithParallelismTablesLockQueryAndDDLMultithreaded}}
 fails with {{ConcurrentModificationException}}. It seems that we cannot rely 
on H2's {{Session.getLocks}}, as it is not thread safe, can we?

> Use lazy mode with results pre-fetch
> 
>
> Key: IGNITE-9171
> URL: https://issues.apache.org/jira/browse/IGNITE-9171
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.6
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Blocker
>  Labels: sql-stability
> Fix For: 2.8
>
>
> The current implementation of the {{lazy}} mode always starts a separate thread for 
> {{MapQueryLazyWorker}}. This causes excessive overhead for requests that 
> produce small result sets.
> We have to begin executing the query in the {{QUERY_POOL}} thread pool and fetch the 
> first page of the results. If the result set is bigger than one page, 
> {{MapQueryLazyWorker}} is started and linked with {{MapNodeResults}} to handle the 
> next pages lazily.
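
For context, this is how lazy mode is requested from the client side (a minimal 
usage sketch; the cache, indexed types, SQL and page size are arbitrary choices):

{code:java}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LazyQuerySketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, Integer>("test")
                .setIndexedTypes(Integer.class, Integer.class));

        for (int i = 0; i < 100_000; i++)
            cache.put(i, i);

        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT _key, _val FROM Integer")
            .setLazy(true)      // Ask map nodes not to materialize the whole result set.
            .setPageSize(1024); // A result bigger than one page is where the lazy worker kicks in.

        try (QueryCursor<List<?>> cur = cache.query(qry)) {
            for (List<?> row : cur)
                row.size(); // Consume the rows page by page.
        }
    }
}
{code}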



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (IGNITE-6189) Benchmarks for check LFS used disk space

2018-12-25 Thread Oleg Ostanin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin closed IGNITE-6189.


> Benchmarks for check LFS used disk space
> 
>
> Key: IGNITE-6189
> URL: https://issues.apache.org/jira/browse/IGNITE-6189
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Chetaev
>Assignee: Oleg Ostanin
>Priority: Major
>
> Need to create new benchmarks to test how much space we use to store n keys 
> in LFS.
> The benchmark should accept arguments:
> 1. Range of entries which will be put.
> 2. Step: how often we report the store size.
> Case for example params (range 1_000_000, step 100_000):
> 1. Put 100_000 entries, save the put time.
> 2. Wait for the checkpoint to finish on all nodes.
> 3. Calculate the DB size on each server.
> 4. Write the size and time to the benchmark results.
> ...
> 6. Repeat the first 4 steps while the entry count in the cache is less than 1_000_000.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-6189) Benchmarks for check LFS used disk space

2018-12-25 Thread Oleg Ostanin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Ostanin resolved IGNITE-6189.
--
Resolution: Won't Fix

After a closer look I think that yardstick is not the right tool for disk space usage 
benchmarks.

> Benchmarks for check LFS used disk space
> 
>
> Key: IGNITE-6189
> URL: https://issues.apache.org/jira/browse/IGNITE-6189
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Chetaev
>Assignee: Oleg Ostanin
>Priority: Major
>
> Need to create new benchmarks to test how much space we use to store n keys 
> in LFS.
> The benchmark should accept arguments:
> 1. Range of entries which will be put.
> 2. Step: how often we report the store size.
> Case for example params (range 1_000_000, step 100_000):
> 1. Put 100_000 entries, save the put time.
> 2. Wait for the checkpoint to finish on all nodes.
> 3. Calculate the DB size on each server.
> 4. Write the size and time to the benchmark results.
> ...
> 6. Repeat the first 4 steps while the entry count in the cache is less than 1_000_000.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10785) MVCC: Grid can hang if transaction is failed to rollback

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728727#comment-16728727
 ] 

ASF GitHub Bot commented on IGNITE-10785:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5735


> MVCC: Grid can hang if transaction is failed to rollback
> 
>
> Key: IGNITE-10785
> URL: https://issues.apache.org/jira/browse/IGNITE-10785
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Roman Kondakov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: Hanging, mvcc_stabilization_stage_1, transactions
> Fix For: 2.8
>
>
> Sometimes the grid can hang if a transaction fails to roll back. Reproducer:
>  
> {noformat}
> [2018-12-14 08:48:13,890][WARN 
> ][sys-stripe-9-#12552%transactions.TxRollbackAsyncWithPersistenceTest2%][GridDhtColocatedCache]
>   Failed to acquire lock (transaction has been completed): 
> GridCacheVersion [topVer=156257270, order=1544777270268, nodeOrder=4]
> [2018-12-14 
> 08:48:13,893][ERROR][sys-stripe-9-#12552%transactions.TxRollbackAsyncWithPersistenceTest2%][GridDhtColocatedCache]
>   Failed to rollback the transaction: GridDhtTxLocal 
> [nearNodeId=b4ff2bcc-dc6a-49c2-a2ab-f26bae6b68c1, 
> nearFutId=282d09ca761-8ff10d65-a1d6-4f40-9fd7-afb7afa0b25c, nearMiniId=1, 
> nearFinFutId=382d09ca761-8ff10d65-a1d6-4f40-9fd7-afb7afa0b25c, 
> nearFinMiniId=1, nearXidVer=GridCacheVersion [topVer=156257270, 
> order=1544777270268, nodeOrder=4], lb=null, super=GridDhtTxLocalAdapter 
> [nearOnOriginatingNode=false, nearNodes=KeySetView [], dhtNodes=KeySetView 
> [], explicitLock=false, super=IgniteTxLocalAdapter [completedBase=null, 
> sndTransformedVals=false, depEnabled=false, txState=IgniteTxStateImpl 
> [activeCacheIds=[], recovery=null, mvccEnabled=null, txMap=HashSet []], 
> mvccWaitTxs=null, qryEnlisted=false, forceSkipCompletedVers=false, 
> super=IgniteTxAdapter [xidVer=GridCacheVersion [topVer=156257270, 
> order=1544777270269, nodeOrder=3], writeVer=null, implicit=false, loc=true, 
> threadId=13949, startTime=1544777293884, 
> nodeId=cb0ed489-aa1c-4e7a-a196-3bea9e52, startVer=GridCacheVersion 
> [topVer=156257270, order=1544777270269, nodeOrder=3], endVer=null, 
> isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=0, 
> sysInvalidate=false, sys=false, plc=2, commitVer=GridCacheVersion 
> [topVer=156257270, order=1544777270269, nodeOrder=3], finalizing=NONE, 
> invalidParts=null, state=ROLLED_BACK, timedOut=false, 
> topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], 
> mvccSnapshot=MvccSnapshotResponse [futId=82, crdVer=1544777266400, cntr=848, 
> opCntr=1, txs=[640, 769, 706, 387, 707, 708, 582, 646, 583, 586, 847, 596, 
> 660, 724, 661, 598, 599, 603, 667, 731, 685, 494, 686, 688, 561, 629, 570, 
> 766, 639], cleanupVer=263, tracking=0], parentTx=null, duration=0ms, 
> onePhaseCommit=false], size=0]]]
> class org.apache.ignite.IgniteCheckedException: Failed to finish transaction 
> [commit=false, 
> tx=GridDhtTxLocal[xid=df386eba761--0950-4bf6--0003, 
> xidVersion=GridCacheVersion [topVer=156257270, order=1544777270269, 
> nodeOrder=3], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, 
> state=ROLLED_BACK, invalidate=false, rollbackOnly=true, 
> nodeId=cb0ed489-aa1c-4e7a-a196-3bea9e52, timeout=0, duration=0]]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:482)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.rollbackDhtLocalAsync(GridDhtTxLocal.java:588)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.rollbackDhtLocal(GridDhtTxLocal.java:563)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.initTxTopologyVersion(GridDhtTransactionalCacheAdapter.java:2178)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.processNearTxEnlistRequest(GridDhtTransactionalCacheAdapter.java:2016)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter.access$900(GridDhtTransactionalCacheAdapter.java:112)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:229)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$14.apply(GridDhtTransactionalCacheAdapter.java:227)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1127)
>   at 
> 

[jira] [Commented] (IGNITE-10648) Ignite hang to stop if node wasn't started completely. GridTcpRestNioListener hangs on latch.

2018-12-25 Thread Vyacheslav Koptilin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728722#comment-16728722
 ] 

Vyacheslav Koptilin commented on IGNITE-10648:
--

Hello [~v.pyatkov],

In general, this PR looks good to me. I would change the log message to the 
following: {{Marshaller is not initialized.}}

> Ignite hang to stop if node wasn't started completely. GridTcpRestNioListener 
> hangs on latch.
> -
>
> Key: IGNITE-10648
> URL: https://issues.apache.org/jira/browse/IGNITE-10648
> Project: Ignite
>  Issue Type: Bug
>Reporter: Pavel Voronkin
>Assignee: Vladislav Pyatkov
>Priority: Major
>
> If Ignition.start waits on rebalance, GridRestProcessor is not started 
> yet; if we then call Ignition.stop, 
> GridTcpRestNioListener hangs on 
> if (marshMapLatch.getCount() > 0)
>  U.awaitQuiet(marshMapLatch);
> because the latch wasn't counted down on start.
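
One defensive variant of that wait, shown only as an illustration of the hang 
mechanism (this is not necessarily the fix in the PR; the timeout value and the 
plain stderr message are assumptions):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class BoundedLatchWaitSketch {
    /** Waits for marshaller initialization, but never blocks node stop forever. */
    static void awaitMarshallerInit(CountDownLatch marshMapLatch) throws InterruptedException {
        if (marshMapLatch.getCount() > 0 && !marshMapLatch.await(10, TimeUnit.SECONDS))
            System.err.println("Marshaller is not initialized; skipping wait on stop.");
    }
}
{code}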



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9184) Cluster hangs during concurrent node client and server nodes restart

2018-12-25 Thread Mikhail Cherkasov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728720#comment-16728720
 ] 

Mikhail Cherkasov commented on IGNITE-9184:
---

I simplified the reproducer; continuous queries are not involved here. To reproduce 
the issue we need to restart client and server nodes concurrently, see the new 
reproducer StressTest2.java. A rough outline of the scenario is sketched below.
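
Rough outline of the concurrent-restart stress scenario (this is not the attached 
StressTest2.java; node names, iteration count and the {{cfg}} helper are 
placeholders):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConcurrentRestartSketch {
    public static void main(String[] args) throws Exception {
        Ignition.start(cfg("server-0", false)); // A node that stays up the whole time.
        Ignition.start(cfg("server-1", false));
        Ignition.start(cfg("client-1", true));

        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (int i = 0; i < 50; i++) {
            // Restart a server node and a client node at the same time.
            Future<?> srv = pool.submit(() -> {
                Ignition.stop("server-1", true);
                Ignition.start(cfg("server-1", false));
            });

            Future<?> cli = pool.submit(() -> {
                Ignition.stop("client-1", true);
                Ignition.start(cfg("client-1", true));
            });

            srv.get();
            cli.get();
        }

        pool.shutdown();
    }

    private static IgniteConfiguration cfg(String name, boolean client) {
        return new IgniteConfiguration().setIgniteInstanceName(name).setClientMode(client);
    }
}
{code}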

> Cluster hangs during concurrent node client and server nodes restart
> 
>
> Key: IGNITE-9184
> URL: https://issues.apache.org/jira/browse/IGNITE-9184
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: StressTest2.java, logs, stacktrace
>
>
> Please check the attached test case and stack trace.
> I can see: "Failed to wait for initial partition map exchange" message.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9184) Cluster hangs during concurrent node client and server nodes restart

2018-12-25 Thread Mikhail Cherkasov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-9184:
--
Summary: Cluster hangs during concurrent node client and server nodes 
restart  (was: Cluster hangs during concurrent node restart and continues query 
registration)

> Cluster hangs during concurrent node client and server nodes restart
> 
>
> Key: IGNITE-9184
> URL: https://issues.apache.org/jira/browse/IGNITE-9184
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: StressTest2.java, logs, stacktrace
>
>
> Please check the attached test case and stack trace.
> I can see: "Failed to wait for initial partition map exchange" message.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9184) Cluster hangs during concurrent node restart and continues query registration

2018-12-25 Thread Mikhail Cherkasov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-9184:
--
Attachment: (was: StressTest.java)

> Cluster hangs during concurrent node restart and continues query registration
> -
>
> Key: IGNITE-9184
> URL: https://issues.apache.org/jira/browse/IGNITE-9184
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: StressTest2.java, logs, stacktrace
>
>
> Please check the attached test case and stack trace.
> I can see: "Failed to wait for initial partition map exchange" message.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9184) Cluster hangs during concurrent node restart and continues query registration

2018-12-25 Thread Mikhail Cherkasov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov updated IGNITE-9184:
--
Attachment: StressTest2.java

> Cluster hangs during concurrent node restart and continues query registration
> -
>
> Key: IGNITE-9184
> URL: https://issues.apache.org/jira/browse/IGNITE-9184
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.6
>Reporter: Mikhail Cherkasov
>Assignee: Mikhail Cherkasov
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: StressTest.java, StressTest2.java, logs, stacktrace
>
>
> Please check the attached test case and stack trace.
> I can see: "Failed to wait for initial partition map exchange" message.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10810) [ML] Import models from MLeap

2018-12-25 Thread Anton Dmitriev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728718#comment-16728718
 ] 

Anton Dmitriev commented on IGNITE-10810:
-

The purpose is to load the whole pipeline (or "transformer" in terms of MLeap), 
but it is going to be loaded only for inference. In this context we don't need to 
merge the MLeap Pipeline with the Ignite Pipeline API. 

The idea is to use the MLeap Runtime library to perform inference and integrate it 
into Ignite so that users can do distributed inference using the Ignite 
infrastructure.

> [ML] Import models from MLeap
> -
>
> Key: IGNITE-10810
> URL: https://issues.apache.org/jira/browse/IGNITE-10810
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.8
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.8
>
>
> We want to have an ability to import models saved using MLeap library.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10743) MVCC: Mute flaky mvcc tests

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728716#comment-16728716
 ] 

ASF GitHub Bot commented on IGNITE-10743:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5724


> MVCC: Mute flaky mvcc tests
> ---
>
> Key: IGNITE-10743
> URL: https://issues.apache.org/jira/browse/IGNITE-10743
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc
>Reporter: Roman Kondakov
>Assignee: Roman Kondakov
>Priority: Major
>  Labels: mvcc_stabilization_stage_1
> Fix For: 2.8
>
>
> We should mute all flaky MVCC tests on TC with links to the appropriate tickets. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10810) [ML] Import models from MLeap

2018-12-25 Thread Aleksey Zinoviev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728713#comment-16728713
 ] 

Aleksey Zinoviev commented on IGNITE-10810:
---

Do you mean loading only models, or whole pipelines?

The Pipeline API in Ignite is in a draft version now.

> [ML] Import models from MLeap
> -
>
> Key: IGNITE-10810
> URL: https://issues.apache.org/jira/browse/IGNITE-10810
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.8
>Reporter: Anton Dmitriev
>Assignee: Anton Dmitriev
>Priority: Major
> Fix For: 2.8
>
>
> We want to have an ability to import models saved using MLeap library.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10816) MVCC: create benchmarks for bulk update operations.

2018-12-25 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-10816:
--
Ignite Flags:   (was: Docs Required)

> MVCC: create benchmarks for bulk update operations.
> ---
>
> Key: IGNITE-10816
> URL: https://issues.apache.org/jira/browse/IGNITE-10816
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc, sql, yardstick
>Reporter: Andrew Mashenkov
>Priority: Major
>
> For now, we have no benchmark for bulk update operations (putAll or 
> multiple SQL inserts within a single transaction) that can be run in MVCC mode.
> 1. We should adapt the existing PutAllTx benchmarks, as they can fail due to 
> write conflicts.
> 2. We should add SQL benchmarks for batched insert/update operations within the 
> same Tx, similar to the existing putAll benchmarks.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10816) MVCC: create benchmarks for bulk update operations.

2018-12-25 Thread Andrew Mashenkov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mashenkov updated IGNITE-10816:
--
Issue Type: Task  (was: Bug)

> MVCC: create benchmarks for bulk update operations.
> ---
>
> Key: IGNITE-10816
> URL: https://issues.apache.org/jira/browse/IGNITE-10816
> Project: Ignite
>  Issue Type: Task
>  Components: mvcc, sql, yardstick
>Reporter: Andrew Mashenkov
>Priority: Major
>
> For now, we have no benchmark for bulk update operations (putAll or 
> multiple SQL inserts within a single transaction) that can be run in MVCC mode.
> 1. We should adapt the existing PutAllTx benchmarks, as they can fail due to 
> write conflicts.
> 2. We should add SQL benchmarks for batched insert/update operations within the 
> same Tx, similar to the existing putAll benchmarks.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10815) NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread Anton Kurbanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kurbanov updated IGNITE-10815:

Summary: NullPointerException in InitNewCoordinatorFuture.init() leads to 
cluster hang  (was: NullPointerException during InitNewCoordinatorFuture.init() 
leads to cluster hang)

> NullPointerException in InitNewCoordinatorFuture.init() leads to cluster hang
> -
>
> Key: IGNITE-10815
> URL: https://issues.apache.org/jira/browse/IGNITE-10815
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Anton Kurbanov
>Priority: Critical
>
> Possible scenario to reproduce:
> 1. Force few consecutive exchange merges and finish.
> 2. Trigger exchange.
> 3. Shutdown coordinator node before sending/receiving full partitions message.
>  
> Stacktrace:
> {code:java}
> 2018-12-24 15:54:02,664 sys-#48%gg% ERROR 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
>  - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77
> java.lang.NullPointerException: null
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
>  ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
>  [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
> [ignite-core-2.4.13.b4.jar:2.4.13.b4]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_171]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_171]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10815) NullPointerException during InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread Anton Kurbanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Kurbanov updated IGNITE-10815:

Description: 
Possible scenario to reproduce:

1. Force few consecutive exchange merges and finish.

2. Trigger exchange.

3. Shutdown coordinator node before sending/receiving full partitions message.

 

Stacktrace:
{code:java}
2018-12-24 15:54:02,664 sys-#48%gg% ERROR 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
 - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77

java.lang.NullPointerException: null
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.4.13.b4.jar:2.4.13.b4]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_171]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
{code}
 

  was:
Possible scenario to reproduce:

1. Force few consecutive exchange merges and finish.

2. Trigger exchange.

3. Shutdown coordinator node before sending/receiving full partitions message.

 

Stacktrace:

2018-12-24 15:54:02,664 [sys-#48%gg%] ERROR 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
 - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77

java.lang.NullPointerException: null
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 

[jira] [Created] (IGNITE-10816) MVCC: create benchmarks for bulk update operations.

2018-12-25 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-10816:
-

 Summary: MVCC: create benchmarks for bulk update operations.
 Key: IGNITE-10816
 URL: https://issues.apache.org/jira/browse/IGNITE-10816
 Project: Ignite
  Issue Type: Bug
  Components: mvcc, sql, yardstick
Reporter: Andrew Mashenkov


For now, we have no benchmark for bulk update operations (putAll or 
multiple SQL inserts within a single transaction) that can be run in MVCC mode.

1. We should adapt the existing PutAllTx benchmarks, as they can fail due to write 
conflicts.
2. We should add SQL benchmarks for batched insert/update operations within the 
same Tx, similar to the existing putAll benchmarks. A sketch of the measured 
operation is shown below.
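
Minimal sketch of the kind of bulk update such a benchmark would measure (cache 
name, key range, atomicity mode and transaction mode are arbitrary choices for the 
sketch; use TRANSACTIONAL_SNAPSHOT to exercise the MVCC path):

{code:java}
import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class BulkUpdateTxSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, Integer>("tx-cache")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

        // A single-transaction putAll batch; with MVCC this is where write
        // conflicts can surface and the benchmark may need a retry loop.
        try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            Map<Integer, Integer> batch = new TreeMap<>(); // Ordered keys to avoid deadlocks.

            for (int i = 0; i < 500; i++)
                batch.put(i, i);

            cache.putAll(batch);

            tx.commit();
        }
    }
}
{code}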

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10803) [ML] Add prototype LogReg loading from PMML format

2018-12-25 Thread Aleksey Zinoviev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Zinoviev updated IGNITE-10803:
--
Summary: [ML] Add prototype LogReg loading from PMML format  (was: [ML] Add 
prototype LinearRegression loading from PMML format)

> [ML] Add prototype LogReg loading from PMML format
> --
>
> Key: IGNITE-10803
> URL: https://issues.apache.org/jira/browse/IGNITE-10803
> Project: Ignite
>  Issue Type: Sub-task
>  Components: ml
>Reporter: Aleksey Zinoviev
>Assignee: Aleksey Zinoviev
>Priority: Major
>
> Generate or get existing PMML model for known dataset to load and predict new 
> data in Ignite



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728712#comment-16728712
 ] 

Vladimir Ozerov commented on IGNITE-10784:
--

[~pkouznet], please note that info about the MVCC mode is already available in 
{{CACHES.ATOMICITY_MODE}}. I doubt that we need constant fields, as we are 
implementing our own custom view, not {{INFORMATION_SCHEMA}}. Table size 
approximation is not available in Ignite at the moment. We will have it in the 
future [1].

[1] IGNITE-6079

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, PostgreSQL) and see if any other useful 
> information could be exposed (taking into account that a lot of engine properties 
> are already exposed through the {{CACHES}} view)
> Starting point: {{SqlSystemView}}
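
Hypothetical usage once such a view exists (the {{IGNITE.TABLES}} name and its 
columns below are assumptions derived from the minimal field list above, not an 
existing view):

{code:java}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class TablesViewSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // System views are reachable from any SQL-enabled cache context.
        List<List<?>> rows = ignite.getOrCreateCache("default").query(
            new SqlFieldsQuery(
                "SELECT SCHEMA_NAME, TABLE_NAME, CACHE_NAME, CACHE_ID FROM IGNITE.TABLES"))
            .getAll();

        rows.forEach(System.out::println);
    }
}
{code}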



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10815) NullPointerException during InitNewCoordinatorFuture.init() leads to cluster hang

2018-12-25 Thread Anton Kurbanov (JIRA)
Anton Kurbanov created IGNITE-10815:
---

 Summary: NullPointerException during 
InitNewCoordinatorFuture.init() leads to cluster hang
 Key: IGNITE-10815
 URL: https://issues.apache.org/jira/browse/IGNITE-10815
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Anton Kurbanov


Possible scenario to reproduce:

1. Force few consecutive exchange merges and finish.

2. Trigger exchange.

3. Shutdown coordinator node before sending/receiving full partitions message.

 

Stacktrace:

2018-12-24 15:54:02,664 [sys-#48%gg%] ERROR 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture
 - Failed to init new coordinator future: bd74f7ed-6984-4f78-9941-480df673ab77

java.lang.NullPointerException: null
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.events(GridDhtPartitionsExchangeFuture.java:534)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1790)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager$18.applyx(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.forAllRegisteredCacheGroups(CacheAffinitySharedManager.java:1107)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initCoordinatorCaches(CacheAffinitySharedManager.java:1738)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.InitNewCoordinatorFuture.init(InitNewCoordinatorFuture.java:104)
 ~[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3439)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$8$1.call(GridDhtPartitionsExchangeFuture.java:3435)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6720)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
 [ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
[ignite-core-2.4.13.b4.jar:2.4.13.b4]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_171]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_171]
 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10814) Exceptions thrown in InitNewCoordinatorFuture.init() are ignored

2018-12-25 Thread Anton Kurbanov (JIRA)
Anton Kurbanov created IGNITE-10814:
---

 Summary: Exceptions thrown in InitNewCoordinatorFuture.init() are 
ignored
 Key: IGNITE-10814
 URL: https://issues.apache.org/jira/browse/IGNITE-10814
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.4
Reporter: Anton Kurbanov


Exceptions thrown by InitNewCoordinatorFuture.init() called from 
GridDhtPartitionsExchangeFuture.onNodeLeft is ignored silently and may result 
in cluster hang without cause seen in logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-10379) SQL: Extract partition info from BETWEEN and range conditions for integer types

2018-12-25 Thread Vladimir Ozerov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov reassigned IGNITE-10379:


Assignee: Alexander Lapin

> SQL: Extract partition info from BETWEEN and range conditions for integer 
> types
> ---
>
> Key: IGNITE-10379
> URL: https://issues.apache.org/jira/browse/IGNITE-10379
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: iep-24
>
> If there is a range condition on affinity column of integer type, we may try 
> to extract partition info from it in a way similar to IN clause [1]:
> {{x BETWEEN 1 and 5}} -> {{x IN (1, 2, 3, 4, 5)}}
> {{x > 1 and x <= 5}} -> {{x IN (2, 3, 4, 5)}}
> [1] IGNITE-9632



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10808) Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage

2018-12-25 Thread Denis Mekhanikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov updated IGNITE-10808:
--
Fix Version/s: 2.8

> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Denis Mekhanikov
>Priority: Major
>  Labels: discovery
> Fix For: 2.8
>
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency` 
> milliseconds, and the message will be put at the top of the queue (because it 
> is a high priority message).
> If processing one message takes more than `metricsUpdateFrequency`, then 
> multiple `TcpDiscoveryMetricsUpdateMessage` instances will be in the queue. A long 
> enough delay (e.g. caused by a network glitch or GC) may lead to the queue 
> building up tens of metrics update messages which are essentially useless to 
> process. Finally, if processing a message on average takes a little more 
> than `metricsUpdateFrequency` (even for a relatively short period of time, 
> say, for a minute due to network issues) then the message worker will end up 
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then 
> torn down very slowly, causing "Failed to wait for PME" messages.
> Need to change ServerImpl's SocketReader not to put another metrics update 
> message at the top of the queue if it already has one (or to replace the one at 
> the top with the new one).
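
A rough, self-contained illustration of that queue policy (plain Java, not the 
actual ServerImpl code; the marker type stands in for 
TcpDiscoveryMetricsUpdateMessage):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

public class MetricsQueueSketch {
    /** Marker standing in for TcpDiscoveryMetricsUpdateMessage in this sketch. */
    static class MetricsUpdateMessage { }

    private final Deque<Object> queue = new ArrayDeque<>();

    /** Adds a high-priority metrics update without letting stale updates pile up at the head. */
    synchronized void addMetricsUpdate(MetricsUpdateMessage msg) {
        // If the head of the queue is already an unprocessed metrics update,
        // replace it instead of stacking another one on top.
        if (queue.peekFirst() instanceof MetricsUpdateMessage)
            queue.pollFirst();

        queue.addFirst(msg);
    }

    /** Regular messages keep their usual FIFO order at the tail. */
    synchronized void addOrdinary(Object msg) {
        queue.addLast(msg);
    }

    synchronized Object poll() {
        return queue.pollFirst();
    }
}
{code}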



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (IGNITE-10808) Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage

2018-12-25 Thread Denis Mekhanikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov reassigned IGNITE-10808:
-

Assignee: Denis Mekhanikov

> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
>  Issue Type: Bug
>Reporter: Stanislav Lukyanov
>Assignee: Denis Mekhanikov
>Priority: Major
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency` 
> milliseconds, and the message will be put at the top of the queue (because it 
> is a high priority message).
> If processing one message takes more than `metricsUpdateFrequency` then 
> multiple `TcpDiscoveryMetricsUpdateMessage` will be in the queue. A long 
> enough delay (e.g. caused by a network glitch or GC) may lead to the queue 
> building up tens of metrics update messages which are essentially useless to 
> be processed. Finally, if processing a message on average takes a little more 
> than `metricsUpdateFrequency` (even for a relatively short period of time, 
> say, for a minute due to network issues) then the message worker will end up 
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then 
> torn down very slowly, causing "Failed to wait for PME" messages.
> We need to change ServerImpl's SocketReader not to put another metrics update 
> message at the top of the queue if it already has one (or to replace the one 
> at the top with the new one).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10379) SQL: Extract partition info from BETWEEN and range conditions for integer types

2018-12-25 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728708#comment-16728708
 ] 

Vladimir Ozerov commented on IGNITE-10379:
--

Design considerations:
1) Consider a limit on the number of elements between the bounds after which 
partition pruning is not very beneficial and hence could be skipped, e.g. 16 
elements. Consider adding it to {{IgniteSystemProperties}}.
2) Maybe it makes sense to move greater/less optimizations to a separate 
ticket, as they will require more complex expression tree analysis.
3) For BETWEEN with parameters we are likely to need an additional node type 
(e.g. RangeNode). For constants, a group node will be enough.
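
For illustration, a minimal sketch of the constant-bounds case from point 1; the 
threshold constant, class and method names are assumptions for this sketch, not 
the actual SQL engine code:

{code:java}
// Expand a small integer range on the affinity column into explicit values
// and map each value to a partition via the public Affinity API.
import java.util.HashSet;
import java.util.Set;
import org.apache.ignite.Ignite;

public class RangePartitionSketch {
    /** Give up if the range is wider than this (cf. design note 1). */
    private static final int MAX_RANGE_SIZE = 16;

    /** Target partitions for "affCol BETWEEN lower AND upper", or null if pruning is not worthwhile. */
    public static Set<Integer> partitionsForRange(Ignite ignite, String cacheName, long lower, long upper) {
        if (upper < lower || upper - lower + 1 > MAX_RANGE_SIZE)
            return null; // empty or too wide - fall back to querying all partitions

        Set<Integer> parts = new HashSet<>();

        for (long v = lower; v <= upper; v++)
            parts.add(ignite.affinity(cacheName).partition(v));

        return parts;
    }
}
{code}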

> SQL: Extract partition info from BETWEEN and range conditions for integer 
> types
> ---
>
> Key: IGNITE-10379
> URL: https://issues.apache.org/jira/browse/IGNITE-10379
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Priority: Major
>  Labels: iep-24
>
> If there is a range condition on affinity column of integer type, we may try 
> to extract partition info from it in a way similar to IN clause [1]:
> {{x BETWEEN 1 and 5}} -> {{x IN (1, 2, 3, 4, 5)}}
> {{x > 1 and x <= 5}} -> {{x IN (2, 3, 4, 5)}}
> [1] IGNITE-9632



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10808) Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage

2018-12-25 Thread Denis Mekhanikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov updated IGNITE-10808:
--
Affects Version/s: 2.7

> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Denis Mekhanikov
>Priority: Major
>  Labels: discovery
> Fix For: 2.8
>
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency` 
> milliseconds, and the message will be put at the top of the queue (because it 
> is a high priority message).
> If processing one message takes more than `metricsUpdateFrequency` then 
> multiple `TcpDiscoveryMetricsUpdateMessage` will be in the queue. A long 
> enough delay (e.g. caused by a network glitch or GC) may lead to the queue 
> building up tens of metrics update messages which are essentially useless to 
> be processed. Finally, if processing a message on average takes a little more 
> than `metricsUpdateFrequency` (even for a relatively short period of time, 
> say, for a minute due to network issues) then the message worker will end up 
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then 
> torn down very slowly, causing "Failed to wait for PME" messages.
> We need to change ServerImpl's SocketReader not to put another metrics update 
> message at the top of the queue if it already has one (or to replace the one 
> at the top with the new one).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10808) Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage

2018-12-25 Thread Denis Mekhanikov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov updated IGNITE-10808:
--
Labels: discovery  (was: )

> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Denis Mekhanikov
>Priority: Major
>  Labels: discovery
> Fix For: 2.8
>
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency` 
> milliseconds, and the message will be put at the top of the queue (because it 
> is a high priority message).
> If processing one message takes more than `metricsUpdateFrequency` then 
> multiple `TcpDiscoveryMetricsUpdateMessage` will be in the queue. A long 
> enough delay (e.g. caused by a network glitch or GC) may lead to the queue 
> building up tens of metrics update messages which are essentially useless to 
> be processed. Finally, if processing a message on average takes a little more 
> than `metricsUpdateFrequency` (even for a relatively short period of time, 
> say, for a minute due to network issues) then the message worker will end up 
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then 
> torn down very slowly, causing "Failed to wait for PME" messages.
> We need to change ServerImpl's SocketReader not to put another metrics update 
> message at the top of the queue if it already has one (or to replace the one 
> at the top with the new one).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10715) Remove boilerplate of settings 'TcpDiscoveryVmIpFinder' in tests

2018-12-25 Thread Vyacheslav Daradur (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728699#comment-16728699
 ] 

Vyacheslav Daradur commented on IGNITE-10715:
-

The ticket has been reopened because of the failure of 
{{IgniteClientConnectTest#testClientConnectToBigTopology}}.
The fix is already prepared: [https://github.com/apache/ignite/pull/5739]

Waiting for the test results.

> Remove boilerplate of settings 'TcpDiscoveryVmIpFinder' in tests
> 
>
> Key: IGNITE-10715
> URL: https://issues.apache.org/jira/browse/IGNITE-10715
> Project: Ignite
>  Issue Type: Task
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Minor
> Fix For: 2.8
>
>
> It's necessary to remove boilerplate of settings 'TcpDiscoveryVmIpFinder' in 
> tests since this is default IP finder in tests after IGNITE-10555.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728692#comment-16728692
 ] 

Pavel Kuznetsov edited comment on IGNITE-10784 at 12/25/18 12:35 PM:
-

Let's also add "constant" fields: CATALOG_NAME ("IGNITE"), TABLE_TYPE ("TABLE").

It would also be good to have a TABLE_ROWS field; I should think about whether 
we can do this without a scan query every time (size of the PK index?).


was (Author: pkouznet):
Lets add also "constant" fields : CATALOG_NAME ("IGNITE"), TABLE_TYPE ("TABLE")

It is also good to have TABLE_ROWS field, should think, if we can do without 
scan (size of pk index?).

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10808) Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage

2018-12-25 Thread Stanislav Lukyanov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728702#comment-16728702
 ] 

Stanislav Lukyanov commented on IGNITE-10808:
-

There are two parts to this problem:
1) The queue may grow indefinitely if metrics updates are generated faster than 
they're processed.
This can be solved by removing all of the updates but the latest one.
When a new metrics update is added to the queue, we should check if there is 
already another metrics update in the queue. If there is, replace the old one 
with the new one (at the same place in the queue). We should be careful and 
only replace metrics updates on their first ring pass - messages on the 
second ring pass should be left in the queue.

2) The metrics updates may take too much of the discovery worker capacity, 
leading to starvation-type issues.
This can be solved by making metrics updates normal priority instead of high 
priority.
To avoid triggering failure detection we need to make sure that all messages, 
not only metrics updates, reset the failure detection timer.
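
A minimal sketch of the queue-collapsing idea from part 1, using a plain deque 
and a simplified message stand-in; the real fix lives in ServerImpl and may 
replace the stale message in place rather than re-adding it at the head:

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

public class MetricsQueueSketch {
    /** Simplified stand-in for discovery messages such as TcpDiscoveryMetricsUpdateMessage. */
    static class Msg {
        final boolean metricsUpdate;
        final boolean firstRingPass;

        Msg(boolean metricsUpdate, boolean firstRingPass) {
            this.metricsUpdate = metricsUpdate;
            this.firstRingPass = firstRingPass;
        }
    }

    private final Deque<Msg> queue = new ArrayDeque<>();

    /** Adds a message, keeping at most one pending first-pass metrics update. */
    synchronized void add(Msg msg) {
        if (msg.metricsUpdate) {
            if (msg.firstRingPass)
                // Drop any stale first-pass metrics update; second-pass ones are kept.
                queue.removeIf(m -> m.metricsUpdate && m.firstRingPass);

            queue.addFirst(msg); // high-priority messages go to the head
        }
        else
            queue.addLast(msg);
    }
}
{code}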

> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
>  Issue Type: Bug
>Reporter: Stanislav Lukyanov
>Priority: Major
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency` 
> milliseconds, and the message will be put at the top of the queue (because it 
> is a high priority message).
> If processing one message takes more than `metricsUpdateFrequency` then 
> multiple `TcpDiscoveryMetricsUpdateMessage` will be in the queue. A long 
> enough delay (e.g. caused by a network glitch or GC) may lead to the queue 
> building up tens of metrics update messages which are essentially useless to 
> be processed. Finally, if processing a message on average takes a little more 
> than `metricsUpdateFrequency` (even for a relatively short period of time, 
> say, for a minute due to network issues) then the message worker will end up 
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then 
> torn down very slowly, causing "Failed to wait for PME" messages.
> We need to change ServerImpl's SocketReader not to put another metrics update 
> message at the top of the queue if it already has one (or to replace the one 
> at the top with the new one).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728698#comment-16728698
 ] 

Pavel Kuznetsov commented on IGNITE-10784:
--

Important note: many vendors expose such views under the "INFORMATION_SCHEMA" 
schema because it is part of the SQL ISO standard. Maybe we should change the 
name?
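
For context, a sketch of how such a view might be queried over the thin JDBC 
driver; the schema, view name and columns below are only the ticket's proposal 
(and the schema name is exactly the open question above), not an existing API:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TablesViewSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a local node and the thin JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT SCHEMA_NAME, TABLE_NAME, CACHE_NAME, CACHE_ID FROM IGNITE.TABLES")) {
            while (rs.next())
                System.out.println(rs.getString("SCHEMA_NAME") + '.' + rs.getString("TABLE_NAME"));
        }
    }
}
{code}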

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10621) Track all running queries on initial query node

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728695#comment-16728695
 ] 

ASF GitHub Bot commented on IGNITE-10621:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5620


> Track all running queries on initial query node
> ---
>
> Key: IGNITE-10621
> URL: https://issues.apache.org/jira/browse/IGNITE-10621
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>
> As of now, Ignite tracks running queries in a few places and uses 
> GridRunningQueryInfo to keep information about each running query.
> Unfortunately, not all running queries are tracked. We need to track all DML 
> and SELECT queries, and there should be a single point that tracks all running queries.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10621) Track all running queries on initial query node

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728696#comment-16728696
 ] 

ASF GitHub Bot commented on IGNITE-10621:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5663


> Track all running queries on initial query node
> ---
>
> Key: IGNITE-10621
> URL: https://issues.apache.org/jira/browse/IGNITE-10621
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.8
>
>
> As of now, Ignite tracks running queries in a few places and uses 
> GridRunningQueryInfo to keep information about each running query.
> Unfortunately, not all running queries are tracked. We need to track all DML 
> and SELECT queries, and there should be a single point that tracks all running queries.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728651#comment-16728651
 ] 

Pavel Kuznetsov edited comment on IGNITE-10784 at 12/25/18 12:33 PM:
-

Maybe we should duplicate the atomicity mode or introduce a new column "mvcc enabled".
Upd: I mean a column of boolean type, which is true if the owning cache has 
atomicity mode == TRANSACTIONAL_SNAPSHOT.


was (Author: pkouznet):
maybe we should duplicate atomic mode or introduce new column "mvcc enabled".

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728692#comment-16728692
 ] 

Pavel Kuznetsov commented on IGNITE-10784:
--

Lets add also "constant" fields : CATALOG_NAME ("IGNITE"), TABLE_TYPE ("TABLE")

It is also good to have TABLE_ROWS field, should think, if we can do without 
scan (size of pk index?).

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9120) Metadata writer does not propagate error to failure handler

2018-12-25 Thread Sergey Chugunov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728690#comment-16728690
 ] 

Sergey Chugunov commented on IGNITE-9120:
-

[~a-polyakov],

The change looks good to me. Thank you for the contribution!

> Metadata writer does not propagate error to failure handler
> ---
>
> Key: IGNITE-9120
> URL: https://issues.apache.org/jira/browse/IGNITE-9120
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexand Polyakov
>Assignee: Alexand Polyakov
>Priority: Major
>
> In logs
> {code:java}
> [WARN] [tcp-disco-msg-worker- # 2% DPL_GRID% DplGridNodeName%] 
> [o.a.i.i.p.c.b.CacheObjectBinaryProcessorImpl] Failed to save metadata for 
> typeId: 978611101; The exception was selected: there was no space left on the 
> device{code}
> Node does not shut down
> The number of stalled transactions begins to grow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10813) Run CheckpointReadLockFailureTest with JUnit4 runner

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728686#comment-16728686
 ] 

ASF GitHub Bot commented on IGNITE-10813:
-

GitHub user andrey-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/5743

IGNITE-10813 Run CheckpointReadLockFailureTest with JUnit4 runner



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrey-kuznetsov/ignite ignite-10813

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5743.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5743


commit d6f17ce28ebaae85f00be188319e9833d306f80d
Author: Andrey Kuznetsov 
Date:   2018-12-25T11:57:44Z

IGNITE-10813 Changed test runner to JUnit4.




> Run CheckpointReadLockFailureTest with JUnit4 runner
> 
>
> Key: IGNITE-10813
> URL: https://issues.apache.org/jira/browse/IGNITE-10813
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Trivial
>
> The test fails on TeamCity; it should be run with the JUnit4 runner.
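
For reference, a minimal sketch of what running a test "with the JUnit4 runner" 
usually looks like; the class below is only an illustration, the actual change 
is in the pull request above:

{code:java}
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

@RunWith(JUnit4.class)
public class JUnit4RunnerStyleExample {
    @Test
    public void annotatedMethodIsDiscoveredByJUnit4() {
        // With an explicit JUnit4 runner, @Test annotations (rather than the
        // legacy JUnit3 "testXxx" naming convention) drive test discovery.
    }
}
{code}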



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10003) Raise SYSTEM_WORKER_BLOCKED instead of CRITICAL_ERROR when checkpoint read lock timeout detected

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728684#comment-16728684
 ] 

ASF GitHub Bot commented on IGNITE-10003:
-

Github user andrey-kuznetsov closed the pull request at:

https://github.com/apache/ignite/pull/5084


> Raise SYSTEM_WORKER_BLOCKED instead of CRITICAL_ERROR when checkpoint read 
> lock timeout detected
> 
>
> Key: IGNITE-10003
> URL: https://issues.apache.org/jira/browse/IGNITE-10003
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.7
>Reporter: Andrey Kuznetsov
>Assignee: Andrey Kuznetsov
>Priority: Trivial
> Fix For: 2.8
>
>
> {{GridCacheDatabaseSharedManager#failCheckpointReadLock}} should report 
> {{SYSTEM_WORKER_BLOCKED}} to the failure handler: it is closer to the truth, 
> and the default consequences are not as severe as those of {{CRITICAL_ERROR}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (IGNITE-9120) Metadata writer does not propagate error to failure handler

2018-12-25 Thread Sergey Chugunov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-9120:

Comment: was deleted

(was: [~a-polyakov],

I believe we should do the same thing for marshaller mappings.)

> Metadata writer does not propagate error to failure handler
> ---
>
> Key: IGNITE-9120
> URL: https://issues.apache.org/jira/browse/IGNITE-9120
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexand Polyakov
>Assignee: Alexand Polyakov
>Priority: Major
>
> In logs
> {code:java}
> [WARN] [tcp-disco-msg-worker- # 2% DPL_GRID% DplGridNodeName%] 
> [o.a.i.i.p.c.b.CacheObjectBinaryProcessorImpl] Failed to save metadata for 
> typeId: 978611101; The exception was selected: there was no space left on the 
> device{code}
> Node does not shut down
> The number of stalled transactions begins to grow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10800) Add wal_mode parameter to yardstick properties file

2018-12-25 Thread Ilya Suntsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728671#comment-16728671
 ] 

Ilya Suntsov commented on IGNITE-10800:
---

Sorry, persistence disabled for the second benchmark.

> Add wal_mode parameter to yardstick properties file
> ---
>
> Key: IGNITE-10800
> URL: https://issues.apache.org/jira/browse/IGNITE-10800
> Project: Ignite
>  Issue Type: Improvement
>  Components: yardstick
>Affects Versions: 2.7
>Reporter: Ilya Suntsov
>Assignee: Oleg Ostanin
>Priority: Major
> Attachments: wal_mode.zip
>
>
> As I understand, we can enable persistence with a properties-file parameter. I 
> guess we need to add a parameter for the WAL mode.
> Expected behavior:
>  * When the configuration contains a region with persistence enabled, that 
> configuration should be used and the wal_mode value from the properties file 
> should be ignored. Warnings with the configuration details should also be 
> added to the logs of all nodes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10329) Create JDBC "query" and "query join" benchmarks and compare them with Postgres and MySQL

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728680#comment-16728680
 ] 

ASF GitHub Bot commented on IGNITE-10329:
-

Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/5479


> Create JDBC "query" and "query join" benchmarks and compare them with 
> Postgres and MySQL
> 
>
> Key: IGNITE-10329
> URL: https://issues.apache.org/jira/browse/IGNITE-10329
> Project: Ignite
>  Issue Type: Task
>  Components: sql, yardstick
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> Currently we have {{IgniteSqlQueryBenchmark}} and 
> {{IgniteSqlQueryJoinBenchmark}} benchmarks which query data over salary range 
> and optionally joins it with second table. Let's create a set of similar 
> benchmarks which will use JDBC to load and query data, and execute them 
> against one-node Ignite cluster, MySQL and Postgres.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10813) Run CheckpointReadLockFailureTest with JUnit4 runner

2018-12-25 Thread Andrey Kuznetsov (JIRA)
Andrey Kuznetsov created IGNITE-10813:
-

 Summary: Run CheckpointReadLockFailureTest with JUnit4 runner
 Key: IGNITE-10813
 URL: https://issues.apache.org/jira/browse/IGNITE-10813
 Project: Ignite
  Issue Type: Bug
Reporter: Andrey Kuznetsov
Assignee: Andrey Kuznetsov


The test fails on TeamCity; it should be run with the JUnit4 runner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10329) Create JDBC "query" and "query join" benchmarks and compare them with Postgres and MySQL

2018-12-25 Thread Vladimir Ozerov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728678#comment-16728678
 ] 

Vladimir Ozerov commented on IGNITE-10329:
--

Merged to master.

> Create JDBC "query" and "query join" benchmarks and compare them with 
> Postgres and MySQL
> 
>
> Key: IGNITE-10329
> URL: https://issues.apache.org/jira/browse/IGNITE-10329
> Project: Ignite
>  Issue Type: Task
>  Components: sql, yardstick
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> Currently we have {{IgniteSqlQueryBenchmark}} and 
> {{IgniteSqlQueryJoinBenchmark}} benchmarks which query data over salary range 
> and optionally joins it with second table. Let's create a set of similar 
> benchmarks which will use JDBC to load and query data, and execute them 
> against one-node Ignite cluster, MySQL and Postgres.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10580) H2 connection and statements are reused invalid for local sql queries

2018-12-25 Thread Taras Ledkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728673#comment-16728673
 ] 

Taras Ledkov commented on IGNITE-10580:
---

*Fix:* we have to detach the connection of a local query until its results are closed.
Tests are OK.
[~vozerov], please take a look.
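
For reference, a self-contained sketch of the reproduce scenario from the 
description below (the cache name and SQL text are arbitrary); it is not the 
committed test:

{code:java}
import java.util.Iterator;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalQueryReuseSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(
                new CacheConfiguration<Integer, Integer>("ints")
                    .setIndexedTypes(Integer.class, Integer.class));

            for (int i = 0; i < 100; i++)
                cache.put(i, i);

            SqlFieldsQuery qry0 = new SqlFieldsQuery("select _key from Integer");
            qry0.setLocal(true);

            // Query0: open an iterator for a local query, but do not drain it yet.
            Iterator<List<?>> it0 = cache.query(qry0).iterator();

            SqlFieldsQuery qry1 = new SqlFieldsQuery("select _key from Integer");
            qry1.setLocal(true);

            // Query1: same SQL text, no parameters, opened in the same thread.
            Iterator<List<?>> it1 = cache.query(qry1).iterator();

            // Before the fix, fetching from the first iterator could fail with
            // "The object is already closed [90007-197]".
            while (it0.hasNext())
                it0.next();
        }
    }
}
{code}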


> H2 connection and statements are reused invalid for local sql queries
> -
>
> Key: IGNITE-10580
> URL: https://issues.apache.org/jira/browse/IGNITE-10580
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>
> The thread-local connection & statement cache is used incorrectly for local 
> queries.
> Steps to reproduce:
> # Open an iterator for local query {{Query0}};
> # In the same thread open one more iterator for {{Query1}} (the SQL statement 
> must be equal to {{Query0}}'s and must not contain query parameters);
> # Fetch from the first iterator.
> The exception {{The object is already closed [90007-197]}} will be thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10800) Add wal_mode parameter to yardstick properties file

2018-12-25 Thread Ilya Suntsov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Suntsov updated IGNITE-10800:
--
Attachment: wal_mode.zip

> Add wal_mode parameter to yardstick properties file
> ---
>
> Key: IGNITE-10800
> URL: https://issues.apache.org/jira/browse/IGNITE-10800
> Project: Ignite
>  Issue Type: Improvement
>  Components: yardstick
>Affects Versions: 2.7
>Reporter: Ilya Suntsov
>Assignee: Oleg Ostanin
>Priority: Major
> Attachments: wal_mode.zip
>
>
> As I understand, we can enable persistence with a properties-file parameter. I 
> guess we need to add a parameter for the WAL mode.
> Expected behavior:
>  * When the configuration contains a region with persistence enabled, that 
> configuration should be used and the wal_mode value from the properties file 
> should be ignored. Warnings with the configuration details should also be 
> added to the logs of all nodes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10800) Add wal_mode parameter to yardstick properties file

2018-12-25 Thread Ilya Suntsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728669#comment-16728669
 ] 

Ilya Suntsov commented on IGNITE-10800:
---

[~oleg-ostanin] I've tested ver. 2.8.0-SNAPSHOT#20181224-sha1:6d298d99.

Configuration:

{noformat}

-cfg *${SCRIPT_DIR}*/../config/ignite-localhost-config.xml -cl -nn 
*${nodesNum}* -pds -wm BACKGROUND -b *${b}* -w *${w}* -d *${d}* -t *${t}* -sm 
*${sm}* -dn IgnitePutBenchmark -sn IgniteNode -ds 
*${ver}*atomic-put-*${b}*-backup,*\*

-cfg *${SCRIPT_DIR}*/../config/ignite-localhost-config.xml -cl -nn 
*${nodesNum}* -b *${b}* -w *${w}* -d *${d}* -t *${t}* -bs *1000* -col -sm 
PRIMARY_SYNC -dn IgnitePutAllTxBenchmark -sn IgniteNode -ds 
*${ver}*atomic-collocated-putAll-tx-*${b}*-backup

{noformat}

I expect one benchmark (atomic-put) with PDS and walMode BACKGROUND and one 
in-memory benchmark (atomic-collocated-putAll-tx), but I see one benchmark with 
PDS and walMode BACKGROUND and one with PDS and walMode LOG_ONLY.

Please take a look at the server logs in the attachment.

 

 

> Add wal_mode parameter to yardstick properties file
> ---
>
> Key: IGNITE-10800
> URL: https://issues.apache.org/jira/browse/IGNITE-10800
> Project: Ignite
>  Issue Type: Improvement
>  Components: yardstick
>Affects Versions: 2.7
>Reporter: Ilya Suntsov
>Assignee: Oleg Ostanin
>Priority: Major
>
> As I understand, we can enable persistence with a properties-file parameter. I 
> guess we need to add a parameter for the WAL mode.
> Expected behavior:
>  * When the configuration contains a region with persistence enabled, that 
> configuration should be used and the wal_mode value from the properties file 
> should be ignored. Warnings with the configuration details should also be 
> added to the logs of all nodes. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9184) Cluster hangs during concurrent node restart and continues query registration

2018-12-25 Thread Mikhail Cherkasov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728666#comment-16728666
 ] 

Mikhail Cherkasov commented on IGNITE-9184:
---

[~zstan] it hangs even on latest master:
{code:java}
[25-12-2018 13:21:25][WARN ][main][GridCachePartitionExchangeManager] Still 
waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture 
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=1f8184a1-4722-4855-83c4-70760229465b, addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 
127.0.0.1, 172.25.4.99], sockAddrs=HashSet [/0:0:0:0:0:0:0:1%lo0:0, 
/127.0.0.1:0, Mikhails-MBP.gridgain.local/172.25.4.99:0], discPort=0, 
order=705, intOrder=0, lastExchangeTime=1545731025479, loc=true, 
ver=2.7.0#19700101-sha1:, isClient=true], topVer=705, nodeId8=1f8184a1, 
msg=null, type=NODE_JOINED, tstamp=1545731025740], crd=TcpDiscoveryNode 
[id=4f70ea84-f356-4086-9795-ea004edb72e9, addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 
127.0.0.1, 172.25.4.99], sockAddrs=HashSet [/0:0:0:0:0:0:0:1%lo0:47500, 
/127.0.0.1:47500, Mikhails-MBP.gridgain.local/172.25.4.99:47500], 
discPort=47500, order=681, intOrder=343, lastExchangeTime=1545731025545, 
loc=false, ver=2.7.0#19700101-sha1:, isClient=false], 
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=705, 
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=1f8184a1-4722-4855-83c4-70760229465b, addrs=ArrayList [0:0:0:0:0:0:0:1%lo0, 
127.0.0.1, 172.25.4.99], sockAddrs=HashSet [/0:0:0:0:0:0:0:1%lo0:0, 
/127.0.0.1:0, Mikhails-MBP.gridgain.local/172.25.4.99:0], discPort=0, 
order=705, intOrder=0, lastExchangeTime=1545731025479, loc=true, 
ver=2.7.0#19700101-sha1:, isClient=true], topVer=705, nodeId8=1f8184a1, 
msg=null, type=NODE_JOINED, tstamp=1545731025740], nodeId=1f8184a1, 
evt=NODE_JOINED], added=true, initFut=GridFutureAdapter 
[ignoreInterrupts=false, state=DONE, res=true, hash=1416345449], init=true, 
lastVer=null, partReleaseFut=null, exchActions=ExchangeActions 
[startCaches=null, stopCaches=null, startGrps=[], stopGrps=[], resetParts=null, 
stateChangeRequest=null], affChangeMsg=null, initTs=1545731035406, 
centralizedAff=false, forceAffReassignment=false, exchangeLocE=null, 
cacheChangeFailureMsgSent=false, done=false, state=CLIENT, 
registerCachesFuture=GridFinishedFuture [resFlag=2], partitionsSent=false, 
partitionsReceived=false, delayedLatestMsg=GridDhtPartitionsFullMessage 
[parts=HashMap {-2100569601=GridDhtPartitionFullMap 
{502ade99-f598-428e-a142-2ff997eda6ab=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=703, minorTopVer=0], updateSeq=106, 
size=100], 060c1c8d-3137-4069-9c0f-a10f621e38db=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=701, minorTopVer=0], updateSeq=125, 
size=100], 4f70ea84-f356-4086-9795-ea004edb72e9=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=701, minorTopVer=0], updateSeq=157, 
size=100], 7da72f35-3d8e-44a1-9cd8-c7d9584fa945=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=703, minorTopVer=0], updateSeq=201, 
size=100]}, 2571410=GridDhtPartitionFullMap 
{502ade99-f598-428e-a142-2ff997eda6ab=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=703, minorTopVer=0], updateSeq=531, 
size=525], 060c1c8d-3137-4069-9c0f-a10f621e38db=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=701, minorTopVer=0], updateSeq=649, 
size=623], 4f70ea84-f356-4086-9795-ea004edb72e9=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=701, minorTopVer=0], updateSeq=929, 
size=708], 7da72f35-3d8e-44a1-9cd8-c7d9584fa945=GridDhtPartitionMap [moving=0, 
top=AffinityTopologyVersion [topVer=705, minorTopVer=0], updateSeq=1157, 
size=515]}}, partCntrs=IgniteDhtPartitionCountersMap [], 
partCntrs2=o.a.i.i.processors.cache.distributed.dht.preloader.IgniteDhtPartitionCountersMap2@223769f7,
 partHistSuppliers=IgniteDhtPartitionHistorySuppliersMap [], 
partsToReload=IgniteDhtPartitionsToReloadMap [], partsSizes=HashMap 
{-2100569601=UnmodifiableMap {0=0, 1=0, 2=0, 3=0, 4=0, 5=0, 6=0, 7=0, 8=0, 9=0, 
10=0, 11=0, 12=0, 13=0, 14=0, 15=0, 16=0, 17=0, 18=0, 19=0, 20=0, 21=0, 22=0, 
23=0, 24=0, 25=0, 26=0, 27=0, 28=0, 29=0, 30=0, 31=0, 32=0, 33=0, 34=0, 35=0, 
36=0, 37=0, 38=0, 39=0, 40=0, 41=0, 42=0, 43=0, 44=0, 45=0, 46=0, 47=0, 48=0, 
49=0, 50=0, 51=0, 52=0, 53=0, 54=0, 55=0, 56=0, 57=0, 58=0, 59=0, 60=0, 61=0, 
62=0, 63=0, 64=0, 65=0, 66=0, 67=0, 68=0, 69=0, 70=0, 71=0, 72=0, 73=0, 74=0, 
75=0, 76=0, 77=0, 78=0, 79=0, 80=0, 81=0, 82=0, 83=0, 84=0, 85=0, 86=0, 87=0, 
88=0, 89=0, 90=0, 91=0, 92=0, 93=0, 94=0, 95=0, 96=0, 97=0, 98=0, 99=0}, 
2571410=UnmodifiableMap {0=0, 1=null, 2=null, 3=0, 4=0, 5=0, 6=null, 7=0, 8=0, 
9=0, 10=0, 11=null, 12=0, 13=0, 14=0, 15=null, 16=null, 17=0, 18=0, 19=0, 
20=null, 21=0, 22=0, 23=0, 24=null, 25=0, 26=0, 27=0, 28=0, 29=0, 30=null, 
31=0, 32=0, 33=0, 

[jira] [Commented] (IGNITE-10579) IgniteCacheContinuousQueryReconnectTest.testReconnectServer is flaky in master.

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728665#comment-16728665
 ] 

ASF GitHub Bot commented on IGNITE-10579:
-

Github user NSAmelchev closed the pull request at:

https://github.com/apache/ignite/pull/5591


> IgniteCacheContinuousQueryReconnectTest.testReconnectServer is flaky in 
> master.
> ---
>
> Key: IGNITE-10579
> URL: https://issues.apache.org/jira/browse/IGNITE-10579
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Next tests are flaky in master: 
> IgniteCacheContinuousQueryReconnectTest.testReconnectServer
> IgniteCacheContinuousQueryReconnectTest.testReconnectClient
> Test exception: 
> {noformat}
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.putAndCheck(IgniteCacheContinuousQueryReconnectTest.java:111)
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.testReconnect(IgniteCacheContinuousQueryReconnectTest.java:179)
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.testReconnectServer(IgniteCacheContinuousQueryReconnectTest.java:93)
> {noformat}
> [Test 
> history.|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-4837559557126450615=%3Cdefault%3E=testDetails]
> From the logs I found that a possible reason is that the started node doesn't 
> see the cluster: 
> {noformat}
> startGrid(0);
>   Topology snapshot [ver=1, locNode=0b292f90, servers=1, clients=0
> startGrid(1); //client
>   Topology snapshot [ver=2, locNode=0b292f90, servers=1, clients=1
>   Topology snapshot [ver=2, locNode=693848f6, servers=1, clients=1
> startGrid(2);
>   Topology snapshot [ver=3, locNode=0b292f90, servers=2, clients=1
>   Topology snapshot [ver=3, locNode=693848f6, servers=2, clients=1
>   Topology snapshot [ver=3, locNode=99a406a5, servers=2, clients=1
> stopGrid(0);
>   Topology snapshot [ver=4, locNode=99a406a5, servers=1, clients=1
>   Topology snapshot [ver=4, locNode=693848f6, servers=1, clients=1
> startGrid(3);
>   Topology snapshot [ver=1, locNode=8d9ef192, servers=1, clients=0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-10579) IgniteCacheContinuousQueryReconnectTest.testReconnectServer is flaky in master.

2018-12-25 Thread Amelchev Nikita (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita resolved IGNITE-10579.
--
Resolution: Won't Fix

These tests were fixed by IGNITE-10555. See the test's history: 
[testReconnectServer|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-4837559557126450615=testDetails_IgniteTests24Java8=],
 
[testReconnectClient|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-93777049115891327=testDetails_IgniteTests24Java8=].

> IgniteCacheContinuousQueryReconnectTest.testReconnectServer is flaky in 
> master.
> ---
>
> Key: IGNITE-10579
> URL: https://issues.apache.org/jira/browse/IGNITE-10579
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> Next tests are flaky in master: 
> IgniteCacheContinuousQueryReconnectTest.testReconnectServer
> IgniteCacheContinuousQueryReconnectTest.testReconnectClient
> Test exception: 
> {noformat}
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.putAndCheck(IgniteCacheContinuousQueryReconnectTest.java:111)
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.testReconnect(IgniteCacheContinuousQueryReconnectTest.java:179)
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.IgniteCacheContinuousQueryReconnectTest.testReconnectServer(IgniteCacheContinuousQueryReconnectTest.java:93)
> {noformat}
> [Test 
> history.|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-4837559557126450615=%3Cdefault%3E=testDetails]
> From the logs I found that a possible reason is that the started node doesn't 
> see the cluster: 
> {noformat}
> startGrid(0);
>   Topology snapshot [ver=1, locNode=0b292f90, servers=1, clients=0
> startGrid(1); //client
>   Topology snapshot [ver=2, locNode=0b292f90, servers=1, clients=1
>   Topology snapshot [ver=2, locNode=693848f6, servers=1, clients=1
> startGrid(2);
>   Topology snapshot [ver=3, locNode=0b292f90, servers=2, clients=1
>   Topology snapshot [ver=3, locNode=693848f6, servers=2, clients=1
>   Topology snapshot [ver=3, locNode=99a406a5, servers=2, clients=1
> stopGrid(0);
>   Topology snapshot [ver=4, locNode=99a406a5, servers=1, clients=1
>   Topology snapshot [ver=4, locNode=693848f6, servers=1, clients=1
> startGrid(3);
>   Topology snapshot [ver=1, locNode=8d9ef192, servers=1, clients=0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-10811) Add new TC build-config to test Service Grid new and old implementations

2018-12-25 Thread Vyacheslav Daradur (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Daradur updated IGNITE-10811:

Description: 
It's necessary to add a new build configuration on TeamCity (after IGNITE-9607 
is merged) to be able to test the new and old implementations of the service 
processor.

In general, the new build plan should contain 2 test suites:
 1) For Service Grid related tests which *do not depend on* the service 
processor's implementation (like classic unit tests) and *should be executed 
once*;
 2) For Service Grid related tests which *depend on* the service processor's 
implementation and *should be executed twice*: in the new mode and in the old 
mode (-DIGNITE_EVENT_DRIVEN_SERVICE_PROCESSOR_ENABLED=false)

  was:
It's necessary to add new build configuration on TeamCity (after merge 
IGNITE-9607) to have ability testing new and old implementations of the service 
processor.

In general, new build plan we should contain 2 test suites:
 1) To place Service Grid related tests which *do not depend on* service 
processor's implementation (like classic unit tests) and *should be executed 
once*;
 2) To place Service Grid related tests which *depend* *on* service processor's 
implementation and *should be executed twice* in the new mode and old mode 
(-DIGNITE_EVENT_DRIVEN_SERVICE_PROCESSOR_ENABLED=false)


> Add new TC build-config to test Service Grid new and old implementations
> 
>
> Key: IGNITE-10811
> URL: https://issues.apache.org/jira/browse/IGNITE-10811
> Project: Ignite
>  Issue Type: Task
>  Components: managed services
>Reporter: Vyacheslav Daradur
>Assignee: Vyacheslav Daradur
>Priority: Major
>  Labels: iep-17
> Fix For: 2.8
>
>
> It's necessary to add a new build configuration on TeamCity (after IGNITE-9607 
> is merged) to be able to test the new and old implementations of the service 
> processor.
> In general, the new build plan should contain 2 test suites:
>  1) For Service Grid related tests which *do not depend on* the service 
> processor's implementation (like classic unit tests) and *should be executed 
> once*;
>  2) For Service Grid related tests which *depend on* the service 
> processor's implementation and *should be executed twice*: in the new mode and 
> in the old mode (-DIGNITE_EVENT_DRIVEN_SERVICE_PROCESSOR_ENABLED=false)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10812) SQL: split classes responsible for distributed joins

2018-12-25 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728660#comment-16728660
 ] 

ASF GitHub Bot commented on IGNITE-10812:
-

GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/5742

IGNITE-10812



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-10812

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/5742.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5742


commit 35586e3955de133bfd1315e30eb60166c53bfa70
Author: devozerov 
Date:   2018-12-21T13:04:33Z

Splitting.

commit 7358f5a59c6041593db6e00c597399fcbc58f623
Author: devozerov 
Date:   2018-12-21T13:07:58Z

WIP.

commit 2460b95336e37dc54e3675ef0b2a78eb0c5e442e
Author: devozerov 
Date:   2018-12-21T13:28:24Z

WIP.

commit fb29fc805bf637d11867e28b5f9ab6aa1b87adfe
Author: devozerov 
Date:   2018-12-21T13:42:29Z

WIP.

commit 8c62ca73509f66058c0f097982440bbd6141241c
Author: devozerov 
Date:   2018-12-21T13:44:24Z

Moving classes.

commit bc01350bd5c4f3c032d6242cdc40cd83cede824a
Author: devozerov 
Date:   2018-12-21T13:52:48Z

RangeStream.

commit f8aab8effbfaa9e67748a5c7e7b6b9e176b41228
Author: devozerov 
Date:   2018-12-21T13:53:27Z

UnicastCursor.

commit cd577ac8fe2986c0ab95ab50db1562af84f32f53
Author: devozerov 
Date:   2018-12-21T13:55:44Z

BroadcastCursor.

commit 432b723bff73b37023581c80ed415864751c42e9
Author: devozerov 
Date:   2018-12-21T14:11:44Z

WIP.

commit 5e738904f36031ebfb17294cf0e96a04b45e8781
Author: devozerov 
Date:   2018-12-21T14:16:52Z

WIP.

commit 9fa48e5024453d4595f143d0040b7c030488d22d
Author: devozerov 
Date:   2018-12-21T14:31:06Z

Initial split done.

commit 8dc78d795b03a97b93a829cac3b83eac2e5eb821
Author: devozerov 
Date:   2018-12-21T14:32:50Z

WIP.

commit 5c2085c91b893379e0104cdc9a8febda4a89fa3a
Author: devozerov 
Date:   2018-12-21T14:53:33Z

Minors.

commit 3b90898b1da2633c48a0ac02c42c6618dca192f4
Author: devozerov 
Date:   2018-12-21T14:53:54Z

Minors.

commit 7027ef26e281ab93afe826ca8c309eb6500b2a94
Author: devozerov 
Date:   2018-12-21T14:57:38Z

More refactoring.

commit 0860542a2d33cb5b20c463cc7308a1fb84d5b946
Author: devozerov 
Date:   2018-12-21T14:57:57Z

WIP.

commit 192eb94f8becc21277d0e44833ac0dcd7328aa9f
Author: devozerov 
Date:   2018-12-21T15:07:25Z

More refactoring.

commit de664052d17a652d816d0dda1369ab0108a4d2a6
Author: devozerov 
Date:   2018-12-21T15:08:26Z

WIP.

commit 26ae7895b889d91788be03d9c457e69905cb96ee
Author: devozerov 
Date:   2018-12-25T08:19:46Z

Minors.




> SQL: split classes responsible for distributed joins
> 
>
> Key: IGNITE-10812
> URL: https://issues.apache.org/jira/browse/IGNITE-10812
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
>Priority: Major
> Fix For: 2.8
>
>
> This is just a refactoring task to create more precise hierarchy of classes 
> responsible for distributed joins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10812) SQL: split classes responsible for distributed joins

2018-12-25 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-10812:


 Summary: SQL: split classes responsible for distributed joins
 Key: IGNITE-10812
 URL: https://issues.apache.org/jira/browse/IGNITE-10812
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
 Fix For: 2.8


This is just a refactoring task to create more precise hierarchy of classes 
responsible for distributed joins.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-9845) Web Console: Add support of two way ssl authentication in Web Console agent

2018-12-25 Thread Vasiliy Sisko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko updated IGNITE-9845:
--
Description: 
RestExecutor should not be shared between different users' requests in case of 
two-way SSL authentication:
 * For each token with SSL we need to create a separate RestExecutor and set up 
a socketFactory and trustManager.
 * The RestExecutor should be removed if the token has expired.

Add program arguments for passing client certificate, client password, trust 
store, trust store password for ignite node connection and web console backend. 

Example on okhttp: 
[https://github.com/square/okhttp/blob/cd872fd83824512c128dcd80c04d445c8a2fc8eb/okhttp-tests/src/test/java/okhttp3/internal/tls/ClientAuthTest.java]

Upgrade socket-io from 1.x to 2.x.

Add support for SSL cipher suites

Add tests.

---

*How to do local testing:*

On Windows
 # Download Open SSL:  Download Open SSL for Windows from 
[https://wiki.openssl.org/index.php/Binaries]
 # Unpack it.

On Linux - it is usually built-in.

Generate keys with provided script (see attached generate.bat, it could be 
easily adapted for Linux).

 

Add to etc/hosts: 

    127.0.0.1 localhost console.test.local

 

After that configure SSL for:
 # Web Console back-end.
 # Web Agent.
 # Cluster.

*Configure Web Console back-end settings:*

  "ssl": true,
   "key": "some_path/server.key",
   "cert": "some_path/server.crt",
   "ca": "some_path/ca.crt",
   "keyPassphrase": "p123456",

*Configure Web Agent parameters (see parameters descriptions):*

-t your_token

-s [https://console.test.local:3000|https://console.test.local:3000/] -n 
[https://console.test.local:11443|https://console.test.local:11443/]
 -nks client.jks -nkp p123456
 -nts ca.jks -ntp p123456
 -sks client.jks -skp p123456
 -sts ca.jks -stp p123456

 *Configure cluster JETTY config:*


   https
   
   true
   true
     
 


   some_path/server.jks
   p123456
   some_path/ca.jks
   p123456
   true
 

*How to start secure web console in direct install edition in Ubuntu:*
 # Download ignite web console direct install for linux ZIP archive .
 # Unpack downloaded archive to goal folder.
 # Generate SSL certificates.
 # Copy generated certificates to folder with unpacked web console direct 
install.
 # Open terminal and navigate to folder with unpacked web console direct 
install.
 # Run web console with the next command:

{code:java}
 ignite-web-console-linux --server:port 11443 --server:ssl true 
--server:requestCert true --server:key "server.key" --server:cert "server.crt" 
--server:ca "ca.crt" --server:passphrase "p123456"{code}
  7. Import client.p12 certificate into your browser. See attached 
screenstot in Chrome browser.

 

  was:
RestExecutor should not be shared between different users requests in case of 
two way ssl authentication:
 * For each token with ssl we need create separated RestExecutor and set up 
socketFactory and trustManager.
 * RestExecutor should be removed if token expired.

Add program arguments for passing client certificate, client password, trust 
store, trust store password for ignite node connection and web console backend. 

Example on okhttp: 
[https://github.com/square/okhttp/blob/cd872fd83824512c128dcd80c04d445c8a2fc8eb/okhttp-tests/src/test/java/okhttp3/internal/tls/ClientAuthTest.java]

Upgrade socket-io from 1.x to 2.x.

Add support for SSL cipher suites

Add tests.

---

*How to do local testing:*

On Windows
 # Download Open SSL:  Download Open SSL for Windows from 
[https://wiki.openssl.org/index.php/Binaries]
 # Unpack it.

On Linux - it is usually built-in.

Generate keys with provided script (see attached generate.bat, it could be 
easily adapted for Linux).

 

Add to etc/hosts: 

    127.0.0.1 localhost console.test.local

 

After that configure SSL for:
 # Web Console back-end.
 # Web Agent.
 # Cluster.

*Configure Web Console back-end settings:*

  "ssl": true,
   "key": "some_path/server.key",
   "cert": "some_path/server.crt",
   "ca": "some_path/ca.crt",
   "keyPassphrase": "p123456",

*Configure Web Agent parameters (see parameters descriptions):*

-t your_token

-s [https://console.test.local:3000|https://console.test.local:3000/] -n 
[https://console.test.local:11443|https://console.test.local:11443/]
 -nks client.jks -nkp p123456
 -nts ca.jks -ntp p123456
 -sks client.jks -skp p123456
 -sts ca.jks -stp p123456

 *Configure cluster JETTY config:*


   https
   
   true
   true
     
 


   some_path/server.jks
   p123456
   some_path/ca.jks
   p123456
   true
 

*How to start secure web console in direct install edition in Ubuntu:*
 # Download ignite web console direct install for linux ZIP archive .
 # Unpack downloaded archive to goal folder.
 # Generate SSL certificates.
 # Copy generated certificates to folder with unpacked web console direct 

[jira] [Updated] (IGNITE-9845) Web Console: Add support of two way ssl authentication in Web Console agent

2018-12-25 Thread Vasiliy Sisko (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko updated IGNITE-9845:
--
Attachment: Selection_274.png

> Web Console: Add support of two way ssl authentication in Web Console agent
> ---
>
> Key: IGNITE-9845
> URL: https://issues.apache.org/jira/browse/IGNITE-9845
> Project: Ignite
>  Issue Type: Improvement
>  Components: wizards
>Affects Versions: 2.6
>Reporter: Andrey Novikov
>Assignee: Ilya Murchenko
>Priority: Major
> Fix For: 2.8
>
> Attachments: Selection_274.png, generate.bat
>
>
> RestExecutor should not be shared between different users' requests in case of 
> two-way SSL authentication:
>  * For each token with SSL we need to create a separate RestExecutor and set up 
> a socketFactory and trustManager.
>  * The RestExecutor should be removed if the token has expired.
> Add program arguments for passing client certificate, client password, trust 
> store, trust store password for ignite node connection and web console 
> backend. 
> Example on okhttp: 
> [https://github.com/square/okhttp/blob/cd872fd83824512c128dcd80c04d445c8a2fc8eb/okhttp-tests/src/test/java/okhttp3/internal/tls/ClientAuthTest.java]
> Upgrade socket-io from 1.x to 2.x.
> Add support for SSL cipher suites
> Add tests.
> ---
> *How to do local testing:*
> On Windows
>  # Download Open SSL:  Download Open SSL for Windows from 
> [https://wiki.openssl.org/index.php/Binaries]
>  # Unpack it.
> On Linux - it is usually built-in.
> Generate keys with provided script (see attached generate.bat, it could be 
> easily adapted for Linux).
>  
> Add to etc/hosts: 
>     127.0.0.1 localhost console.test.local
>  
> After that configure SSL for:
>  # Web Console back-end.
>  # Web Agent.
>  # Cluster.
> *Configure Web Console back-end settings:*
>   "ssl": true,
>    "key": "some_path/server.key",
>    "cert": "some_path/server.crt",
>    "ca": "some_path/ca.crt",
>    "keyPassphrase": "p123456",
> *Configure Web Agent parameters (see parameters descriptions):*
> -t your_token
> -s [https://console.test.local:3000|https://console.test.local:3000/] -n 
> [https://console.test.local:11443|https://console.test.local:11443/]
>  -nks client.jks -nkp p123456
>  -nts ca.jks -ntp p123456
>  -sks client.jks -skp p123456
>  -sts ca.jks -stp p123456
>  *Configure cluster JETTY config:*
> 
>    https
>     default="11443"/>
>    true
>    true
>       class="org.eclipse.jetty.server.SecureRequestCustomizer"/>
>  
>  class="org.eclipse.jetty.util.ssl.SslContextFactory">
>    some_path/server.jks
>    p123456
>    some_path/ca.jks
>    p123456
>    true
>  
> *How to start secure web console in direct install edition in Ubuntu:*
>  # Download ignite web console direct install for linux ZIP archive .
>  # Unpack downloaded archive to goal folder.
>  # Generate SSL certificates.
>  # Copy generated certificates to folder with unpacked web console direct 
> install.
>  # Open terminal and navigate to folder with unpacked web console direct 
> install.
>  # Run web console with the next command:
> {code:java}
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728522#comment-16728522
 ] 

Pavel Kuznetsov edited comment on IGNITE-10784 at 12/25/18 9:55 AM:


MySQL has approximate table size. 


was (Author: pkouznet):
MySQL has approximate table table size. 

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10784) SQL: Create a view with list of existing tables

2018-12-25 Thread Pavel Kuznetsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728651#comment-16728651
 ] 

Pavel Kuznetsov commented on IGNITE-10784:
--

maybe we should duplicate atomic mode or introduce new column "mvcc enabled".

> SQL: Create a view with list of existing tables
> ---
>
> Key: IGNITE-10784
> URL: https://issues.apache.org/jira/browse/IGNITE-10784
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Vladimir Ozerov
>Assignee: Pavel Kuznetsov
>Priority: Major
> Fix For: 2.8
>
>
> We need to create a system view of currently available SQL tables. 
> Minimal required information:
> 1) Schema name
> 2) Table name
> 3) Owning cache name
> 4) Owning cache ID
> Other info to consider:
> 1) Affinity column name
> 2) Key/value aliases
> 3) Key/value type names
> 4) Analyse other vendors (e.g. MySQL, Postgresql) and see if any other useful 
> information could be exposed (taking in count that a lot of engine properties 
> are already exposed through {{CACHES}} view)
> Starting point: {{SqlSystemView}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

