[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-12 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719740#comment-16719740
 ] 

Ray commented on IGNITE-10314:
--

[~NIzhikov]

I have run the tests in TeamCity, and the results are green.

Here's the link.

https://ci.ignite.apache.org/viewLog.html?buildId=2535640&tab=queuedBuildOverviewTab

> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When a user adds or drops a column via DDL, Spark will get the old (wrong) 
> schema.
>  
> Analysis 
> Currently the Spark data frame API relies on QueryEntity to construct the schema, but 
> the QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when a modification happens.
>  
> Solution
> Get the latest schema using the JDBC thin driver's column metadata call, then 
> update the fields in QueryEntity.
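The local-copy behavior described in the analysis can be illustrated with a short, self-contained Java sketch. `Entity` here is a hypothetical stand-in for QueryEntity/QuerySchema, not Ignite's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

public class LocalCopyPitfall {
    // Hypothetical stand-in for QueryEntity; the schema snapshots entities
    // via a copy constructor, so later changes never reach the original.
    static class Entity {
        final List<String> fields;

        Entity(List<String> fields) {
            this.fields = new ArrayList<>(fields);
        }

        // Copy constructor: the copy gets its own independent field list.
        Entity(Entity other) {
            this(other.fields);
        }
    }

    public static void main(String[] args) {
        Entity original = new Entity(List.of("A", "B"));
        Entity schemaCopy = new Entity(original); // schema keeps a local copy

        schemaCopy.fields.add("C"); // ALTER TABLE ... ADD COLUMN updates the copy

        System.out.println(original.fields);   // [A, B]  -- stale view
        System.out.println(schemaCopy.fields); // [A, B, C]
    }
}
```

Spark builds its data frame schema from the stale original, which is why refreshing the fields from the JDBC thin driver's column metadata is needed.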



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-12 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16718610#comment-16718610
 ] 

Ray edited comment on IGNITE-10314 at 12/12/18 9:58 AM:


[~NIzhikov]

I have implemented refreshFields using the internal API after Vladimir 
confirmed the approach on the dev list.

Please review and comment.


was (Author: ldz):
[~NIzhikov]

I have implemented refreshFields using the internal API after Vladimir 
confirmed the approach on the dev list.

But when running the tests in IgniteDataFrameSchemaSpec, there is an odd exception:
Exception in thread "main" java.lang.AssertionError: assertion failed: each 
serializer expression should contain at least one `BoundReference`
at scala.Predef$.assert(Predef.scala:170)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$11.apply(ExpressionEncoder.scala:238)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$11.apply(ExpressionEncoder.scala:236)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:355)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.<init>(ExpressionEncoder.scala:236)
at 
org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:63)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
at 
org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:428)
at 
org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:233)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
at 
org.apache.ignite.spark.IgniteDataFrameSchemaSpec.beforeAll(IgniteDataFrameSchemaSpec.scala:122)
at 
org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.beforeAll(AbstractDataFrameSpec.scala:39)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.org$scalatest$BeforeAndAfter$$super$run(AbstractDataFrameSpec.scala:39)
at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.run(AbstractDataFrameSpec.scala:39)
at org.scalatest.junit.JUnitRunner.run(JUnitRunner.scala:99)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
 
Can you take a look please?
 
I added a breakpoint in the refreshFields method; it is working fine, and the 
latest fields are in the map.



[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-12 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16718702#comment-16718702
 ] 

Ray commented on IGNITE-10314:
--

[~NIzhikov]

There was a bug in my code causing the error; I have now fixed it.

Please review and comment.



[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-12 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16718610#comment-16718610
 ] 

Ray commented on IGNITE-10314:
--

[~NIzhikov]

I have implemented refreshFields using the internal API after Vladimir 
confirmed the approach on the dev list.

But when running the tests in IgniteDataFrameSchemaSpec, there is an odd exception:
Exception in thread "main" java.lang.AssertionError: assertion failed: each 
serializer expression should contain at least one `BoundReference`
at scala.Predef$.assert(Predef.scala:170)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$11.apply(ExpressionEncoder.scala:238)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$11.apply(ExpressionEncoder.scala:236)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at 
scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:355)
at 
org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.<init>(ExpressionEncoder.scala:236)
at 
org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:63)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
at 
org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:428)
at 
org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:233)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
at 
org.apache.ignite.spark.IgniteDataFrameSchemaSpec.beforeAll(IgniteDataFrameSchemaSpec.scala:122)
at 
org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.beforeAll(AbstractDataFrameSpec.scala:39)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.org$scalatest$BeforeAndAfter$$super$run(AbstractDataFrameSpec.scala:39)
at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
at 
org.apache.ignite.spark.AbstractDataFrameSpec.run(AbstractDataFrameSpec.scala:39)
at org.scalatest.junit.JUnitRunner.run(JUnitRunner.scala:99)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
 
Can you take a look please?
 
I added a breakpoint in the refreshFields method; it is working fine, and the 
latest fields are in the map.



[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-12-06 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712316#comment-16712316
 ] 

Ray commented on IGNITE-10314:
--

[~NIzhikov]

I have implemented the fix for this issue; please review and comment.

Some of the tests fail because of existing bugs like IGNITE-10585 and 
IGNITE-10569, which cause the thin JDBC driver to return the wrong table schema.

 



[jira] [Commented] (IGNITE-10585) JDBC driver returns FLOAT in column metadata for REAL SQL type

2018-12-06 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712310#comment-16712310
 ] 

Ray commented on IGNITE-10585:
--

[~vozerov]

According to https://apacheignite-sql.readme.io/docs/data-types#section-real, 
java.lang.Float should map to REAL in SQL.

I provided a fix, please review and comment.
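For context, the JDBC DATA_TYPE code 7 shown in the metadata dump is java.sql.Types.REAL, so the reported TYPE_NAME FLOAT is inconsistent with its own type code. A minimal sketch of a consistent code-to-name mapping (this is an illustration, not Ignite's actual mapping code):

```java
import java.sql.Types;

public class SqlTypeNames {
    // Map a JDBC type code to the SQL type name it should report.
    // Per JDBC: REAL = 7 (java.lang.Float), FLOAT = 6, DOUBLE = 8.
    static String typeName(int jdbcType) {
        switch (jdbcType) {
            case Types.REAL:   return "REAL";
            case Types.FLOAT:  return "FLOAT";
            case Types.DOUBLE: return "DOUBLE";
            default:           return "OTHER";
        }
    }

    public static void main(String[] args) {
        // DATA_TYPE 7 should be reported as REAL, not FLOAT.
        System.out.println(Types.REAL + " -> " + typeName(Types.REAL)); // prints "7 -> REAL"
    }
}
```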

> JDBC driver returns FLOAT in column metadata for REAL SQL type
> --
>
> Key: IGNITE-10585
> URL: https://issues.apache.org/jira/browse/IGNITE-10585
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.8
>
>
> When I create a table using 
> create table c(a varchar, b real, primary key(a));
> The meta information for column b is wrong when I use !desc c to check.
> 0: jdbc:ignite:thin://127.0.0.1/> !desc c
>  TABLE_CAT
>  TABLE_SCHEM PUBLIC
>  TABLE_NAME C
>  COLUMN_NAME A
>  DATA_TYPE 12
>  TYPE_NAME VARCHAR
>  COLUMN_SIZE null
>  BUFFER_LENGTH null
>  DECIMAL_DIGITS null
>  NUM_PREC_RADIX 10
>  NULLABLE 1
>  REMARKS
>  COLUMN_DEF
>  SQL_DATA_TYPE 12
>  SQL_DATETIME_SUB null
>  CHAR_OCTET_LENGTH 2147483647
>  ORDINAL_POSITION 1
>  IS_NULLABLE YES
>  SCOPE_CATLOG
>  SCOPE_SCHEMA
>  SCOPE_TABLE
>  SOURCE_DATA_TYPE null
>  IS_AUTOINCREMENT NO
>  IS_GENERATEDCOLUMN NO
> TABLE_CAT
>  TABLE_SCHEM PUBLIC
>  TABLE_NAME C
>  COLUMN_NAME B
>  DATA_TYPE 7
>  {color:#d04437}TYPE_NAME FLOAT{color}
>  COLUMN_SIZE null
>  BUFFER_LENGTH null
>  DECIMAL_DIGITS null
>  NUM_PREC_RADIX 10
>  NULLABLE 1
>  REMARKS
>  COLUMN_DEF
>  SQL_DATA_TYPE 8
>  SQL_DATETIME_SUB null
>  CHAR_OCTET_LENGTH 2147483647
>  ORDINAL_POSITION 2
>  IS_NULLABLE YES
>  SCOPE_CATLOG
>  SCOPE_SCHEMA
>  SCOPE_TABLE
>  SOURCE_DATA_TYPE null
>  IS_AUTOINCREMENT NO
>  IS_GENERATEDCOLUMN NO





[jira] [Updated] (IGNITE-10585) JDBC driver returns FLOAT in column metadata for REAL SQL type

2018-12-06 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10585:
-
Description: 
When I create a table using 

create table c(a varchar, b real, primary key(a));

The meta information for column b is wrong when I use !desc c to check.

0: jdbc:ignite:thin://127.0.0.1/> !desc c
 TABLE_CAT
 TABLE_SCHEM PUBLIC
 TABLE_NAME C
 COLUMN_NAME A
 DATA_TYPE 12
 TYPE_NAME VARCHAR
 COLUMN_SIZE null
 BUFFER_LENGTH null
 DECIMAL_DIGITS null
 NUM_PREC_RADIX 10
 NULLABLE 1
 REMARKS
 COLUMN_DEF
 SQL_DATA_TYPE 12
 SQL_DATETIME_SUB null
 CHAR_OCTET_LENGTH 2147483647
 ORDINAL_POSITION 1
 IS_NULLABLE YES
 SCOPE_CATLOG
 SCOPE_SCHEMA
 SCOPE_TABLE
 SOURCE_DATA_TYPE null
 IS_AUTOINCREMENT NO
 IS_GENERATEDCOLUMN NO

TABLE_CAT
 TABLE_SCHEM PUBLIC
 TABLE_NAME C
 COLUMN_NAME B
 DATA_TYPE 7
 {color:#d04437}TYPE_NAME FLOAT{color}
 COLUMN_SIZE null
 BUFFER_LENGTH null
 DECIMAL_DIGITS null
 NUM_PREC_RADIX 10
 NULLABLE 1
 REMARKS
 COLUMN_DEF
 SQL_DATA_TYPE 8
 SQL_DATETIME_SUB null
 CHAR_OCTET_LENGTH 2147483647
 ORDINAL_POSITION 2
 IS_NULLABLE YES
 SCOPE_CATLOG
 SCOPE_SCHEMA
 SCOPE_TABLE
 SOURCE_DATA_TYPE null
 IS_AUTOINCREMENT NO
 IS_GENERATEDCOLUMN NO

  was:
When I create a table using 

create table c(a varchar, b float, primary key(a));

The meta information for column b is wrong when I use !desc c to check.

0: jdbc:ignite:thin://127.0.0.1/> !desc c
TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME C
COLUMN_NAME A
DATA_TYPE 12
TYPE_NAME VARCHAR
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 12
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 1
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME C
COLUMN_NAME B
DATA_TYPE 8
{color:#d04437}TYPE_NAME DOUBLE{color}
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 8
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 2
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

Summary: JDBC driver returns FLOAT in column metadata for REAL SQL type 
 (was: JDBC driver returns Double in column metadata for Float type)



[jira] [Updated] (IGNITE-10585) JDBC driver returns Double in column metadata for Float type

2018-12-06 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10585:
-
Affects Version/s: 2.7



[jira] [Created] (IGNITE-10585) JDBC driver returns Double in column metadata for Float type

2018-12-06 Thread Ray (JIRA)
Ray created IGNITE-10585:


 Summary: JDBC driver returns Double in column metadata for Float 
type
 Key: IGNITE-10585
 URL: https://issues.apache.org/jira/browse/IGNITE-10585
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Reporter: Ray
Assignee: Ray
 Fix For: 2.8


When I create a table using 

create table c(a varchar, b float, primary key(a));

The meta information for column b is wrong when I use !desc c to check.

0: jdbc:ignite:thin://127.0.0.1/> !desc c
TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME C
COLUMN_NAME A
DATA_TYPE 12
TYPE_NAME VARCHAR
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 12
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 1
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME C
COLUMN_NAME B
DATA_TYPE 8
{color:#d04437}TYPE_NAME DOUBLE{color}
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 8
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 2
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO





[jira] [Commented] (IGNITE-10569) Null meta information when getting meta for a customized schema cache through JDBC driver

2018-12-06 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711236#comment-16711236
 ] 

Ray commented on IGNITE-10569:
--

[~vozerov]

The easiest way to fix this bug is to ignore case when comparing schema names 
in the JdbcRequestHandler.matches method.

Please take a look at the patch and let me know if this is the correct fix.
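A minimal sketch of that idea (the real JdbcRequestHandler.matches signature and semantics may differ; this only illustrates case-insensitive schema matching):

```java
public class SchemaMatch {
    // Hypothetical stand-in for JdbcRequestHandler.matches: compare schema
    // names ignoring case; a null/empty pattern matches any schema.
    static boolean matches(String schemaName, String pattern) {
        if (pattern == null || pattern.isEmpty())
            return true;

        return schemaName != null && schemaName.equalsIgnoreCase(pattern);
    }

    public static void main(String[] args) {
        System.out.println(matches("myCache", "MYCACHE")); // true
        System.out.println(matches("PUBLIC", "other"));    // false
    }
}
```

With the original case-sensitive comparison, a cache schema stored as "myCache" would never match the "MYCACHE" name the driver sends, so getColumnsMeta returned nothing.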

 

> Null meta information when getting meta for a customized schema cache through 
> JDBC driver
> -
>
> Key: IGNITE-10569
> URL: https://issues.apache.org/jira/browse/IGNITE-10569
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.8
>
>
> When I create a cache with a customized schema (not PUBLIC) and then query the 
> column meta information through the thin JDBC driver, it returns null.
>  
> Analysis:
> The case of the schema name differs between GridQueryTypeDescriptor and 
> CacheConfiguration.
> So the schema validation
> if (!matches(table.schemaName(), req.schemaName()))
> in the JdbcRequestHandler.getColumnsMeta method does not pass.





[jira] [Created] (IGNITE-10569) Null meta information when getting meta for a customized schema cache through JDBC driver

2018-12-06 Thread Ray (JIRA)
Ray created IGNITE-10569:


 Summary: Null meta information when getting meta for a customized 
schema cache through JDBC driver
 Key: IGNITE-10569
 URL: https://issues.apache.org/jira/browse/IGNITE-10569
 Project: Ignite
  Issue Type: Bug
  Components: jdbc
Reporter: Ray
Assignee: Ray
 Fix For: 2.8


When I create a cache with a customized schema (not PUBLIC) and then query the column 
meta information through the thin JDBC driver, it returns null.

 

Analysis:

The case of the schema name differs between GridQueryTypeDescriptor and 
CacheConfiguration.

So the schema validation

if (!matches(table.schemaName(), req.schemaName()))

in the JdbcRequestHandler.getColumnsMeta method does not pass.





[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-12-03 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708104#comment-16708104
 ] 

Ray commented on IGNITE-10356:
--

[~vozerov]

I have expanded the imports.

Please take a look.

> JDBC thin driver returns wrong data type for Date and Decimal SQL type
> --
>
> Key: IGNITE-10356
> URL: https://issues.apache.org/jira/browse/IGNITE-10356
> Project: Ignite
>  Issue Type: Bug
>  Components: jdbc
>Affects Versions: 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> The JDBC thin driver returns wrong column-type metadata when a user creates 
> a table with DATE and DECIMAL columns.
>  
> Steps to reproduce:
> 1. Start one node and create a table using this command:
> create table a(a varchar, b decimal,c date, primary key(a));
> 2. Run "!desc a" to show the metadata of table a.
> The result is as follows:
> TABLE_CAT
>  TABLE_SCHEM PUBLIC
>  TABLE_NAME A
>  COLUMN_NAME A
>  DATA_TYPE 12
>  TYPE_NAME VARCHAR
>  COLUMN_SIZE null
>  BUFFER_LENGTH null
>  DECIMAL_DIGITS null
>  NUM_PREC_RADIX 10
>  NULLABLE 1
>  REMARKS
>  COLUMN_DEF
>  SQL_DATA_TYPE 12
>  SQL_DATETIME_SUB null
>  CHAR_OCTET_LENGTH 2147483647
>  ORDINAL_POSITION 1
>  IS_NULLABLE YES
>  SCOPE_CATLOG
>  SCOPE_SCHEMA
>  SCOPE_TABLE
>  SOURCE_DATA_TYPE null
>  IS_AUTOINCREMENT NO
>  IS_GENERATEDCOLUMN NO
> TABLE_CAT
>  TABLE_SCHEM PUBLIC
>  TABLE_NAME A
>  COLUMN_NAME B
>  {color:#d04437}DATA_TYPE {color}
>  {color:#d04437}TYPE_NAME OTHER{color}
>  COLUMN_SIZE null
>  BUFFER_LENGTH null
>  DECIMAL_DIGITS null
>  NUM_PREC_RADIX 10
>  NULLABLE 1
>  REMARKS
>  COLUMN_DEF
>  {color:#d04437}SQL_DATA_TYPE {color}
>  SQL_DATETIME_SUB null
>  CHAR_OCTET_LENGTH 2147483647
>  ORDINAL_POSITION 2
>  IS_NULLABLE YES
>  SCOPE_CATLOG
>  SCOPE_SCHEMA
>  SCOPE_TABLE
>  SOURCE_DATA_TYPE null
>  IS_AUTOINCREMENT NO
>  IS_GENERATEDCOLUMN NO
> TABLE_CAT
>  TABLE_SCHEM PUBLIC
>  TABLE_NAME A
>  COLUMN_NAME C
>  {color:#d04437}DATA_TYPE {color}
>  {color:#d04437}TYPE_NAME OTHER{color}
>  COLUMN_SIZE null
>  BUFFER_LENGTH null
>  DECIMAL_DIGITS null
>  NUM_PREC_RADIX 10
>  NULLABLE 1
>  REMARKS
>  {color:#33}COLUMN_DEF{color}
> {color:#d04437} SQL_DATA_TYPE {color}
>  SQL_DATETIME_SUB null
>  CHAR_OCTET_LENGTH 2147483647
>  ORDINAL_POSITION 3
>  IS_NULLABLE YES
>  SCOPE_CATLOG
>  SCOPE_SCHEMA
>  SCOPE_TABLE
>  SOURCE_DATA_TYPE null
>  IS_AUTOINCREMENT NO
>  IS_GENERATEDCOLUMN NO
>  
> Columns b and c have the wrong DATA_TYPE and TYPE_NAME.
>  





[jira] [Comment Edited] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-29 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704169#comment-16704169
 ] 

Ray edited comment on IGNITE-10356 at 11/30/18 3:06 AM:


[~vozerov]

I ran the failed test suite again manually.

Here are the detailed results.

 

{color:#d04437}[Inspections] Core{color} [[tests 0 BuildFailureOnMetric 
|https://ci.ignite.apache.org/viewLog.html?buildId=2407702]]
  
 Unused import (1)
  
  src/main/java/org/apache/ignite/internal/processors/query
  GridQueryProcessor.java (1)
 Fixed in https://issues.apache.org/jira/browse/IGNITE-10375

 

Platform .NET with one flaky failed test

Apache.Ignite.Core.Tests.exe: Apache.Ignite.Core.Tests.Log (1)
 DefaultLoggerTest.TestJavaLogger  
 This test looks flaky: 
 Frequent test status changes: 166 changes out of 698 invocations
 Test status change in build without changes: from failed to successful

Failed test detail

[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&tab=testDetails&testNameId=-1052276137395682005#analysis]

 

Build result:

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


was (Author: ldz):
[~vozerov]

I ran the failed test suite again manually.

Here are the detailed results.

 

{color:#d04437}[Inspections] Core{color} [[tests 0 BuildFailureOnMetric 
|https://ci.ignite.apache.org/viewLog.html?buildId=2407702]]
  
 Unused import (1)
  
  src/main/java/org/apache/ignite/internal/processors/query
  GridQueryProcessor.java (1)
 Fixed in https://issues.apache.org/jira/browse/IGNITE-10375

 

Platform .NET with one flaky failed test

Apache.Ignite.Core.Tests.exe: Apache.Ignite.Core.Tests.Log (1)
DefaultLoggerTest.TestJavaLogger  
This test looks flaky: 
Frequent test status changes: 166 changes out of 698 invocations
Test status change in build without changes: from failed to successful

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


[jira] [Comment Edited] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-29 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704169#comment-16704169
 ] 

Ray edited comment on IGNITE-10356 at 11/30/18 3:06 AM:


[~vozerov]

I ran the failed test suite again manually.

Here are the detailed results.

 

{color:#d04437}[Inspections] Core{color} [[tests 0 BuildFailureOnMetric 
|https://ci.ignite.apache.org/viewLog.html?buildId=2407702]]
  
 Unused import (1)
  
  src/main/java/org/apache/ignite/internal/processors/query
  GridQueryProcessor.java (1)
 Fixed in https://issues.apache.org/jira/browse/IGNITE-10375

 

Platform .NET with one flaky failed test

Apache.Ignite.Core.Tests.exe: Apache.Ignite.Core.Tests.Log (1)
DefaultLoggerTest.TestJavaLogger  
This test looks flaky: 
Frequent test status changes: 166 changes out of 698 invocations
Test status change in build without changes: from failed to successful

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396=buildResultsDiv=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996=buildResultsDiv=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400=buildResultsDiv=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404=buildResultsDiv=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406=buildResultsDiv=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


was (Author: ldz):
[~vozerov]

I ran the failed test suite again manually.

Here's the detailed results.

 

{color:#d04437}[Inspections] Core{color} [[tests 0 BuildFailureOnMetric 
|https://ci.ignite.apache.org/viewLog.html?buildId=2407702]]
 
Unused import (1)
 
 src/main/java/org/apache/ignite/internal/processors/query
 GridQueryProcessor.java (1)
Fixed in https://issues.apache.org/jira/browse/IGNITE-10375

 

Platform .NET with one flaky failed test

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396=buildResultsDiv=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996=buildResultsDiv=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400=buildResultsDiv=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404=buildResultsDiv=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406=buildResultsDiv=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


[jira] [Comment Edited] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-29 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704169#comment-16704169
 ] 

Ray edited comment on IGNITE-10356 at 11/30/18 3:04 AM:


[~vozerov]

I ran the failed test suite again manually.

Here are the detailed results.

 

{color:#d04437}[Inspections] Core{color} [[tests 0 BuildFailureOnMetric 
|https://ci.ignite.apache.org/viewLog.html?buildId=2407702]]
 
Unused import (1)
 
 src/main/java/org/apache/ignite/internal/processors/query
 GridQueryProcessor.java (1)
Fixed in https://issues.apache.org/jira/browse/IGNITE-10375

 

Platform .NET with one flaky failed test

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396=buildResultsDiv=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996=buildResultsDiv=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400=buildResultsDiv=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404=buildResultsDiv=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406=buildResultsDiv=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


was (Author: ldz):
[~vozerov]

I ran the failed test suite again manually.

Here's the detailed results.

 

Platform .NET with one flaky failed test

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396=buildResultsDiv=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996=buildResultsDiv=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400=buildResultsDiv=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404=buildResultsDiv=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406=buildResultsDiv=IgniteTests24Java8_Pds1]

 

Please review the results and comment.


[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-29 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704169#comment-16704169
 ] 

Ray commented on IGNITE-10356:
--

[~vozerov]

I ran the failed test suite again manually.

Here are the detailed results.

 

Platform .NET with one flaky failed test

[https://ci.ignite.apache.org/viewLog.html?buildId=2423396=buildResultsDiv=IgniteTests24Java8_PlatformNet]

 

{color:#33}Compute (Affinity Run) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2427996=buildResultsDiv=IgniteTests24Java8_ComputeAffinityRun]

 

{color:#33}Cache 3 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423400=buildResultsDiv=IgniteTests24Java8_Cache3]

 

{color:#33}Activate | Deactivate Cluster (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423402=buildResultsDiv=IgniteTests24Java8_ActivateDeactivateCluster]

 

{color:#33}Platform .NET (NuGet) (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423404=buildResultsDiv=IgniteTests24Java8_PlatformNetNuGet]

 

{color:#33}PDS 1 (Green with no failed test){color}

[https://ci.ignite.apache.org/viewLog.html?buildId=2423406=buildResultsDiv=IgniteTests24Java8_Pds1]

 

Please review the results and comment.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-27 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701318#comment-16701318
 ] 

Ray commented on IGNITE-10356:
--

[~vozerov]

The Run All has finished; there are a lot of failed tests in other modules.

Most of them look flaky to me.

What's the next step here?






[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-26 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16699837#comment-16699837
 ] 

Ray commented on IGNITE-10356:
--

[~tledkov-gridgain] [~vozerov]

The test run is finished; it looks like we have one flaky test: 
JdbcThinTransactionsServerAutoCommitComplexSelfTest.testInsertAndQueryMultipleCaches.

Can you review the PR please?






[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-25 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16698525#comment-16698525
 ] 

Ray commented on IGNITE-10356:
--

[~tledkov-gridgain] Tests added and PR updated.


I tried to trigger the JDBC test run manually, but it looks like it has been 
queued forever.






[jira] [Updated] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-25 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10356:
-
Description: 
The JDBC thin driver returns wrong column type metadata when a user creates a 
table with Date and Decimal columns.

 

Steps to reproduce:

1. Start one node and create a table using this command:

create table a(a varchar, b decimal,c date, primary key(a));

2. Run "!desc a" to show the metadata of table a.

The results are as follows:

TABLE_CAT
 TABLE_SCHEM PUBLIC
 TABLE_NAME A
 COLUMN_NAME A
 DATA_TYPE 12
 TYPE_NAME VARCHAR
 COLUMN_SIZE null
 BUFFER_LENGTH null
 DECIMAL_DIGITS null
 NUM_PREC_RADIX 10
 NULLABLE 1
 REMARKS
 COLUMN_DEF
 SQL_DATA_TYPE 12
 SQL_DATETIME_SUB null
 CHAR_OCTET_LENGTH 2147483647
 ORDINAL_POSITION 1
 IS_NULLABLE YES
 SCOPE_CATLOG
 SCOPE_SCHEMA
 SCOPE_TABLE
 SOURCE_DATA_TYPE null
 IS_AUTOINCREMENT NO
 IS_GENERATEDCOLUMN NO

TABLE_CAT
 TABLE_SCHEM PUBLIC
 TABLE_NAME A
 COLUMN_NAME B
 {color:#d04437}DATA_TYPE {color}
 {color:#d04437}TYPE_NAME OTHER{color}
 COLUMN_SIZE null
 BUFFER_LENGTH null
 DECIMAL_DIGITS null
 NUM_PREC_RADIX 10
 NULLABLE 1
 REMARKS
 COLUMN_DEF
 {color:#d04437}SQL_DATA_TYPE {color}
 SQL_DATETIME_SUB null
 CHAR_OCTET_LENGTH 2147483647
 ORDINAL_POSITION 2
 IS_NULLABLE YES
 SCOPE_CATLOG
 SCOPE_SCHEMA
 SCOPE_TABLE
 SOURCE_DATA_TYPE null
 IS_AUTOINCREMENT NO
 IS_GENERATEDCOLUMN NO

TABLE_CAT
 TABLE_SCHEM PUBLIC
 TABLE_NAME A
 COLUMN_NAME C
 {color:#d04437}DATA_TYPE {color}
 {color:#d04437}TYPE_NAME OTHER{color}
 COLUMN_SIZE null
 BUFFER_LENGTH null
 DECIMAL_DIGITS null
 NUM_PREC_RADIX 10
 NULLABLE 1
 REMARKS
 {color:#33}COLUMN_DEF{color}
{color:#d04437} SQL_DATA_TYPE {color}
 SQL_DATETIME_SUB null
 CHAR_OCTET_LENGTH 2147483647
 ORDINAL_POSITION 3
 IS_NULLABLE YES
 SCOPE_CATLOG
 SCOPE_SCHEMA
 SCOPE_TABLE
 SOURCE_DATA_TYPE null
 IS_AUTOINCREMENT NO
 IS_GENERATEDCOLUMN NO

 

Columns b and c have the wrong DATA_TYPE and TYPE_NAME.

 
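For reference, the DATA_TYPE codes the driver should return for these columns are the standard java.sql.Types constants; a minimal stdlib-only sketch (not Ignite code, just the JDBC constants the metadata should match):

```java
import java.sql.Types;

public class ExpectedJdbcTypes {
    public static void main(String[] args) {
        // Per the JDBC spec, getColumns() should report these DATA_TYPE codes:
        System.out.println("VARCHAR = " + Types.VARCHAR); // 12, as reported for column A
        System.out.println("DECIMAL = " + Types.DECIMAL); // 3, expected for column B
        System.out.println("DATE    = " + Types.DATE);    // 91, expected for column C
    }
}
```

The empty DATA_TYPE/SQL_DATA_TYPE fields above show the driver failing to map Decimal and Date to these codes.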

  was:
JDBC thin driver will return wrong metadata for column type when user creates a 
table with Date and Decimal type.

 

Steps to reproduce.

1. Start one node and create table using this command

create table a(a varchar, b decimal,c date, primary key(a));

2. Run "!desc a" to show the metadata of table a

This results is as follows:

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME A
DATA_TYPE 12
TYPE_NAME VARCHAR
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 12
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 1
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME B
DATA_TYPE 
TYPE_NAME OTHER
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 2
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME C
DATA_TYPE 
TYPE_NAME OTHER
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 3
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

 

Column b and c has the wrong DATA_TYPE and TYPE_NAME.

 



[jira] [Updated] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-11-21 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10314:
-
Description: 
When a user adds or removes a column via DDL, Spark gets the old (wrong) 
schema.

 

Analysis

Currently the Spark data frame API relies on QueryEntity to construct the schema, 
but the QueryEntity in QuerySchema is a local copy of the original QueryEntity, 
so the original QueryEntity is not updated when a modification happens.

 

Solution

Get the latest schema using the JDBC thin driver's column metadata call, then 
update the fields in QueryEntity.
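A minimal sketch of that idea (names here are hypothetical; the real fix uses DatabaseMetaData.getColumns() on the thin driver plus Ignite's internal QueryEntity API): rebuild the field map (field name to Java type name, as QueryEntity stores it) from the (COLUMN_NAME, TYPE_NAME) pairs the metadata call returns, so the map reflects the table after the DDL ran.

```java
import java.util.LinkedHashMap;

public class RefreshFieldsSketch {
    /** Simplified SQL-to-Java type mapping; the real mapping covers all SQL types. */
    static String toJavaType(String sqlTypeName) {
        switch (sqlTypeName) {
            case "VARCHAR": return "java.lang.String";
            case "DECIMAL": return "java.math.BigDecimal";
            case "DATE":    return "java.sql.Date";
            default:        return "java.lang.Object";
        }
    }

    /**
     * Rebuilds a QueryEntity-style field map (field name -> Java type name)
     * from (COLUMN_NAME, TYPE_NAME) pairs, such as those returned by the
     * thin driver's DatabaseMetaData.getColumns() call.
     */
    static LinkedHashMap<String, String> refreshFields(String[][] columns) {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        for (String[] col : columns)
            fields.put(col[0], toJavaType(col[1]));
        return fields;
    }

    public static void main(String[] args) {
        // Columns reported by the metadata call after an ALTER TABLE ADD COLUMN.
        String[][] cols = { {"ID", "VARCHAR"}, {"AMOUNT", "DECIMAL"}, {"DAY", "DATE"} };
        System.out.println(refreshFields(cols));
        // {ID=java.lang.String, AMOUNT=java.math.BigDecimal, DAY=java.sql.Date}
    }
}
```

Because the map is rebuilt from live metadata rather than the stale local QueryEntity copy, added and dropped columns show up immediately.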

  was:
When user performs add/remove column in DDL,  Spark will get the old/wrong 
schema.

 

Analyse 

Currently Spark data frame API relies on QueryEntity to construct schema, but 
QueryEntity in QuerySchema is a local copy of the original QueryEntity, so the 
original QueryEntity is not updated when modification happens.

 

Solution

Get the schema using sql, get rid of QueryEntity.


> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs add/remove column in DDL,  Spark will get the old/wrong 
> schema.
>  
> Analyse 
> Currently Spark data frame API relies on QueryEntity to construct schema, but 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.
>  
> Solution
> Get the latest schema using JDBC thin driver's column metadata call, then 
> update fields in QueryEntity.





[jira] [Updated] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-21 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10356:
-
Description: 
The JDBC thin driver returns wrong column type metadata when a user creates a 
table with Date and Decimal columns.

 

Steps to reproduce:

1. Start one node and create a table using this command:

create table a(a varchar, b decimal,c date, primary key(a));

2. Run "!desc a" to show the metadata of table a.

The results are as follows:

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME A
DATA_TYPE 12
TYPE_NAME VARCHAR
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 12
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 1
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME B
DATA_TYPE 
TYPE_NAME OTHER
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 2
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

TABLE_CAT
TABLE_SCHEM PUBLIC
TABLE_NAME A
COLUMN_NAME C
DATA_TYPE 
TYPE_NAME OTHER
COLUMN_SIZE null
BUFFER_LENGTH null
DECIMAL_DIGITS null
NUM_PREC_RADIX 10
NULLABLE 1
REMARKS
COLUMN_DEF
SQL_DATA_TYPE 
SQL_DATETIME_SUB null
CHAR_OCTET_LENGTH 2147483647
ORDINAL_POSITION 3
IS_NULLABLE YES
SCOPE_CATLOG
SCOPE_SCHEMA
SCOPE_TABLE
SOURCE_DATA_TYPE null
IS_AUTOINCREMENT NO
IS_GENERATEDCOLUMN NO

 

Columns b and c have the wrong DATA_TYPE and TYPE_NAME.

 

  was:
JDBC thin driver will return wrong metadata for column type when user creates a 
table with Date and Decimal type.

 


> JDBC thin driver returns wrong data type for Date and Decimal SQL type
> --
>
> Key: IGNITE-10356
> URL: https://issues.apache.org/jira/browse/IGNITE-10356
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> JDBC thin driver will return wrong metadata for column type when user creates 
> a table with Date and Decimal type.
>  
> Steps to reproduce.
> 1. Start one node and create table using this command
> create table a(a varchar, b decimal,c date, primary key(a));
> 2. Run "!desc a" to show the metadata of table a
> This results is as follows:
> TABLE_CAT
> TABLE_SCHEM PUBLIC
> TABLE_NAME A
> COLUMN_NAME A
> DATA_TYPE 12
> TYPE_NAME VARCHAR
> COLUMN_SIZE null
> BUFFER_LENGTH null
> DECIMAL_DIGITS null
> NUM_PREC_RADIX 10
> NULLABLE 1
> REMARKS
> COLUMN_DEF
> SQL_DATA_TYPE 12
> SQL_DATETIME_SUB null
> CHAR_OCTET_LENGTH 2147483647
> ORDINAL_POSITION 1
> IS_NULLABLE YES
> SCOPE_CATLOG
> SCOPE_SCHEMA
> SCOPE_TABLE
> SOURCE_DATA_TYPE null
> IS_AUTOINCREMENT NO
> IS_GENERATEDCOLUMN NO
> TABLE_CAT
> TABLE_SCHEM PUBLIC
> TABLE_NAME A
> COLUMN_NAME B
> DATA_TYPE 
> TYPE_NAME OTHER
> COLUMN_SIZE null
> BUFFER_LENGTH null
> DECIMAL_DIGITS null
> NUM_PREC_RADIX 10
> NULLABLE 1
> REMARKS
> COLUMN_DEF
> SQL_DATA_TYPE 
> SQL_DATETIME_SUB null
> CHAR_OCTET_LENGTH 2147483647
> ORDINAL_POSITION 2
> IS_NULLABLE YES
> SCOPE_CATLOG
> SCOPE_SCHEMA
> SCOPE_TABLE
> SOURCE_DATA_TYPE null
> IS_AUTOINCREMENT NO
> IS_GENERATEDCOLUMN NO
> TABLE_CAT
> TABLE_SCHEM PUBLIC
> TABLE_NAME A
> COLUMN_NAME C
> DATA_TYPE 
> TYPE_NAME OTHER
> COLUMN_SIZE null
> BUFFER_LENGTH null
> DECIMAL_DIGITS null
> NUM_PREC_RADIX 10
> NULLABLE 1
> REMARKS
> COLUMN_DEF
> SQL_DATA_TYPE 
> SQL_DATETIME_SUB null
> CHAR_OCTET_LENGTH 2147483647
> ORDINAL_POSITION 3
> IS_NULLABLE YES
> SCOPE_CATLOG
> SCOPE_SCHEMA
> SCOPE_TABLE
> SOURCE_DATA_TYPE null
> IS_AUTOINCREMENT NO
> IS_GENERATEDCOLUMN NO
>  
> Column b and c has the wrong DATA_TYPE and TYPE_NAME.
>  
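For reference, the DATA_TYPE codes a compliant JDBC driver should report for these columns come straight from java.sql.Types. A small standalone sketch (illustrative only, it does not touch the Ignite driver) shows the expected values next to the OTHER code the buggy driver reports:

```java
import java.sql.Types;

public class ExpectedJdbcTypes {
    /** Name a compliant JDBC driver should report for a few DATA_TYPE codes. */
    static String typeName(int dataType) {
        switch (dataType) {
            case Types.VARCHAR: return "VARCHAR"; // 12, reported correctly above
            case Types.DECIMAL: return "DECIMAL"; // 3, expected for column B
            case Types.DATE:    return "DATE";    // 91, expected for column C
            case Types.OTHER:   return "OTHER";   // 1111, what the buggy driver reports
            default:            return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        // Expected DATA_TYPE / TYPE_NAME for
        // "create table a(a varchar, b decimal, c date, primary key(a))":
        System.out.println("A -> " + Types.VARCHAR + " " + typeName(Types.VARCHAR)); // 12 VARCHAR
        System.out.println("B -> " + Types.DECIMAL + " " + typeName(Types.DECIMAL)); // 3 DECIMAL
        System.out.println("C -> " + Types.DATE + " " + typeName(Types.DATE));       // 91 DATE
    }
}
```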



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-21 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16695569#comment-16695569
 ] 

Ray commented on IGNITE-10356:
--

TC results attached.

 

[~vozerov] Can you review the fix please?

> JDBC thin driver returns wrong data type for Date and Decimal SQL type
> --
>
> Key: IGNITE-10356
> URL: https://issues.apache.org/jira/browse/IGNITE-10356
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> JDBC thin driver will return wrong metadata for column type when user creates 
> a table with Date and Decimal type.
>  





[jira] [Created] (IGNITE-10356) JDBC thin driver returns wrong data type for Date and Decimal SQL type

2018-11-20 Thread Ray (JIRA)
Ray created IGNITE-10356:


 Summary: JDBC thin driver returns wrong data type for Date and 
Decimal SQL type
 Key: IGNITE-10356
 URL: https://issues.apache.org/jira/browse/IGNITE-10356
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6, 2.7
Reporter: Ray
Assignee: Ray
 Fix For: 2.8


JDBC thin driver returns wrong column type metadata when a user creates a 
table with Date and Decimal columns.

 





[jira] [Commented] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-11-19 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691485#comment-16691485
 ] 

Ray commented on IGNITE-10314:
--

Currently, when a user performs add/remove column DDL, the QueryEntity does not 
change.

This results in Spark getting the wrong schema, because Spark relies on 
QueryEntity to construct the data frame schema.

As [~vozerov] explained in the dev list thread 
(http://apache-ignite-developers.2346864.n4.nabble.com/Schema-in-CacheConfig-is-not-updated-after-DDL-commands-Add-drop-column-Create-drop-index-td38002.html), 
this behavior is by design, so I decided to fix this issue on the Spark side.

 

So I propose this solution: instead of deriving the schema from QueryEntity, I 
want to get the schema from a SQL select command.

[~NIzhikov], what do you think about this solution?

If you think this solution is OK, I'll start implementing it.
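A minimal sketch of what fetching the current schema through JDBC metadata could look like (the class name, connection URL, and overall shape are assumptions for illustration, not the actual patch):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch: ask the JDBC thin driver for a table's *current*
 * columns instead of trusting a possibly stale QueryEntity.
 */
public class SchemaRefreshSketch {
    /** Returns column name -> java.sql.Types code for the given table. */
    static Map<String, Integer> currentColumns(Connection conn, String table)
            throws SQLException {
        Map<String, Integer> cols = new LinkedHashMap<>();
        try (ResultSet rs = conn.getMetaData().getColumns(null, "PUBLIC", table, null)) {
            while (rs.next())
                cols.put(rs.getString("COLUMN_NAME"), rs.getInt("DATA_TYPE"));
        }
        return cols;
    }

    public static void main(String[] args) {
        // Assumes a local Ignite node with the thin driver on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
            System.out.println(currentColumns(conn, "A"));
        } catch (SQLException e) {
            // No node (or no driver) available; the sketch only shows the call shape.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```

The DATA_TYPE codes returned this way can then be mapped to data frame column types, so add/drop column DDL is always reflected.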

> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs add/remove column in DDL,  Spark will get the old/wrong 
> schema.
>  
> Analyse 
> Currently Spark data frame API relies on QueryEntity to construct schema, but 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.
>  
> Solution
> Get the schema using sql, get rid of QueryEntity.





[jira] [Updated] (IGNITE-10314) Spark dataframe will get wrong schema if user executes add/drop column DDL

2018-11-19 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10314:
-
Affects Version/s: (was: 2.2)
   (was: 2.1)
   (was: 2.0)
  Description: 
When a user performs add/remove column DDL, Spark will get the old/wrong 
schema.

 

Analysis

Currently the Spark data frame API relies on QueryEntity to construct the 
schema, but the QueryEntity in QuerySchema is a local copy of the original 
QueryEntity, so the original QueryEntity is not updated when a modification 
happens.

 

Solution

Get the schema using SQL and get rid of QueryEntity.

  was:
When user performs column and index modification operation in SQL(ex create 
index, drop index, add column, drop column),  QueryEntity in CacheConfiguration 
for the modified cache is not updated.

 

Analyse 

QueryEntity in QuerySchema is a local copy of the original QueryEntity, so the 
original QueryEntity is not updated when modification happens.

  Component/s: spark
  Summary: Spark dataframe will get wrong schema if user executes 
add/drop column DDL  (was: QueryEntity is not updated when column and index 
added or dropped)

> Spark dataframe will get wrong schema if user executes add/drop column DDL
> --
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs add/remove column in DDL,  Spark will get the old/wrong 
> schema.
>  
> Analyse 
> Currently Spark data frame API relies on QueryEntity to construct schema, but 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.
>  
> Solution
> Get the schema using sql, get rid of QueryEntity.





[jira] [Comment Edited] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-19 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691410#comment-16691410
 ] 

Ray edited comment on IGNITE-10314 at 11/19/18 8:58 AM:


[~vozerov] Thanks for the reply.

 

I think it's only reasonable to return the newest QueryEntity to the user.

For example, a user adds a column to a table and then reads data using the 
Spark data frame API, which currently relies on QueryEntity to construct the 
data frame schema, so the user will get the wrong schema.

Maybe we should return the newest QueryEntity to the user and store the 
original QueryEntity separately?

What do you think?


was (Author: ldz):
[~vozerov] Thanks for the reply.

 

I think it's only reasonable to return the newest QueryEntity to user.

For example, a user adds a column to a table then he reads data using Spark 
data frame API which currently relies on QueryEntity to construct data frame 
schema, now user will get wrong schema.

Maybe we should return the newest QueryEntity to user and store the original 
QueryEntity separately?

What do you think?

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Commented] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-19 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691410#comment-16691410
 ] 

Ray commented on IGNITE-10314:
--

[~vozerov] Thanks for the reply.

 

I think it's only reasonable to return the newest QueryEntity to the user.

For example, a user adds a column to a table and then reads data using the 
Spark data frame API, which currently relies on QueryEntity to construct the 
data frame schema, so the user will get the wrong schema.

Maybe we should return the newest QueryEntity to the user and store the 
original QueryEntity separately?

What do you think?

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Comment Edited] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-18 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691221#comment-16691221
 ] 

Ray edited comment on IGNITE-10314 at 11/19/18 6:00 AM:


I have submitted a fix for this bug.

[~vozerov] Can you take a look and tell me if this is a correct fix please?

I'll add more tests if this is the correct fix.

Also can you explain why you copy the QueryEntity in QuerySchema?

 


was (Author: ldz):
I have submitted a fix for this bug.

[~vozerov] Can you take a look and tell me if this is a correct fix please?

I'll fix the current tests and add more tests if this is the correct fix.

Also can you explain why you copy the QueryEntity in QuerySchema?

 

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Updated] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-18 Thread Ray (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated IGNITE-10314:
-
Affects Version/s: 2.0
   2.1
   2.2
   2.3
   2.4
   2.5
 Ignite Flags:   (was: Docs Required)

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Comment Edited] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-18 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691221#comment-16691221
 ] 

Ray edited comment on IGNITE-10314 at 11/19/18 3:54 AM:


I have submitted a fix for this bug.

[~vozerov] Can you take a look and tell me if this is a correct fix please?

I'll fix the current tests and add more tests if this is the correct fix.

Also can you explain why you copy the QueryEntity in QuerySchema?

 


was (Author: ldz):
I have submitted a fix for this bug.

[~vozerov] Can you take a look and tell me if this is a correct fix please?

I'll fix the tests and more tests if this is the correct fix.

Also can you explain why you copy the QueryEntity in QuerySchema?

 

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Commented] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-18 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691221#comment-16691221
 ] 

Ray commented on IGNITE-10314:
--

I have submitted a fix for this bug.

[~vozerov] Can you take a look and tell me if this is the correct fix, please?

I'll fix the tests and add more tests if this is the correct fix.

Also, can you explain why you copy the QueryEntity in QuerySchema?

 

> QueryEntity is not updated when column and index added or dropped
> -
>
> Key: IGNITE-10314
> URL: https://issues.apache.org/jira/browse/IGNITE-10314
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6, 2.7
>Reporter: Ray
>Assignee: Ray
>Priority: Critical
> Fix For: 2.8
>
>
> When user performs column and index modification operation in SQL(ex create 
> index, drop index, add column, drop column),  QueryEntity in 
> CacheConfiguration for the modified cache is not updated.
>  
> Analyse 
> QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
> the original QueryEntity is not updated when modification happens.





[jira] [Created] (IGNITE-10314) QueryEntity is not updated when column and index added or dropped

2018-11-18 Thread Ray (JIRA)
Ray created IGNITE-10314:


 Summary: QueryEntity is not updated when column and index added or 
dropped
 Key: IGNITE-10314
 URL: https://issues.apache.org/jira/browse/IGNITE-10314
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6, 2.7
Reporter: Ray
Assignee: Ray
 Fix For: 2.8


When a user performs a column or index modification operation in SQL (e.g. 
create index, drop index, add column, drop column), the QueryEntity in 
CacheConfiguration for the modified cache is not updated.

 

Analysis

The QueryEntity in QuerySchema is a local copy of the original QueryEntity, so 
the original QueryEntity is not updated when a modification happens.





[jira] [Commented] (IGNITE-8386) SQL: Make sure PK index do not use wrapped object

2018-09-17 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16618476#comment-16618476
 ] 

Ray commented on IGNITE-8386:
-

Can we expect this ticket in 2.7?

Does this ticket break compatibility with older Ignite versions with 
persistence enabled?

 

> SQL: Make sure PK index do not use wrapped object
> -
>
> Key: IGNITE-8386
> URL: https://issues.apache.org/jira/browse/IGNITE-8386
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Affects Versions: 2.4
>Reporter: Vladimir Ozerov
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-19, performance
>
> Currently PK may be built over the whole {{_KEY}} column, i.e. the whole 
> binary object. This could happen in two cases:
> 1) Composite PK
> 2) Plain PK but with {{WRAP_KEY}} option.
> This is critical performance issue for two reasons:
> 1) This index is effectively useless and cannot be used in any sensible 
> queries; it just wastes space and makes updates slower
> 2) Binary object typically has common header bytes what may lead to excessive 
> number of comparisons during index update.
> To mitigate the problem we need to ensure that index is *never* built over 
> {{_KEY}}, Instead, we must always extract target columns and build normal 
> index over them.





[jira] [Commented] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-07-23 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552531#comment-16552531
 ] 

Ray commented on IGNITE-8697:
-

Hello [~samaitra],
 
Thanks for the fix.
I validated this fix by running my WordCount application in both standalone
mode and cluster mode.
The data can be inserted.
 
But I found another problem here.
The data written into Ignite is not correct.
My application counts word occurrences in the following sentences.
 "To be, or not to be,--that is the question:--",
 "Whether 'tis nobler in the mind to suffer",
 "The slings and arrows of outrageous fortune",
 "Or to take arms against a sea of troubles,
 
The count of the word "to" should be 9.
But when I check the result in Ignite, the value for every word is 1.
Clearly that's wrong.
The reproducer program is the same as I attached above.
 
Please let me know if you can reproduce this issue.

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>Priority: Blocker
>
> if I submit the Application to the Flink Cluster using Ignite flink sink I 
> get this error
>  
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
>   at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: 
> Cache name must not be null or empty.
>   at 
> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
>   at 
> org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
>   at 
> 

[jira] [Comment Edited] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-07-16 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16545977#comment-16545977
 ] 

Ray edited comment on IGNITE-8697 at 7/17/18 3:09 AM:
--

[~samaitra]

I tried your newest code and wrote a simple word count application to test 
 the sink. 
 It appears there are still problems. 
 Here's my code. 
{code:java}
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.api.scala.extensions._
import org.apache.flink.configuration.Configuration
import org.apache.ignite.Ignition
import org.apache.ignite.configuration.CacheConfiguration

import scala.collection.JavaConverters._


object WordCount {

   def main(args: Array[String]) {

  val env = StreamExecutionEnvironment.getExecutionEnvironment

  val igniteSink = new IgniteSink[java.util.Map[String, Int]]("aaa", 
"ignite.xml")

  igniteSink.setAllowOverwrite(false)
  igniteSink.setAutoFlushFrequency(1)

  igniteSink.open(new Configuration)


  // get input data
  val text = env.fromElements(
 "To be, or not to be,--that is the question:--",
 "Whether 'tis nobler in the mind to suffer",
 "The slings and arrows of outrageous fortune",
 "Or to take arms against a sea of troubles,")


  val counts = text
 // split up the lines in pairs (2-tuples) containing: (word,1)
 .flatMap(_.toLowerCase.split("\\W+"))
 .filter(_.nonEmpty)
 .map((_, 1))
 // group by the tuple field "0" and sum up tuple field "1"
 .keyBy(0)
 .sum(1)
 // Convert to key/value format before ingesting to Ignite
 .mapWith { case (k: String, v: Int) => Map(k -> v).asJava }
 .addSink(igniteSink)

  try
 env.execute("Streaming WordCount1")
  catch {
 case e: Exception =>

 // Exception handling.
  } finally igniteSink.close()

   }
}

{code}
 

I tried running this application in Idea and the error log snippet is as 
 follows 

07/16/2018 11:05:30 aggregation -> Map -> Sink: Unnamed(4/8) switched to 
 FAILED 
 class org.apache.ignite.IgniteException: Default Ignite instance has already 
 been started. 
         at 
 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
 
         at org.apache.ignite.Ignition.start(Ignition.java:355) 
         at IgniteSink.open(IgniteSink.java:135) 
         at 
 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
 
         at 
 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
 
         at 
 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
 
         at 
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253) 
         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702) 
         at java.lang.Thread.run(Thread.java:745) 
 Caused by: class org.apache.ignite.IgniteCheckedException: Default Ignite 
 instance has already been started. 
         at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1134) 
         at 
 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693) 
         at org.apache.ignite.Ignition.start(Ignition.java:352) 
         ... 7 more 

07/16/2018 11:05:30 Job execution switched to status FAILING. 


was (Author: ldz):
[~samaitra]

I tried your newest code and wrote a simple word count application to test 
 the sink. 
 It appears there's still problems. 
 Here's my code. 

import org.apache.flink.api.scala._ 
 import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment 
 import org.apache.flink.streaming.api.scala.extensions._ 
 import org.apache.flink.configuration.Configuration 
 import org.apache.ignite.Ignition 
 import org.apache.ignite.configuration.CacheConfiguration 

import scala.collection.JavaConverters._ 

object WordCount { 

        def main(args: Array[String])

{                 

val env = StreamExecutionEnvironment.getExecutionEnvironment

val igniteSink = new IgniteSink[java.util.Map[String, Int]]("aaa", "ignite.xml")

igniteSink.setAllowOverwrite(false)
igniteSink.setAutoFlushFrequency(1)

igniteSink.open(new Configuration)


// get input data
val text = env.fromElements(
 "To be, or not to be,--that is the question:--",
 "Whether 'tis nobler in the mind to suffer",
 "The slings and arrows of outrageous fortune",
 "Or to take arms against a sea of troubles,")


val counts = text
 // 

[jira] [Comment Edited] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-07-16 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16545977#comment-16545977
 ] 

Ray edited comment on IGNITE-8697 at 7/17/18 3:07 AM:
--

[~samaitra]

I tried your newest code and wrote a simple word count application to test 
 the sink. 
 It appears there are still problems. 
 Here's my code. 

import org.apache.flink.api.scala._ 
 import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment 
 import org.apache.flink.streaming.api.scala.extensions._ 
 import org.apache.flink.configuration.Configuration 
 import org.apache.ignite.Ignition 
 import org.apache.ignite.configuration.CacheConfiguration 

import scala.collection.JavaConverters._ 

object WordCount { 

        def main(args: Array[String])

{                 

val env = StreamExecutionEnvironment.getExecutionEnvironment

val igniteSink = new IgniteSink[java.util.Map[String, Int]]("aaa", "ignite.xml")

igniteSink.setAllowOverwrite(false)
igniteSink.setAutoFlushFrequency(1)

igniteSink.open(new Configuration)


// get input data
val text = env.fromElements(
 "To be, or not to be,--that is the question:--",
 "Whether 'tis nobler in the mind to suffer",
 "The slings and arrows of outrageous fortune",
 "Or to take arms against a sea of troubles,")


val counts = text
 // split up the lines in pairs (2-tuples) containing: (word,1)
 .flatMap(_.toLowerCase.split("\\W+"))
 .filter(_.nonEmpty)
 .map((_, 1))
 // group by the tuple field "0" and sum up tuple field "1"
 .keyBy(0)
 .sum(1)
 // Convert to key/value format before ingesting to Ignite
 .mapWith \{ case (k: String, v: Int) => Map(k -> v).asJava }
 .addSink(igniteSink)

try
 env.execute("Streaming WordCount1")
catch {
 case e: Exception =>

 // Exception handling.
} finally igniteSink.close()

        } 
 } 

I tried running this application in Idea and the error log snippet is as 
 follows 

07/16/2018 11:05:30 aggregation -> Map -> Sink: Unnamed(4/8) switched to 
 FAILED 
 class org.apache.ignite.IgniteException: Default Ignite instance has already 
 been started. 
         at 
 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
 
         at org.apache.ignite.Ignition.start(Ignition.java:355) 
         at IgniteSink.open(IgniteSink.java:135) 
         at 
 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
 
         at 
 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
 
         at 
 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376)
 
         at 
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253) 
         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702) 
         at java.lang.Thread.run(Thread.java:745) 
 Caused by: class org.apache.ignite.IgniteCheckedException: Default Ignite 
 instance has already been started. 
         at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1134) 
         at 
 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724) 
         at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693) 
         at org.apache.ignite.Ignition.start(Ignition.java:352) 
         ... 7 more 

07/16/2018 11:05:30 Job execution switched to status FAILING. 


was (Author: ldz):
[~samaitra]

I tried your newest code and wrote a simple word count application to test 
the sink. 
It appears there's still problems. 
Here's my code. 



import org.apache.flink.api.scala._ 
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment 
import org.apache.flink.streaming.api.scala.extensions._ 
import org.apache.flink.configuration.Configuration 
import org.apache.ignite.Ignition 
import org.apache.ignite.configuration.CacheConfiguration 

import scala.collection.JavaConverters._ 


object WordCount { 

        def main(args: Array[String]) { 

                val ignite = Ignition.start("ignite.xml") 
                val cacheConfig = new CacheConfiguration[Any, Any]() 
                ignite.destroyCache("aaa") 
                cacheConfig.setName("aaa") 
                cacheConfig.setSqlSchema("PUBLIC") 
                ignite.createCache(cacheConfig) 
                ignite.close() 


                // set up the execution environment 
                val env = StreamExecutionEnvironment.getExecutionEnvironment 

                val igniteSink = new IgniteSink[java.util.Map[String, 
Int]]("aaa", 
"ignite.xml") 

                igniteSink.setAllowOverwrite(false) 
                igniteSink.setAutoFlushFrequency(1) 

                

[jira] [Commented] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-07-16 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16545977#comment-16545977
 ] 

Ray commented on IGNITE-8697:
-

[~samaitra]

I tried your newest code and wrote a simple word count application to test 
the sink. 
It appears there are still problems. 
Here's my code. 



import org.apache.flink.api.scala._ 
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment 
import org.apache.flink.streaming.api.scala.extensions._ 
import org.apache.flink.configuration.Configuration 
import org.apache.ignite.Ignition 
import org.apache.ignite.configuration.CacheConfiguration 

import scala.collection.JavaConverters._ 


object WordCount { 

        def main(args: Array[String]) { 

                val ignite = Ignition.start("ignite.xml") 
                val cacheConfig = new CacheConfiguration[Any, Any]() 
                ignite.destroyCache("aaa") 
                cacheConfig.setName("aaa") 
                cacheConfig.setSqlSchema("PUBLIC") 
                ignite.createCache(cacheConfig) 
                ignite.close() 


                // set up the execution environment 
                val env = StreamExecutionEnvironment.getExecutionEnvironment 

                val igniteSink = new IgniteSink[java.util.Map[String, Int]]("aaa", "ignite.xml") 

                igniteSink.setAllowOverwrite(false) 
                igniteSink.setAutoFlushFrequency(1) 

                igniteSink.open(new Configuration) 


                // get input data 
                val text = env.fromElements( 
                        "To be, or not to be,--that is the question:--", 
                        "Whether 'tis nobler in the mind to suffer", 
                        "The slings and arrows of outrageous fortune", 
                        "Or to take arms against a sea of troubles,") 


                val counts = text 
                        // split up the lines in pairs (2-tuples) containing: (word, 1) 
                        .flatMap(_.toLowerCase.split("\\W+")) 
                        .filter(_.nonEmpty) 
                        .map((_, 1)) 
                        // group by the tuple field "0" and sum up tuple field "1" 
                        .keyBy(0) 
                        .sum(1) 
                        // convert to key/value format before ingesting to Ignite 
                        .mapWith { case (k: String, v: Int) => Map(k -> v).asJava } 
                        .addSink(igniteSink) 

                try 
                        env.execute("Streaming WordCount1") 
                catch { 
                        case e: Exception => 

                        // exception handling 
                } finally igniteSink.close() 

        } 
} 

I tried running this application in IDEA, and the error log snippet is as 
follows: 

07/16/2018 11:05:30 aggregation -> Map -> Sink: Unnamed(4/8) switched to FAILED 
class org.apache.ignite.IgniteException: Default Ignite instance has already been started. 
        at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990) 
        at org.apache.ignite.Ignition.start(Ignition.java:355) 
        at IgniteSink.open(IgniteSink.java:135) 
        at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36) 
        at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111) 
        at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:376) 
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253) 
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702) 
        at java.lang.Thread.run(Thread.java:745) 
Caused by: class org.apache.ignite.IgniteCheckedException: Default Ignite instance has already been started. 
        at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1134) 
        at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069) 
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955) 
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854) 
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724) 
        at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693) 
        at org.apache.ignite.Ignition.start(Ignition.java:352) 
        ... 7 more 

07/16/2018 11:05:30 Job execution switched to status FAILING. 
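For context, the "Default Ignite instance has already been started" error above happens because the word count driver starts a default (unnamed) Ignite node via Ignition.start("ignite.xml"), and IgniteSink.open() then tries to start the default instance again in the same JVM. A minimal sketch of one workaround, assuming the usual Spring XML configuration (the file split and the instance name here are illustrative, not part of the reported setup; the bean properties are standard IgniteConfiguration setters):

```xml
<!-- Sketch: a separate config for the node the application itself starts for
     cache setup. Giving it a distinct igniteInstanceName (and client mode)
     means the sink's own Ignition.start("ignite.xml") for the default
     instance no longer collides with it in the same JVM. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="igniteInstanceName" value="wordcount-setup"/>
    <property name="clientMode" value="true"/>
</bean>
```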

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>

[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-06-26 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523268#comment-16523268
 ] 

Ray commented on IGNITE-8534:
-

Hello, [~NIzhikov]

I have fixed all failed tests and added a few more for the ltrim/rtrim/trim functions.

Please review and share your comments.

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-06-15 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513514#comment-16513514
 ] 

Ray commented on IGNITE-8697:
-

[~samaitra]

Are you able to reproduce this issue?

Any plans for a fix?

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>Priority: Blocker
>
> If I submit the application to the Flink cluster using the Ignite Flink sink, I 
> get this error:
>  
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
>   at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: 
> Cache name must not be null or empty.
>   at 
> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
>   at 
> org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext$Holder.<init>(IgniteSink.java:183)
>   ... 27 more





[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-06-06 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504264#comment-16504264
 ] 

Ray commented on IGNITE-8534:
-

[~vveider] [~dpavlov]

Can you please review the PR and share your comments?

All unit tests in the spark module passed, and the build succeeded in my local 
environment.

 

In the last week alone, two users on the user list tried to use Spark 2.3 with 
Ignite.

[http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-connection-using-Config-file-td21827.html]

[http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-standalone-mode-on-Kubernetes-cluster-td21739.html]

 

 

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  





[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-06-06 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504174#comment-16504174
 ] 

Ray commented on IGNITE-8534:
-

Hi [~vveider]

I removed the spark-2.10 module in this ticket, not scalar-2.10.

I believe you're talking about the spark-2.10 module here.

As I explained here 
[http://apache-ignite-developers.2346864.n4.nabble.com/Review-request-for-IGNITE-8534-Upgrade-Ignite-Spark-Module-s-Spark-version-to-2-3-td30979.html#a31078],
 before Spark 2.3, Spark users could use either Scala 2.10 or 2.11 to write their 
applications.

So in Ignite, the spark-2.10 module existed to accommodate users who wrote their 
Spark applications with Scala 2.10.

But in Spark 2.3 the Spark community removed support for Scala 2.10, so it's only 
reasonable to remove the spark-2.10 module in Ignite: if we don't remove it, we 
can't upgrade the spark module to Spark 2.3.0.
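At the dependency level, the reason is simple: Spark 2.3.0 publishes only Scala 2.11 artifacts, so a _2.10-suffixed module has nothing to build against. A hedged sketch of the coordinate change (these are the standard Spark Maven coordinates; the actual Ignite pom layout is not shown here):

```xml
<!-- Spark 2.3.0 exists only for Scala 2.11; there is no spark-core_2.10:2.3.0
     artifact, which is why the Ignite spark-2.10 module cannot survive the
     upgrade. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.3.0</version>
</dependency>
```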

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  





[jira] [Commented] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-06-05 Thread Ray (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502828#comment-16502828
 ] 

Ray commented on IGNITE-8697:
-

Yes, the cache is already created before running my Flink application. 

The issue can be reproduced every time you submit your Flink application to your 
Flink cluster. 

> Flink sink throws java.lang.IllegalArgumentException when running in flink 
> cluster mode.
> 
>
> Key: IGNITE-8697
> URL: https://issues.apache.org/jira/browse/IGNITE-8697
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3, 2.4, 2.5
>Reporter: Ray
>Assignee: Roman Shtykh
>Priority: Blocker
>
> If I submit the application to the Flink cluster using the Ignite Flink sink, I 
> get this error:
>  
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
>   at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
>   at 
> org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
>   at 
> org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: 
> Cache name must not be null or empty.
>   at 
> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
>   at 
> org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
>   at 
> org.apache.ignite.sink.flink.IgniteSink$SinkContext$Holder.<init>(IgniteSink.java:183)
>   ... 27 more





[jira] [Created] (IGNITE-8697) Flink sink throws java.lang.IllegalArgumentException when running in flink cluster mode.

2018-06-04 Thread Ray (JIRA)
Ray created IGNITE-8697:
---

 Summary: Flink sink throws java.lang.IllegalArgumentException when 
running in flink cluster mode.
 Key: IGNITE-8697
 URL: https://issues.apache.org/jira/browse/IGNITE-8697
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5, 2.4, 2.3
Reporter: Ray
Assignee: Roman Shtykh


If I submit the application to the Flink cluster using the Ignite Flink sink, I get 
this error:

 
java.lang.ExceptionInInitializerError
        at org.apache.ignite.sink.flink.IgniteSink$SinkContext.getStreamer(IgniteSink.java:201)
        at org.apache.ignite.sink.flink.IgniteSink$SinkContext.access$100(IgniteSink.java:175)
        at org.apache.ignite.sink.flink.IgniteSink.invoke(IgniteSink.java:165)
        at org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
        at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
        at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
        at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
        at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
        at org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:97)
        at org.myorg.quickstart.InstrumentStreamer$Splitter.flatMap(InstrumentStreamer.java:1)
        at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
        at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
        at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
        at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
        at org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:110)
        at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
        at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:56)
        at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid: Cache 
name must not be null or empty.
        at org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
        at org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1581)
        at org.apache.ignite.internal.IgniteKernal.dataStreamer(IgniteKernal.java:3284)
        at org.apache.ignite.sink.flink.IgniteSink$SinkContext$Holder.<init>(IgniteSink.java:183)
        ... 27 more
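The "Cache name must not be null or empty" failure is consistent with per-JVM static state: the sink keeps its cache name where the static holder can see it on the client, but static fields are not part of Java serialization, so a value set in the client JVM never reaches the task-manager JVM that deserializes the sink. A minimal, Flink/Ignite-free sketch of that mechanism (class and field names are illustrative, not the real IgniteSink internals):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StaticStateDemo {
    // Illustrative stand-in for a sink: one static field, one instance field.
    static class Sink implements Serializable {
        static String staticCacheName;   // per-JVM; never written by serialization
        final String instanceCacheName;  // travels inside the serialized object

        Sink(String name) { this.instanceCacheName = name; }
    }

    /** Serializes a Sink, clears the static (simulating a fresh remote JVM),
     *  deserializes, and returns {staticValue, instanceValue} as seen remotely. */
    static String[] roundTrip() throws Exception {
        Sink.staticCacheName = "aaa";            // set on the "client"
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Sink("aaa"));
        }

        Sink.staticCacheName = null;             // a task-manager JVM starts empty
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            Sink remote = (Sink) ois.readObject();
            return new String[] { Sink.staticCacheName, remote.instanceCacheName };
        }
    }

    public static void main(String[] args) throws Exception {
        String[] seen = roundTrip();
        System.out.println("static on remote: " + seen[0]);    // stays null
        System.out.println("instance on remote: " + seen[1]);  // "aaa" survives
    }
}
```

This is why moving such state into serialized instance fields (rather than statics initialized on the client) is the usual fix for sinks that run on remote task managers.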





[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-05-23 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486783#comment-16486783
 ] 

Ray commented on IGNITE-8534:
-

Hello, [~NIzhikov]

I have attached the dev list discussion thread; Denis says he's OK with the 
upgrade.

Can you share your thoughts?

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  





[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-05-20 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482133#comment-16482133
 ] 

Ray commented on IGNITE-8534:
-

I removed the spark-2.10 module because Spark 2.3.0 removed support for Scala 2.10:

https://issues.apache.org/jira/browse/SPARK-19810 

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  





[jira] [Commented] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-05-20 Thread Ray (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482130#comment-16482130
 ] 

Ray commented on IGNITE-8534:
-

[~vkulichenko] [~NIzhikov]

 
Please review my changes.

> Upgrade Ignite Spark Module's Spark version to 2.3.0
> 
>
> Key: IGNITE-8534
> URL: https://issues.apache.org/jira/browse/IGNITE-8534
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Ray
>Assignee: Ray
>Priority: Major
> Fix For: 2.6
>
>
> Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
> Ignite Spark module to the latest version.
>  





[jira] [Created] (IGNITE-8534) Upgrade Ignite Spark Module's Spark version to 2.3.0

2018-05-20 Thread Ray (JIRA)
Ray created IGNITE-8534:
---

 Summary: Upgrade Ignite Spark Module's Spark version to 2.3.0
 Key: IGNITE-8534
 URL: https://issues.apache.org/jira/browse/IGNITE-8534
 Project: Ignite
  Issue Type: Improvement
  Components: spark
Reporter: Ray
Assignee: Ray
 Fix For: 2.6


Spark released its newest version, 2.3.0, on Feb 28th; we should upgrade the 
Ignite Spark module to the latest version.

 


