[jira] [Comment Edited] (FLINK-29587) Fail to generate code for SearchOperator

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619946#comment-17619946
 ] 

luoyuxia edited comment on FLINK-29587 at 10/19/22 2:29 AM:


After detailed debugging, I found the reason is that the reduced expression 
`200 = table1.dimid and table1.dimid = 100` won't be simplified to false in 
`RexSimplify#simplify`. A search operator with an empty range is then 
constructed, which causes the exception reported in this Jira.

 

The better way to fix it is to repair the simplification logic so that such 
cases are simplified; that resolves this Jira, and SQL such as `select * from t 
where a = 100 and 200 = a` benefits from it as well.


was (Author: luoyuxia):
After detailed debugging, I found the reason is that the reduced expression 
`200 = table1.dimid and table1.dimid = 100` won't be simplified to false in 
`RexSimplify#simplify`. A search operator with an empty range is then 
constructed, which causes the exception reported in this Jira.

 

The better way to fix it is to repair the simplification logic so that such 
cases are simplified; that resolves this Jira, and SQL such as `select * from t 
where a = 100 and 200 = a` benefits from it as well.

> Fail to generate code for SearchOperator 
> --
>
> Key: FLINK-29587
> URL: https://issues.apache.org/jira/browse/FLINK-29587
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> Can be reproduced with the following code using the Hive dialect:
> {code:java}
> // hive dialect
> tableEnv.executeSql("create table table1 (id int, val string, val1 string, 
> dimid int)");
> tableEnv.executeSql("create table table3 (id int)");
> CollectionUtil.iteratorToList(
> tableEnv.executeSql(
> "select table1.id, table1.val, table1.val1 from 
> table1 left semi join"
> + " table3 on table1.dimid = table3.id and 
> table3.id = 100 where table1.dimid = 200")
> .collect());{code}
> The plan is 
> {code:java}
> LogicalSink(table=[*anonymous_collect$1*], fields=[id, val, val1])
>   LogicalProject(id=[$0], val=[$1], val1=[$2])
>     LogicalFilter(condition=[=($3, 200)])
>       LogicalJoin(condition=[AND(=($3, $4), =($4, 100))], joinType=[semi])
>         LogicalTableScan(table=[[test-catalog, default, table1]])
>         LogicalTableScan(table=[[test-catalog, default, 
> table3]])BatchPhysicalSink(table=[*anonymous_collect$1*], fields=[id, val, 
> val1])
>   BatchPhysicalNestedLoopJoin(joinType=[LeftSemiJoin], where=[$f1], 
> select=[id, val, val1], build=[right])
>     BatchPhysicalCalc(select=[id, val, val1], where=[=(dimid, 200)])
>       BatchPhysicalTableSourceScan(table=[[test-catalog, default, table1]], 
> fields=[id, val, val1, dimid])
>     BatchPhysicalExchange(distribution=[broadcast])
>       BatchPhysicalCalc(select=[SEARCH(id, Sarg[]) AS $f1])
>         BatchPhysicalTableSourceScan(table=[[test-catalog, default, table3]], 
> fields=[id]) {code}
>  
> But it throws an exception when generating code for it.
> The exception is:
>  
>  
> {code:java}
> java.util.NoSuchElementException
>     at 
> com.google.common.collect.ImmutableRangeSet.span(ImmutableRangeSet.java:203)
>     at org.apache.calcite.util.Sarg.isComplementedPoints(Sarg.java:148)
>     at 
> org.apache.flink.table.planner.codegen.calls.SearchOperatorGen$.generateSearch(SearchOperatorGen.scala:87)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:474)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$4(CalcCodeGenerator.scala:140)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:140)
>     at 
> 

[jira] [Comment Edited] (FLINK-29587) Fail to generate code for SearchOperator

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619946#comment-17619946
 ] 

luoyuxia edited comment on FLINK-29587 at 10/19/22 2:29 AM:


After detailed debugging, I found the reason is that the reduced expression 
`200 = table1.dimid and table1.dimid = 100` won't be simplified to false in 
`RexSimplify#simplify`. A search operator with an empty range is then 
constructed, which causes the exception reported in this Jira.

 

The better way to fix it is to repair the simplification logic so that such 
cases are simplified; that resolves this Jira, and SQL such as `select * from t 
where a = 100 and 200 = a` benefits from it as well.


was (Author: luoyuxia):
After detailed debugging, I found the reason is that the reduced expression 
`200 = table1.dimid and table1.dimid = 100` won't be simplified to false in 
`RexSimplify#simplify`. A search operator with an empty range is then 
constructed, which causes the exception reported in this Jira.

 

The better way to fix it is to repair the simplification logic so that such 
cases are simplified; that resolves this Jira, and SQL such as `select * from t 
where a = 100 and 200 = a` benefits from it as well.

> Fail to generate code for SearchOperator 
> --
>
> Key: FLINK-29587
> URL: https://issues.apache.org/jira/browse/FLINK-29587
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> Can be reproduced with the following code using the Hive dialect:
> {code:java}
> // hive dialect
> tableEnv.executeSql("create table table1 (id int, val string, val1 string, 
> dimid int)");
> tableEnv.executeSql("create table table3 (id int)");
> CollectionUtil.iteratorToList(
> tableEnv.executeSql(
> "select table1.id, table1.val, table1.val1 from 
> table1 left semi join"
> + " table3 on table1.dimid = table3.id and 
> table3.id = 100 where table1.dimid = 200")
> .collect());{code}
> The plan is 
> {code:java}
> LogicalSink(table=[*anonymous_collect$1*], fields=[id, val, val1])
>   LogicalProject(id=[$0], val=[$1], val1=[$2])
>     LogicalFilter(condition=[=($3, 200)])
>       LogicalJoin(condition=[AND(=($3, $4), =($4, 100))], joinType=[semi])
>         LogicalTableScan(table=[[test-catalog, default, table1]])
>         LogicalTableScan(table=[[test-catalog, default, 
> table3]])BatchPhysicalSink(table=[*anonymous_collect$1*], fields=[id, val, 
> val1])
>   BatchPhysicalNestedLoopJoin(joinType=[LeftSemiJoin], where=[$f1], 
> select=[id, val, val1], build=[right])
>     BatchPhysicalCalc(select=[id, val, val1], where=[=(dimid, 200)])
>       BatchPhysicalTableSourceScan(table=[[test-catalog, default, table1]], 
> fields=[id, val, val1, dimid])
>     BatchPhysicalExchange(distribution=[broadcast])
>       BatchPhysicalCalc(select=[SEARCH(id, Sarg[]) AS $f1])
>         BatchPhysicalTableSourceScan(table=[[test-catalog, default, table3]], 
> fields=[id]) {code}
>  
> But it throws an exception when generating code for it.
> The exception is:
>  
>  
> {code:java}
> java.util.NoSuchElementException
>     at 
> com.google.common.collect.ImmutableRangeSet.span(ImmutableRangeSet.java:203)
>     at org.apache.calcite.util.Sarg.isComplementedPoints(Sarg.java:148)
>     at 
> org.apache.flink.table.planner.codegen.calls.SearchOperatorGen$.generateSearch(SearchOperatorGen.scala:87)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:474)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$4(CalcCodeGenerator.scala:140)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:140)
>     at 
> 

[jira] [Commented] (FLINK-29587) Fail to generate code for SearchOperator

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619946#comment-17619946
 ] 

luoyuxia commented on FLINK-29587:
--

After detailed debugging, I found the reason is that the reduced expression 
`200 = table1.dimid and table1.dimid = 100` won't be simplified to false in 
`RexSimplify#simplify`. A search operator with an empty range is then 
constructed, which causes the exception reported in this Jira.

 

The better way to fix it is to repair the simplification logic so that such 
cases are simplified; that resolves this Jira, and SQL such as `select * from t 
where a = 100 and 200 = a` benefits from it as well.

> Fail to generate code for SearchOperator 
> --
>
> Key: FLINK-29587
> URL: https://issues.apache.org/jira/browse/FLINK-29587
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> Can be reproduced with the following code using the Hive dialect:
> {code:java}
> // hive dialect
> tableEnv.executeSql("create table table1 (id int, val string, val1 string, 
> dimid int)");
> tableEnv.executeSql("create table table3 (id int)");
> CollectionUtil.iteratorToList(
> tableEnv.executeSql(
> "select table1.id, table1.val, table1.val1 from 
> table1 left semi join"
> + " table3 on table1.dimid = table3.id and 
> table3.id = 100 where table1.dimid = 200")
> .collect());{code}
> The plan is 
> {code:java}
> LogicalSink(table=[*anonymous_collect$1*], fields=[id, val, val1])
>   LogicalProject(id=[$0], val=[$1], val1=[$2])
>     LogicalFilter(condition=[=($3, 200)])
>       LogicalJoin(condition=[AND(=($3, $4), =($4, 100))], joinType=[semi])
>         LogicalTableScan(table=[[test-catalog, default, table1]])
>         LogicalTableScan(table=[[test-catalog, default, 
> table3]])BatchPhysicalSink(table=[*anonymous_collect$1*], fields=[id, val, 
> val1])
>   BatchPhysicalNestedLoopJoin(joinType=[LeftSemiJoin], where=[$f1], 
> select=[id, val, val1], build=[right])
>     BatchPhysicalCalc(select=[id, val, val1], where=[=(dimid, 200)])
>       BatchPhysicalTableSourceScan(table=[[test-catalog, default, table1]], 
> fields=[id, val, val1, dimid])
>     BatchPhysicalExchange(distribution=[broadcast])
>       BatchPhysicalCalc(select=[SEARCH(id, Sarg[]) AS $f1])
>         BatchPhysicalTableSourceScan(table=[[test-catalog, default, table3]], 
> fields=[id]) {code}
>  
> But it throws an exception when generating code for it.
> The exception is:
>  
>  
> {code:java}
> java.util.NoSuchElementException
>     at 
> com.google.common.collect.ImmutableRangeSet.span(ImmutableRangeSet.java:203)
>     at org.apache.calcite.util.Sarg.isComplementedPoints(Sarg.java:148)
>     at 
> org.apache.flink.table.planner.codegen.calls.SearchOperatorGen$.generateSearch(SearchOperatorGen.scala:87)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:474)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$4(CalcCodeGenerator.scala:140)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:140)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:164)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:49)
>     at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator.generateCalcOperator(CalcCodeGenerator.scala)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:100)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
>     at 
> 

[jira] [Commented] (FLINK-29679) DESCRIBE statement shows comment

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619921#comment-17619921
 ] 

luoyuxia commented on FLINK-29679:
--

[~liyubin117] Is this for the table's comment or the column comments?

> DESCRIBE statement shows comment
> 
>
> Key: FLINK-29679
> URL: https://issues.apache.org/jira/browse/FLINK-29679
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Comments are very helpful for making a table schema user-friendly; many data 
> analysts rely on them to write SQL adapted to the corresponding business 
> logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29678) Data may be lost when sinking a bounded stream into filesystem with auto compact enabled in streaming mode

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619440#comment-17619440
 ] 

luoyuxia commented on FLINK-29678:
--

[~martijnvisser] Thanks for the reminder. Sorry that I forgot to mention 
enabling checkpointing in the description.

I'm afraid the data loss has nothing to do with checkpointing. The test still 
fails even when I enable checkpointing in `StreamingTestBase#before` via 
`env.enableCheckpointing(100)`.

> Data may be lost when sinking a bounded stream into filesystem with auto 
> compact enabled in streaming mode 
> ---
>
> Key: FLINK-29678
> URL: https://issues.apache.org/jira/browse/FLINK-29678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> In streaming mode, when writing a bounded data stream into the filesystem 
> with auto compaction enabled, data may be lost.
> We can reproduce it by adding the line `'auto-compaction'='true'` in 
> `FileSystemITCaseBase#open` to enable auto compaction.
> {code:java}
> tableEnv.executeSql(
>   s"""
>  |create table partitionedTable (
>  |  x string,
>  |  y int,
>  |  a int,
>  |  b bigint,
>  |  c as b + 1
>  |) partitioned by (a, b) with (
>  |  'connector' = 'filesystem',
>  |  'auto-compaction'='true', // added line to enable auto compaction.
>  |  'path' = '$getScheme://$resultPath',
>  |  ${formatProperties().mkString(",\n")}
>  |)
>""".stripMargin
> ) {code}
> Then the test `StreamFileSystemTestCsvITCase#testPartialDynamicPartition` 
> will fail with the assertion failure:
> {code:java}
> java.lang.AssertionError: 
> Expected :List(x18,18)
> Actual   :List() {code}
> No data has been written into the table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29678) Data may be lost when sinking a bounded stream into filesystem with auto compact enabled in streaming mode

2022-10-18 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619427#comment-17619427
 ] 

luoyuxia commented on FLINK-29678:
--

In streaming mode, with auto compaction, the writing pipeline is 
CompactFileWriter, CompactCoordinator, CompactOperator, PartitionCommitter.

 

If the data stream is bounded, CompactFileWriter writes file1 and file2 and 
then calls the method endInput. `CompactCoordinator` is expected to pack file1 
and file2 and send them to the downstream CompactOperator. But 
`CompactCoordinator` won't do that since it has no `endInput` method. As a 
result, file1 and file2 will never be compacted and thus stay invisible to the 
user, so the data in file1 and file2 is lost.

 

To fix it, we need to add an `endInput` method to `CompactCoordinator` that 
packs the remaining files written between the last snapshot and `endInput`. 
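For illustration, a rough sketch of the shape of that fix, assuming 
`CompactCoordinator` can implement Flink's `BoundedOneInput` interface; 
`packAndEmitPendingFiles` is a hypothetical helper, not existing code:
{code:java}
import org.apache.flink.streaming.api.operators.BoundedOneInput;

// Rough sketch only: let CompactCoordinator flush its buffered files when the
// bounded input ends, mirroring what it already does on checkpoints.
public class CompactCoordinator implements BoundedOneInput {

    @Override
    public void endInput() throws Exception {
        // Hypothetical helper: pack the files received since the last snapshot
        // into compaction units and emit them to the downstream CompactOperator,
        // so they are compacted and become visible without a final checkpoint.
        packAndEmitPendingFiles();
    }

    private void packAndEmitPendingFiles() {
        // ... group the pending files per partition and emit them downstream ...
    }
}
{code}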

> Data may be lost when sinking a bounded stream into filesystem with auto 
> compact enabled in streaming mode 
> ---
>
> Key: FLINK-29678
> URL: https://issues.apache.org/jira/browse/FLINK-29678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> In streaming mode, when writing a bounded data stream into the filesystem 
> with auto compaction enabled, data may be lost.
> We can reproduce it by adding the line `'auto-compaction'='true'` in 
> `FileSystemITCaseBase#open` to enable auto compaction.
> {code:java}
> tableEnv.executeSql(
>   s"""
>  |create table partitionedTable (
>  |  x string,
>  |  y int,
>  |  a int,
>  |  b bigint,
>  |  c as b + 1
>  |) partitioned by (a, b) with (
>  |  'connector' = 'filesystem',
>  |  'auto-compaction'='true', // added line to enable auto compaction.
>  |  'path' = '$getScheme://$resultPath',
>  |  ${formatProperties().mkString(",\n")}
>  |)
>""".stripMargin
> ) {code}
> Then the test `StreamFileSystemTestCsvITCase#testPartialDynamicPartition` 
> will fail with the assertion failure:
> {code:java}
> java.lang.AssertionError: 
> Expected :List(x18,18)
> Actual   :List() {code}
> No data has been written into the table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29678) Data may be lost when sinking a bounded stream into filesystem with auto compact enabled in streaming mode

2022-10-18 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29678:
-
Description: 
In streaming mode, when writing a bounded data stream into the filesystem with 
auto compaction enabled, data may be lost.

We can reproduce it by adding the line `'auto-compaction'='true'` in 
`FileSystemITCaseBase#open` to enable auto compaction.
{code:java}
tableEnv.executeSql(
  s"""
 |create table partitionedTable (
 |  x string,
 |  y int,
 |  a int,
 |  b bigint,
 |  c as b + 1
 |) partitioned by (a, b) with (
 |  'connector' = 'filesystem',
 |  'auto-compaction'='true', // added line to enable auto compaction.
 |  'path' = '$getScheme://$resultPath',
 |  ${formatProperties().mkString(",\n")}
 |)
   """.stripMargin
) {code}
Then the test `StreamFileSystemTestCsvITCase#testPartialDynamicPartition` will 
fail with the assertion failure:
{code:java}
java.lang.AssertionError: 
Expected :List(x18,18)
Actual   :List() {code}
No data has been written into the table.

> Data may be lost when sinking a bounded stream into filesystem with auto 
> compact enabled in streaming mode 
> ---
>
> Key: FLINK-29678
> URL: https://issues.apache.org/jira/browse/FLINK-29678
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> In streaming mode, when writing a bounded data stream into the filesystem 
> with auto compaction enabled, data may be lost.
> We can reproduce it by adding the line `'auto-compaction'='true'` in 
> `FileSystemITCaseBase#open` to enable auto compaction.
> {code:java}
> tableEnv.executeSql(
>   s"""
>  |create table partitionedTable (
>  |  x string,
>  |  y int,
>  |  a int,
>  |  b bigint,
>  |  c as b + 1
>  |) partitioned by (a, b) with (
>  |  'connector' = 'filesystem',
>  |  'auto-compaction'='true', // added line to enable auto compaction.
>  |  'path' = '$getScheme://$resultPath',
>  |  ${formatProperties().mkString(",\n")}
>  |)
>""".stripMargin
> ) {code}
> Then the test `StreamFileSystemTestCsvITCase#testPartialDynamicPartition` 
> will fail with the assertion failure:
> {code:java}
> java.lang.AssertionError: 
> Expected :List(x18,18)
> Actual   :List() {code}
> No data has been written into the table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29678) Data may be lost when sinking a bounded stream into filesystem with auto compact enabled in streaming mode

2022-10-18 Thread luoyuxia (Jira)
luoyuxia created FLINK-29678:


 Summary: Data may be lost when sinking a bounded stream into filesystem 
with auto compact enabled in streaming mode 
 Key: FLINK-29678
 URL: https://issues.apache.org/jira/browse/FLINK-29678
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.15.0
Reporter: luoyuxia






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29653) close DynamicResult but it was already closed

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619188#comment-17619188
 ] 

luoyuxia commented on FLINK-29653:
--

cc [~fsk119] 

> close DynamicResult but it was already closed
> -
>
> Key: FLINK-29653
> URL: https://issues.apache.org/jira/browse/FLINK-29653
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Jingqi Shi
>Priority: Major
>
> We have built a test environment with Apache Kyuubi and Flink, and 
> experienced a problem with the LocalExecutor class.
>  
> {code:java}
> 2022-09-08 22:53:37,729 WARN  
> org.apache.kyuubi.engine.flink.operation.ExecuteStatement    [] - Failed to 
> clean result set 4515c4aa72d73cf368aba5ddabb675ce in session 
> 681753c2-d945-4200-861c-6ad739e2a92c
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not find a 
> result with result identifier '4515c4aa72d73cf368aba5ddabb675ce'.
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.cancelQuery(LocalExecutor.java:306)
>  ~[flink-sql-client-1.15.1-SNAPSHOT.jar:1.15.1-SNAPSHOT]
>     at 
> org.apache.kyuubi.engine.flink.operation.ExecuteStatement.cleanupQueryResult(ExecuteStatement.scala:180)
>  
> [kyuubi-flink-sql-engine_2.12-1.5.1-incubating-SNAPSHOT.jar:1.5.1-incubating-SNAPSHOT]
>     at 
> org.apache.kyuubi.engine.flink.operation.ExecuteStatement.runQueryOperation(ExecuteStatement.scala:167)
>  
> [kyuubi-flink-sql-engine_2.12-1.5.1-incubating-SNAPSHOT.jar:1.5.1-incubating-SNAPSHOT]
>     at 
> org.apache.kyuubi.engine.flink.operation.ExecuteStatement.org$apache$kyuubi$engine$flink$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:111)
>  
> [kyuubi-flink-sql-engine_2.12-1.5.1-incubating-SNAPSHOT.jar:1.5.1-incubating-SNAPSHOT]
>     at 
> org.apache.kyuubi.engine.flink.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:81)
>  
> [kyuubi-flink-sql-engine_2.12-1.5.1-incubating-SNAPSHOT.jar:1.5.1-incubating-SNAPSHOT]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_202]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_202]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_202]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_202]
>     at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202] {code}
>  
>  
> ResultStore maps resultId to result and ignores sessionId, so when one 
> session reaches closeSession(), all results in the ResultStore are closed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619186#comment-17619186
 ] 

luoyuxia commented on FLINK-29657:
--

I see. Most Hive SQL statements that work in Hive 1.2 should also work in Hive 
2.3, since Hive SQL is expected to be backward compatible, but some behavior 
may differ between Hive 1.2 and Hive 2.3. From my experience, most of the 
changes are bug fixes or other things the Hive community decided to change 
after careful discussion.

[~Runking] If you have any other problems, please let me know.  I'm really glad 
to help. :)

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.
> Hive SQL's behavior:
> hive> explain select 1.1 + false;
> 2022-10-17 16:37:14,286    FAILED: SemanticException [Error 10014]: Line 1:15 
> Wrong arguments 'false': No matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
> ({*}double{*}, boolean)
> Flink hive parser's behavior:
> in NumExprProcessor#process, it processes a number without a type suffix as 
> decimal by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29652) get duplicate result from sql-client in BATCH mode

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17619179#comment-17619179
 ] 

luoyuxia commented on FLINK-29652:
--

Thanks for the explanation.

[~fsk119] Could you please have a look and assign this ticket to [~shijingqi]?

> get duplicate result from sql-client in BATCH mode
> --
>
> Key: FLINK-29652
> URL: https://issues.apache.org/jira/browse/FLINK-29652
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0, 1.14.0, 1.15.0, 1.16.0
>Reporter: Jingqi Shi
>Priority: Major
>
> In BATCH mode, we experienced problems with flink-sql-client when retrieving 
> result records. We may get duplicate row records occasionally, even when 
> querying a hive/hudi table which contains only one record.
>  
> For example, SELECT COUNT(1) AS val FROM x.test_hive_table, we may get:
> {code:java}
> +--+
> | val  |
> +--+
> | 1    |
> | …    |
> | 1    |
> +--+ {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618779#comment-17618779
 ] 

luoyuxia edited comment on FLINK-29657 at 10/17/22 9:20 AM:


Just a quick question: which Hive version are you using? I guess some version 
lower than 2.3.

Actually, since Hive 2.3, a literal floating-point number (such as 1.1) has 
been considered as {*}decimal type{*}. You can refer to HIVE-13945 for more 
detail. 

I tried with Hive 2.3; the exception message is:
{code:java}
FAILED: SemanticException [Error 10014]: Line 1:8 Wrong arguments 'false': No 
matching method for class 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
(decimal(2,1), boolean) {code}


was (Author: luoyuxia):
Just a quick question: which Hive version are you using? I guess some version 
lower than 2.3.

Actually, since Hive 2.3, a literal floating-point number (such as 1.1) has 
been considered as {*}decimal type{*}. You can refer to 
[HIVE-13945|https://issues.apache.org/jira/browse/HIVE-13945] for more detail. 

I tried with Hive 2.3; the exception message is:
{code:java}
FAILED: SemanticException [Error 10014]: Line 1:8 Wrong arguments 'false': No 
matching method for class 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
(decimal(2,1), boolean) {code}

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.
> Hive SQL's behavior:
> hive> explain select 1.1 + false;
> 2022-10-17 16:37:14,286    FAILED: SemanticException [Error 10014]: Line 1:15 
> Wrong arguments 'false': No matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
> ({*}double{*}, boolean)
> Flink hive parser's behavior:
> in NumExprProcessor#process, it processes a number without a type suffix as 
> decimal by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618770#comment-17618770
 ] 

luoyuxia edited comment on FLINK-29657 at 10/17/22 9:20 AM:


Cool! [~Runking] Sure, I will have a look; any problems related to the Hive 
dialect will be my first priority.  

 


was (Author: luoyuxia):
Cool! [~Runking] Sure, I will have a look; any problems related to the Hive 
dialect will be my first priority. 

 

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.
> Hive SQL's behavior:
> hive> explain select 1.1 + false;
> 2022-10-17 16:37:14,286    FAILED: SemanticException [Error 10014]: Line 1:15 
> Wrong arguments 'false': No matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
> ({*}double{*}, boolean)
> Flink hive parser's behavior:
> in NumExprProcessor#process, it processes a number without a type suffix as 
> decimal by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618782#comment-17618782
 ] 

luoyuxia commented on FLINK-29657:
--

When supporting the Hive dialect, we follow Hive 2.3's behavior since it's 
widely used and stable, so we may miss some things in other versions.

But your problem is quick to fix: we just need to adjust the logic in 
`NumExprProcessor#process`.
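For illustration, a simplified sketch of the suffix-driven typing that logic 
follows (Hive 2.3 semantics per HIVE-13945); this is not Hive's actual code, 
and the class and method names are illustrative:
{code:java}
public class NumericLiteralTyping {

    // Simplified sketch of how a numeric literal's SQL type can be derived
    // from its suffix, following Hive 2.3 semantics (HIVE-13945).
    public static String typeOf(String text) {
        if (text.endsWith("BD")) {                              // 1.1BD
            return "DECIMAL";
        } else if (text.endsWith("D") || text.endsWith("d")) {  // 1.1D
            return "DOUBLE";
        } else if (text.endsWith("L") || text.endsWith("l")) {  // 1L
            return "BIGINT";
        } else if (text.contains(".")) {
            // Since Hive 2.3, an un-suffixed floating literal such as 1.1 is
            // typed as DECIMAL; pre-2.3 Hive typed it as DOUBLE.
            return "DECIMAL";
        } else {
            return "INT";
        }
    }
}
{code}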

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.
> Hive SQL's behavior:
> hive> explain select 1.1 + false;
> 2022-10-17 16:37:14,286    FAILED: SemanticException [Error 10014]: Line 1:15 
> Wrong arguments 'false': No matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
> ({*}double{*}, boolean)
> Flink hive parser's behavior:
> in NumExprProcessor#process, it processes a number without a type suffix as 
> decimal by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618779#comment-17618779
 ] 

luoyuxia commented on FLINK-29657:
--

Just a quick question: which Hive version are you using? I guess some version 
lower than 2.3.

Actually, since Hive 2.3, a literal floating-point number (such as 1.1) has 
been considered as {*}decimal type{*}. You can refer to 
[HIVE-13945|https://issues.apache.org/jira/browse/HIVE-13945] for more detail. 

I tried with Hive 2.3; the exception message is:
{code:java}
FAILED: SemanticException [Error 10014]: Line 1:8 Wrong arguments 'false': No 
matching method for class 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
(decimal(2,1), boolean) {code}

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.
> Hive SQL's behavior:
> hive> explain select 1.1 + false;
> 2022-10-17 16:37:14,286    FAILED: SemanticException [Error 10014]: Line 1:15 
> Wrong arguments 'false': No matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNumericPlus with 
> ({*}double{*}, boolean)
> Flink hive parser's behavior:
> in NumExprProcessor#process, it processes a number without a type suffix as 
> decimal by default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29657) Flink hive parser considers literal floating point number differently than Hive SQL

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618770#comment-17618770
 ] 

luoyuxia commented on FLINK-29657:
--

Cool! [~Runking] Sure, I will have a look; any problems related to the Hive 
dialect will be my first priority. 

 

> Flink hive parser considers literal floating point number differently than 
> Hive SQL
> ---
>
> Key: FLINK-29657
> URL: https://issues.apache.org/jira/browse/FLINK-29657
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Runkang He
>Priority: Major
>
> Hive SQL considers a literal floating-point number (such as 1.1) as double, 
> but the Flink hive parser considers it as decimal, which causes some Hive 
> UDFs that accept a double argument to fail the type check in the hive parser.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29651) Code gen will fail for like operator when the literal specified in user's sql hasn't been escaped

2022-10-17 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29651:
-
Summary: Code gen will fail for like operator when the literal specified 
in user's sql hasn't been escaped  (was: Code gen will fail when the literal 
specified in user's sql hasn't been escaped)

> Code gen will fail for like operator when the literal specified in user's 
> sql hasn't been escaped 
> -
>
> Key: FLINK-29651
> URL: https://issues.apache.org/jira/browse/FLINK-29651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> Can be reproduced with the following code in Flink 1.15:
>  
> {code:java}
> // testTable contains a column `field1`
> tableEnvironment
> .executeSql(
> "select *, '1' as run from testTable WHERE field1 LIKE 
> 'b\"cd\"e%'")
> .print(); {code}
> The exception is that the generated code fails to compile because it 
> contains the following code line:
>  
>  
> {code:java}
> private final org.apache.flink.table.data.binary.BinaryStringData str$6 = 
> org.apache.flink.table.data.binary.BinaryStringData.fromString("b"cd"e");  // 
>  mismatched input 'cd' expecting ')'{code}
> It seems to be produced by this 
> [pr|https://github.com/apache/flink/pull/19001], which changed the logic for 
> generating literals.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29652) get duplicate result from sql-client in BATCH mode

2022-10-17 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618719#comment-17618719
 ] 

luoyuxia commented on FLINK-29652:
--

[~shijingqi] Feel free to open a PR; we will review it for you. But could you 
please explain a bit why this happens, since it's really weird to me.

 

 

> get duplicate result from sql-client in BATCH mode
> --
>
> Key: FLINK-29652
> URL: https://issues.apache.org/jira/browse/FLINK-29652
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0, 1.14.0, 1.15.0, 1.16.0
>Reporter: Jingqi Shi
>Priority: Major
>
> In BATCH mode, we experienced problems with flink-sql-client when retrieving 
> result records. We may get duplicate row records occasionally, even when 
> querying a hive/hudi table which contains only one record.
>  
> For example, SELECT COUNT(1) AS val FROM x.test_hive_table, we may get:
> {code:java}
> +--+
> | val  |
> +--+
> | 1    |
> | …    |
> | 1    |
> +--+ {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29126) Fix file splitting optimization that doesn't work for orc format

2022-10-17 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29126.
--
Resolution: Fixed

> Fix file splitting optimization that doesn't work for orc format
> --
>
> Key: FLINK-29126
> URL: https://issues.apache.org/jira/browse/FLINK-29126
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> FLINK-27338 tried to improve file splitting for the orc format. But it 
> doesn't work because of a mistake in judging whether the table is stored in 
> orc format or not. We should fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29651) Code gen will fail when the literal specified in user's sql hasn't been escaped

2022-10-16 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618362#comment-17618362
 ] 

luoyuxia commented on FLINK-29651:
--

I would like to fix it.

> Code gen will fail when the literal specified in user's sql hasn't been 
> escaped 
> ---
>
> Key: FLINK-29651
> URL: https://issues.apache.org/jira/browse/FLINK-29651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> Can be reproduced with the following code in Flink 1.15:
>  
> {code:java}
> // testTable contains a column `field1`
> tableEnvironment
> .executeSql(
> "select *, '1' as run from testTable WHERE field1 LIKE 
> 'b\"cd\"e%'")
> .print(); {code}
> The exception is that the generated code fails to compile because it 
> contains the following code line:
>  
>  
> {code:java}
> private final org.apache.flink.table.data.binary.BinaryStringData str$6 = 
> org.apache.flink.table.data.binary.BinaryStringData.fromString("b"cd"e");  // 
>  mismatched input 'cd' expecting ')'{code}
> It seems to be produced by this 
> [pr|https://github.com/apache/flink/pull/19001], which changed the logic for 
> generating literals.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29651) Code gen will fail when the literal specified in user's sql hasn't been escaped

2022-10-16 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29651:
-
Description: 
Can be reproduced with the following code in Flink 1.15:

 
{code:java}
// testTable contains a column `field1`
tableEnvironment
.executeSql(
"select *, '1' as run from testTable WHERE field1 LIKE 
'b\"cd\"e%'")
.print(); {code}
The exception is that the generated code fails to compile because it contains 
the following code line:

 

 
{code:java}
private final org.apache.flink.table.data.binary.BinaryStringData str$6 = 
org.apache.flink.table.data.binary.BinaryStringData.fromString("b"cd"e");  //  
mismatched input 'cd' expecting ')'{code}
It seems to be produced by this 
[pr|https://github.com/apache/flink/pull/19001], which changed the logic for 
generating literals.
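A minimal sketch of the kind of escaping the code generator needs before 
embedding a user literal in generated Java source; it uses Apache Commons 
Text's `StringEscapeUtils.escapeJava` for illustration, which may differ from 
the helper Flink actually uses:
{code:java}
import org.apache.commons.text.StringEscapeUtils;

public class LiteralEscapeSketch {
    public static void main(String[] args) {
        String userLiteral = "b\"cd\"e";
        // Embedding the raw text yields invalid Java in the generated class:
        //   BinaryStringData.fromString("b"cd"e")   ->  mismatched input 'cd'
        // Escaping it first keeps the generated source compilable:
        String escaped = StringEscapeUtils.escapeJava(userLiteral); // b\"cd\"e
        System.out.println("BinaryStringData.fromString(\"" + escaped + "\")");
    }
}
{code}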

 

> Code gen will fail when the literal specified in user's sql hasn't been 
> escaped 
> ---
>
> Key: FLINK-29651
> URL: https://issues.apache.org/jira/browse/FLINK-29651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Priority: Major
>
> Can be reproduced with the following code in Flink 1.15:
>  
> {code:java}
> // testTable contains a column `field1`
> tableEnvironment
> .executeSql(
> "select *, '1' as run from testTable WHERE field1 LIKE 
> 'b\"cd\"e%'")
> .print(); {code}
> The exception is that the generated code fails to compile because it 
> contains the following code line:
>  
>  
> {code:java}
> private final org.apache.flink.table.data.binary.BinaryStringData str$6 = 
> org.apache.flink.table.data.binary.BinaryStringData.fromString("b"cd"e");  // 
>  mismatched input 'cd' expecting ')'{code}
> It seems to be produced by this 
> [pr|https://github.com/apache/flink/pull/19001], which changed the logic for 
> generating literals.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29651) Code gen will fail when the literal specified in user's sql hasn't been escaped

2022-10-16 Thread luoyuxia (Jira)
luoyuxia created FLINK-29651:


 Summary: Code gen will fail when the literal specified in user's 
sql hasn't been escaped 
 Key: FLINK-29651
 URL: https://issues.apache.org/jira/browse/FLINK-29651
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.15.0
Reporter: luoyuxia






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29590) Fix literal issue in HiveDialect

2022-10-14 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617511#comment-17617511
 ] 

luoyuxia commented on FLINK-29590:
--

[~jark] The PR for release-1.16 is available at 
[https://github.com/apache/flink/pull/21061]

> Fix literal issue in HiveDialect
> 
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> In FLINK-26474, we tried to fold constants, but it brought an issue: a folded 
> constant like `Double.NaN` or a non-primitive type can't be converted into a 
> Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.
> For example, the following code will throw the exception 
> "org.apache.hadoop.hive.ql.parse.SemanticException: NaN" in the method 
> `HiveParserRexNodeConverter#convertConstant`:
> {code:java}
> // hive dialect
> SELECT asin(2); {code}
> To fix it, we need to detect such cases and not fold the constant.
>  
> In FLINK-27017, we used Hive's `GenericUDFOPDivide` to do division for better 
> compatibility, but it brought an issue: when an int/long literal is used as 
> the divisor, the result type passed and the inferred type may not match.
> To fix it, we need to make the result type match the inferred type.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29337) Fix failure to query non-hive table in Hive dialect

2022-10-14 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29337.
--
Resolution: Fixed

> Fix failure to query non-hive table in Hive dialect
> 
>
> Key: FLINK-29337
> URL: https://issues.apache.org/jira/browse/FLINK-29337
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> Flink fails for queries on a non-hive table in the Hive dialect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29635) Hive sink should support merge small files in batch mode

2022-10-13 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29635:
-
Summary: Hive sink should support merge small files in batch mode  (was: 
Hive sink should supports merge small files in batch mode)

> Hive sink should support merge small files in batch mode
> 
>
> Key: FLINK-29635
> URL: https://issues.apache.org/jira/browse/FLINK-29635
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.17.0
>
>
> When using Flink to write a Hive table in batch mode, small files may be 
> produced. We should provide a mechanism to merge these small files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29635) Hive sink should supports merge small files in batch mode

2022-10-13 Thread luoyuxia (Jira)
luoyuxia created FLINK-29635:


 Summary: Hive sink should supports merge small files in batch mode
 Key: FLINK-29635
 URL: https://issues.apache.org/jira/browse/FLINK-29635
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: luoyuxia
 Fix For: 1.17.0


When using Flink to write a Hive table in batch mode, small files may be 
produced. We should provide a mechanism to merge these small files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29590) Fix literal issue in HiveDialect

2022-10-13 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617395#comment-17617395
 ] 

luoyuxia commented on FLINK-29590:
--

[~jark] Sure, but we have to merge the PR 
[https://github.com/apache/flink/pull/21034] for FLINK-29337 first, since 
there are some conflicts between them.

> Fix literal issue in HiveDialect
> 
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> In FLINK-26474, we tried to fold constants, but it brought an issue: a folded 
> constant like `Double.NaN` or a non-primitive type can't be converted into a 
> Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.
> For example, the following code will throw the exception 
> "org.apache.hadoop.hive.ql.parse.SemanticException: NaN" in the method 
> `HiveParserRexNodeConverter#convertConstant`:
> {code:java}
> // hive dialect
> SELECT asin(2); {code}
> To fix it, we need to detect such cases and not fold the constant.
>  
> In FLINK-27017, we used Hive's `GenericUDFOPDivide` to do division for better 
> compatibility, but it brought an issue: when an int/long literal is used as 
> the divisor, the result type passed and the inferred type may not match.
> To fix it, we need to make the result type match the inferred type.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29350) Add a section for moving planner jar in Hive dependencies page

2022-10-13 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617391#comment-17617391
 ] 

luoyuxia commented on FLINK-29350:
--

[~jark] The PR is at [https://github.com/apache/flink/pull/21058]

> Add a section for moving planner jar in Hive dependencies page
> --
>
> Key: FLINK-29350
> URL: https://issues.apache.org/jira/browse/FLINK-29350
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29617) Cost too much time to start SourceCoordinator of hdfsFileSource when starting JobMaster

2022-10-13 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616855#comment-17616855
 ] 

luoyuxia edited comment on FLINK-29617 at 10/13/22 9:07 AM:


[~dangshazi] Thanks for raising this and for the detailed explanation. I'd 
much appreciate it if you could take the ticket. If you don't have time, maybe 
I can help take it.

I'm fine with both suggestions, but I prefer suggestion 2, since suggestion 1 
would introduce a new option that users may hardly know about.

I have one question: have you tried these suggestions? If so, what improvement 
do they bring?

Btw, the image upload failed. Could you please upload the images again?
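For reference, a minimal sketch of what suggestion 2 could look like; the 
class, method, and wiring below are illustrative, not the actual enumerator 
code:
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

import org.apache.flink.core.fs.BlockLocation;
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.FileSystem;

public class LocationFetchSketch {

    // Offload the per-file block-location lookup (one heavy IO round-trip per
    // file) onto an I/O executor instead of blocking the coordinator thread.
    public static List<CompletableFuture<BlockLocation[]>> fetchLocationsAsync(
            FileSystem fs, List<FileStatus> files, ExecutorService ioExecutor) {
        List<CompletableFuture<BlockLocation[]>> futures = new ArrayList<>();
        for (FileStatus file : files) {
            futures.add(
                    CompletableFuture.supplyAsync(
                            () -> {
                                try {
                                    return fs.getFileBlockLocations(file, 0, file.getLen());
                                } catch (IOException e) {
                                    throw new UncheckedIOException(e);
                                }
                            },
                            ioExecutor));
        }
        return futures;
    }
}
{code}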


was (Author: luoyuxia):
[~dangshazi] Thanks for raising this and for the detailed explanation. I'd 
much appreciate it if you could take the ticket. If you don't have time, maybe 
I can help take it.

I'm fine with both suggestions, but I prefer suggestion 2, since suggestion 1 
would introduce a new option that users may hardly know about.

I have one question: have you tried these suggestions? If so, what improvement 
do they bring?

Btw, the image upload failed. Could you please upload the images again?

> Cost too much time to start SourceCoordinator of hdfsFileSource when starting 
> JobMaster
> 
>
> Key: FLINK-29617
> URL: https://issues.apache.org/jira/browse/FLINK-29617
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Runtime / Coordination
>Affects Versions: 1.15.2
>Reporter: LI Mingkun
>Priority: Major
>  Labels: coordination, file-system
>
> h1. Scenario:
> Our user uses Flink batch to compact one day's worth of small files. Flink 
> version: 1.15
> He splits the pipeline into 24, one per hour, so there are 24 sources.
>  
> I find it costs too much time to start the SourceCoordinator of the HDFS 
> file source when starting the JobMaster, as follows:
>  
> (screenshot attachment; image link broken)
>  
> h1. Root Cause:
> I got the root cause after checking:
>  # AbstractFileSource will enumerateSplits when createEnumerator
>  # NotSplittingRecursiveEnumerator needs to get the file block location of 
> every file block, which is a heavy IO operation
> (two screenshot attachments; image links broken)
>  
> h1. Suggestion
>  # FileSource: add an option to disable the location fetcher
>  # Move the location fetcher into the IOExecutor



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29617) Cost too much time to start SourceCoordinator of hdfsFileSource when starting JobMaster

2022-10-13 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616855#comment-17616855
 ] 

luoyuxia edited comment on FLINK-29617 at 10/13/22 9:06 AM:


[~dangshazi] Thanks for raising this and for the detailed explanation. I'd 
much appreciate it if you could take the ticket. If you don't have time, maybe 
I can help take it.

I'm fine with both suggestions, but I prefer suggestion 2, since suggestion 1 
would introduce a new option that users may hardly know about.

I have one question: have you tried these suggestions? If so, what improvement 
do they bring?

Btw, the image upload failed. Could you please upload the images again?


was (Author: luoyuxia):
[~dangshazi] Thanks for raising this and for the detailed explanation. I'd 
much appreciate it if you could take the ticket.

I'm fine with both suggestions, but I prefer suggestion 2, since suggestion 1 
would introduce a new option that users may hardly know about.

I have one question: have you tried these suggestions? If so, what improvement 
do they bring?

Btw, the image upload failed. Could you please upload the images again?

> Cost too much time to start SourceCoordinator of hdfsFileSource when starting 
> JobMaster
> 
>
> Key: FLINK-29617
> URL: https://issues.apache.org/jira/browse/FLINK-29617
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Runtime / Coordination
>Affects Versions: 1.15.2
>Reporter: LI Mingkun
>Priority: Major
>  Labels: coordination, file-system
>
> h1. Scenario:
> Our user use flink batch to compact small files in one day. Flink version : 
> 1.15
> He split pipeline into 24 for each hour. So there are 24 source
>  
> I find it  costs too much time to start SourceCoordinator of hdfsFileSource 
> when start JobMaster
>  
>  as follow:
>  
> !https://mail.google.com/mail/u/0?ui=2=488d9ac3dd=0.1=msg-a:r-3013789195315215531=183cb292e567fd9f=fimg=ip=s0-l75-ft=ANGjdJ9SVAoAslMUGQdVQJ_ccmEf4LxhaONYKJvS_V8nvijvT3JXw_VlyRBAEE9EQhTtWdYPa4TLCO5rxjXGrTDK2_PGHX4RZDPTQTJ0LwKXAUr4BYlMhYZsjcrY9eo=emb=ii_l95bh7qy0|width=542,height=260!
>  
> h1. Root Cause:
> I got the root cause after check: 
>  # AbstractFileSource will enumerateSplits when createEnumerator
>  # NotSplittingRecursiveEnumerator need to get fileblockLocation of every 
> fileblock which is a heavy IO operation
> [screenshots omitted: broken inline image links]
>  
> h1. Suggestion
>  # Add an option to FileSource to disable the location fetcher
>  # Move the location fetcher into the IO executor



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29617) Cost too much time to start SourceCoordinator of hdfsFileSource when start JobMaster

2022-10-13 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616855#comment-17616855
 ] 

luoyuxia commented on FLINK-29617:
--

[~dangshazi] Thanks for raising this and for the detailed explanation. I'd much 
appreciate it if you could take the ticket.

I'm fine with these two suggestions, but I prefer suggestion 2, since suggestion 
1 would introduce a new option that users may hardly know about.

I have one question: have you tried these suggestions? If so, what improvement 
did each of them bring?

Btw, the uploaded images are broken. Could you please upload them again?

> Cost too much time to start SourceCoordinator of hdfsFileSource when start 
> JobMaster
> 
>
> Key: FLINK-29617
> URL: https://issues.apache.org/jira/browse/FLINK-29617
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Runtime / Coordination
>Affects Versions: 1.15.2
>Reporter: LI Mingkun
>Priority: Major
>  Labels: coordination, file-system
>
> h1. Scenario:
> Our user uses Flink batch to compact the small files of one day. Flink 
> version: 1.15.
> He splits the pipeline into 24, one for each hour, so there are 24 sources.
>  
> I find it costs too much time to start the SourceCoordinator of an HDFS 
> FileSource when starting the JobMaster, as follows:
>  
> [screenshot omitted: broken inline image link]
>  
> h1. Root Cause:
> I found the root cause after checking:
>  # AbstractFileSource enumerates all splits in createEnumerator
>  # NotSplittingRecursiveEnumerator needs to get the file block location of 
> every file block, which is a heavy IO operation
> [screenshots omitted: broken inline image links]
>  
> h1. Suggestion
>  # Add an option to FileSource to disable the location fetcher
>  # Move the location fetcher into the IO executor



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29337) Fix fail to query non-hive table in Hive dialect

2022-10-12 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616391#comment-17616391
 ] 

luoyuxia commented on FLINK-29337:
--

[~jark] Yes. I have opened the PR: [https://github.com/apache/flink/pull/21034]

> Fix fail to query non-hive table in Hive dialect
> 
>
> Key: FLINK-29337
> URL: https://issues.apache.org/jira/browse/FLINK-29337
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Flink will fail for queries on non-Hive tables in the Hive dialect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-28618) Cannot use hive.dialect on master

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-28618.
--
Resolution: Fixed

Closing it since I couldn't reproduce this problem using the master branch. Feel 
free to reopen it if the problem still exists.

> Cannot use hive.dialect on master 
> --
>
> Key: FLINK-28618
> URL: https://issues.apache.org/jira/browse/FLINK-28618
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
> Environment: hadoop_class  2.10
> openjdk11
>Reporter: liubo
>Priority: Major
> Attachments: image-2022-07-21-11-01-12-395.png, 
> image-2022-07-21-11-04-12-552.png
>
>
> I built the newest master Flink and copied 
> \{hive-exec-2.3.9.jar;flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar;flink-connector-hive_2.12-1.16-SNAPSHOT.jar}
>  to $FLINK_HOME/lib.
>  
> Then, I got a failure in sql-client: !image-2022-07-21-11-01-12-395.png!
> And after copying opt/flink-table-planner_2.12-1.16-SNAPSHOT.jar, I cannot 
> even open the sql-client:
> !image-2022-07-21-11-04-12-552.png!
>  
> So, what's wrong?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29590) Fix literal issue in HiveDialect

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29590:
-
Description: 
In FLINK-26474, we try to fold constants, but it brings an issue: folded 
constants like `Double.NaN` and non-primitive types can't be converted into a 
Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.

For example, the following statement will throw 
"org.apache.hadoop.hive.ql.parse.SemanticException: NaN" in the method 
`HiveParserRexNodeConverter#convertConstant`:
{code:java}
// hive dialect
SELECT asin(2); {code}
To fix it, we need to detect such cases and skip constant folding for them.
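A minimal sketch of such a guard (illustrative; `canFoldToLiteral` is a 
hypothetical name, not the actual Flink code):
{code:java}
// Skip constant folding when the folded value cannot be represented as a
// Calcite literal, e.g. non-finite doubles such as NaN (sketch).
class FoldGuard {
    static boolean canFoldToLiteral(Object value) {
        if (value instanceof Double) {
            return Double.isFinite((Double) value);
        }
        if (value instanceof Float) {
            return Float.isFinite((Float) value);
        }
        return true; // non-primitive types would need similar checks
    }
}
{code}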

 

In FLINK-27017, we use Hive's `GenericUDFOPDivide` for division for better 
compatibility, but it brings an issue: when an int/long literal is used as the 
divisor, the passed result type and the inferred type may not match.
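For example (illustrative; `price` and `t` are hypothetical, and the exact 
inferred type depends on Hive's rules):
{code:java}
// hive dialect: an int literal as the divisor; the result type passed to the
// planner may differ from the type GenericUDFOPDivide infers
SELECT price / 2 FROM t;
{code}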

To fix it, we need to make the result type match the inferred type.

 

  was:
In FLINK-26474, we try to fold constants, but it may bring an issue: folded 
constants like `Double.NaN` and non-primitive types can't be converted into a 
Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.

In FLINK-27017, we use Hive's `GenericUDFOPDivide` for division for better 
compatibility.

 


> Fix literal issue in HiveDialect
> 
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-26474, we try to fold constants, but it brings an issue: folded 
> constants like `Double.NaN` and non-primitive types can't be converted into a 
> Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.
> For example, the following statement will throw 
> "org.apache.hadoop.hive.ql.parse.SemanticException: NaN" in the method 
> `HiveParserRexNodeConverter#convertConstant`:
> {code:java}
> // hive dialect
> SELECT asin(2); {code}
> To fix it, we need to detect such cases and skip constant folding for them.
>  
> In FLINK-27017, we use Hive's `GenericUDFOPDivide` for division for better 
> compatibility, but it brings an issue: when an int/long literal is used as 
> the divisor, the passed result type and the inferred type may not match.
> To fix it, we need to make the result type match the inferred type.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29590) Fix literal issue in HiveDialect

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29590:
-
Description: 
In FLINK-26474, we try to fold constants, but it may bring an issue: folded 
constants like `Double.NaN` and non-primitive types can't be converted into a 
Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.

In FLINK-27017, we use Hive's `GenericUDFOPDivide` for division for better 
compatibility.

 

  was: In FLINK-26474, we try to fold constants, but it may bring an issue that 
the folded constant can't be converted into 


> Fix literal issue in HiveDialect
> 
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-26474, we try to fold constants, but it may bring an issue: folded 
> constants like `Double.NaN` and non-primitive types can't be converted into a 
> Calcite literal in the method `HiveParserRexNodeConverter#convertConstant`.
> In FLINK-27017, we use Hive's `GenericUDFOPDivide` for division for better 
> compatibility.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29590) Fix literal issue for HiveDialect

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29590:
-
Summary: Fix literal issue for HiveDialect  (was: Fix constant fold issue 
for HiveDialect)

> Fix literal issue for HiveDialect
> -
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-26474, we try to fold constants, but it may bring an issue that the 
> folded constant can't be converted into 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29590) Fix constant fold issue for HiveDialect

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29590:
-
Description: In FLINK-26474, we try to fold constants, but it may bring an 
issue that the folded constant can't be converted into 

> Fix constant fold issue for HiveDialect
> ---
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-26474, we try to fold constants, but it may bring an issue that the 
> folded constant can't be converted into 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29590) Fix literal issue in HiveDialect

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29590:
-
Summary: Fix literal issue in HiveDialect  (was: Fix literal issue for 
HiveDialect)

> Fix literal issue in HiveDialect
> 
>
> Key: FLINK-29590
> URL: https://issues.apache.org/jira/browse/FLINK-29590
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-26474, we try to fold constants, but it may bring an issue that the 
> folded constant can't be converted into 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29590) Fix constant fold issue for HiveDialect

2022-10-11 Thread luoyuxia (Jira)
luoyuxia created FLINK-29590:


 Summary: Fix constant fold issue for HiveDialect
 Key: FLINK-29590
 URL: https://issues.apache.org/jira/browse/FLINK-29590
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26726) Remove the unregistered task from readersAwaitingSplit

2022-10-11 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-26726:
-
Issue Type: Bug  (was: Improvement)

> Remove the unregistered  task from readersAwaitingSplit
> ---
>
> Key: FLINK-26726
> URL: https://issues.apache.org/jira/browse/FLINK-26726
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Reporter: zoucao
>Assignee: zoucao
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: stack.txt
>
>
> Recently, we faced a problem caused by an unregistered task when using a Hive 
> table as a source for streaming reading.
> I think the problem is that we do not remove the unregistered task from 
> `readersAwaitingSplit` in `ContinuousHiveSplitEnumerator` and 
> `ContinuousFileSplitEnumerator`.
> Assume we have two tasks, 0 and 1, which both exist in `readersAwaitingSplit` 
> because no new file has appeared in the path for a long time. Then a new 
> split is generated and assigned to task-1. Unfortunately, task-1 cannot 
> consume the split successfully, so an exception is thrown and causes all 
> tasks to restart. The failover does not affect `readersAwaitingSplit`, but it 
> clears `SourceCoordinatorContext#registeredReaders`.
> After restarting, task-0 exists in `readersAwaitingSplit` but not in 
> `registeredReaders`. If task-1 registers first and sends a request to get a 
> split, the SplitEnumerator will assign splits to both task-1 and task-0, even 
> though task-0 has not been registered.
> The stack trace is in the attachment; a sketch of the fix direction follows.
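> A minimal sketch of that direction (illustrative; the names and the exact 
> hook point are assumptions, not the actual fix):
> {code:java}
> import java.util.LinkedHashMap;
> 
> import org.apache.flink.api.connector.source.SplitEnumeratorContext;
> 
> class StaleReaderCleanup {
>     // Drop awaiting readers that are no longer registered before assigning
>     // splits, so stale entries left over from a failover are ignored.
>     static void pruneUnregistered(
>             LinkedHashMap<Integer, String> readersAwaitingSplit,
>             SplitEnumeratorContext<?> context) {
>         readersAwaitingSplit
>                 .keySet()
>                 .removeIf(id -> !context.registeredReaders().containsKey(id));
>     }
> }
> {code}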



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29585) Migrate TableSchema to Schema for Hive connector

2022-10-11 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616111#comment-17616111
 ] 

luoyuxia commented on FLINK-29585:
--

Sure. [~jark] Could you please assign this ticket to [~aitozi]?

> Migrate TableSchema to Schema for Hive connector
> 
>
> Key: FLINK-29585
> URL: https://issues.apache.org/jira/browse/FLINK-29585
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Priority: Major
>
> `TableSchema` is deprecated; we should migrate it to `Schema`.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29587) Fail to generate code for SearchOperator

2022-10-11 Thread luoyuxia (Jira)
luoyuxia created FLINK-29587:


 Summary: Fail to generate code for  SearchOperator 
 Key: FLINK-29587
 URL: https://issues.apache.org/jira/browse/FLINK-29587
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner, Table SQL / Runtime
Reporter: luoyuxia


Can be reproduced with the following code with the Hive dialect:
{code:java}
// hive dialect

tableEnv.executeSql("create table table1 (id int, val string, val1 string, 
dimid int)");
tableEnv.executeSql("create table table3 (id int)");

CollectionUtil.iteratorToList(
tableEnv.executeSql(
"select table1.id, table1.val, table1.val1 from table1 
left semi join"
+ " table3 on table1.dimid = table3.id and 
table3.id = 100 where table1.dimid = 200")
.collect());{code}
The plan is:
{code:java}
LogicalSink(table=[*anonymous_collect$1*], fields=[id, val, val1])
  LogicalProject(id=[$0], val=[$1], val1=[$2])
    LogicalFilter(condition=[=($3, 200)])
      LogicalJoin(condition=[AND(=($3, $4), =($4, 100))], joinType=[semi])
        LogicalTableScan(table=[[test-catalog, default, table1]])
        LogicalTableScan(table=[[test-catalog, default, 
table3]])BatchPhysicalSink(table=[*anonymous_collect$1*], fields=[id, val, 
val1])
  BatchPhysicalNestedLoopJoin(joinType=[LeftSemiJoin], where=[$f1], select=[id, 
val, val1], build=[right])
    BatchPhysicalCalc(select=[id, val, val1], where=[=(dimid, 200)])
      BatchPhysicalTableSourceScan(table=[[test-catalog, default, table1]], 
fields=[id, val, val1, dimid])
    BatchPhysicalExchange(distribution=[broadcast])
      BatchPhysicalCalc(select=[SEARCH(id, Sarg[]) AS $f1])
        BatchPhysicalTableSourceScan(table=[[test-catalog, default, table3]], 
fields=[id]) {code}
 

But it throws an exception when generating code for it.

The exception is:
{code:java}
java.util.NoSuchElementException
    at 
com.google.common.collect.ImmutableRangeSet.span(ImmutableRangeSet.java:203)
    at org.apache.calcite.util.Sarg.isComplementedPoints(Sarg.java:148)
    at 
org.apache.flink.table.planner.codegen.calls.SearchOperatorGen$.generateSearch(SearchOperatorGen.scala:87)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:474)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
    at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$4(CalcCodeGenerator.scala:140)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:140)
    at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:164)
    at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:49)
    at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator.generateCalcOperator(CalcCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:100)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.java:136)
 {code}
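For reference, the top frame fails because `span()` on an empty range set 
throws; a minimal standalone reproduction (assuming only Guava on the 
classpath):
{code:java}
import com.google.common.collect.ImmutableRangeSet;

public class EmptySargSpan {
    public static void main(String[] args) {
        // An empty Sarg carries an empty range set; calling span() on it
        // throws java.util.NoSuchElementException, as in the stack trace above.
        ImmutableRangeSet.<Integer>of().span();
    }
}
{code}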



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29585) Migrate TableSchema to Schema for Hive connector

2022-10-11 Thread luoyuxia (Jira)
luoyuxia created FLINK-29585:


 Summary: Migrate TableSchema to Schema for Hive connector
 Key: FLINK-29585
 URL: https://issues.apache.org/jira/browse/FLINK-29585
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: luoyuxia


`TableSchema` is deprecated; we should migrate it to `Schema`. A sketch of the 
migration direction follows.
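A minimal sketch (illustrative; the field name is hypothetical):
{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableSchema;

public class SchemaMigration {
    public static void main(String[] args) {
        // before (deprecated):
        TableSchema legacy =
                TableSchema.builder().field("id", DataTypes.INT()).build();
        // after:
        Schema schema =
                Schema.newBuilder().column("id", DataTypes.INT()).build();
    }
}
{code}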



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-27423) Upgrade Hive 3.1 connector from 3.1.2 to 3.1.3

2022-10-10 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-27423.
--
Resolution: Fixed

Closing it since it seems to have been fixed. Feel free to reopen it if not.

> Upgrade Hive 3.1 connector from 3.1.2 to 3.1.3
> --
>
> Key: FLINK-27423
> URL: https://issues.apache.org/jira/browse/FLINK-27423
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Hive
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Jeff Yang
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> The latest supported version of the Hive 3.1.* release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27384) In the Hive dimension table, when the data is changed on the original partition, the create_time configuration does not take effect

2022-10-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615432#comment-17615432
 ] 

luoyuxia commented on FLINK-27384:
--

[~leonard] Seems the PRs for the release-1.14 and release-1.15 branches are 
ready. Could you please help merge them when you're free?

> In the Hive dimension table, when the data is changed on the original 
> partition, the create_time configuration does not take effect
> ---
>
> Key: FLINK-27384
> URL: https://issues.apache.org/jira/browse/FLINK-27384
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.4, 1.15.1
>Reporter: 陈磊
>Assignee: 陈磊
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2022-04-25-15-46-01-833.png, 
> image-2022-04-25-15-47-54-213.png
>
>
> In the Hive dimension table, when the data is changed on the original 
> partition, the create_time configuration does not take effect.
> !image-2022-04-25-15-46-01-833.png!
> The current table structure directory is as follows:
> !image-2022-04-25-15-47-54-213.png!
> From the above figure, we know that when Hive is used as a dimension table, 
> it will load the data of dt=2021-04-22, hr=27.
> If a new partition arrives now, the data of the latest partition can be read 
> smoothly. However, if the data is modified on the original partition, the 
> modified partition's data should theoretically be read, because of the 
> create_time and latest configuration, but this is not the case in practice: 
> the data that was originally loaded is still read.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-27604) flink sql read hive on hbase throw NPE

2022-10-10 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-27604.
--
Resolution: Fixed

> flink sql read hive on hbase throw NPE
> --
>
> Key: FLINK-27604
> URL: https://issues.apache.org/jira/browse/FLINK-27604
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.6
>Reporter: zhangsan
>Priority: Major
>
> I have some table data in HBase. I usually read the HBase data by loading 
> external tables through Hive, and I want to read the data through Flink SQL 
> by reading Hive tables, but when I try with sql-client I get an error. I 
> don't know if there is any way to solve this problem, but I can read the data 
> using the Spark engine.
> 
> Environment:
> flink:1.13.6
> hive:2.1.1-cdh6.2.0
> hbase:2.1.0-cdh6.2.0
> flinksql Execution tools:flink sql client 
> sql submit mode:yarn-per-job
> 
> flink lib directory
> antlr-runtime-3.5.2.jar
> flink-csv-1.13.6.jar
> flink-dist_2.11-1.13.6.jar
> flink-json-1.13.6.jar
> flink-shaded-zookeeper-3.4.14.jar
> flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar
> flink-table_2.11-1.13.6.jar
> flink-table-blink_2.11-1.13.6.jar
> guava-14.0.1.jar
> hadoop-mapreduce-client-core-3.0.0-cdh6.2.0.jar
> hbase-client-2.1.0-cdh6.2.0.jar
> hbase-common-2.1.0-cdh6.2.0.jar
> hbase-protocol-2.1.0-cdh6.2.0.jar
> hbase-server-2.1.0-cdh6.2.0.jar
> hive-exec-2.1.1-cdh6.2.0.jar
> hive-hbase-handler-2.1.1-cdh6.2.0.jar
> htrace-core4-4.1.0-incubating.jar
> log4j-1.2-api-2.17.1.jar
> log4j-api-2.17.1.jar
> log4j-core-2.17.1.jar
> log4j-slf4j-impl-2.17.1.jar
> protobuf-java-2.5.0.jar
> 
> step:
> hive create table statement:
> {code:java}
> CREATE EXTERNAL TABLE `ods`.`student`(
>   `row_key` string, 
>   `name` string,
>   `age` int,
>   `addr` string 
> ) 
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.hbase.HBaseSerDe' 
> STORED BY 
>   'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> WITH SERDEPROPERTIES ( 
>   'hbase.columns.mapping'=':key,FINAL:NAME,FINAL:AGE,FINAL:ADDR', 
> 'serialization.format'='1')
> TBLPROPERTIES (
>   'hbase.table.name'='ODS:STUDENT'); {code}
> catalog:hive catalog 
> sql: select * from ods.student;
> 
> error:
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:215)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:235)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:479) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:412) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$0(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_191]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
> Caused by: org.apache.flink.connectors.hive.FlinkHiveException: Unable to 
> instantiate the hadoop input format
>     at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:100)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createInputSplits(HiveSourceFileEnumerator.java:71)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$1(HiveTableSource.java:212)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.connectors.hive.HiveParallelismInference.logRunningTime(HiveParallelismInference.java:107)
>  

[jira] [Comment Edited] (FLINK-27604) flink sql read hive on hbase throw NPE

2022-10-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615429#comment-17615429
 ] 

luoyuxia edited comment on FLINK-27604 at 10/11/22 2:53 AM:


[~18579099...@163.com] Thanks for reporting it. Currently, reading HBase data 
via Hive is not supported in Flink. But I think we may need to support it.


was (Author: luoyuxia):
[~18579099...@163.com] Thanks for reporting it. Currently, reading HBase data 
via Hive is not supported in Flink. But I think it's a valid requirement.

> flink sql read hive on hbase throw NPE
> --
>
> Key: FLINK-27604
> URL: https://issues.apache.org/jira/browse/FLINK-27604
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.6
>Reporter: zhangsan
>Priority: Major
>
> I have some table data in HBase. I usually read the HBase data by loading 
> external tables through Hive, and I want to read the data through Flink SQL 
> by reading Hive tables, but when I try with sql-client I get an error. I 
> don't know if there is any way to solve this problem, but I can read the data 
> using the Spark engine.
> 
> Environment:
> flink:1.13.6
> hive:2.1.1-cdh6.2.0
> hbase:2.1.0-cdh6.2.0
> flinksql Execution tools:flink sql client 
> sql submit mode:yarn-per-job
> 
> flink lib directory
> antlr-runtime-3.5.2.jar
> flink-csv-1.13.6.jar
> flink-dist_2.11-1.13.6.jar
> flink-json-1.13.6.jar
> flink-shaded-zookeeper-3.4.14.jar
> flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar
> flink-table_2.11-1.13.6.jar
> flink-table-blink_2.11-1.13.6.jar
> guava-14.0.1.jar
> hadoop-mapreduce-client-core-3.0.0-cdh6.2.0.jar
> hbase-client-2.1.0-cdh6.2.0.jar
> hbase-common-2.1.0-cdh6.2.0.jar
> hbase-protocol-2.1.0-cdh6.2.0.jar
> hbase-server-2.1.0-cdh6.2.0.jar
> hive-exec-2.1.1-cdh6.2.0.jar
> hive-hbase-handler-2.1.1-cdh6.2.0.jar
> htrace-core4-4.1.0-incubating.jar
> log4j-1.2-api-2.17.1.jar
> log4j-api-2.17.1.jar
> log4j-core-2.17.1.jar
> log4j-slf4j-impl-2.17.1.jar
> protobuf-java-2.5.0.jar
> 
> step:
> hive create table statement:
> {code:java}
> CREATE EXTERNAL TABLE `ods`.`student`(
>   `row_key` string, 
>   `name` string,
>   `age` int,
>   `addr` string 
> ) 
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.hbase.HBaseSerDe' 
> STORED BY 
>   'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> WITH SERDEPROPERTIES ( 
>   'hbase.columns.mapping'=':key,FINAL:NAME,FINAL:AGE,FINAL:ADDR', 
> 'serialization.format'='1')
> TBLPROPERTIES (
>   'hbase.table.name'='ODS:STUDENT'); {code}
> catalog:hive catalog 
> sql: select * from ods.student;
> 
> error:
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:215)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:235)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:479) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:412) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$0(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_191]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
> Caused by: org.apache.flink.connectors.hive.FlinkHiveException: Unable to 
> instantiate the hadoop input format
>     at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:100)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> 

[jira] [Commented] (FLINK-27604) flink sql read hive on hbase throw NPE

2022-10-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615429#comment-17615429
 ] 

luoyuxia commented on FLINK-27604:
--

[~18579099...@163.com] Thanks for reporting it. Currently, reading HBase data 
via Hive is not supported in Flink. But I think it's a valid requirement.

> flink sql read hive on hbase throw NPE
> --
>
> Key: FLINK-27604
> URL: https://issues.apache.org/jira/browse/FLINK-27604
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.6
>Reporter: zhangsan
>Priority: Major
>
> I have some table data in HBase. I usually read the HBase data by loading 
> external tables through Hive, and I want to read the data through Flink SQL 
> by reading Hive tables, but when I try with sql-client I get an error. I 
> don't know if there is any way to solve this problem, but I can read the data 
> using the Spark engine.
> 
> Environment:
> flink:1.13.6
> hive:2.1.1-cdh6.2.0
> hbase:2.1.0-cdh6.2.0
> flinksql Execution tools:flink sql client 
> sql submit mode:yarn-per-job
> 
> flink lib directory
> antlr-runtime-3.5.2.jar
> flink-csv-1.13.6.jar
> flink-dist_2.11-1.13.6.jar
> flink-json-1.13.6.jar
> flink-shaded-zookeeper-3.4.14.jar
> flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar
> flink-table_2.11-1.13.6.jar
> flink-table-blink_2.11-1.13.6.jar
> guava-14.0.1.jar
> hadoop-mapreduce-client-core-3.0.0-cdh6.2.0.jar
> hbase-client-2.1.0-cdh6.2.0.jar
> hbase-common-2.1.0-cdh6.2.0.jar
> hbase-protocol-2.1.0-cdh6.2.0.jar
> hbase-server-2.1.0-cdh6.2.0.jar
> hive-exec-2.1.1-cdh6.2.0.jar
> hive-hbase-handler-2.1.1-cdh6.2.0.jar
> htrace-core4-4.1.0-incubating.jar
> log4j-1.2-api-2.17.1.jar
> log4j-api-2.17.1.jar
> log4j-core-2.17.1.jar
> log4j-slf4j-impl-2.17.1.jar
> protobuf-java-2.5.0.jar
> 
> step:
> hive create table statement:
> {code:java}
> CREATE EXTERNAL TABLE `ods`.`student`(
>   `row_key` string, 
>   `name` string,
>   `age` int,
>   `addr` string 
> ) 
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.hbase.HBaseSerDe' 
> STORED BY 
>   'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> WITH SERDEPROPERTIES ( 
>   'hbase.columns.mapping'=':key,FINAL:NAME,FINAL:AGE,FINAL:ADDR', 
> 'serialization.format'='1')
> TBLPROPERTIES (
>   'hbase.table.name'='ODS:STUDENT'); {code}
> catalog:hive catalog 
> sql: select * from ods.student;
> 
> error:
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:215)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:235)
>  ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:479) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:412) 
> ~[flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$0(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_191]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:327)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.11-1.13.6.jar:1.13.6]
> Caused by: org.apache.flink.connectors.hive.FlinkHiveException: Unable to 
> instantiate the hadoop input format
>     at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:100)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createInputSplits(HiveSourceFileEnumerator.java:71)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.13.6.jar:1.13.6]
>     at 
> org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$1(HiveTableSource.java:212)
>  

[jira] [Commented] (FLINK-29432) Replace GenericUDFNvl with GenericUDFCoalesce

2022-10-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615410#comment-17615410
 ] 

luoyuxia commented on FLINK-29432:
--

Thanks for the contribution. But to put it briefly, I prefer not to change it 
immediately.

Actually, HIVE-20961 is a patch for Hive 4.0. Hive 4.0 is not released, and 
Flink doesn't provide official support for Hive 4. Of course it should be fixed 
if we want to support Hive 4, but at least it seems we have no such plan in the 
short term.

Also, we can't just replace `GenericUDFNvl` with `GenericUDFCoalesce` to fix 
it, for it may bring other bugs to the Hive dialect, as reported in 
[HIVE-24902|https://issues.apache.org/jira/browse/HIVE-24902]. I also suspect 
it may bring other bugs that haven't been found.

 

For your problem, you can make the change in your own Flink distribution.
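For context, the two functions are equivalent for two arguments, which is why 
the replacement is plausible in principle (illustrative; `col` and `t` are 
hypothetical):
{code:java}
// hive dialect: NVL(x, y) returns y when x is NULL, same as COALESCE(x, y)
SELECT NVL(col, 0), COALESCE(col, 0) FROM t;
{code}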

 

> Replace GenericUDFNvl with GenericUDFCoalesce
> -
>
> Key: FLINK-29432
> URL: https://issues.apache.org/jira/browse/FLINK-29432
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.15.2
>Reporter: Prabhu Joseph
>Priority: Major
>  Labels: pull-request-available
>
> Hive NVL() function has many issues like 
> [HIVE-25193|https://issues.apache.org/jira/browse/HIVE-25193] and it is 
> retired [HIVE-20961|https://issues.apache.org/jira/browse/HIVE-20961]. Our 
> internal hive distribution has the fix for HIVE-20961. With this fix, Flink 
> Build is failing with below as there is no more GenericUDFNvl in Hive. This 
> needs to be replaced with GenericUDFCoalesce.
> {code}
> [INFO] 
> /codebuild/output/src366217558/src/build/flink/rpm/BUILD/flink-1.15.2/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserDefaultGraphWalker.java:
>  Recompile with -Xlint:unchecked for details.
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /codebuild/output/src366217558/src/build/flink/rpm/BUILD/flink-1.15.2/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserTypeCheckProcFactory.java:[75,45]
>  cannot find symbol
>   symbol:   class GenericUDFNvl
>   location: package org.apache.hadoop.hive.ql.udf.generic
> [ERROR] 
> /codebuild/output/src366217558/src/build/flink/rpm/BUILD/flink-1.15.2/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserTypeCheckProcFactory.java:[1216,41]
>  cannot find symbol
>   symbol:   class GenericUDFNvl
>   location: class 
> org.apache.flink.table.planner.delegation.hive.HiveParserTypeCheckProcFactory.DefaultExprProcessor
> [ERROR] 
> /codebuild/output/src366217558/src/build/flink/rpm/BUILD/flink-1.15.2/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserSemanticAnalyzer.java:[231,26]
>  constructor GlobalLimitCtx in class 
> org.apache.hadoop.hive.ql.parse.GlobalLimitCtx cannot be applied to given 
> types;
>   required: org.apache.hadoop.hive.conf.HiveConf
>   found: no arguments
>   reason: actual and formal argument lists differ in length
> [INFO] 3 errors
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-09-28 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610470#comment-17610470
 ] 

luoyuxia commented on FLINK-29408:
--

I tried to debug the failure in this PR: 
[https://github.com/apache/flink/pull/20905].

I found that when I just change the CI parameters from:

 
{code:java}
test_pool_definition:
  name: Default{code}
 

to 

 
{code:java}
test_pool_definition:
  vmImage: 'ubuntu-20.04'
{code}
 

 

The CI will fail: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41403=results].

But when I revert that change, it passes again: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41415=results]

I'm confused by this. [~hxbks2ks] Do you know what the difference is between 
`name: Default` and `vmImage: 'ubuntu-20.04'`?

> HiveCatalogITCase failed with NPE
> -
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: luoyuxia
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> 

[jira] [Created] (FLINK-29447) Add doc for federation query using Hive dialect

2022-09-28 Thread luoyuxia (Jira)
luoyuxia created FLINK-29447:


 Summary: Add doc for federation query  using Hive dialect 
 Key: FLINK-29447
 URL: https://issues.apache.org/jira/browse/FLINK-29447
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive, Documentation
Reporter: luoyuxia






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29024) Add an overview page for Hive Compatibility

2022-09-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia closed FLINK-29024.

Resolution: Not A Problem

Seems we don't need it anymore.

> Add an overview page for Hive Compatibility
> ---
>
> Key: FLINK-29024
> URL: https://issues.apache.org/jira/browse/FLINK-29024
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29350) Add a section for moving planner jar in Hive dependencies page

2022-09-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29350:
-
Summary: Add a section for moving planner jar in Hive dependencies page  
(was: Add a note for swapping planner jar in Hive dependencies page)

> Add a section for moving planner jar in Hive dependencies page
> --
>
> Key: FLINK-29350
> URL: https://issues.apache.org/jira/browse/FLINK-29350
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29386) Fix fail to compile flink-connector-hive when profile is hive3

2022-09-22 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29386:
-
Description: 
Compilation fails with the hive3 profile: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41238=logs=b1fcf054-9138-5463-c73c-a49979b9ac2a=9291ac46-dd95-5135-b799-3839e65a8691]

Introduced by FLINK-29152, which references 
org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
TableType.INDEX_TABLE, and ErrorMsg.SHOW_CREATETABLE_INDEX. But they don't 
exist in Hive 3.

  was:
Introduced by FLINK-29152, which references 
org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
which does not exist in Hive 3.


> Fix fail to compile flink-connector-hive when profile is hive3
> --
>
> Key: FLINK-29386
> URL: https://issues.apache.org/jira/browse/FLINK-29386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: luoyuxia
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> Compilation fails with the hive3 profile: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41238=logs=b1fcf054-9138-5463-c73c-a49979b9ac2a=9291ac46-dd95-5135-b799-3839e65a8691]
> Introduced by FLINK-29152, which references 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
> TableType.INDEX_TABLE, and ErrorMsg.SHOW_CREATETABLE_INDEX. But they don't 
> exist in Hive 3. A sketch of one workaround follows.
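> One way to avoid the compile-time dependence is to inline the constant 
> instead of referencing the removed field (a sketch under the assumption that 
> Hive 2's value is "1"; not necessarily the actual fix):
> {code:java}
> // Hive 2's MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT is "1"; inlining it
> // removes the reference that no longer compiles against Hive 3 (sketch).
> public static final String DEFAULT_SERIALIZATION_FORMAT = "1";
> {code}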



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29386) Fix fail to compile flink-connector-hive when profile is hive3

2022-09-22 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29386:
-
Priority: Blocker  (was: Critical)

> Fix fail to compile flink-connector-hive when profile is hive3
> --
>
> Key: FLINK-29386
> URL: https://issues.apache.org/jira/browse/FLINK-29386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: luoyuxia
>Priority: Blocker
> Fix For: 1.16.0, 1.17.0
>
>
> Introduced by FLINK-29152, which references 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
> which does not exist in Hive 3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29386) Fix fail to compile flink-connector-hive when profile is hive3

2022-09-22 Thread luoyuxia (Jira)
luoyuxia created FLINK-29386:


 Summary: Fix fail to compile flink-connector-hive when profile is 
hive3
 Key: FLINK-29386
 URL: https://issues.apache.org/jira/browse/FLINK-29386
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0, 1.17.0
Reporter: luoyuxia
 Fix For: 1.16.0, 1.17.0


Introduced by FLINK-29152, which references 
org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
which does not exist in Hive 3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29386) Fix fail to compile flink-connector-hive when profile is hive3

2022-09-22 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29386:
-
Priority: Critical  (was: Blocker)

> Fix fail to compile flink-connector-hive when profile is hive3
> --
>
> Key: FLINK-29386
> URL: https://issues.apache.org/jira/browse/FLINK-29386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: luoyuxia
>Priority: Critical
> Fix For: 1.16.0, 1.17.0
>
>
> Introduced by FLINK-29152, which references 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.DEFAULT_SERIALIZATION_FORMAT, 
> which does not exist in Hive 3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29337) Fix fail to query non-hive table in Hive dialect

2022-09-21 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29337:
-
Summary: Fix fail to query non-hive table in Hive dialect  (was: Fix fail 
to use HiveDialect for non-hive table)

> Fix fail to query non-hive table in Hive dialect
> 
>
> Key: FLINK-29337
> URL: https://issues.apache.org/jira/browse/FLINK-29337
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Flink will fail for queries on non-Hive tables in the Hive dialect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28750) Whether to add field comment for hive table

2022-09-20 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17607137#comment-17607137
 ] 

luoyuxia commented on FLINK-28750:
--

We can move on with it after finishing FLINK-18958.

> Whether to add field comment for hive table
> ---
>
> Key: FLINK-28750
> URL: https://issues.apache.org/jira/browse/FLINK-28750
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.14.5
>Reporter: hehuiyuan
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-07-30-15-53-03-754.png, 
> image-2022-07-30-16-36-37-032.png
>
>
> Currently, I have a Hive DDL, as follows:
> {code:java}
> "set table.sql-dialect=hive;\n" +
> "CREATE TABLE IF NOT EXISTS myhive.dev.shipu3_test_1125 (\n" +
> "   `id` int COMMENT 'ia',\n" +
> "   `cartdid` bigint COMMENT 'aaa',\n" +
> "   `customer` string COMMENT '',\n" +
> "   `product` string COMMENT '',\n" +
> "   `price` double COMMENT '',\n" +
> "   `dt` STRING COMMENT ''\n" +
> ") PARTITIONED BY (dt STRING) STORED AS TEXTFILE TBLPROPERTIES (\n" +
> "  'streaming-source.enable' = 'false',\n" +
> "  'streaming-source.partition.include' = 'all',\n" +
> "  'lookup.join.cache.ttl' = '12 h'\n" +
> ")"; {code}
> It is parsed as SqlCreateHiveTable by the Hive dialect parser, but the field 
> comment is lost.
>  
>  
> !image-2022-07-30-16-36-37-032.png|width=777,height=526!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29350) Add a note for swapping planner jar in Hive dependencies page

2022-09-20 Thread luoyuxia (Jira)
luoyuxia created FLINK-29350:


 Summary: Add a note for swapping planner jar in Hive dependencies 
page
 Key: FLINK-29350
 URL: https://issues.apache.org/jira/browse/FLINK-29350
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive, Documentation
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29350) Add a note for swapping planner jar in Hive dependencies page

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29350:
-
Parent: FLINK-29021
Issue Type: Sub-task  (was: Improvement)

> Add a note for swapping planner jar in Hive dependencies page
> -
>
> Key: FLINK-29350
> URL: https://issues.apache.org/jira/browse/FLINK-29350
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29026) Add docs for HiveServer2 integration

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29026.
--
Resolution: Fixed

> Add docs for HiveServer2 integration
> 
>
> Key: FLINK-29026
> URL: https://issues.apache.org/jira/browse/FLINK-29026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation, Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: Shengkai Fang
>Priority: Critical
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29079) Add doc for show statement of Hive dialect

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29079.
--
Resolution: Fixed

> Add doc for show statement of Hive dialect
> --
>
> Key: FLINK-29079
> URL: https://issues.apache.org/jira/browse/FLINK-29079
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> Add a page for the SHOW statements of the Hive dialect. As our Hive dialect 
> is compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29076) Add doc for alter statement of Hive dialect

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29076.
--
Resolution: Fixed

> Add doc for alter statement of Hive dialect
> ---
>
> Key: FLINK-29076
> URL: https://issues.apache.org/jira/browse/FLINK-29076
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> Add a page for the alter statement of the Hive dialect. As our Hive dialect is 
> compatible with Hive, we can take some content from the [Hive 
> docs|https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29078) Add doc for drop statement of Hive dialect

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29078.
--
Resolution: Fixed

> Add doc for drop statement of Hive dialect
> --
>
> Key: FLINK-29078
> URL: https://issues.apache.org/jira/browse/FLINK-29078
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> Add a page for the drop statement of the Hive dialect. As our Hive dialect is 
> compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29077) Add doc for create statement of Hive dialect

2022-09-20 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-29077.
--
Resolution: Fixed

> Add doc for create statement of Hive dialect
> 
>
> Key: FLINK-29077
> URL: https://issues.apache.org/jira/browse/FLINK-29077
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> Add a page for the create statement of the Hive dialect. As our Hive dialect is 
> compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29343) Fix failure to execute DDL in HiveDialect when using a specific catalog in the SQL statement

2022-09-19 Thread luoyuxia (Jira)
luoyuxia created FLINK-29343:


 Summary: Fix failure to execute DDL in HiveDialect when using a 
specific catalog in the SQL statement
 Key: FLINK-29343
 URL: https://issues.apache.org/jira/browse/FLINK-29343
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0
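For reference, the failing shape suggested by the title is DDL that addresses the
table through an explicit catalog, roughly like this (a guess at the reproduction;
catalog and table names are placeholders):
{code:java}
tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
tableEnv.executeSql("create table myhive.`default`.t1 (x int)");
{code}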






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29337) Fix failure to use HiveDialect for non-Hive tables

2022-09-19 Thread luoyuxia (Jira)
luoyuxia created FLINK-29337:


 Summary: Fix failure to use HiveDialect for non-Hive tables
 Key: FLINK-29337
 URL: https://issues.apache.org/jira/browse/FLINK-29337
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: luoyuxia
 Fix For: 1.16.0


Flink will fail to execute queries that reference non-Hive tables when using HiveDialect.
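A sketch of the failing scenario (the connector choice is illustrative):
{code:java}
// Create a non-Hive table under the default dialect, then query it while
// the Hive dialect is active; per this issue the query fails.
tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
tableEnv.executeSql("create table gen_src (x int) with ('connector' = 'datagen')");
tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
tableEnv.executeSql("select * from gen_src").collect();
{code}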



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29209) Fail to connect HiveServer endpoint with TProtocolException

2022-09-19 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia closed FLINK-29209.

Resolution: Not A Problem

> Fail to connect HiveServer endpoint with TProtocolException
> ---
>
> Key: FLINK-29209
> URL: https://issues.apache.org/jira/browse/FLINK-29209
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
> Fix For: 1.16.0
>
>
> When I try to connect HiveServer endpoint with some BI tools like Apache 
> SuperSet / FineBI / MetaBase, it fails to connect with the following 
> exception:
> {code:java}
> 2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer      
>              [] - Thrift error occurred during processing of message.
> org.apache.thrift.protocol.TProtocolException: Missing version in 
> readMessageBegin, old client?
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
>  ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
> ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_332]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_332]
>     at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
> The jdbc url is "jdbc:hive2://host:port/default;auth=noSasl".
> But when I try to connect to Hive's own HiveServer with the jdbc url 
> "jdbc:hive2://host:port/default", it works well.
> It seems we need extra configuration or adaptation to connect to Flink's 
> HiveServer endpoint.
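For reference, a minimal JDBC probe of the two URL variants (a sketch; host and
port are placeholders, and the Hive JDBC driver must be on the classpath):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;

public class Hs2EndpointProbe {
    public static void main(String[] args) throws Exception {
        // Failed against Flink's HiveServer2 endpoint in this report:
        String noSasl = "jdbc:hive2://host:port/default;auth=noSasl";
        // Worked against Hive's own HiveServer:
        String plain = "jdbc:hive2://host:port/default";
        try (Connection conn = DriverManager.getConnection(noSasl)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
{code}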



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29152) Describe statement results are different from Hive

2022-09-08 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17601706#comment-17601706
 ] 

luoyuxia commented on FLINK-29152:
--

I will first fix the inconsistent behavior of the `describe table` statement.

> Describe statement results are different from Hive
> --
>
> Key: FLINK-29152
> URL: https://issues.apache.org/jira/browse/FLINK-29152
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Priority: Major
>
> In hive, the results schema is 
> {code:java}
> +---++--+
> | col_name  | data_type  | comment  |
> +---++--+
> | a | int|  |
> | b | string |  |
> +---++--+
> {code}
> but our implementation is 
> {code:java}
> 0: jdbc:hive2://localhost:1/default> describe sink;
> +---+---+---+---+-++
> | name  | type  | null  |  key  | extras  | watermark  |
> +---+---+---+---+-++
> | a | INT   | true  | NULL  | NULL| NULL   |
> +---+---+---+---+-++
> {code}
> BTW, it's better we can support {{DESCRIBE FORMATTED}} like hive does.
> {code:java}
> +---++---+
> |   col_name| data_type   
>|comment|
> +---++---+
> | # col_name| data_type   
>| comment   |
> |   | NULL
>| NULL  |
> | a | int 
>|   |
> | b | string  
>|   |
> |   | NULL
>| NULL  |
> | # Detailed Table Information  | NULL
>| NULL  |
> | Database: | default 
>| NULL  |
> | Owner:| null
>| NULL  |
> | CreateTime:   | Tue Aug 30 06:54:00 UTC 2022
>| NULL  |
> | LastAccessTime:   | UNKNOWN 
>| NULL  |
> | Retention:| 0   
>| NULL  |
> | Location: | 
> hdfs://namenode:8020/user/hive/warehouse/sink  | NULL  |
> | Table Type:   | MANAGED_TABLE   
>| NULL  |
> | Table Parameters: | NULL
>| NULL  |
> |   | comment 
>|   |
> |   | numFiles
>| 0 |
> |   | totalSize   
>| 0 |
> |   | transient_lastDdlTime   
>| 1661842440|
> |   | NULL
>| NULL  |
> | # Storage Information | NULL
>| NULL  |
> | SerDe Library:| 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | NULL  |
> | InputFormat:  | org.apache.hadoop.mapred.TextInputFormat
>| NULL  |
> | OutputFormat: | 
> org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat | NULL 
>  |
> | Compressed:   | No  
>| NULL  |
> | Num Buckets:  | -1  
>| NULL  |
> | Bucket Columns:   | []  
>| NULL  |
> | Sort Columns: | []  
>| NULL  |
> | 

[jira] [Updated] (FLINK-29045) Optimize error message in Flink SQL Client and Gateway when trying to use Hive Dialect

2022-09-08 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29045:
-
Summary: Optimize error message in Flink SQL Client and Gateway when trying to 
use Hive Dialect  (was: Optimize error message in Flink SQL Client/HiveServer2 
Endpoint when trying to switch Hive Dialect)

> Optimize error message in Flink SQL Client and Gateway when trying to use Hive 
> Dialect
> ---
>
> Key: FLINK-29045
> URL: https://issues.apache.org/jira/browse/FLINK-29045
> Project: Flink
>  Issue Type: Improvement
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Since Flink 1.15, if users want to use HiveDialect, they have to swap 
> flink-table-planner-loader located in /lib with flink-table-planner_2.12 
> located in /opt.
> This bothers some users, as reported in [FLINK-27020| 
> https://issues.apache.org/jira/browse/FLINK-27020] and 
> [FLINK-28618|https://issues.apache.org/jira/browse/FLINK-28618].
> Although the documentation has noted it, some users may still miss it. It 
> would be better to show a detailed error message and tell users how to deal 
> with such cases in the Flink SQL Client.
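For context, the swap itself is just moving two jars (a sketch, assuming
FLINK_HOME points at the distribution root; exact jar names vary by version):
{code:bash}
# Put the full planner on the classpath and move the loader out of the way,
# which is required for the Hive dialect since Flink 1.15.
mv $FLINK_HOME/lib/flink-table-planner-loader-*.jar $FLINK_HOME/opt/
mv $FLINK_HOME/opt/flink-table-planner_2.12-*.jar $FLINK_HOME/lib/
{code}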



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29045) Optimize error message in Flink SQL Client/HiveServer2 Endpoint when trying to switch Hive Dialect

2022-09-07 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29045:
-
Summary: Optimize error message in Flink SQL Client/HiveServer2 Endpoint 
when trying to switch Hive Dialect  (was: Optimize error message in Flink SQL 
Client when trying to switch Hive Dialect)

> Optimize error message in Flink SQL Client/HiveServer2 Endpoint when trying to 
> switch Hive Dialect
> ---
>
> Key: FLINK-29045
> URL: https://issues.apache.org/jira/browse/FLINK-29045
> Project: Flink
>  Issue Type: Improvement
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Since Flink 1.15, if users want to use HiveDialect, they have to swap 
> flink-table-planner-loader located in /lib with flink-table-planner_2.12 
> located in /opt.
> This bothers some users, as reported in [FLINK-27020| 
> https://issues.apache.org/jira/browse/FLINK-27020] and 
> [FLINK-28618|https://issues.apache.org/jira/browse/FLINK-28618].
> Although the documentation has noted it, some users may still miss it. It 
> would be better to show a detailed error message and tell users how to deal 
> with such cases in the Flink SQL Client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29191) Hive dialect can't get value for the variables set by set command

2022-09-07 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29191:
-
Description: 
When using Hive dialect, we can use 
{code:java}
set k1=v1;
{code}
to set a variable in Flink's table config.

But if we want to get the value of `k1` by using 
{code:java}
set k1;
{code}
we will get nothing.

The reason is that the Hive dialect won't look up the variable in Flink's table config.

To fix it, we also need to look up Flink's table config.

  was:
When using Hive dialect, we can use 
{code:java}
set k1=v1;
{code}
to set a variable in Flink's table config.

But if we want to get the value of `k1` by using 
{code:java}
set k1;
{code}
we will get nothing.

The reason is that the Hive dialect won't look up the variable in Flink's table config.

To fix it, we also need to look up link's table config.


> Hive dialect can't get value for the variables set by  set command
> --
>
> Key: FLINK-29191
> URL: https://issues.apache.org/jira/browse/FLINK-29191
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> When using Hive dialect, we can use 
> {code:java}
> set k1=v1;
> {code}
> to set a variable in Flink's table config.
> But if we want to get the value of `k1` by using 
> {code:java}
> set k1;
> {code}
> we will get nothing.
> The reason is that the Hive dialect won't look up the variable in Flink's table config.
> To fix it, we also need to look up Flink's table config.
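A sketch of the intended fallback (variable names assumed; `hiveConf` is the
session's HiveConf and `tableConfig` is Flink's TableConfig):
{code:java}
// When `set k1;` finds nothing on the Hive side, also consult Flink's
// table config before treating the variable as unset.
String value = hiveConf.get("k1");
if (value == null) {
    value = tableConfig.getConfiguration().toMap().get("k1");
}
{code}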



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29222) Wrong behavior for Hive's load data inpath

2022-09-07 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29222:
-
Description: 
In Hive, `load data inpath` will remove the src file, while `load data local inpath` 
won't remove the src file.

But when using the following SQL with Hive dialect:
{code:java}
load data local inpath 'test.txt' INTO TABLE tab2 {code}
The file `test.txt` will be removed, although the expected behavior is to keep 
`test.txt`.

The reason is that the parameter order is wrong when calling 
`HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:

It calls it with 

 
{code:java}
hiveCatalog.loadTable(
   ..., 
hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
 

 

 

  was:
When using the following SQL with Hive dialect:

 
{code:java}
load data local inpath 'test.txt' INTO TABLE tab2 {code}
The file `test.txt` will be removed, but the expected behavior is to keep 
`test.txt`.

The reason is that the parameter order is wrong when calling 
`HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:

It calls it with 

 
{code:java}
hiveCatalog.loadTable(
   ..., 
hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
 

 

 


> Wrong behavior for Hive's load data inpath
> --
>
> Key: FLINK-29222
> URL: https://issues.apache.org/jira/browse/FLINK-29222
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> In Hive, `load data inpath` will remove the src file, while `load data local 
> inpath` won't remove the src file.
> But when using the following SQL with Hive dialect:
> {code:java}
> load data local inpath 'test.txt' INTO TABLE tab2 {code}
> The file `test.txt` will be removed, although the expected behavior is to keep 
> `test.txt`.
> The reason is that the parameter order is wrong when calling 
> `HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:
> It calls it with 
>  
> {code:java}
> hiveCatalog.loadTable(
>..., 
> hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
> hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
>  
>  
>  
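Because both trailing parameters are plain booleans, the compiler cannot catch the
swap; a self-contained toy (all names hypothetical) showing the pitfall and the fix:
{code:java}
public class SwappedBooleansDemo {
    // Stand-in for HiveCatalog#loadTable's trailing boolean parameters.
    static void loadTable(String table, boolean isOverwrite, boolean isSrcLocal) {
        // A local load should copy (keep the source file); a non-local load
        // moves it, so a swap here silently deletes the user's file.
        System.out.println(table + ": overwrite=" + isOverwrite
                + ", srcLocal=" + isSrcLocal);
    }

    public static void main(String[] args) {
        boolean isSrcLocal = true;   // `load data *local* inpath`
        boolean isOverwrite = false; // plain INTO TABLE, no OVERWRITE
        loadTable("tab2", isSrcLocal, isOverwrite); // bug: arguments swapped
        loadTable("tab2", isOverwrite, isSrcLocal); // fix: correct order
    }
}
{code}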



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29222) Wrong behavior for Hive's load data inpath

2022-09-07 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29222:
-
Description: 
In Hive, `load data inpath` will remove the src file, while `load data local inpath` 
won't remove the src file.

But when using the following SQL with Hive dialect:
{code:java}
load data local inpath 'test.txt' INTO TABLE tab2 {code}
The file `test.txt` will be removed, although the expected behavior is to keep 
`test.txt`.

The reason is that the parameter order is wrong when calling 
`HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:

It calls it with 
{code:java}
hiveCatalog.loadTable(
   ..., 
hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
 

 

 

  was:
In Hive, `load data inpath` will remove the src file, while `load data local inpath` 
won't remove the src file.

But when using the following SQL with Hive dialect:
{code:java}
load data local inpath 'test.txt' INTO TABLE tab2 {code}
The file `test.txt` will be removed, although the expected behavior is to keep 
`test.txt`.

The reason is that the parameter order is wrong when calling 
`HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:

It calls it with 

 
{code:java}
hiveCatalog.loadTable(
   ..., 
hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
 

 

 


> Wrong behavior for Hive's load data inpath
> --
>
> Key: FLINK-29222
> URL: https://issues.apache.org/jira/browse/FLINK-29222
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> In Hive, `load data inpath` will remove the src file, while `load data local 
> inpath` won't remove the src file.
> But when using the following SQL with Hive dialect:
> {code:java}
> load data local inpath 'test.txt' INTO TABLE tab2 {code}
> The file `test.txt` will be removed, although the expected behavior is to keep 
> `test.txt`.
> The reason is that the parameter order is wrong when calling 
> `HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:
> It calls it with 
> {code:java}
> hiveCatalog.loadTable(
>..., 
> hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
> hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29222) Wrong behavior for Hive's load data inpath

2022-09-07 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29222:
-
Description: 
When using the following SQL with Hive dialect:

 
{code:java}
load data local inpath 'test.txt' INTO TABLE tab2 {code}
The file `test.txt` will be removed, but the expected behavior is to keep 
`test.txt`.

The reason is that the parameter order is wrong when calling 
`HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:

It calls it with 

 
{code:java}
hiveCatalog.loadTable(
   ..., 
hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
 

 

 

> Wrong behavior for Hive's load data inpath
> --
>
> Key: FLINK-29222
> URL: https://issues.apache.org/jira/browse/FLINK-29222
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
> Fix For: 1.16.0
>
>
> When using the following SQL with Hive dialect:
>  
> {code:java}
> load data local inpath 'test.txt' INTO TABLE tab2 {code}
> The file `test.txt` will be removed, but the expected behavior is to keep 
> `test.txt`.
> The reason is that the parameter order is wrong when calling 
> `HiveCatalog#loadTable(..., isOverWrite, isSourceLocal)`:
> It calls it with 
>  
> {code:java}
> hiveCatalog.loadTable(
>..., 
> hiveLoadDataOperation.isSrcLocal(), // should be isOverwrite
> hiveLoadDataOperation.isOverwrite()); // should be isSrcLocal{code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29222) Wrong behavior for Hive's load data inpath

2022-09-07 Thread luoyuxia (Jira)
luoyuxia created FLINK-29222:


 Summary: Wrong behavior for Hive's load data inpath
 Key: FLINK-29222
 URL: https://issues.apache.org/jira/browse/FLINK-29222
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29185) Failed to execute USING JAR in Hive Dialect

2022-09-07 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17601146#comment-17601146
 ] 

luoyuxia commented on FLINK-29185:
--

This only happens for `create temporary function xxx using jar`. The reason is 
that the jar resource is not registered when creating a temporary function.

To fix it, we need to register the resource.
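For reference, the failing statement shape (function name, class, and jar path
are placeholders):
{code:java}
// The non-temporary `create function ... using jar ...` works; the TEMPORARY
// variant fails per this issue, because the jar resource is not registered
// when the temporary function is created.
tableEnv.executeSql(
        "create temporary function my_lower as 'com.example.MyLower'"
                + " using jar 'hdfs:///tmp/my-udf.jar'");
{code}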

> Failed to execute USING JAR in Hive Dialect
> ---
>
> Key: FLINK-29185
> URL: https://issues.apache.org/jira/browse/FLINK-29185
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26603) [Umbrella] Decouple Hive with Flink planner

2022-09-06 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600717#comment-17600717
 ] 

luoyuxia commented on FLINK-26603:
--

I think I still need some time to finish it.

> [Umbrella] Decouple Hive with Flink planner
> ---
>
> Key: FLINK-26603
> URL: https://issues.apache.org/jira/browse/FLINK-26603
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Table SQL / Planner
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.17.0
>
>
> To support Hive dialect with Flink, we have implemented FLIP-123 and FLIP-152.
> But this also brings a heavy maintenance burden and complexity, since it mixes 
> logic specific to Hive into the Flink planner. We should remove such logic from 
> the Flink planner and make the Hive dialect fully decoupled from it.
> With this ticket, we expect:
> 1:  there won't be any Hive-specific logic in the planner module
> 2:  flink-sql-parser-hive is removed from the flink-table module 
> 3:  the planner dependency is removed from flink-connector-hive
> I'll update more details after investigation.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-25605) Batch get statistics of multiple partitions instead of get one by one

2022-09-06 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-25605.
--
Resolution: Fixed

> Batch get statistics of multiple partitions instead of get one by one
> -
>
> Key: FLINK-25605
> URL: https://issues.apache.org/jira/browse/FLINK-25605
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: tartarus
>Priority: Minor
> Attachments: image-2022-01-11-15-59-55-894.png, 
> image-2022-01-11-16-00-28-002.png
>
>
> Currently, `PushPartitionIntoTableSourceScanRule` would fetch statistics of 
> matched partitions one by one.
>  !image-2022-01-11-15-59-55-894.png! 
> If there are multiple matched partitions, it takes a long time to fetch all 
> the statistics.
> We could improve this by fetching the statistics of multiple partitions in one 
> batch.
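A sketch of the batched call (Hive metastore client API; variable names assumed):
{code:java}
// One metastore round trip for all matched partitions instead of N calls:
List<Partition> partitions =
        metastoreClient.getPartitionsByNames(dbName, tableName, partitionNames);
{code}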



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-28954) Release Testing: Verify FLIP-223 HiveServer2 Endpoint

2022-09-06 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-28954.
--
Resolution: Resolved

Closing it since the verification has been finished.

> Release Testing: Verify FLIP-223 HiveServer2 Endpoint
> -
>
> Key: FLINK-28954
> URL: https://issues.apache.org/jira/browse/FLINK-28954
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: luoyuxia
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.16.0
>
>
> HiveServer2 Endpoint is ready to use in this version. I think we can verify:
>  # We can start the SQL Gateway with HiveServer2 Endpoint
>  # User is able to submit SQL with Hive beeline
>  # User is able to submit SQL with DBeaver



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28954) Release Testing: Verify FLIP-223 HiveServer2 Endpoint

2022-09-06 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600707#comment-17600707
 ] 

luoyuxia commented on FLINK-28954:
--

I have almost finished the verification for the HiveServer2 endpoint.

1: start the sql gateway with HiveServer2 endpoint according to the 
documentation.

2: connect to HiveServer2 endpoint with beeline and run some sql randomly (a 
connection sketch follows this list).

3: use Zeppelin to connect HiveServer2 endpoint and run some sql randomly.

4: use dolphinscheduler to schedule some sql jobs including create table/insert 
into table/query table via HiveServer2 endpoint 

5: use some BI tools to connect to the HiveServer2 endpoint. It works for 
Tableau/DataEase, but fails for SuperSet, FineBI, and Metabase, which is tracked 
by FLINK-29209.
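For step 2, the beeline connection looks roughly like this (a sketch; gateway
host, port, and username are placeholders):
{code:bash}
beeline -u "jdbc:hive2://<gateway-host>:<gateway-port>/default" -n flink
{code}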

> Release Testing: Verify FLIP-223 HiveServer2 Endpoint
> -
>
> Key: FLINK-28954
> URL: https://issues.apache.org/jira/browse/FLINK-28954
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: luoyuxia
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.16.0
>
>
> HiveServer2 Endpoint is ready to use in this version. I think we can verify:
>  # We can start the SQL Gateway with HiveServer2 Endpoint
>  # User is able to submit SQL with Hive beeline
>  # User is able to submit SQL with DBeaver



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29209) Fail to connect HiveServer endpoint with TProtocolException

2022-09-06 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29209:
-
Description: 
When I try to connect HiveServer endpoint with some BI tools like Apache 
SuperSet / FineBI / MetaBase, it fails to connect with the following exception:
{code:java}
2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer        
           [] - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in 
readMessageBegin, old client?
    at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
 ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
 [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_332]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_332]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
The jdbc url is "jdbc:hive2://host:port/default;auth=noSasl".

But when I try to connect to Hive's own HiveServer with the jdbc url 
"jdbc:hive2://host:port/default", it works well.

It seems we need extra configuration or adaptation to connect to Flink's HiveServer 
endpoint.

  was:
When I try to connect HiveServer endpoint with some BI tools like Apache 
SuperSet / FineBI / MetaBase, it fails to connect with the following exception:
{code:java}
2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer        
           [] - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in 
readMessageBegin, old client?
    at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
 ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
 [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_332]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_332]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
But when I try to connect to Hive's own HiveServer, it works well.

It seems we need extra configuration or adaptation to connect to Flink's HiveServer 
endpoint.


> Fail to connect HiveServer endpoint with TProtocolException
> ---
>
> Key: FLINK-29209
> URL: https://issues.apache.org/jira/browse/FLINK-29209
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
> Fix For: 1.16.0
>
>
> When I try to connect HiveServer endpoint with some BI tools like Apache 
> SuperSet / FineBI / MetaBase, it fails to connect with the following 
> exception:
> {code:java}
> 2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer      
>              [] - Thrift error occurred during processing of message.
> org.apache.thrift.protocol.TProtocolException: Missing version in 
> readMessageBegin, old client?
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
>  ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
> ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_332]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_332]
>     at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
> The jdbc url is "jdbc:hive2://host:port/default;auth=noSasl".
> But when I try to connect to Hive's own HiveServer with the jdbc url 
> "jdbc:hive2://host:port/default", it works well.
> It seems we need extra configuration or adaptation to connect to Flink's 
> HiveServer endpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29209) Fail to connect HiveServer endpoint with TProtocolException

2022-09-06 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600706#comment-17600706
 ] 

luoyuxia commented on FLINK-29209:
--

cc [~fsk119] 

> Fail to connect HiveServer endpoint with TProtocolException
> ---
>
> Key: FLINK-29209
> URL: https://issues.apache.org/jira/browse/FLINK-29209
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Critical
> Fix For: 1.16.0
>
>
> When I try to connect HiveServer endpoint with some BI tools like Apache 
> SuperSet / FineBI / MetaBase, it fails to connect with the following 
> exception:
> {code:java}
> 2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer      
>              [] - Thrift error occurred during processing of message.
> org.apache.thrift.protocol.TProtocolException: Missing version in 
> readMessageBegin, old client?
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
>  ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
> ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_332]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_332]
>     at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
> But when I try to connect to Hive's own HiveServer, it works well.
> It seems we need extra configuration or adaptation to connect to Flink's 
> HiveServer endpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29209) Fail to connect HiveServer endpoint with TProtocolException

2022-09-06 Thread luoyuxia (Jira)
luoyuxia created FLINK-29209:


 Summary: Fail to connect HiveServer endpoint with 
TProtocolException
 Key: FLINK-29209
 URL: https://issues.apache.org/jira/browse/FLINK-29209
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0


When I try to connect HiveServer endpoint with some BI tools like Apache 
SuperSet / FineBI / MetaBase, it fails to connect with the following exception:
{code:java}
2022-09-05 20:12:36,179 ERROR org.apache.thrift.server.TThreadPoolServer        
           [] - Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in 
readMessageBegin, old client?
    at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
 ~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) 
~[flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
 [flink-sql-connector-hive-2.3.9_2.12-1.16-SNAPSHOT.jar:1.16-SNAPSHOT]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_332]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_332]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_332] {code}
But when I try to connect to Hive's own HiveServer, it works well.

It seems we need extra configuration or adaptation to connect to Flink's HiveServer 
endpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-19004) Fail to call Hive percentile function together with distinct aggregate call

2022-09-05 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600285#comment-17600285
 ] 

luoyuxia commented on FLINK-19004:
--

[~Runking]  Sorry for that. I mistakenly thought it would hardly ever happen, so I 
haven't pushed to get it merged. I think the PR 
[https://github.com/apache/flink/pull/18997] is in good shape. If you urgently 
need the fix, you can apply this patch and build your own Flink.

The test failure is just a plan assertion failure, since the plan has changed: we 
will use `first_value` instead of `min`.

But notice it may bring a performance regression, since it'll use sort agg instead 
of hash agg after applying this patch.

But after finishing [https://github.com/apache/flink/pull/20130], the 
performance regression will be fixed.
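Fetching and building the PR locally would look roughly like this (a sketch; the
local branch name is arbitrary):
{code:bash}
git fetch https://github.com/apache/flink.git pull/18997/head:flink-19004-fix
git checkout flink-19004-fix
mvn clean install -DskipTests
{code}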

 

 

> Fail to call Hive percentile function together with distinct aggregate call
> ---
>
> Key: FLINK-19004
> URL: https://issues.apache.org/jira/browse/FLINK-19004
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Reporter: Rui Li
>Assignee: luoyuxia
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
>
> The following test case would fail:
> {code}
>   @Test
>   public void test() throws Exception {
>   TableEnvironment tableEnv = getTableEnvWithHiveCatalog();
>   tableEnv.unloadModule("core");
>   tableEnv.loadModule("hive", new HiveModule());
>   tableEnv.loadModule("core", CoreModule.INSTANCE);
>   tableEnv.executeSql("create table src(x int,y int)");
>   tableEnv.executeSql("select count(distinct 
> y),`percentile`(y,`array`(0.5,0.99)) from src group by x").collect();
>   }
> {code}
> The error is:
> {noformat}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> FlinkLogicalLegacySink(name=[collect], fields=[EXPR$0, EXPR$1])
> +- FlinkLogicalCalc(select=[EXPR$0, EXPR$1])
>+- FlinkLogicalAggregate(group=[{0}], EXPR$0=[COUNT($1) FILTER $3], 
> EXPR$1=[MIN($2) FILTER $4])
>   +- FlinkLogicalCalc(select=[x, y, EXPR$1, =(CASE(=($e, 0:BIGINT), 
> 0:BIGINT, 1:BIGINT), 0) AS $g_0, =(CASE(=($e, 0:BIGINT), 0:BIGINT, 1:BIGINT), 
> 1) AS $g_1])
>  +- FlinkLogicalAggregate(group=[{0, 1, 3}], EXPR$1=[percentile($4, 
> $2)])
> +- FlinkLogicalExpand(projects=[x, y, $f2, $e, y_0])
>+- FlinkLogicalCalc(select=[x, y, array(0.5:DECIMAL(2, 1), 
> 0.99:DECIMAL(3, 2)) AS $f2])
>   +- FlinkLogicalLegacyTableSourceScan(table=[[test-catalog, 
> default, src, source: [HiveTableSource(x, y) TablePath: default.src, 
> PartitionPruned: false, PartitionNums: null]]], fields=[x, y])
> Min aggregate function does not support type: ''ARRAY''.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-19004) Fail to call Hive percentile function together with distinct aggregate call

2022-09-05 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600285#comment-17600285
 ] 

luoyuxia edited comment on FLINK-19004 at 9/5/22 9:13 AM:
--

[~Runking]  Sorry for that. I mistakenly thought it would hardly ever happen, so I 
haven't pushed to get it merged. I think the PR 
[https://github.com/apache/flink/pull/18997] is in good shape. If you urgently 
need the fix, you can apply this patch and build your own Flink.

The test failure is just a plan assertion failure, since the plan has changed: we 
will use `first_value` instead of `min`.

But notice it may bring a performance regression, since it'll use sort agg instead 
of hash agg after applying this patch for such cases.

But after finishing [https://github.com/apache/flink/pull/20130], the 
performance regression will be fixed.

 

 


was (Author: luoyuxia):
[~Runking]  Sorry for that. I mistakenly thought it would hardly ever happen, so I 
haven't pushed to get it merged. I think the PR 
[https://github.com/apache/flink/pull/18997] is in good shape. If you urgently 
need the fix, you can apply this patch and build your own Flink.

The test failure is just a plan assertion failure, since the plan has changed: we 
will use `first_value` instead of `min`.

But notice it may bring a performance regression, since it'll use sort agg instead 
of hash agg after applying this patch.

But after finishing [https://github.com/apache/flink/pull/20130], the 
performance regression will be fixed.

 

 

> Fail to call Hive percentile function together with distinct aggregate call
> ---
>
> Key: FLINK-19004
> URL: https://issues.apache.org/jira/browse/FLINK-19004
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Reporter: Rui Li
>Assignee: luoyuxia
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
>
> The following test case would fail:
> {code}
>   @Test
>   public void test() throws Exception {
>   TableEnvironment tableEnv = getTableEnvWithHiveCatalog();
>   tableEnv.unloadModule("core");
>   tableEnv.loadModule("hive", new HiveModule());
>   tableEnv.loadModule("core", CoreModule.INSTANCE);
>   tableEnv.executeSql("create table src(x int,y int)");
>   tableEnv.executeSql("select count(distinct 
> y),`percentile`(y,`array`(0.5,0.99)) from src group by x").collect();
>   }
> {code}
> The error is:
> {noformat}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> FlinkLogicalLegacySink(name=[collect], fields=[EXPR$0, EXPR$1])
> +- FlinkLogicalCalc(select=[EXPR$0, EXPR$1])
>+- FlinkLogicalAggregate(group=[{0}], EXPR$0=[COUNT($1) FILTER $3], 
> EXPR$1=[MIN($2) FILTER $4])
>   +- FlinkLogicalCalc(select=[x, y, EXPR$1, =(CASE(=($e, 0:BIGINT), 
> 0:BIGINT, 1:BIGINT), 0) AS $g_0, =(CASE(=($e, 0:BIGINT), 0:BIGINT, 1:BIGINT), 
> 1) AS $g_1])
>  +- FlinkLogicalAggregate(group=[{0, 1, 3}], EXPR$1=[percentile($4, 
> $2)])
> +- FlinkLogicalExpand(projects=[x, y, $f2, $e, y_0])
>+- FlinkLogicalCalc(select=[x, y, array(0.5:DECIMAL(2, 1), 
> 0.99:DECIMAL(3, 2)) AS $f2])
>   +- FlinkLogicalLegacyTableSourceScan(table=[[test-catalog, 
> default, src, source: [HiveTableSource(x, y) TablePath: default.src, 
> PartitionPruned: false, PartitionNums: null]]], fields=[x, y])
> Min aggregate function does not support type: ''ARRAY''.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29191) Hive dialect can't get value for the variables set by set command

2022-09-05 Thread luoyuxia (Jira)
luoyuxia created FLINK-29191:


 Summary: Hive dialect can't get value for the variables set by  
set command
 Key: FLINK-29191
 URL: https://issues.apache.org/jira/browse/FLINK-29191
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0


When using Hive dialect, we can use 
{code:java}
set k1=v1;
{code}
to set a variable in Flink's table config.

But if we want to get the value of `k1` by using 
{code:java}
set k1;
{code}
we will get nothing.

The reason is that the Hive dialect won't look up the variable in Flink's table config.

To fix it, we also need to look up link's table config.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29177) Shade the org.apache.commons in flink-sql-connector-hive to avoid the class conflict

2022-09-01 Thread luoyuxia (Jira)
luoyuxia created FLINK-29177:


 Summary: Shade the org.apache.commons in flink-sql-connector-hive 
to avoid the class conflict
 Key: FLINK-29177
 URL: https://issues.apache.org/jira/browse/FLINK-29177
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: luoyuxia


Reported by user 
https://lists.apache.org/thread/zbyz28b8dfqvb9ppb9bbtw8zp1ql72cp

We should shade these classes to avoid the conflict.
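The fix would be a relocation of this kind in the module's maven-shade-plugin
configuration (a sketch; the shaded package prefix is an assumption):
{code:xml}
<relocation>
  <pattern>org.apache.commons</pattern>
  <shadedPattern>org.apache.flink.hive.shaded.org.apache.commons</shadedPattern>
</relocation>
{code}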



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28954) Release Testing: Verify FLIP-223 HiveServer2 Endpoint

2022-08-29 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17597458#comment-17597458
 ] 

luoyuxia commented on FLINK-28954:
--

Just did some simple tests. I think I still need some time to test before closing 
the issue.

> Release Testing: Verify FLIP-223 HiveServer2 Endpoint
> -
>
> Key: FLINK-28954
> URL: https://issues.apache.org/jira/browse/FLINK-28954
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: luoyuxia
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.16.0
>
>
> HiveServer2 Endpoint is ready to use in this version. I think we can verify:
>  # We can start the SQL Gateway with HiveServer2 Endpoint
>  # User is able to sumit SQL with Hive beeline
>  # User is able to sumit SQL with DBeaver



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29025) Add doc for Hive Dialect

2022-08-29 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29025:
-
Summary: Add doc for Hive Dialect  (was: Add overview doc for Hive Dialect)

> Add doc for Hive Dialect
> 
>
> Key: FLINK-29025
> URL: https://issues.apache.org/jira/browse/FLINK-29025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Moving some stuff from connectors/table/hive/hive_dialect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29126) Fix spliting file optimization doesn't work for orc format

2022-08-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29126:
-
Summary: Fix spliting file optimization doesn't work for orc format  (was: 
Fix spliting file optimization doesn't work for orc foramt)

> Fix spliting file optimization doesn't work for orc format
> --
>
> Key: FLINK-29126
> URL: https://issues.apache.org/jira/browse/FLINK-29126
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> FLINK-27338 tried to improve file splitting for the ORC format. But it doesn't 
> work, due to a mistake in judging whether the table is stored in ORC format or 
> not. We should fix it.
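The kind of check at fault, sketched against the Hive metastore API (the exact
predicate in Flink may differ):
{code:java}
// Decide whether the Hive table is stored as ORC by its input format class:
boolean isOrc = "org.apache.hadoop.hive.ql.io.orc.OrcInputFormat"
        .equals(hiveTable.getSd().getInputFormat());
{code}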



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29126) Fix spliting file optimization doesn't work for orc foramt

2022-08-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29126:
-
Description: FLINK-27338 tried to improve file splitting for the ORC format. But 
it doesn't work, due to a mistake in judging whether the table is stored in ORC 
format or not. We should fix it.  (was: [FLINK-27338]d

 )

> Fix spliting file optimization doesn't work for orc foramt
> --
>
> Key: FLINK-29126
> URL: https://issues.apache.org/jira/browse/FLINK-29126
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> FLINK-27338 tried to improve file splitting for the ORC format. But it doesn't 
> work, due to a mistake in judging whether the table is stored in ORC format or 
> not. We should fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29126) Fix spliting file optimization doesn't work for orc foramt

2022-08-28 Thread luoyuxia (Jira)
luoyuxia created FLINK-29126:


 Summary: Fix spliting file optimization doesn't work for orc foramt
 Key: FLINK-29126
 URL: https://issues.apache.org/jira/browse/FLINK-29126
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29126) Fix spliting file optimization doesn't work for orc foramt

2022-08-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29126:
-
Description: 
[FLINK-27338]d

 

> Fix spliting file optimization doesn't work for orc foramt
> --
>
> Key: FLINK-29126
> URL: https://issues.apache.org/jira/browse/FLINK-29126
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> [FLINK-27338]d
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29046) HiveTableSourceStatisticsReportTest fails with Hadoop 3

2022-08-23 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17583931#comment-17583931
 ] 

luoyuxia commented on FLINK-29046:
--

Seems like a bug in ORC 1.5.6, which Hive 3 depends on. I also found a similar 
issue [ORC-516|https://issues.apache.org/jira/browse/ORC-517]. I think we can 
skip the decimal check in Hive 3.
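A sketch of such a skip in the test (JUnit 4 style; `isHive3()` is an assumed
helper):
{code:java}
// Bypass the decimal min/max assertion on Hive 3, whose bundled ORC 1.5.6
// reports a wrong decimal minimum (compare ORC-517).
org.junit.Assume.assumeFalse(
        "decimal stats are unreliable with ORC 1.5.6 (Hive 3)", isHive3());
{code}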

> HiveTableSourceStatisticsReportTest fails with Hadoop 3
> ---
>
> Key: FLINK-29046
> URL: https://issues.apache.org/jira/browse/FLINK-29046
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Yunhong Zheng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
> Attachments: image-2022-08-22-21-06-29-980.png
>
>
> {code:java}
> 2022-08-19T13:35:56.1882498Z Aug 19 13:35:56 [ERROR] 
> org.apache.flink.connectors.hive.HiveTableSourceStatisticsReportTest.testFlinkOrcFormatHiveTableSourceStatisticsReport
>   Time elapsed: 9.442 s  <<< FAILURE!
> 2022-08-19T13:35:56.1883817Z Aug 19 13:35:56 
> org.opentest4j.AssertionFailedError: 
> 2022-08-19T13:35:56.1884543Z Aug 19 13:35:56 
> 2022-08-19T13:35:56.1890435Z Aug 19 13:35:56 expected: TableStats{rowCount=3, 
> colStats={f_boolean=ColumnStats(nullCount=1), 
> f_smallint=ColumnStats(nullCount=0, max=128, min=100), 
> f_decimal5=ColumnStats(nullCount=0, max=223.45, min=123.45), f_array=null, 
> f_binary=null, f_decimal38=ColumnStats(nullCount=1, 
> max=123433343334333433343334333433343334.34, 
> min=123433343334333433343334333433343334.33), f_map=null, 
> f_float=ColumnStats(nullCount=1, max=33.33300018310547, 
> min=33.31100082397461), f_row=null, f_tinyint=ColumnStats(nullCount=0, max=3, 
> min=1), f_decimal14=ColumnStats(nullCount=0, max=1255.33, 
> min=1233.33), f_date=ColumnStats(nullCount=0, max=1990-10-16, 
> min=1990-10-14), f_bigint=ColumnStats(nullCount=0, max=1238123899121, 
> min=1238123899000), f_timestamp3=ColumnStats(nullCount=0, max=1990-10-16 
> 12:12:43.123, min=1990-10-14 12:12:43.123), f_double=ColumnStats(nullCount=0, 
> max=10.1, min=1.1), f_string=ColumnStats(nullCount=0, max=def, min=abcd), 
> f_int=ColumnStats(nullCount=1, max=45536, min=31000)}}
> 2022-08-19T13:35:56.1902811Z Aug 19 13:35:56  but was: TableStats{rowCount=3, 
> colStats={f_boolean=ColumnStats(nullCount=1), 
> f_smallint=ColumnStats(nullCount=0, max=128, min=100), 
> f_decimal5=ColumnStats(nullCount=0, max=223.45, min=0), f_array=null, 
> f_binary=null, f_decimal38=ColumnStats(nullCount=1, 
> max=123433343334333433343334333433343334.34, 
> min=123433343334333433343334333433343334.33), f_map=null, 
> f_float=ColumnStats(nullCount=1, max=33.33300018310547, 
> min=33.31100082397461), f_row=null, f_tinyint=ColumnStats(nullCount=0, max=3, 
> min=1), f_decimal14=ColumnStats(nullCount=0, max=1255.33, min=0), 
> f_date=ColumnStats(nullCount=0, max=1990-10-16, min=1990-10-14), 
> f_bigint=ColumnStats(nullCount=0, max=1238123899121, min=1238123899000), 
> f_timestamp3=ColumnStats(nullCount=0, max=1990-10-16 12:12:43.123, 
> min=1990-10-14 12:12:43.123), f_double=ColumnStats(nullCount=0, max=10.1, 
> min=1.1), f_string=ColumnStats(nullCount=0, max=def, min=abcd), 
> f_int=ColumnStats(nullCount=1, max=45536, min=31000)}}
> 2022-08-19T13:35:56.1908634Z Aug 19 13:35:56  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 2022-08-19T13:35:56.1910402Z Aug 19 13:35:56  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 2022-08-19T13:35:56.1912266Z Aug 19 13:35:56  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 2022-08-19T13:35:56.1913257Z Aug 19 13:35:56  at 
> org.apache.flink.connectors.hive.HiveTableSourceStatisticsReportTest.assertHiveTableOrcFormatTableStatsEquals(HiveTableSourceStatisticsReportTest.java:339)
> 2022-08-19T13:35:56.1914512Z Aug 19 13:35:56  at 
> org.apache.flink.connectors.hive.HiveTableSourceStatisticsReportTest.testFlinkOrcFormatHiveTableSourceStatisticsReport(HiveTableSourceStatisticsReportTest.java:118)
> 2022-08-19T13:35:56.1915444Z Aug 19 13:35:56  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-08-19T13:35:56.1916130Z Aug 19 13:35:56  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-08-19T13:35:56.1916856Z Aug 19 13:35:56  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-08-19T13:35:56.1917571Z Aug 19 13:35:56  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-08-19T13:35:56.1918278Z Aug 19 13:35:56  at 
> 

[jira] [Updated] (FLINK-29076) Add doc for alter statement of Hive dialect

2022-08-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29076:
-
Description: Add a page for the alter statement of the Hive dialect. As our Hive 
dialect is compatible with Hive, we can take some content from the [Hive 
docs|https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL].  (was: Add a page of alter statment for HiveDialect. 
As our Hive dialect is compatible to Hive, so we can take it from [Hive 
docs|[https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL]])

> Add doc for alter statement of Hive dialect
> ---
>
> Key: FLINK-29076
> URL: https://issues.apache.org/jira/browse/FLINK-29076
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>
> Add a page for the alter statement of the Hive dialect. As our Hive dialect is 
> compatible with Hive, we can take some content from the [Hive 
> docs|https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29079) Add doc for show statement of Hive dialect

2022-08-23 Thread luoyuxia (Jira)
luoyuxia created FLINK-29079:


 Summary: Add doc for show statement of Hive dialect
 Key: FLINK-29079
 URL: https://issues.apache.org/jira/browse/FLINK-29079
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0


Add a page for the show statement of the Hive dialect. As our Hive dialect is 
compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29078) Add doc for drop statement of Hive dialect

2022-08-23 Thread luoyuxia (Jira)
luoyuxia created FLINK-29078:


 Summary: Add doc for drop statement of Hive dialect
 Key: FLINK-29078
 URL: https://issues.apache.org/jira/browse/FLINK-29078
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive, Documentation
Affects Versions: 1.16.0
Reporter: luoyuxia
 Fix For: 1.16.0


Add a page for the drop statement of the Hive dialect. As our Hive dialect is 
compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29077) Add doc for create statement of Hive dialect

2022-08-23 Thread luoyuxia (Jira)
luoyuxia created FLINK-29077:


 Summary: Add doc for create statement of Hive dialect
 Key: FLINK-29077
 URL: https://issues.apache.org/jira/browse/FLINK-29077
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: luoyuxia
 Fix For: 1.16.0


Add a page for the create statement of the Hive dialect. As our Hive dialect is 
compatible with Hive, we can take some content from the Hive docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

