[flink] branch master updated (4d648db -> 6243723)

2021-10-10 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4d648db  [FLINK-23358][core] Refactor CoreOptions parent first patterns to List options
 add 6243723  [FLINK-24459][table] Performance improvement of file sink (#17416)

No new revisions were added by this update.

Summary of changes:
 .../flink/table/utils/PartitionPathUtils.java  | 27 ++--
 .../flink/table/utils/PartitionPathUtilsTest.java  | 81 ++
 2 files changed, 103 insertions(+), 5 deletions(-)
 create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/PartitionPathUtilsTest.java


[flink] branch master updated (126fafc -> f2b0e43)

2021-08-27 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 126fafc  [FLINK-23794][tests] Remove InMemoryReporter from MiniClusterResource.
 add f2b0e43  [FLINK-24013][misc] Add IssueNavigationLink for IDEA git log

No new revisions were added by this update.

Summary of changes:
 .gitignore|  3 ++-
 .idea/vcs.xml | 36 
 2 files changed, 38 insertions(+), 1 deletion(-)
 create mode 100644 .idea/vcs.xml


[flink] branch release-1.12 updated: [FLINK-22015][table-planner-blink] Exclude IS NULL from SEARCH operators (#16140)

2021-06-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.12 by this push:
 new faf7cc4  [FLINK-22015][table-planner-blink] Exclude IS NULL from SEARCH operators (#16140)
faf7cc4 is described below

commit faf7cc43beebce3fee528ec5637e9387b95bec99
Author: TsReaper 
AuthorDate: Fri Jun 11 18:19:37 2021 +0800

[FLINK-22015][table-planner-blink] Exclude IS NULL from SEARCH operators (#16140)
---
 .../java/org/apache/calcite/rex/RexSimplify.java   | 28 -
 .../table/planner/plan/batch/sql/CalcTest.xml  | 35 ++
 .../table/planner/plan/stream/sql/CalcTest.xml | 35 ++
 .../table/planner/plan/batch/sql/CalcTest.scala| 10 +++
 .../table/planner/plan/stream/sql/CalcTest.scala   |  9 ++
 .../planner/runtime/batch/sql/CalcITCase.scala | 34 +
 6 files changed, 136 insertions(+), 15 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/rex/RexSimplify.java b/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/rex/RexSimplify.java
index 57b9fb7..ea59497 100644
--- a/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/rex/RexSimplify.java
+++ b/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/rex/RexSimplify.java
@@ -68,9 +68,14 @@ import static org.apache.calcite.rex.RexUnknownAs.UNKNOWN;
 /**
  * Context required to simplify a row-expression.
  *
- * Copied to fix CALCITE-4364, should be removed for the next Calcite upgrade.
+ * Copied to fix Calcite 1.26 bugs, should be removed for the next Calcite upgrade.
 *
- * Changes: Line 1307, Line 1764, Line 2638 ~ Line 2656.
+ * Changes (line numbers are from the original RexSimplify file):
+ *
+ *
+ *   CALCITE-4364 & FLINK-19811: Line 1307, Line 1764, Line 2638 ~ Line 2656.
+ *   CALCITE-4446 & FLINK-22015: Line 2542 ~ Line 2548, Line 2614 ~ Line 2619.
+ *
  */
 public class RexSimplify {
 private final boolean paranoid;
@@ -2669,13 +2674,9 @@ public class RexSimplify {
 ((RexCall) e).operands.get(1),
 e.getKind(),
 newTerms);
-case IS_NULL:
-if (negate) {
-return false;
-}
-final RexNode arg = ((RexCall) e).operands.get(0);
-return accept1(
-arg, e.getKind(), rexBuilder.makeNullLiteral(arg.getType()), newTerms);
+// CHANGED: we remove IS_NULL here
+// because SEARCH operator in Calcite 1.26 handles UNKNOWNs incorrectly
+// see CALCITE-4446
 default:
 return false;
 }
@@ -2741,12 +2742,9 @@ public class RexSimplify {
 final Sarg sarg = literal.getValueAs(Sarg.class);
 b.addSarg(sarg, negate, literal.getType());
 return true;
-case IS_NULL:
-if (negate) {
-throw new AssertionError("negate is not supported for IS_NULL");
-}
-b.containsNull = true;
-return true;
+// CHANGED: we remove IS_NULL here
+// because SEARCH operator in Calcite 1.26 handles UNKNOWNs incorrectly
+// see CALCITE-4446
 default:
 throw new AssertionError("unexpected " + kind);
 }
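
To see the predicate shape this change targets, here is a minimal, self-contained sketch (not part of the commit; the class name and literal values are made up). Calcite 1.26 folds a disjunction like "a = 1 OR a IS NULL" into a single SEARCH operator whose Sarg records null containment, and its handling of the three-valued UNKNOWN result is wrong (CALCITE-4446); keeping IS_NULL out of SEARCH sidesteps that:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class IsNullSearchExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
            // The NULL row must survive this filter; with the buggy SEARCH
            // simplification the three-valued logic could go wrong.
            tEnv.executeSql(
                    "SELECT * FROM (VALUES (1), (2), (CAST(NULL AS INT))) AS t(a) "
                            + "WHERE a = 1 OR a IS NULL")
                    .print();
        }
    }
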
diff --git a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml
index f8ebe6b..3eb1d5e 100644
--- a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml
+++ b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml
@@ -374,4 +374,39 @@ Calc(select=[ROW(1, _UTF-16LE'Hi', a) AS EXPR$0])
 ]]>
 
   
+  [the 35 added XML test-case lines were stripped by the archive's HTML rendering and cannot be recovered]
diff --git a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/CalcTest.xml b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/CalcTest.xml
index 32045fb..6b6f346 100644
--- a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/CalcTest.xml
+++ b/flink-table/flink-table-planner-blin

[flink] branch master updated: [FLINK-20855][table-runtime-blink] Fix calculating numBuckets overflow (#14566)

2021-04-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new fd8e34c  [FLINK-20855][table-runtime-blink] Fix calculating numBuckets overflow (#14566)
fd8e34c is described below

commit fd8e34c03b663aff96a625ed751b66244da8793e
Author: JieFang.He 
AuthorDate: Sat Apr 17 09:45:20 2021 +0800

[FLINK-20855][table-runtime-blink] Fix calculating numBuckets overflow (#14566)
---
 .../org/apache/flink/table/runtime/hashtable/LongHashPartition.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHashPartition.java b/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHashPartition.java
index f022e6f..9515626 100644
--- a/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHashPartition.java
+++ b/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHashPartition.java
@@ -160,7 +160,7 @@ public class LongHashPartition extends AbstractPagedInputView implements Seekabl
 this.partitionNum = partitionNum;
 this.recursionLevel = recursionLevel;
 
-int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs * segmentSize / 16);
+int numBuckets = MathUtils.roundDownToPowerOf2(segmentSize / 16 * bucketNumSegs);
 MemorySegment[] buckets = new MemorySegment[bucketNumSegs];
 for (int i = 0; i < bucketNumSegs; i++) {
 buckets[i] = longTable.nextSegment();
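
Why the operand order matters: the old expression computed the 32-bit product bucketNumSegs * segmentSize before dividing by 16, and that product can exceed Integer.MAX_VALUE; dividing the segment size first keeps the intermediate value small. A standalone Java sketch with made-up sizes (not taken from the commit):

    public class NumBucketsOverflow {
        public static void main(String[] args) {
            int bucketNumSegs = 70_000;   // hypothetical bucket segment count
            int segmentSize = 32 * 1024;  // hypothetical 32 KiB memory segments

            // Old order: 70_000 * 32_768 = 2_293_760_000 overflows int and
            // wraps negative before the division by 16 is applied.
            int before = bucketNumSegs * segmentSize / 16;
            // Fixed order: 32_768 / 16 = 2_048, then 2_048 * 70_000 fits easily.
            int after = segmentSize / 16 * bucketNumSegs;

            System.out.println(before); // prints -125075456
            System.out.println(after);  // prints 143360000
        }
    }
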


[flink] branch master updated (1562ed0 -> e4a2738)

2021-04-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1562ed0  [FLINK-22016][table-planner-blink] RexNodeExtractor#visitLiteral should deal with NULL literals correctly (#15570)
 add e4a2738  [FLINK-22308][sql-client] Fix CliTableauResultView print results after cancel in STREAMING mode (#15644)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/flink/table/client/cli/CliTableauResultView.java | 1 +
 1 file changed, 1 insertion(+)


[flink] branch master updated (9466e09 -> 1562ed0)

2021-04-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 9466e09  [hotfix][coordination] Add safety guard against uncaught exceptions for Future dependent lambdas
 add 1562ed0  [FLINK-22016][table-planner-blink] RexNodeExtractor#visitLiteral should deal with NULL literals correctly (#15570)

No new revisions were added by this update.

Summary of changes:
 .../planner/plan/utils/RexNodeExtractor.scala  |  7 +-
 .../PushFilterIntoTableSourceScanRuleTestBase.java | 11 +
 .../PushFilterInCalcIntoTableSourceRuleTest.xml| 25 
 ...PushFilterIntoLegacyTableSourceScanRuleTest.xml | 27 ++
 .../PushFilterIntoTableSourceScanRuleTest.xml  | 27 ++
 5 files changed, 96 insertions(+), 1 deletion(-)


[flink] branch release-1.12 updated (6f3436e -> 902a0f5)

2021-04-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6f3436e  [FLINK-21941] Make sure jobs do not finish before taking savepoint in RescalingITCase
 add 902a0f5  [FLINK-20761][hive] Escape the location path when creating input splits (#15625)

No new revisions were added by this update.

Summary of changes:
 .../connectors/hive/HiveSourceFileEnumerator.java  | 60 --
 .../connectors/hive/read/HiveTableInputFormat.java | 37 ++---
 .../hive/TableEnvHiveConnectorITCase.java  | 47 +
 3 files changed, 84 insertions(+), 60 deletions(-)
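
The bug class here is partition locations containing characters that are significant in URIs, which the split-creation code ends up parsing as URIs. A self-contained sketch (not the Flink code; the partition path is invented) of why the location has to be escaped first:

    import java.net.URI;
    import java.net.URISyntaxException;

    public class EscapeLocationPath {
        public static void main(String[] args) throws URISyntaxException {
            // '#' in a partition value is legal in a file path but starts a
            // fragment when the string is parsed as a URI.
            String location = "/warehouse/t/part=2021#01";

            // Parsed raw, everything after '#' is silently dropped.
            System.out.println(new URI(location).getPath());  // /warehouse/t/part=2021

            // Built via the multi-argument constructor, illegal characters are
            // percent-escaped and the full path round-trips intact.
            URI escaped = new URI(null, null, location, null);
            System.out.println(escaped.getPath());            // /warehouse/t/part=2021#01
        }
    }
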


[flink] branch master updated (1e0da5e -> 45e2fb5)

2021-04-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1e0da5e  [FLINK-22260][table-planner-blink] Use unresolvedSchema during operation conversions
 add 45e2fb5  [FLINK-20761][hive] Escape the location path when creating input splits (#15549)

No new revisions were added by this update.

Summary of changes:
 .../connectors/hive/HiveSourceFileEnumerator.java  | 58 --
 .../connectors/hive/read/HiveTableInputFormat.java | 35 ++---
 .../hive/TableEnvHiveConnectorITCase.java  | 47 ++
 3 files changed, 84 insertions(+), 56 deletions(-)


[flink] branch master updated: [FLINK-22169][sql-client] Improve CliTableauResultView when printing batch results (#15603)

2021-04-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 326a564  [FLINK-22169][sql-client] Improve CliTableauResultView when printing batch results (#15603)
326a564 is described below

commit 326a564ebc0e397654ccd4e5a83a77ef811c6a76
Author: Shengkai <33114724+fsk...@users.noreply.github.com>
AuthorDate: Thu Apr 15 13:51:23 2021 +0800

[FLINK-22169][sql-client] Improve CliTableauResultView when printing batch results (#15603)
---
 .../table/client/cli/CliTableauResultView.java | 80 +++---
 .../table/client/cli/CliTableauResultViewTest.java | 30 +---
 .../src/test/resources/sql/catalog_database.q  |  5 +-
 .../src/test/resources/sql/insert.q| 24 +++
 .../src/test/resources/sql/select.q| 14 ++--
 .../src/test/resources/sql/statement_set.q | 24 +++
 6 files changed, 89 insertions(+), 88 deletions(-)

diff --git a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java
index aae6afc..df84801 100644
--- a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java
+++ b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java
@@ -29,6 +29,7 @@ import org.apache.flink.types.Row;
 
 import org.jline.terminal.Terminal;
 
+import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.CancellationException;
 import java.util.concurrent.ExecutionException;
@@ -66,7 +67,11 @@ public class CliTableauResultView implements AutoCloseable {
 Future resultFuture =
 displayResultExecutorService.submit(
 () -> {
-printResults(receivedRowCount, resultDescriptor.isStreamingMode());
+if (resultDescriptor.isStreamingMode()) {
+printStreamingResults(receivedRowCount);
+} else {
+printBatchResults(receivedRowCount);
+}
 });
 
 // capture CTRL-C
@@ -85,7 +90,8 @@ public class CliTableauResultView implements AutoCloseable {
 .println(
 "Query terminated, received a total of "
 + receivedRowCount.get()
-+ " rows");
++ " "
++ getRowTerm(receivedRowCount));
 terminal.flush();
 } catch (ExecutionException e) {
 if (e.getCause() instanceof SqlExecutionException) {
@@ -114,28 +120,26 @@ public class CliTableauResultView implements AutoCloseable {
 }
 }
 
-private void printResults(AtomicInteger receivedRowCount, boolean isStreamingMode) {
+private void printBatchResults(AtomicInteger receivedRowCount) {
+final List resultRows = waitBatchResults();
+receivedRowCount.addAndGet(resultRows.size());
+PrintUtils.printAsTableauForm(
+resultDescriptor.getResultSchema(), resultRows.iterator(), terminal.writer());
+}
+
+private void printStreamingResults(AtomicInteger receivedRowCount) {
 List columns = resultDescriptor.getResultSchema().getColumns();
-final String[] fieldNames;
-final int[] colWidths;
-if (isStreamingMode) {
-fieldNames =
-Stream.concat(
-Stream.of(PrintUtils.ROW_KIND_COLUMN),
-columns.stream().map(Column::getName))
-.toArray(String[]::new);
-colWidths =
-PrintUtils.columnWidthsByType(
-columns,
-PrintUtils.MAX_COLUMN_WIDTH,
-PrintUtils.NULL_COLUMN,
-PrintUtils.ROW_KIND_COLUMN);
-} else {
-fieldNames = columns.stream().map(Column::getName).toArray(String[]::new);
-colWidths =
-PrintUtils.columnWidthsByType(
-columns, PrintUtils.MAX_COLUMN_WIDTH, PrintUtils.NULL_COLUMN, null);
-}
+final String[] fieldNames =
+Stream.concat(
+Stream.of(PrintUtils.ROW_KIND_COLUMN),
+columns.stream().map(Column::getName))
+.toArray(String[]::new);
+final int[] colWidths =
+PrintUtils.columnWidthsB

[flink] branch master updated (9fd6ecf -> 5b9e788)

2021-04-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 9fd6ecf  [FLINK-22180][coordination] Only reclaim slots if job is removed
 add 5b9e788  [FLINK-21660][hive] Stop using is_generic to differentiate hive and flink tables (#15155)

No new revisions were added by this update.

Summary of changes:
 .../connectors/hive/HiveDynamicTableFactory.java   |  46 +---
 .../flink/connectors/hive/HiveTableFactory.java|  20 +-
 .../flink/table/catalog/hive/HiveCatalog.java  | 252 -
 .../flink/table/catalog/hive/HiveDatabaseUtil.java | 105 -
 .../table/catalog/hive/util/HiveTableUtil.java |  40 ++--
 .../delegation/hive/DDLOperationConverter.java |  12 +-
 .../flink/connectors/hive/HiveDialectITCase.java   |   4 -
 .../connectors/hive/HiveTableFactoryTest.java  |   7 +-
 .../catalog/hive/HiveCatalogDataTypeTest.java  |   5 +-
 .../hive/HiveCatalogGenericMetadataTest.java   |  19 +-
 .../catalog/hive/HiveCatalogHiveMetadataTest.java  |  29 +--
 .../table/catalog/hive/HiveCatalogITCase.java  |  25 ++
 .../flink/table/catalog/hive/HiveCatalogTest.java  |  12 +-
 flink-python/pyflink/table/tests/test_catalog.py   |   6 +-
 .../table/client/gateway/local/DependencyTest.java |   5 +-
 .../sql/parser/hive/ddl/SqlCreateHiveDatabase.java |   5 -
 .../sql/parser/hive/ddl/SqlCreateHiveTable.java|   8 +-
 .../sql/parser/hive/ddl/SqlCreateHiveView.java |   6 +-
 .../flink/table/catalog/CatalogDatabaseImpl.java   |   5 +
 .../flink/table/catalog/CatalogTableBuilder.java   |   7 -
 .../flink/table/catalog/CatalogTableImpl.java  |   1 -
 .../table/catalog/GenericInMemoryCatalog.java  |   4 +-
 .../flink/table/catalog/CatalogTestBase.java   |   6 +-
 .../flink/table/catalog/CatalogDatabase.java   |   7 +
 .../apache/flink/table/catalog/CatalogTest.java|   8 +-
 .../flink/table/catalog/CatalogTestUtil.java   |  27 +--
 26 files changed, 286 insertions(+), 385 deletions(-)
 delete mode 100644 flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveDatabaseUtil.java


[flink] branch master updated: [FLINK-21592][table-planner-blink] RemoveSingleAggregateRule fails due to nullability mismatch (#15082)

2021-04-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b22bc62  [FLINK-21592][table-planner-blink] RemoveSingleAggregateRule fails due to nullability mismatch (#15082)
b22bc62 is described below

commit b22bc62ae59d3ccaef95507897c7725970e4e5c3
Author: Rui Li 
AuthorDate: Wed Apr 14 17:51:43 2021 +0800

[FLINK-21592][table-planner-blink] RemoveSingleAggregateRule fails due to nullability mismatch (#15082)
---
 .../apache/calcite/sql2rel/RelDecorrelator.java| 22 ++
 .../logical/RemoveSingleAggregateRuleTest.xml  | 50 ++
 .../logical/RemoveSingleAggregateRuleTest.scala| 45 +++
 3 files changed, 109 insertions(+), 8 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/sql2rel/RelDecorrelator.java b/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/sql2rel/RelDecorrelator.java
index 4181625..07893cc 100644
--- a/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/sql2rel/RelDecorrelator.java
+++ b/flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/sql2rel/RelDecorrelator.java
@@ -118,11 +118,7 @@ import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.stream.Collectors;
 
-/**
- * Copied to fix CALCITE-4333, should be removed for the next Calcite upgrade.
- *
- * Changes: Line 671 ~ Line 681, Line 430 ~ Line 441.
- */
+/** Copied to fix calcite issues. */
 public class RelDecorrelator implements ReflectiveVisitor {
 // ~ Static fields/initializers -
 
@@ -439,6 +435,9 @@ public class RelDecorrelator implements ReflectiveVisitor {
 return null;
 }
 
+// BEGIN FLINK MODIFICATION
+// Reason: to de-correlate sort rel when its parent is not a correlate
+// Should be removed after CALCITE-4333 is fixed
 final RelNode newInput = frame.r;
 
 Mappings.TargetMapping mapping =
@@ -452,6 +451,7 @@ public class RelDecorrelator implements ReflectiveVisitor {
 
 final int offset = rel.offset == null ? -1 : RexLiteral.intValue(rel.offset);
 final int fetch = rel.fetch == null ? -1 : RexLiteral.intValue(rel.fetch);
+// END FLINK MODIFICATION
 
 final RelNode newSort =
 relBuilder
@@ -685,6 +685,9 @@ public class RelDecorrelator implements ReflectiveVisitor {
 
 public Frame getInvoke(RelNode r, RelNode parent) {
 final Frame frame = dispatcher.invoke(r);
+// BEGIN FLINK MODIFICATION
+// Reason: to de-correlate sort rel when its parent is not a correlate
+// Should be removed after CALCITE-4333 is fixed
 if (frame != null && parent instanceof Correlate && r instanceof Sort) {
 Sort sort = (Sort) r;
 // Can not decorrelate if the sort has per-correlate-key attributes like
@@ -696,6 +699,7 @@ public class RelDecorrelator implements ReflectiveVisitor {
 return null;
 }
 }
+// END FLINK MODIFICATION
 if (frame != null) {
 map.put(r, frame);
 }
@@ -1869,13 +1873,15 @@ public class RelDecorrelator implements ReflectiveVisitor {
 return;
 }
 
-// singleAggRel produces a nullable type, so create the new
-// projection that casts proj expr to a nullable type.
+// BEGIN FLINK MODIFICATION
+// Reason: fix the nullability mismatch issue
 final RelBuilder relBuilder = call.builder();
+final boolean nullable = singleAggregate.getAggCallList().get(0).getType().isNullable();
 final RelDataType type =
 relBuilder
 .getTypeFactory()
-.createTypeWithNullability(projExprs.get(0).getType(), true);
+.createTypeWithNullability(projExprs.get(0).getType(), nullable);
+// END FLINK MODIFICATION
 final RexNode cast = relBuilder.getRexBuilder().makeCast(type, projExprs.get(0));
 relBuilder.push(aggregate).project(cast);
 call.transformTo(relBuilder.build());
diff --git a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/RemoveSingleAggregateRuleTest.xml b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/RemoveSingleAggregateRuleTest.xml
new file mode 100644
index 000..05ccc23
--- /dev/null
+++ b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/RemoveSingleAggregateRuleTest.x

[flink] branch master updated (81118c0 -> c29ee91)

2021-04-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 81118c0  [FLINK-21839][docs] Document stop-with-savepoint behavior more explicitly
 add c29ee91  [FLINK-22207][hive] Fix retrieving Flink properties in Hive Catalog (#15564)

No new revisions were added by this update.

Summary of changes:
 .../flink/table/catalog/hive/HiveCatalog.java  |  2 +-
 .../flink/table/catalog/hive/HiveCatalogTest.java  | 38 ++
 2 files changed, 39 insertions(+), 1 deletion(-)


[flink] branch master updated (0e82a99 -> b78ce3b)

2021-04-12 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0e82a99  [FLINK-22166][table] Empty values with sort will fail
 add b78ce3b  [FLINK-21748][sql-client] Fix unstable LocalExecutorITCase (#15563)

No new revisions were added by this update.

Summary of changes:
 .../local/result/ChangelogCollectResult.java   |  1 +
 .../gateway/local/result/CollectResultBase.java|  7 ++---
 .../result/MaterializedCollectBatchResult.java |  4 ++-
 .../result/MaterializedCollectStreamResult.java|  2 ++
 .../client/gateway/local/LocalExecutorITCase.java  | 33 +++---
 5 files changed, 19 insertions(+), 28 deletions(-)


[flink] branch master updated (f6fc769 -> a3be1cc)

2021-04-12 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from f6fc769  [FLINK-17371][table-runtime] Open testDefaultDecimalCasting case in DecimalTypeTest
 add a3be1cc  [FLINK-22015][table-planner-blink] Exclude IS NULL from SEARCH operators (#15547)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/calcite/rex/RexSimplify.java   | 28 +-
 .../table/planner/plan/batch/sql/CalcTest.xml  | 65 +-
 .../table/planner/plan/stream/sql/CalcTest.xml | 65 +-
 .../table/planner/plan/batch/sql/CalcTest.scala| 10 
 .../table/planner/plan/stream/sql/CalcTest.scala   |  9 +++
 .../planner/runtime/batch/sql/CalcITCase.scala | 34 +++
 6 files changed, 166 insertions(+), 45 deletions(-)


[flink] branch master updated (276e847 -> aff7bb3)

2021-04-09 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 276e847  [FLINK-20387][table] Support TIMESTAMP_LTZ as rowtime attribute
 add aff7bb3  [hotfix][table-runtime] Fix the exception info of raw format (#15418)

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/formats/raw/RawFormatSerializationSchema.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[flink] branch master updated (4a6518b -> 3f4dd82)

2021-04-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4a6518b  [FLINK-22084][coordination] Use a consistent default max parallelism and restore logic in the Adaptive Scheduler
 add 3f4dd82  [FLINK-22143][table-runtime] Fix less lines than requested returned by LimitableBulkFormat (#15513)

No new revisions were added by this update.

Summary of changes:
 .../table/filesystem/LimitableBulkFormat.java  |  9 --
 .../table/filesystem/LimitableBulkFormatTest.java  | 34 ++
 2 files changed, 41 insertions(+), 2 deletions(-)


[flink] branch master updated: [FLINK-21927][table-planner-blink] Resolve ExpressionReducer compile fail when running long term (#15354)

2021-04-01 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 56f275c  [FLINK-21927][table-planner-blink] Resolve ExpressionReducer compile fail when running long term (#15354)
56f275c is described below

commit 56f275c989933bfdb06c9a85117b14a53464c1f7
Author: xmarker <592577...@qq.com>
AuthorDate: Fri Apr 2 09:01:12 2021 +0800

[FLINK-21927][table-planner-blink] Resolve ExpressionReducer compile fail when running long term (#15354)

This commit replaces CodeGenUtils#nameCounter from AtomicInteger to AtomicLong to resolve an Integer overflow issue.
---
 .../scala/org/apache/flink/table/planner/codegen/CodeGenUtils.scala   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGenUtils.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGenUtils.scala
index c2a822e..b7b3a37 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGenUtils.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGenUtils.scala
@@ -20,7 +20,7 @@ package org.apache.flink.table.planner.codegen
 
 import java.lang.reflect.Method
import java.lang.{Boolean => JBoolean, Byte => JByte, Double => JDouble, Float => JFloat, Integer => JInt, Long => JLong, Object => JObject, Short => JShort}
-import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.atomic.AtomicLong
 
 import org.apache.flink.api.common.ExecutionConfig
 import org.apache.flink.api.common.functions.RuntimeContext
@@ -116,7 +116,7 @@ object CodeGenUtils {
 
   // 

 
-  private val nameCounter = new AtomicInteger
+  private val nameCounter = new AtomicLong
 
   def newName(name: String): String = {
 s"$name$$${nameCounter.getAndIncrement}"
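
The overflow is easy to reproduce in isolation. A Java sketch of the failure mode (the Flink code above is Scala; the "map" prefix is just an example): once the 32-bit counter wraps, the generated identifier gains a minus sign and the generated class no longer compiles:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicLong;

    public class NameCounterOverflow {
        public static void main(String[] args) {
            AtomicInteger intCounter = new AtomicInteger(Integer.MAX_VALUE);
            System.out.println("map$" + intCounter.getAndIncrement()); // map$2147483647, still fine
            System.out.println("map$" + intCounter.getAndIncrement()); // map$-2147483648, invalid identifier

            AtomicLong longCounter = new AtomicLong(Integer.MAX_VALUE);
            longCounter.getAndIncrement();
            System.out.println("map$" + longCounter.getAndIncrement()); // map$2147483648, still valid
        }
    }
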


[flink] branch master updated (39354f3 -> c36bce8)

2021-04-01 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 39354f3  [FLINK-16444][state] Enable to create latency tracking state
 add c36bce8  [FLINK-20320][sql-client] Support init sql file in sql client

No new revisions were added by this update.

Summary of changes:
 .../org/apache/flink/table/client/SqlClient.java   |  24 +-
 .../apache/flink/table/client/cli/CliClient.java   | 281 -
 .../apache/flink/table/client/cli/CliOptions.java  |   7 +
 .../flink/table/client/cli/CliOptionsParser.java   |  19 +-
 .../apache/flink/table/client/cli/CliStrings.java  |   2 +-
 .../flink/table/client/cli}/TerminalUtils.java |  22 +-
 .../apache/flink/table/client/SqlClientTest.java   |  55 ++--
 .../flink/table/client/cli/CliClientITCase.java|   5 +-
 .../flink/table/client/cli/CliClientTest.java  |  70 +++--
 .../flink/table/client/cli/CliResultViewTest.java  |   7 +-
 .../table/client/cli/CliTableauResultViewTest.java |   1 -
 .../src/test/resources/sql-client-help.out |  20 +-
 .../src/test/resources/sql/statement_set.q |  15 +-
 13 files changed, 345 insertions(+), 183 deletions(-)
 rename flink-table/flink-sql-client/src/{test/java/org/apache/flink/table/client/cli/utils => main/java/org/apache/flink/table/client/cli}/TerminalUtils.java (68%)


[flink] branch master updated (7dd4c33 -> 5dccd76)

2021-03-30 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 7dd4c33  [FLINK-21994][python] Support FLIP-27 based FileSource connector in PyFlink DataStream API
 add 5dccd76  [hotfix] Create parent interface for begin/end statement set operation. (#15444)

No new revisions were added by this update.

Summary of changes:
 .../table/operations/BeginStatementSetOperation.java  |  2 +-
 .../table/operations/EndStatementSetOperation.java|  2 +-
 ...ntSetOperation.java => StatementSetOperation.java} | 19 +++
 3 files changed, 13 insertions(+), 10 deletions(-)
 copy flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/{BeginStatementSetOperation.java => StatementSetOperation.java} (78%)


[flink] branch master updated: [FLINK-12828][sql-client] Support -f option with a sql script as input

2021-03-30 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f594d71  [FLINK-12828][sql-client] Support -f option with a sql script as input
f594d71 is described below

commit f594d71ac63accb21e9cf854c3525fe2ca913488
Author: Shengkai <1059623...@qq.com>
AuthorDate: Sat Mar 27 14:39:48 2021 +0800

[FLINK-12828][sql-client] Support -f option with a sql script as input

This closes #15366
---
 .../org/apache/flink/table/client/SqlClient.java   |  49 +++-
 .../apache/flink/table/client/cli/CliClient.java   | 248 ++---
 .../apache/flink/table/client/cli/CliOptions.java  |   9 +
 .../flink/table/client/cli/CliOptionsParser.java   |  29 ++-
 .../table/client/cli/CliStatementSplitter.java |  67 ++
 .../apache/flink/table/client/cli/CliStrings.java  |   4 +-
 .../flink/table/client/cli/SqlMultiLineParser.java |   3 +-
 .../apache/flink/table/client/SqlClientTest.java   |  35 +++
 .../flink/table/client/cli/CliClientITCase.java|   3 +
 .../flink/table/client/cli/CliClientTest.java  | 213 ++
 .../table/client/cli/CliStatementSplitterTest.java |  72 ++
 .../table/client/cli/TerminalStreamsResource.java  |   5 +
 .../src/test/resources/sql-client-help-command.out |  24 ++
 .../src/test/resources/sql-client-help.out |  26 ++-
 14 files changed, 582 insertions(+), 205 deletions(-)

diff --git a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
index 8e2bcbc..79b2a71 100644
--- a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
+++ b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
@@ -22,14 +22,19 @@ import org.apache.flink.table.client.cli.CliClient;
 import org.apache.flink.table.client.cli.CliOptions;
 import org.apache.flink.table.client.cli.CliOptionsParser;
 import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
 import org.apache.flink.table.client.gateway.context.DefaultContext;
 import org.apache.flink.table.client.gateway.local.LocalContextUtils;
 import org.apache.flink.table.client.gateway.local.LocalExecutor;
 
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang3.SystemUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.Arrays;
@@ -106,18 +111,25 @@ public class SqlClient {
 SystemUtils.IS_OS_WINDOWS ? "flink-sql-history" : ".flink-sql-history");
 }
 
+boolean hasSqlFile = options.getSqlFile() != null;
+boolean hasUpdateStatement = options.getUpdateStatement() != null;
+if (hasSqlFile && hasUpdateStatement) {
+throw new IllegalArgumentException(
+String.format(
+"Please use either option %s or %s. The option %s is deprecated and it's suggested to use %s instead.",
+CliOptionsParser.OPTION_FILE,
+CliOptionsParser.OPTION_UPDATE,
+CliOptionsParser.OPTION_UPDATE.getOpt(),
+CliOptionsParser.OPTION_FILE.getOpt()));
+}
+
+boolean isInteractiveMode = !hasSqlFile && !hasUpdateStatement;
+
 try (CliClient cli = new CliClient(sessionId, executor, historyFilePath)) {
-// interactive CLI mode
-if (options.getUpdateStatement() == null) {
+if (isInteractiveMode) {
 cli.open();
-}
-// execute single update statement
-else {
-final boolean success = cli.submitUpdate(options.getUpdateStatement());
-if (!success) {
-throw new SqlClientException(
-"Could not submit given SQL update statement to cluster.");
-}
+} else {
+cli.executeSqlFile(readExecutionContent());
 }
 }
 }
@@ -195,4 +207,21 @@ public class SqlClient {
 System.out.println("done.");
 }
 }
+
+private String readExecutionContent() {
+if (options.getSqlFile() != null) {
+return readFromURL(options.getSqlFile());
+} else {
+return options.getUpdateStatement().trim();
+}
+}
+
+private String readFromURL(URL file) {
+try {
+return IOUtils.toString(file, StandardCh

[flink] branch master updated (21157d5 -> b3b8abb)

2021-03-29 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 21157d5  [hotfix][docs] Add missing AggregatingStateDescriptor in State page (#15208)
 add b3b8abb  [FLINK-21590][build] Avoid hive logs in the SQL client console (#15384)

No new revisions were added by this update.

Summary of changes:
 flink-dist/src/main/flink-bin/conf/log4j-cli.properties | 6 ++
 1 file changed, 6 insertions(+)


[flink] branch master updated: [FLINK-21978][table-planner] Disable cast conversion between Numeric type and TIMESTAMP_LTZ type

2021-03-27 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new cb6b5c1  [FLINK-21978][table-planner] Disable cast conversion between Numeric type and TIMESTAMP_LTZ type
cb6b5c1 is described below

commit cb6b5c1282f2d7ac226fa5c04ff650107382bb6d
Author: Leonard Xu 
AuthorDate: Sat Mar 27 20:01:57 2021 +0800

[FLINK-21978][table-planner] Disable cast conversion between Numeric type and TIMESTAMP_LTZ type

This closes #15374
---
 .../planner/codegen/calls/ScalarOperatorGens.scala | 103 +++--
 .../planner/expressions/TemporalTypesTest.scala| 166 -
 .../apache/flink/table/data/DecimalDataUtils.java  |  22 ---
 .../apache/flink/table/data/DecimalDataTest.java   |   7 -
 4 files changed, 85 insertions(+), 213 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
index 8dea49d..fa188b7 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
@@ -1241,7 +1241,7 @@ object ScalarOperatorGens {
 s"$method($operandTerm.getMillisecond(), $zone)"
   }
 
-// Disable cast conversion between Numeric type and  Timestamp type
+// Disable cast conversion between Numeric type and Timestamp type
 case (TINYINT, TIMESTAMP_WITHOUT_TIME_ZONE) |
  (SMALLINT, TIMESTAMP_WITHOUT_TIME_ZONE) |
  (INTEGER, TIMESTAMP_WITHOUT_TIME_ZONE) |
@@ -1267,91 +1267,30 @@ object ScalarOperatorGens {
   }
 }
 
-// Tinyint -> TimestampLtz
-// Smallint -> TimestampLtz
-// Int -> TimestampLtz
-// Bigint -> TimestampLtz
+// Disable cast conversion between Numeric type and TimestampLtz type
 case (TINYINT, TIMESTAMP_WITH_LOCAL_TIME_ZONE) |
  (SMALLINT, TIMESTAMP_WITH_LOCAL_TIME_ZONE) |
  (INTEGER, TIMESTAMP_WITH_LOCAL_TIME_ZONE) |
- (BIGINT, TIMESTAMP_WITH_LOCAL_TIME_ZONE) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"$TIMESTAMP_DATA.fromEpochMillis(((long) $operandTerm) * 1000)"
-  }
-
-// Float -> TimestampLtz
-// Double -> TimestampLtz
-case (FLOAT, TIMESTAMP_WITH_LOCAL_TIME_ZONE) |
- (DOUBLE, TIMESTAMP_WITH_LOCAL_TIME_ZONE) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm =>
-  s"""
-  |$TIMESTAMP_DATA.fromEpochMillis((long) ($operandTerm * 1000),
-  |(int) (($operandTerm - (int) $operandTerm) * 1000_000_000 % 1000_000))
-  """.stripMargin
-  }
-
-// DECIMAL -> TimestampLtz
-case (DECIMAL, TIMESTAMP_WITH_LOCAL_TIME_ZONE) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm =>
-  s"$DECIMAL_UTIL.castToTimestamp($operandTerm)"
-  }
-
-// TimestampLtz -> Tinyint
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, TINYINT) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((byte) ($operandTerm.getMillisecond() / 1000))"
-  }
-
-// TimestampLtz -> Smallint
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, SMALLINT) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((short) ($operandTerm.getMillisecond() / 1000))"
-  }
-
-// TimestampLtz -> Int
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, INTEGER) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((int) ($operandTerm.getMillisecond() / 1000))"
-  }
-
-// TimestampLtz -> BigInt
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, BIGINT) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((long) ($operandTerm.getMillisecond() / 1000))"
-  }
-
-// TimestampLtz -> Float
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, FLOAT) =>
-  generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm =>
-  s"""
- |((float) ($operandTerm.getMillisecond() / 1000.0 +
- |$operandTerm.getNanoOfMillisecond() / 1000_000_000.0))
-  """.stripMargin
-  }
-
-// TimestampLtz -> Double
-case (TIMESTAMP_WITH_LOCAL_TIME_ZONE, DOUBLE) =>
-  generateUnaryOperatorIfNotNull(ctx, target
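
With these cast branches removed, the conversion has to be requested explicitly. A hedged sketch of the replacement path using TO_TIMESTAMP_LTZ (introduced by FLINK-21622, further below in this digest); the epoch value and class name are arbitrary:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class NumericToTimestampLtz {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
            // CAST(1616490883 AS TIMESTAMP_LTZ(3)) no longer plans; the explicit
            // function makes the epoch unit (0 = seconds, 3 = milliseconds)
            // unambiguous instead of being guessed by a cast rule.
            tEnv.executeSql("SELECT TO_TIMESTAMP_LTZ(1616490883, 0) AS ts").print();
        }
    }
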

[flink] branch master updated: [hotfix] Package format-common to flink-csv/flink-json (#15394)

2021-03-27 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c375f4c  [hotfix] Package format-common to flink-csv/flink-json (#15394)
c375f4c is described below

commit c375f4cfd394c10b54110eac446873055b716b89
Author: Leonard Xu 
AuthorDate: Sat Mar 27 16:59:23 2021 +0800

[hotfix] Package format-common to flink-csv/flink-json (#15394)
---
 flink-formats/flink-csv/pom.xml  | 37 ++---
 flink-formats/flink-json/pom.xml | 38 +++---
 2 files changed, 69 insertions(+), 6 deletions(-)

diff --git a/flink-formats/flink-csv/pom.xml b/flink-formats/flink-csv/pom.xml
index 3db3076..ce0565e 100644
--- a/flink-formats/flink-csv/pom.xml
+++ b/flink-formats/flink-csv/pom.xml
@@ -100,6 +100,31 @@ under the License.


 
+  [added maven-shade-plugin block: execution "shade-flink" bound to the "package" phase with the "shade" goal, shading in org.apache.flink:flink-format-common; the XML tags were stripped by the archive's HTML rendering]



@@ -113,15 +138,21 @@ under the License.



  [sql-jar profile changed from maven-jar-plugin to maven-shade-plugin: "shade" goal at the "package" phase, shading in org.apache.flink:flink-format-common and attaching the shaded artifact with classifier "sql-jar"; the XML tags were stripped by the archive's HTML rendering]
diff --git a/flink-formats/flink-json/pom.xml b/flink-formats/flink-json/pom.xml
index a5217b7..87d48e8 100644
--- a/flink-formats/flink-json/pom.xml
+++ b/flink-formats/flink-json/pom.xml
@@ -108,6 +108,31 @@ under the License.


 
+  [the same maven-shade-plugin "shade-flink" block as in flink-csv/pom.xml, added to flink-json/pom.xml; the XML tags were stripped by the archive's HTML rendering]
[flink] branch master updated (0736500 -> 1af31a5)

2021-03-27 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0736500  [FLINK-21463][sql-client] Unify SQL Client parser and TableEnvironment parser
 add 1af31a5  [FLINK-21984][table] Change precision argument from optional to required in TO_TIMESTAMP_LTZ(numeric, precision)

No new revisions were added by this update.

Summary of changes:
 docs/data/sql_functions.yml|   6 +-
 .../pyflink/table/tests/test_expression.py |   2 +-
 .../table/api/ImplicitExpressionConversions.scala  |   7 --
 .../functions/sql/FlinkSqlOperatorTable.java   |   4 +-
 .../planner/codegen/calls/BuiltInMethods.scala |  15 ---
 .../planner/codegen/calls/FunctionGenerator.scala  |  13 ---
 .../flink/table/planner/expressions/time.scala |   2 +-
 .../planner/expressions/TemporalTypesTest.scala| 110 +++--
 .../table/runtime/functions/SqlDateTimeUtils.java  |  12 ---
 9 files changed, 64 insertions(+), 107 deletions(-)


[flink] branch master updated: [FLINK-21947][csv] Support TIMESTAMP_LTZ type in CSV format

2021-03-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new a5aaa0e  [FLINK-21947][csv] Support TIMESTAMP_LTZ type in CSV format
a5aaa0e is described below

commit a5aaa0e605eb4966b5d73c987ea4d9d0bc852c6d
Author: Leonard Xu 
AuthorDate: Wed Mar 24 17:10:23 2021 +0800

[FLINK-21947][csv] Support TIMESTAMP_LTZ type in CSV format

This closes #15356
---
 flink-formats/flink-csv/pom.xml|  6 ++
 .../formats/csv/CsvRowDeserializationSchema.java   | 11 +++-
 .../flink/formats/csv/CsvRowSchemaConverter.java   |  3 +-
 .../formats/csv/CsvRowSerializationSchema.java | 23 
 .../flink/formats/csv/CsvToRowDataConverters.java  | 20 ---
 .../flink/formats/csv/RowDataToCsvConverters.java  | 53 +
 .../formats/csv/CsvRowDataSerDeSchemaTest.java | 66 +++---
 .../csv/CsvRowDeSerializationSchemaTest.java   | 49 ++--
 flink-formats/flink-format-common/pom.xml  | 44 +++
 .../apache/flink/formats/common}/TimeFormats.java  | 22 +---
 .../flink/formats/common}/TimestampFormat.java |  2 +-
 flink-formats/flink-json/pom.xml   |  6 ++
 .../flink/formats/json/JsonFormatFactory.java  |  1 +
 .../org/apache/flink/formats/json/JsonOptions.java |  1 +
 .../json/JsonRowDataDeserializationSchema.java |  1 +
 .../json/JsonRowDataSerializationSchema.java   |  1 +
 .../formats/json/JsonRowDeserializationSchema.java |  4 +-
 .../formats/json/JsonRowSerializationSchema.java   |  4 +-
 .../formats/json/JsonToRowDataConverters.java  | 11 ++--
 .../formats/json/RowDataToJsonConverters.java  | 11 ++--
 .../json/canal/CanalJsonDecodingFormat.java|  2 +-
 .../json/canal/CanalJsonDeserializationSchema.java |  2 +-
 .../formats/json/canal/CanalJsonFormatFactory.java |  2 +-
 .../json/canal/CanalJsonSerializationSchema.java   |  2 +-
 .../json/debezium/DebeziumJsonDecodingFormat.java  |  2 +-
 .../DebeziumJsonDeserializationSchema.java |  2 +-
 .../json/debezium/DebeziumJsonFormatFactory.java   |  2 +-
 .../debezium/DebeziumJsonSerializationSchema.java  |  2 +-
 .../maxwell/MaxwellJsonDeserializationSchema.java  |  2 +-
 .../json/maxwell/MaxwellJsonFormatFactory.java |  2 +-
 .../maxwell/MaxwellJsonSerializationSchema.java|  2 +-
 .../flink/formats/json/JsonFormatFactoryTest.java  |  1 +
 .../formats/json/JsonRowDataSerDeSchemaTest.java   |  1 +
 .../json/canal/CanalJsonFormatFactoryTest.java |  2 +-
 .../json/canal/CanalJsonSerDeSchemaTest.java   |  2 +-
 .../debezium/DebeziumJsonFormatFactoryTest.java|  2 +-
 .../json/debezium/DebeziumJsonSerDeSchemaTest.java |  2 +-
 .../json/maxwell/MaxwellJsonFormatFactoryTest.java |  2 +-
 .../json/maxwell/MaxwellJsonSerDerTest.java|  2 +-
 flink-formats/pom.xml  |  3 +-
 40 files changed, 276 insertions(+), 102 deletions(-)

diff --git a/flink-formats/flink-csv/pom.xml b/flink-formats/flink-csv/pom.xml
index f458f15..3db3076 100644
--- a/flink-formats/flink-csv/pom.xml
+++ b/flink-formats/flink-csv/pom.xml
@@ -36,6 +36,12 @@ under the License.
 

 
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-format-common</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+

 

diff --git a/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDeserializationSchema.java b/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDeserializationSchema.java
index 3c9d18f..535767a 100644
--- a/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDeserializationSchema.java
+++ b/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDeserializationSchema.java
@@ -42,9 +42,14 @@ import java.math.BigInteger;
 import java.sql.Date;
 import java.sql.Time;
 import java.sql.Timestamp;
+import java.time.LocalDateTime;
+import java.time.ZoneOffset;
 import java.util.Arrays;
 import java.util.Objects;
 
+import static org.apache.flink.formats.common.TimeFormats.SQL_TIMESTAMP_FORMAT;
+import static org.apache.flink.formats.common.TimeFormats.SQL_TIMESTAMP_WITH_LOCAL_TIMEZONE_FORMAT;
+
 /**
  * Deserialization schema from CSV to Flink types.
  *
@@ -316,7 +321,11 @@ public final class CsvRowDeserializationSchema implements DeserializationSchema<
 } else if (info.equals(Types.LOCAL_TIME)) {
 return (node) -> Time.valueOf(node.asText()).toLocalTime();
 } else if (info.equals(Types.LOCAL_DATE_TIME)) {
-return (node) -> Timestamp.valueOf(node.asText()).toLocalDateTime();
+return (node) -> LocalDateTime.parse(node.asText(), SQL_TIMESTAMP_FORMAT);
+} else if (info.equals(T
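
The core of the change is replacing the Timestamp.valueOf detour with a shared java.time formatter. A self-contained sketch in the spirit of TimeFormats.SQL_TIMESTAMP_FORMAT (the exact builder flags here are an assumption, not copied from Flink):

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;
    import java.time.format.DateTimeFormatterBuilder;
    import java.time.temporal.ChronoField;

    public class SqlTimestampParse {
        // "yyyy-MM-dd HH:mm:ss" with an optional fractional-second part.
        private static final DateTimeFormatter SQL_TIMESTAMP =
                new DateTimeFormatterBuilder()
                        .appendPattern("yyyy-MM-dd HH:mm:ss")
                        .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
                        .toFormatter();

        public static void main(String[] args) {
            // Parsing straight into java.time avoids the lossy round trip
            // through java.sql.Timestamp that the old converter took.
            System.out.println(LocalDateTime.parse("2021-03-24 17:10:23.123", SQL_TIMESTAMP));
            System.out.println(LocalDateTime.parse("2021-03-24 17:10:23", SQL_TIMESTAMP));
        }
    }
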

[flink] branch master updated (30e0e1e -> 2aab2ef)

2021-03-25 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 30e0e1e  [FLINK-21629][python] Support Python UDAF in Sliding Window
 add 2aab2ef  [FLINK-20563][hive] Support built-in functions for Hive versions prior to 1.2.0

No new revisions were added by this update.

Summary of changes:
 .../table/catalog/hive/client/HiveShimV100.java| 52 ++
 .../table/catalog/hive/client/HiveShimV120.java| 32 +
 .../flink/table/module/hive/HiveModuleTest.java| 45 ++-
 3 files changed, 79 insertions(+), 50 deletions(-)


[flink] branch master updated (01d7095 -> c2f74dc)

2021-03-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 01d7095  [FLINK-21396][table-planner-blink] Use ResolvedCatalogTable within the planner
 add 10cb675  [hotfix][table-planner] Fix integral types list in FunctionGenerator
 add a60f4b4  [hotfix][doc] Fix sql functions name typo
 add c2f74dc  [FLINK-21622][table-planner] Introduce function TO_TIMESTAMP_LTZ(numeric [, precision])

No new revisions were added by this update.

Summary of changes:
 docs/data/sql_functions.yml|   7 +-
 flink-python/pyflink/table/expressions.py  |  15 ++
 .../pyflink/table/tests/test_expression.py |   4 +-
 .../org/apache/flink/table/api/Expressions.java|  18 ++
 .../table/api/ImplicitExpressionConversions.scala  |  21 ++
 .../functions/BuiltInFunctionDefinitions.java  |   6 +
 .../expressions/converter/DirectConvertRule.java   |   3 +
 .../functions/sql/FlinkSqlOperatorTable.java   |  20 +-
 .../planner/codegen/calls/BuiltInMethods.scala |  30 +++
 .../planner/codegen/calls/FunctionGenerator.scala  |  34 ++-
 .../expressions/PlannerExpressionConverter.scala   |   4 +
 .../flink/table/planner/expressions/time.scala |  29 +++
 .../planner/expressions/TemporalTypesTest.scala| 229 ++-
 .../expressions/utils/ExpressionTestBase.scala | 251 ++---
 .../table/runtime/functions/SqlDateTimeUtils.java  |  97 +++-
 15 files changed, 679 insertions(+), 89 deletions(-)



[flink] branch master updated: [FLINK-21624][table-planner] Correct FLOOR/CEIL (TIMESTAMP/DATE TO WEEK) functions

2021-03-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 615d5d1  [FLINK-21624][table-planner] Correct FLOOR/CEIL (TIMESTAMP/DATE TO WEEK) functions
615d5d1 is described below

commit 615d5d150abdeab89d15f6830177eac238bdc2fc
Author: Leonard Xu 
AuthorDate: Fri Mar 12 12:19:05 2021 +0800

[FLINK-21624][table-planner] Correct FLOOR/CEIL (TIMESTAMP/DATE TO WEEK) functions

This closes #15125
---
 .../table/planner/codegen/calls/FloorCeilCallGen.scala  |  4 ++--
 .../flink/table/planner/expressions/TemporalTypesTest.scala | 13 +
 .../flink/table/runtime/functions/SqlDateTimeUtils.java |  9 +
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FloorCeilCallGen.scala b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FloorCeilCallGen.scala
index 35bc9e2..cd3d7b9 100644
--- a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FloorCeilCallGen.scala
+++ b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FloorCeilCallGen.scala
@@ -66,7 +66,7 @@ class FloorCeilCallGen(
 terms =>
   unit match {
 // for Timestamp with timezone info
-case YEAR | QUARTER | MONTH | DAY | HOUR
+case YEAR | QUARTER | MONTH | WEEK | DAY | HOUR
   if terms.length + 1 == method.getParameterCount &&
 method.getParameterTypes()(terms.length) == classOf[TimeZone] =>
   val timeZone = ctx.addReusableSessionTimeZone()
@@ -79,7 +79,7 @@ class FloorCeilCallGen(
  |""".stripMargin
 
 // for Unix Date / Unix Time
-case YEAR | MONTH =>
+case YEAR | MONTH | WEEK =>
   operand.resultType.getTypeRoot match {
 case LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE =>
   val longTerm = s"${terms.head}.getMillisecond()"
diff --git a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
index f09aee1..0222302 100644
--- a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
+++ b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
@@ -740,9 +740,16 @@ class TemporalTypesTest extends ExpressionTestBase {
 testSqlApi("CEIL(TIME '12:44:31' TO MINUTE)", "12:45:00")
 testSqlApi("CEIL(TIME '12:44:31' TO HOUR)", "13:00:00")
 
+testSqlApi("FLOOR( DATE '2021-02-27' TO WEEK)", "2021-02-21")
+testSqlApi("FLOOR( DATE '2021-03-01' TO WEEK)", "2021-02-28")
+testSqlApi("CEIL( DATE '2021-02-27' TO WEEK)", "2021-02-28")
+testSqlApi("CEIL( DATE '2021-03-01' TO WEEK)", "2021-03-07")
+
testSqlApi("FLOOR(TIMESTAMP '2018-03-20 06:44:31' TO HOUR)", "2018-03-20 06:00:00")
testSqlApi("FLOOR(TIMESTAMP '2018-03-20 06:44:31' TO DAY)", "2018-03-20 00:00:00")
testSqlApi("FLOOR(TIMESTAMP '2018-03-20 00:00:00' TO DAY)", "2018-03-20 00:00:00")
+testSqlApi("FLOOR(TIMESTAMP '2021-02-27 00:00:00' TO WEEK)", "2021-02-21 00:00:00")
+testSqlApi("FLOOR(TIMESTAMP '2021-03-01 00:00:00' TO WEEK)", "2021-02-28 00:00:00")
testSqlApi("FLOOR(TIMESTAMP '2018-04-01 06:44:31' TO MONTH)", "2018-04-01 00:00:00")
testSqlApi("FLOOR(TIMESTAMP '2018-01-01 06:44:31' TO MONTH)", "2018-01-01 00:00:00")
testSqlApi("CEIL(TIMESTAMP '2018-03-20 06:44:31' TO HOUR)", "2018-03-20 07:00:00")
@@ -750,6 +757,8 @@ class TemporalTypesTest extends ExpressionTestBase {
testSqlApi("CEIL(TIMESTAMP '2018-03-20 06:44:31' TO DAY)", "2018-03-21 00:00:00")
testSqlApi("CEIL(TIMESTAMP '2018-03-01 00:00:00' TO DAY)", "2018-03-01 00:00:00")
testSqlApi("CEIL(TIMESTAMP '2018-03-31 00:00:01' TO DAY)", "2018-04-01 00:00:00")
+testSqlApi("CEIL(TIMESTAMP '2021-02-27 00:00:00' TO WEEK)", "2021-02-28 00:00:00")
+testSqlApi("CEIL(TIMESTAMP '2021-03-01 00:00:00' TO WEEK)", "2021-03-07 00:00:00")
testSqlApi("CEIL(TIMESTAMP '2018-03-01 
[flink] branch master updated (2208f77 -> 2a64773)

2020-10-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2208f77  [FLINK-18836][python] Support Python UDTF return types which are not generator
 add 2a64773  [FLINK-19649][sql-parser] Fix unparse method of sql create table without columns (#13651)

No new revisions were added by this update.

Summary of changes:
 .../flink/sql/parser/ddl/SqlCreateTable.java   | 32 --
 .../flink/sql/parser/FlinkSqlParserImplTest.java   | 13 +
 2 files changed, 30 insertions(+), 15 deletions(-)



[flink] branch master updated (3ebf0d9 -> 2cca80f)

2020-07-22 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 3ebf0d9  [FLINK-17789][core] DelegatingConfiguration should remove prefix instead of add prefix in toMap
 add 2cca80f  [FLINK-18607][build] Give the maven module a human readable name

No new revisions were added by this update.

Summary of changes:
 flink-annotations/pom.xml | 2 +-
 flink-clients/pom.xml | 2 +-
 flink-connectors/flink-connector-base/pom.xml | 2 +-
 flink-connectors/flink-connector-cassandra/pom.xml| 2 +-
 flink-connectors/flink-connector-elasticsearch-base/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch5/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch6/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch7/pom.xml   | 2 +-
 flink-connectors/flink-connector-filesystem/pom.xml   | 2 +-
 flink-connectors/flink-connector-gcp-pubsub/pom.xml   | 2 +-
 flink-connectors/flink-connector-hbase/pom.xml| 2 +-
 flink-connectors/flink-connector-hive/pom.xml | 2 +-
 flink-connectors/flink-connector-jdbc/pom.xml | 2 +-
 flink-connectors/flink-connector-kafka-0.10/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka-0.11/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka-base/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka/pom.xml| 2 +-
 flink-connectors/flink-connector-kinesis/pom.xml  | 2 +-
 flink-connectors/flink-connector-nifi/pom.xml | 2 +-
 flink-connectors/flink-connector-rabbitmq/pom.xml | 2 +-
 flink-connectors/flink-connector-twitter/pom.xml  | 2 +-
 flink-connectors/flink-hadoop-compatibility/pom.xml   | 2 +-
 flink-connectors/flink-hcatalog/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-elasticsearch6/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-elasticsearch7/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-hbase/pom.xml| 2 +-
 flink-connectors/flink-sql-connector-hive-1.2.2/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-hive-2.2.0/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-hive-2.3.6/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-hive-3.1.2/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-kafka-0.10/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-kafka-0.11/pom.xml   | 2 +-
 flink-connectors/flink-sql-connector-kafka/pom.xml| 2 +-
 flink-connectors/pom.xml  | 2 +-
 flink-container/pom.xml   | 2 +-
 flink-contrib/flink-connector-wikiedits/pom.xml   | 2 +-
 flink-contrib/pom.xml | 2 +-
 flink-core/pom.xml| 2 +-
 flink-dist/pom.xml| 2 +-
 flink-docs/pom.xml| 2 +-
 flink-end-to-end-tests/flink-batch-sql-test/pom.xml   | 2 +-
 flink-end-to-end-tests/flink-bucketing-sink-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-cli-test/pom.xml | 2 +-
 flink-end-to-end-tests/flink-confluent-schema-registry/pom.xml| 2 ++
 .../flink-connector-gcp-pubsub-emulator-tests/pom.xml | 2 +-
 flink-end-to-end-tests/flink-dataset-allround-test/pom.xml| 2 +-
 .../flink-dataset-fine-grained-recovery-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-datastream-allround-test/pom.xml | 2 +-
 flink-end-to-end-tests/flink-distributed-cache-via-blob-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch5-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch6-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch7-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml| 1 +
 flink-end-to-end-tests/flink-end-to-end-tests-common/pom.xml  | 1 +
 flink-end-to-end-tests/flink-end-to-end-tests-hbase/pom.xml   | 1 +
 flink-end-to-end-tests/flink-heavy-deployment-stress-test/pom.xml | 2 +-
 flink-end-to-end-tests/flink-high-parallelism-iterations-test/pom.xml | 2 +-
 .../flink-local-recovery-and-allocation-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-metrics

[flink] branch release-1.11 updated (5dc9aad -> c727999)

2020-06-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5dc9aad  [FLINK-17829][docs][jdbc] Add documentation for the new JDBC 
connector
 add c727999  [FLINK-16375][table] Reintroduce registerTable[Source/Sink] 
methods for table env

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/api/TableEnvironment.java   | 50 ++
 .../table/api/internal/TableEnvironmentImpl.java   | 27 
 .../planner/plan/stream/sql/LegacySinkTest.scala   | 18 
 .../plan/stream/table/LegacyTableSourceTest.scala  | 24 +--
 .../flink/table/api/internal/TableEnvImpl.scala| 38 
 .../table/api/stream/table/TableSourceTest.scala   | 24 +--
 .../runtime/batch/sql/TableEnvironmentITCase.scala | 22 +-
 .../flink/table/utils/MockTableEnvironment.scala   | 12 +-
 8 files changed, 170 insertions(+), 45 deletions(-)



[flink] branch release-1.11 updated: [FLINK-16375][table] Reintroduce registerTable[Source/Sink] methods for table env

2020-06-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new c727999  [FLINK-16375][table] Reintroduce registerTable[Source/Sink] 
methods for table env
c727999 is described below

commit c72799933a0d5ac02dfb9437bafa1b5ef38619f9
Author: Kurt Young 
AuthorDate: Thu Jun 11 23:23:09 2020 +0800

[FLINK-16375][table] Reintroduce registerTable[Source/Sink] methods for 
table env

Since we don't have a good alternative solution for easily registering 
user-defined sources, sinks, and factories, we need these methods back, for now.

This closes #12603
---
 .../apache/flink/table/api/TableEnvironment.java   | 50 ++
 .../table/api/internal/TableEnvironmentImpl.java   | 27 
 .../planner/plan/stream/sql/LegacySinkTest.scala   | 18 
 .../plan/stream/table/LegacyTableSourceTest.scala  | 24 +--
 .../flink/table/api/internal/TableEnvImpl.scala| 38 
 .../table/api/stream/table/TableSourceTest.scala   | 24 +--
 .../runtime/batch/sql/TableEnvironmentITCase.scala | 22 +-
 .../flink/table/utils/MockTableEnvironment.scala   | 12 +-
 8 files changed, 170 insertions(+), 45 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
index d5f5ce0..614b6e7 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
@@ -21,6 +21,7 @@ package org.apache.flink.table.api;
 import org.apache.flink.annotation.Experimental;
 import org.apache.flink.annotation.PublicEvolving;
 import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.table.api.internal.TableEnvironmentImpl;
 import org.apache.flink.table.catalog.Catalog;
 import org.apache.flink.table.descriptors.ConnectTableDescriptor;
@@ -546,6 +547,55 @@ public interface TableEnvironment {
void createTemporaryView(String path, Table view);
 
/**
+* Registers an external {@link TableSource} in this {@link 
TableEnvironment}'s catalog.
+* Registered tables can be referenced in SQL queries.
+*
+* Temporary objects can shadow permanent ones. If a permanent 
object in a given path exists, it will
+* be inaccessible in the current session. To make the permanent object 
available again one can drop the
+* corresponding temporary object.
+*
+* @param name The name under which the {@link TableSource} is 
registered.
+* @param tableSource The {@link TableSource} to register.
+* @deprecated Use {@link #connect(ConnectorDescriptor)} instead.
+*/
+   @Deprecated
+   void registerTableSource(String name, TableSource tableSource);
+
+   /**
+* Registers an external {@link TableSink} with given field names and 
types in this
+* {@link TableEnvironment}'s catalog.
+* Registered sink tables can be referenced in SQL DML statements.
+*
+* Temporary objects can shadow permanent ones. If a permanent 
object in a given path exists, it will
+* be inaccessible in the current session. To make the permanent object 
available again one can drop the
+* corresponding temporary object.
+*
+* @param name The name under which the {@link TableSink} is registered.
+* @param fieldNames The field names to register with the {@link 
TableSink}.
+* @param fieldTypes The field types to register with the {@link 
TableSink}.
+* @param tableSink The {@link TableSink} to register.
+* @deprecated Use {@link #connect(ConnectorDescriptor)} instead.
+*/
+   @Deprecated
+   void registerTableSink(String name, String[] fieldNames, 
TypeInformation[] fieldTypes, TableSink tableSink);
+
+   /**
+* Registers an external {@link TableSink} with already configured 
field names and field types in
+* this {@link TableEnvironment}'s catalog.
+* Registered sink tables can be referenced in SQL DML statements.
+*
+* Temporary objects can shadow permanent ones. If a permanent 
object in a given path exists, it will
+* be inaccessible in the current session. To make the permanent object 
available again one can drop the
+* corresponding temporary object.
+*
+* @param name The name under which the {@link TableSink} is registered.
+* @param configuredSink The configured {@link TableSink} to register.
+* @deprecated Use {@link #connect
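
For readers skimming the diff, a minimal usage sketch of the reintroduced methods, assuming the signatures shown above; MyCustomSource and MyCustomSink stand in for hypothetical user-defined TableSource/TableSink implementations:

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    TableEnvironment tableEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

    // Register a hypothetical user-defined source under a name that SQL queries can reference.
    tableEnv.registerTableSource("Orders", new MyCustomSource());

    // Register a hypothetical sink with explicit field names and types; it can then
    // be used as the target of INSERT INTO statements.
    tableEnv.registerTableSink(
            "Results",
            new String[] {"name", "cnt"},
            new TypeInformation[] {Types.STRING, Types.LONG},
            new MyCustomSink());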

[flink] branch release-1.11 updated: [FLINK-18224][docs] Add document about sql client's tableau result mode

2020-06-10 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new a76a7cc  [FLINK-18224][docs] Add document about sql client's tableau 
result mode
a76a7cc is described below

commit a76a7cc2f8113f39ab18d7114b9680dabad3a49d
Author: Kurt Young 
AuthorDate: Thu Jun 11 13:27:39 2020 +0800

[FLINK-18224][docs] Add document about sql client's tableau result mode

This closes #12569

(cherry picked from commit 6fed1a136382c294a84ea29a278f33d7976e55d3)
---
 docs/dev/table/sqlClient.md| 45 --
 docs/dev/table/sqlClient.zh.md | 45 --
 .../flink-sql-client/conf/sql-client-defaults.yaml |  2 +-
 3 files changed, 83 insertions(+), 9 deletions(-)

diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 8331efe..4a229be 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -66,7 +66,7 @@ SELECT 'Hello World';
 
 This query requires no table source and produces a single row result. The CLI 
will retrieve results from the cluster and visualize them. You can close the 
result view by pressing the `Q` key.
 
-The CLI supports **two modes** for maintaining and visualizing results.
+The CLI supports **three modes** for maintaining and visualizing results.
 
 The **table mode** materializes results in memory and visualizes them in a 
regular, paginated table representation. It can be enabled by executing the 
following command in the CLI:
 
@@ -80,7 +80,18 @@ The **changelog mode** does not materialize results and 
visualizes the result st
 SET execution.result-mode=changelog;
 {% endhighlight %}
 
-You can use the following query to see both result modes in action:
+The **tableau mode** is closer to a traditional terminal experience: it displays the 
results directly on the screen in a tableau format.
+The displayed content is influenced by the query execution 
type (`execution.type`).
+
+{% highlight text %}
+SET execution.result-mode=tableau;
+{% endhighlight %}
+
+Note that when you use this mode with a streaming query, the results will be 
continuously printed on the console. If the input data of 
+this query is bounded, the job will terminate after Flink has processed all input 
data, and the printing will stop automatically.
+Otherwise, if you want to terminate a running query, just type `CTRL-C`; in 
that case, the job and the printing will be stopped.
+
+You can use the following query to see all the result modes in action:
 
 {% highlight sql %}
 SELECT name, COUNT(*) AS cnt FROM (VALUES ('Bob'), ('Alice'), ('Greg'), 
('Bob')) AS NameTable(name) GROUP BY name;
@@ -106,9 +117,35 @@ Alice, 1
 Greg, 1
 {% endhighlight %}
 
-Both result modes can be useful during the prototyping of SQL queries. In both 
modes, results are stored in the Java heap memory of the SQL Client. In order 
to keep the CLI interface responsive, the changelog mode only shows the latest 
1000 changes. The table mode allows for navigating through bigger results that 
are only limited by the available main memory and the configured [maximum 
number of rows](sqlClient.html#configuration) (`max-table-result-rows`).
+In *tableau mode*, if you ran the query in streaming mode, the displayed 
result would be:
+{% highlight text %}
++-+--+--+
+| +/- | name |  cnt |
++-+--+--+
+|   + |  Bob |1 |
+|   + |Alice |1 |
+|   + | Greg |1 |
+|   - |  Bob |1 |
+|   + |  Bob |2 |
++-+--+--+
+Received a total of 5 rows
+{% endhighlight %}
+
+And if you ran the query in batch mode, the displayed result would be:
+{% highlight text %}
++---+-+
+|  name | cnt |
++---+-+
+| Alice |   1 |
+|   Bob |   2 |
+|  Greg |   1 |
++---+-+
+3 rows in set
+{% endhighlight %}
+
+All these result modes can be useful during the prototyping of SQL queries. In 
all these modes, results are stored in the Java heap memory of the SQL Client. 
In order to keep the CLI interface responsive, the changelog mode only shows 
the latest 1000 changes. The table mode allows for navigating through bigger 
results that are only limited by the available main memory and the configured 
[maximum number of rows](sqlClient.html#configuration) 
(`max-table-result-rows`).
 
-Attention Queries that are executed in 
a batch environment, can only be retrieved using the `table` result mode.
+Attention Queries that are executed in 
a batch environment, can only be retrieved using the `table` or `tableau` 
result mode
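
For reference, the same default can be configured in the SQL client's conf/sql-client-defaults.yaml, which this commit also touches; a sketch assuming the 1.11 file layout, showing only the relevant keys:

    execution:
      # possible values: 'table', 'changelog', 'tableau'
      result-mode: tableau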

[flink] branch master updated (88cc44a -> 6fed1a1)

2020-06-10 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 88cc44a  [hotfix][avro] Link to Hadoop Integration in Avro format 
documentation
 add 6fed1a1  [FLINK-18224][docs] Add document about sql client's tableau 
result mode

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/sqlClient.md| 45 --
 docs/dev/table/sqlClient.zh.md | 45 --
 .../flink-sql-client/conf/sql-client-defaults.yaml |  2 +-
 3 files changed, 83 insertions(+), 9 deletions(-)



[flink] branch release-1.11 updated: [FLINK-17340][docs] Update docs which related to default planner changing.

2020-06-02 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new a450571  [FLINK-17340][docs] Update docs which related to default 
planner changing.
a450571 is described below

commit a450571c354832fe792d324beddbcc6e98cafb09
Author: Kurt Young 
AuthorDate: Tue Jun 2 09:50:23 2020 +0800

[FLINK-17340][docs] Update docs which related to default planner changing.

This closes #12429

(cherry picked from commit 1c78ab397de524836fd69c6218b1122aa387c251)
---
 docs/dev/table/catalogs.md  |  4 ++--
 docs/dev/table/catalogs.zh.md   |  4 ++--
 docs/dev/table/common.md| 52 ++---
 docs/dev/table/common.zh.md | 51 ++--
 docs/dev/table/hive/index.md|  4 ++--
 docs/dev/table/hive/index.zh.md |  4 ++--
 docs/dev/table/index.md |  6 ++---
 docs/dev/table/index.zh.md  |  6 ++---
 8 files changed, 60 insertions(+), 71 deletions(-)

diff --git a/docs/dev/table/catalogs.md b/docs/dev/table/catalogs.md
index 961d0a6..7236154 100644
--- a/docs/dev/table/catalogs.md
+++ b/docs/dev/table/catalogs.md
@@ -63,7 +63,7 @@ Set a `JdbcCatalog` with the following parameters:
 
 {% highlight java %}
 
-EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
 
 String name= "mypg";
@@ -82,7 +82,7 @@ tableEnv.useCatalog("mypg");
 
 {% highlight scala %}
 
-val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
+val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
 val tableEnv = TableEnvironment.create(settings)
 
 val name= "mypg"
diff --git a/docs/dev/table/catalogs.zh.md b/docs/dev/table/catalogs.zh.md
index 23e0969..f775ad6 100644
--- a/docs/dev/table/catalogs.zh.md
+++ b/docs/dev/table/catalogs.zh.md
@@ -59,7 +59,7 @@ Set a `JdbcCatalog` with the following parameters:
 
 {% highlight java %}
 
-EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
 
 String name= "mypg";
@@ -78,7 +78,7 @@ tableEnv.useCatalog("mypg");
 
 {% highlight scala %}
 
-val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
+val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
 val tableEnv = TableEnvironment.create(settings)
 
 val name= "mypg"
diff --git a/docs/dev/table/common.md b/docs/dev/table/common.md
index 4dae5b7..a6fe84f 100644
--- a/docs/dev/table/common.md
+++ b/docs/dev/table/common.md
@@ -32,12 +32,11 @@ Main Differences Between the Two Planners
 
 1. Blink treats batch jobs as a special case of streaming. As such, the 
conversion between Table and DataSet is also not supported, and batch jobs will 
not be translated into `DataSet` programs but translated into `DataStream` 
programs, the same as the streaming jobs.
 2. The Blink planner does not support `BatchTableSource`, use bounded 
`StreamTableSource` instead of it.
-3. The Blink planner only support the brand new `Catalog` and does not support 
`ExternalCatalog` which is deprecated.
-4. The implementations of `FilterableTableSource` for the old planner and the 
Blink planner are incompatible. The old planner will push down 
`PlannerExpression`s into `FilterableTableSource`, while the Blink planner will 
push down `Expression`s.
-5. String based key-value config options (Please see the documentation about 
[Configuration]({{ site.baseurl }}/dev/table/config.html) for details) are only 
used for the Blink planner.
-6. The implementation(`CalciteConfig`) of `PlannerConfig` in two planners is 
different.
-7. The Blink planner will optimize multiple-sinks into one DAG (supported only 
on `TableEnvironment`, not on `StreamTableEnvironment`). The old planner will 
always optimize each sink into a new DAG, where all DAGs are independent of 
each other.
-8. The old planner does not support catalog statistics now, while the Blink 
planner does.
+3. The implementations of `FilterableTableSource` for the old planner and the 
Blink planner are incompatible. The old planner will push down 
`PlannerExpression`s into `FilterableTableSource`, while the Blink planner will 
push down `Expression`s.
+4. String based key-value config options (Please see the documentation about 
[Configuration]({{ site.baseurl }}/dev/table/config.html) for details) 

[flink] branch master updated: [FLINK-17340][docs] Update docs which related to default planner changing.

2020-06-02 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 1c78ab3  [FLINK-17340][docs] Update docs which related to default 
planner changing.
1c78ab3 is described below

commit 1c78ab397de524836fd69c6218b1122aa387c251
Author: Kurt Young 
AuthorDate: Tue Jun 2 09:50:23 2020 +0800

[FLINK-17340][docs] Update docs which related to default planner changing.

This closes #12429
---
 docs/dev/table/catalogs.md  |  4 ++--
 docs/dev/table/catalogs.zh.md   |  4 ++--
 docs/dev/table/common.md| 52 ++---
 docs/dev/table/common.zh.md | 51 ++--
 docs/dev/table/hive/index.md|  4 ++--
 docs/dev/table/hive/index.zh.md |  4 ++--
 docs/dev/table/index.md |  6 ++---
 docs/dev/table/index.zh.md  |  6 ++---
 8 files changed, 60 insertions(+), 71 deletions(-)

diff --git a/docs/dev/table/catalogs.md b/docs/dev/table/catalogs.md
index 961d0a6..7236154 100644
--- a/docs/dev/table/catalogs.md
+++ b/docs/dev/table/catalogs.md
@@ -63,7 +63,7 @@ Set a `JdbcCatalog` with the following parameters:
 
 {% highlight java %}
 
-EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
 
 String name= "mypg";
@@ -82,7 +82,7 @@ tableEnv.useCatalog("mypg");
 
 {% highlight scala %}
 
-val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
+val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
 val tableEnv = TableEnvironment.create(settings)
 
 val name= "mypg"
diff --git a/docs/dev/table/catalogs.zh.md b/docs/dev/table/catalogs.zh.md
index 23e0969..f775ad6 100644
--- a/docs/dev/table/catalogs.zh.md
+++ b/docs/dev/table/catalogs.zh.md
@@ -59,7 +59,7 @@ Set a `JdbcCatalog` with the following parameters:
 
 {% highlight java %}
 
-EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
 TableEnvironment tableEnv = TableEnvironment.create(settings);
 
 String name= "mypg";
@@ -78,7 +78,7 @@ tableEnv.useCatalog("mypg");
 
 {% highlight scala %}
 
-val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
+val settings = EnvironmentSettings.newInstance().inStreamingMode().build()
 val tableEnv = TableEnvironment.create(settings)
 
 val name= "mypg"
diff --git a/docs/dev/table/common.md b/docs/dev/table/common.md
index 4dae5b7..a6fe84f 100644
--- a/docs/dev/table/common.md
+++ b/docs/dev/table/common.md
@@ -32,12 +32,11 @@ Main Differences Between the Two Planners
 
 1. Blink treats batch jobs as a special case of streaming. As such, the 
conversion between Table and DataSet is also not supported, and batch jobs will 
not be translated into `DataSet` programs but translated into `DataStream` 
programs, the same as the streaming jobs.
 2. The Blink planner does not support `BatchTableSource`, use bounded 
`StreamTableSource` instead of it.
-3. The Blink planner only support the brand new `Catalog` and does not support 
`ExternalCatalog` which is deprecated.
-4. The implementations of `FilterableTableSource` for the old planner and the 
Blink planner are incompatible. The old planner will push down 
`PlannerExpression`s into `FilterableTableSource`, while the Blink planner will 
push down `Expression`s.
-5. String based key-value config options (Please see the documentation about 
[Configuration]({{ site.baseurl }}/dev/table/config.html) for details) are only 
used for the Blink planner.
-6. The implementation(`CalciteConfig`) of `PlannerConfig` in two planners is 
different.
-7. The Blink planner will optimize multiple-sinks into one DAG (supported only 
on `TableEnvironment`, not on `StreamTableEnvironment`). The old planner will 
always optimize each sink into a new DAG, where all DAGs are independent of 
each other.
-8. The old planner does not support catalog statistics now, while the Blink 
planner does.
+3. The implementations of `FilterableTableSource` for the old planner and the 
Blink planner are incompatible. The old planner will push down 
`PlannerExpression`s into `FilterableTableSource`, while the Blink planner will 
push down `Expression`s.
+4. String based key-value config options (Please see the documentation about 
[Configuration]({{ site.baseurl }}/dev/table/config.html) for details) are only 
used for the Blink planner.
+5. The implementation(`CalciteConfig`) of `Planner
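
The gist of the updated snippets, as a minimal sketch: with the Blink planner now the default, useBlinkPlanner() can simply be omitted when building EnvironmentSettings.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    // Streaming mode on the default (Blink) planner.
    EnvironmentSettings streamSettings = EnvironmentSettings.newInstance().inStreamingMode().build();
    TableEnvironment streamTableEnv = TableEnvironment.create(streamSettings);

    // Batch mode runs on the same planner, since Blink treats batch as a special case of streaming.
    EnvironmentSettings batchSettings = EnvironmentSettings.newInstance().inBatchMode().build();
    TableEnvironment batchTableEnv = TableEnvironment.create(batchSettings);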

[flink] branch master updated (50232aa -> 20c28ac)

2020-05-18 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 50232aa  [hotfix] Remove raw class usages in Configuration.
 add 20c28ac  [FLINK-17715][sql-client] Align function DDL support with 
TableEnvironment in SQL client

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/client/cli/CliClient.java   |  37 -
 .../apache/flink/table/client/cli/CliStrings.java  |   8 ++
 .../flink/table/client/cli/SqlCommandParser.java   |  20 +--
 .../flink/table/client/gateway/Executor.java   |   6 +
 .../table/client/gateway/local/LocalExecutor.java  |  12 ++
 .../flink/table/client/cli/CliClientTest.java  |   6 +
 .../flink/table/client/cli/CliResultViewTest.java  |   6 +
 .../table/client/cli/SqlCommandParserTest.java |  30 
 .../flink/table/client/cli/TestingExecutor.java|   6 +
 .../client/gateway/local/LocalExecutorITCase.java  | 158 +
 10 files changed, 280 insertions(+), 9 deletions(-)
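
In practice this means the CLI now accepts the same function DDL that TableEnvironment already handled, for example `CREATE FUNCTION my_upper AS 'com.example.MyUpper'` (the class name here is hypothetical), along with the corresponding DROP FUNCTION and ALTER FUNCTION forms.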



[flink] branch master updated (bf58725 -> cfeaedd)

2020-05-18 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from bf58725  Revert [FLINK-15670]
 add cfeaedd  [FLINK-17791][table][streaming] Support collecting query 
results under all execution and network environments

No new revisions were added by this update.

Summary of changes:
 .../streaming/api/datastream/DataStreamUtils.java  | 85 ++
 .../collect/CollectSinkOperatorFactory.java| 17 +++--
 .../api/operators/collect/CollectStreamSink.java   | 84 +
 .../table/planner/sinks/BatchSelectTableSink.java  | 77 +---
 ...lectTableSink.java => SelectTableSinkBase.java} | 59 +++
 .../table/planner/sinks/StreamSelectTableSink.java | 58 +--
 .../runtime/batch/sql/join/ScalarQueryITCase.scala |  4 +-
 .../flink/table/sinks/StreamSelectTableSink.java   | 37 +-
 .../apache/flink/table/planner/StreamPlanner.scala |  1 +
 9 files changed, 173 insertions(+), 249 deletions(-)
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectStreamSink.java
 copy 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/{StreamSelectTableSink.java
 => SelectTableSinkBase.java} (53%)
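
A minimal sketch of collecting query results with the reworked utilities, assuming the DataStreamUtils entry point touched by this change; the pipeline itself is illustrative:

    import java.util.Iterator;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.DataStreamUtils;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CollectExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<Long> stream = env.fromElements(1L, 2L, 3L);
            // collect() submits the job and streams results back through the collect sink.
            Iterator<Long> results = DataStreamUtils.collect(stream);
            while (results.hasNext()) {
                System.out.println(results.next());
            }
        }
    }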



[flink] branch master updated: [FLINK-17735][streaming] Add an iterator to collect sink results through coordination rest api

2020-05-17 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 8650e40   [FLINK-17735][streaming] Add an iterator to collect sink 
results through coordination rest api
8650e40 is described below

commit 8650e409763a9b4eba92c77cc02ca22edc0230af
Author: TsReaper 
AuthorDate: Sun May 17 21:47:31 2020 +0800

 [FLINK-17735][streaming] Add an iterator to collect sink results through 
coordination rest api

This closes #12073
---
 .../operators/collect/CollectResultFetcher.java| 334 +
 .../operators/collect/CollectResultIterator.java   |  95 ++
 .../api/operators/collect/CollectSinkFunction.java |  33 +-
 .../collect/CollectSinkOperatorCoordinator.java|  22 +-
 .../collect/CollectSinkOperatorFactory.java|  21 +-
 .../collect/CollectResultIteratorTest.java | 132 
 .../operators/collect/CollectSinkFunctionTest.java | 206 -
 .../CollectSinkOperatorCoordinatorTest.java|   6 +-
 .../collect/utils/CollectRequestSender.java|  31 --
 .../operators/collect/utils/TestCollectClient.java | 141 -
 .../utils/TestCoordinationRequestHandler.java  | 213 +
 .../api/operators/collect/utils/TestJobClient.java | 133 
 12 files changed, 1095 insertions(+), 272 deletions(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
new file mode 100644
index 000..ec7286b
--- /dev/null
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
@@ -0,0 +1,334 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.collect;
+
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.accumulators.SerializedListAccumulator;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import 
org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import 
org.apache.flink.runtime.operators.coordination.CoordinationRequestGateway;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * A fetcher which fetches query results from the sink and provides exactly-once 
semantics.
+ */
+public class CollectResultFetcher {
+
+   private static final int DEFAULT_RETRY_MILLIS = 100;
+   private static final long DEFAULT_ACCUMULATOR_GET_MILLIS = 1;
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(CollectResultFetcher.class);
+
+   private final CompletableFuture operatorIdFuture;
+   private final String accumulatorName;
+   private final int retryMillis;
+
+   private ResultBuffer buffer;
+
+   @Nullable
+   private JobClient jobClient;
+   @Nullable
+   private CoordinationRequestGateway gateway;
+
+   private boolean jobTerminated;
+   private boolean closed;
+
+   public CollectResultFetcher(
+   CompletableFuture operatorIdFuture,
+   TypeSerializer serializer,
+   String accumulatorName) {
+   this(
+   operatorIdFuture,
+   serializer,
+   accumulatorName,
+   DEFAULT_RETRY_MILLIS);
+   }
+
+   

[flink] branch master updated (2bbf8ed -> 81c3738)

2020-05-17 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2bbf8ed  [FLINK-17448][sql-parser-hive] Implement alter DDL for Hive 
dialect
 add 81c3738  [FLINK-17451][table][hive] Implement view DDLs for Hive 
dialect

No new revisions were added by this update.

Summary of changes:
 .../flink/connectors/hive/HiveDialectTest.java |  32 +++
 .../src/main/codegen/data/Parser.tdd   |   8 ++
 .../src/main/codegen/includes/parserImpls.ftl  | 106 +
 ...eProps.java => SqlAlterHiveViewProperties.java} |  25 +++--
 ...ateHiveDatabase.java => SqlCreateHiveView.java} |  65 ++---
 .../parser/hive/FlinkHiveSqlParserImplTest.java|  45 +
 .../src/main/codegen/includes/parserImpls.ftl  |   2 +-
 .../SqlShowViews.java => ddl/SqlAlterView.java}|  32 ---
 ...qlAlterTableRename.java => SqlAlterViewAs.java} |  29 +++---
 ...Properties.java => SqlAlterViewProperties.java} |  19 +---
 ...terTableRename.java => SqlAlterViewRename.java} |  25 ++---
 .../apache/flink/sql/parser/ddl/SqlCreateView.java |  13 ++-
 .../table/api/internal/TableEnvironmentImpl.java   |  33 +++
 ...ameOperation.java => AlterViewAsOperation.java} |  21 ++--
 ...TableOperation.java => AlterViewOperation.java} |  16 ++--
 ...tion.java => AlterViewPropertiesOperation.java} |  28 +++---
 ...peration.java => AlterViewRenameOperation.java} |  21 ++--
 .../operations/SqlToOperationConverter.java|  68 -
 18 files changed, 436 insertions(+), 152 deletions(-)
 copy 
flink-table/flink-sql-parser-hive/src/main/java/org/apache/flink/sql/parser/hive/ddl/{SqlAlterHiveDatabaseProps.java
 => SqlAlterHiveViewProperties.java} (68%)
 copy 
flink-table/flink-sql-parser-hive/src/main/java/org/apache/flink/sql/parser/hive/ddl/{SqlCreateHiveDatabase.java
 => SqlCreateHiveView.java} (52%)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/{dql/SqlShowViews.java
 => ddl/SqlAlterView.java} (67%)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/{SqlAlterTableRename.java
 => SqlAlterViewAs.java} (63%)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/{SqlAlterTableProperties.java
 => SqlAlterViewProperties.java} (75%)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/{SqlAlterTableRename.java
 => SqlAlterViewRename.java} (65%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{AlterTableRenameOperation.java
 => AlterViewAsOperation.java} (62%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{AlterTableOperation.java
 => AlterViewOperation.java} (69%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{AlterTablePropertiesOperation.java
 => AlterViewPropertiesOperation.java} (65%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{AlterTableRenameOperation.java
 => AlterViewRenameOperation.java} (62%)
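
A sketch of the view DDLs this enables under the Hive dialect, given a TableEnvironment `tableEnv`; the table and view names are hypothetical, and a HiveCatalog is assumed to be the current catalog:

    import org.apache.flink.table.api.SqlDialect;

    tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
    tableEnv.executeSql("CREATE VIEW v AS SELECT x, y FROM t");
    tableEnv.executeSql("ALTER VIEW v RENAME TO v2");
    tableEnv.executeSql("ALTER VIEW v2 SET TBLPROPERTIES ('note'='example')");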



[flink] branch master updated (2b42209 -> 7c6251e)

2020-05-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2b42209  [FLINK-17719][task][checkpointing] Provide 
ChannelStateReader#hasStates for hints of reading channel states
 add 7c6251e  [FLINK-17734][streaming] Add an operator coordinator and sink 
function to collect sink data.

No new revisions were added by this update.

Summary of changes:
 .../runtime/SavepointTaskManagerRuntimeInfo.java   |   7 +-
 .../taskexecutor/TaskManagerConfiguration.java |  13 +-
 .../runtime/taskexecutor/TaskManagerRunner.java|   2 +-
 .../taskmanager/TaskManagerRuntimeInfo.java|  17 +
 .../TaskExecutorPartitionLifecycleTest.java|   6 +-
 .../taskexecutor/TaskExecutorSlotLifetimeTest.java |   6 +-
 .../runtime/taskexecutor/TaskExecutorTest.java |  11 +-
 .../TaskSubmissionTestEnvironment.java |   5 +-
 .../runtime/util/JvmExitOnFatalErrorTest.java  |   6 +-
 .../util/TestingTaskManagerRuntimeInfo.java|  15 +
 .../api/operators/StreamingRuntimeContext.java |  10 +
 .../collect/CollectCoordinationRequest.java|  66 +++
 .../collect/CollectCoordinationResponse.java   | 101 
 .../operators/collect/CollectSinkAddressEvent.java |  38 ++
 .../api/operators/collect/CollectSinkFunction.java | 455 +++
 .../api/operators/collect/CollectSinkOperator.java |  68 +++
 .../collect/CollectSinkOperatorCoordinator.java| 217 +++
 .../collect/CollectSinkOperatorFactory.java|  64 +++
 .../api/functions/sink/filesystem/TestUtils.java   |   4 +-
 .../operators/collect/CollectSinkFunctionTest.java | 632 +
 .../CollectSinkOperatorCoordinatorTest.java| 197 +++
 .../collect/utils/CollectRequestSender.java|  31 +
 .../utils/MockFunctionInitializationContext.java   |  48 ++
 .../collect/utils/MockFunctionSnapshotContext.java |  42 ++
 .../collect/utils/MockOperatorEventGateway.java|  44 ++
 .../collect/utils/MockOperatorStateStore.java  | 109 
 .../operators/collect/utils/TestCollectClient.java | 141 +
 .../util/MockStreamingRuntimeContext.java  |  15 +-
 28 files changed, 2357 insertions(+), 13 deletions(-)
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectCoordinationRequest.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectCoordinationResponse.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkAddressEvent.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunction.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkOperator.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkOperatorCoordinator.java
 create mode 100644 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkOperatorFactory.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionTest.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkOperatorCoordinatorTest.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/CollectRequestSender.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/MockFunctionInitializationContext.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/MockFunctionSnapshotContext.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/MockOperatorEventGateway.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/MockOperatorStateStore.java
 create mode 100644 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/utils/TestCollectClient.java



[flink] branch master updated (b843480 -> f25bf57)

2020-05-16 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b843480  [FLINK-17587][filesystem] Filesystem streaming sink support 
commit success file
 add f25bf57  [hotfix] fix the import of archetype resources

No new revisions were added by this update.

Summary of changes:
 .../main/resources/archetype-resources/src/main/scala/SpendReport.scala  | 1 +
 1 file changed, 1 insertion(+)



[flink] branch master updated (a7559e5 -> 677867a)

2020-05-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a7559e5  [hotfix] fix e2e test due to removing TableSource & TableSink 
registration
 add 677867a  [FLINK-17450][sql-parser][hive] Implement function & catalog 
DDLs for Hive dialect

No new revisions were added by this update.

Summary of changes:
 .../flink/connectors/hive/HiveDialectTest.java |  29 +
 .../src/main/codegen/data/Parser.tdd   |   6 +
 .../src/main/codegen/includes/parserImpls.ftl  | 134 -
 .../parser/hive/FlinkHiveSqlParserImplTest.java|  43 +++
 4 files changed, 211 insertions(+), 1 deletion(-)



[flink] branch master updated (d17a2e0 -> a7559e5)

2020-05-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d17a2e0  [hotfix] fix table walkthrough due to removing TableSource & 
TableSink registration
 add a7559e5  [hotfix] fix e2e test due to removing TableSource & TableSink 
registration

No new revisions were added by this update.

Summary of changes:
 .../main/resources/archetype-resources/src/main/java/SpendReport.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated (32b79d1 -> d17a2e0)

2020-05-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 32b79d1  [FLINK-17748][table] Remove registration of 
TableSource/TableSink in table env
 add d17a2e0  [hotfix] fix table walkthrough due to removing TableSource & 
TableSink registration

No new revisions were added by this update.

Summary of changes:
 .../resources/archetype-resources/src/main/java/SpendReport.java   | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)



[flink] branch master updated (da16f9e -> 32b79d1)

2020-05-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from da16f9e  [FLINK-16624][k8s] Support user-specified annotations for the 
rest Service
 add 32b79d1  [FLINK-17748][table] Remove registration of 
TableSource/TableSink in table env

No new revisions were added by this update.

Summary of changes:
 .../cassandra/CassandraConnectorITCase.java|   3 +-
 .../connector/hbase/source/HBaseTableSource.java   |   2 +-
 .../connector/hbase/HBaseConnectorITCase.java  |  15 +--
 .../flink/sql/tests/BatchSQLTestProgram.java   |   7 +-
 .../flink/sql/tests/StreamSQLTestProgram.java  |   5 +-
 .../apache/flink/table/tpcds/TpcdsTestProgram.java |   3 +-
 .../java/org/apache/flink/orc/OrcTableSource.java  |   2 +-
 .../org/apache/flink/orc/OrcTableSourceITCase.java |   5 +-
 .../flink/formats/parquet/ParquetTableSource.java  |   2 +-
 .../formats/parquet/ParquetTableSourceITCase.java  |   5 +-
 flink-python/pyflink/table/table_environment.py|   4 +-
 .../client/gateway/local/ExecutionContext.java |   5 +-
 .../table/client/gateway/local/LocalExecutor.java  |   3 +-
 .../client/gateway/local/ExecutionContextTest.java |   2 +-
 .../apache/flink/table/api/TableEnvironment.java   |  52 +--
 .../table/api/internal/TableEnvironmentImpl.java   |  33 +--
 .../api/internal/TableEnvironmentInternal.java |  31 ++-
 .../table/descriptors/ConnectTableDescriptor.java  |   2 +-
 .../flink/table/api/TableEnvironmentITCase.scala   |   7 +-
 .../apache/flink/table/api/batch/ExplainTest.scala |   6 +-
 .../flink/table/api/stream/ExplainTest.scala   |  16 ++--
 .../plan/batch/sql/DagOptimizationTest.scala   |  58 ++--
 .../planner/plan/batch/sql/LegacySinkTest.scala|   6 +-
 .../planner/plan/batch/sql/TableSourceTest.scala   |   6 +-
 ...hProjectIntoLegacyTableSourceScanRuleTest.scala |   4 +-
 .../plan/stream/sql/DagOptimizationTest.scala  | 101 ++---
 .../planner/plan/stream/sql/LegacySinkTest.scala   |  29 --
 .../stream/sql/MiniBatchIntervalInferTest.scala|  11 ++-
 .../planner/plan/stream/sql/TableSourceTest.scala  |  25 ++---
 .../plan/stream/table/TableSourceTest.scala|  26 +++---
 .../validation/LegacyTableSinkValidationTest.scala |   4 +-
 .../runtime/batch/sql/TableScanITCase.scala|   8 +-
 .../runtime/batch/sql/TableSourceITCase.scala  |   8 +-
 .../batch/sql/agg/AggregateITCaseBase.scala|   2 +-
 .../runtime/batch/sql/join/JoinITCase.scala|   6 +-
 .../batch/table/LegacyTableSinkITCase.scala|  10 +-
 .../runtime/stream/sql/AggregateITCase.scala   |   6 +-
 .../planner/runtime/stream/sql/CalcITCase.scala|   4 +-
 .../runtime/stream/sql/CorrelateITCase.scala   |  18 ++--
 .../runtime/stream/sql/Limit0RemoveITCase.scala|  18 ++--
 .../planner/runtime/stream/sql/RankITCase.scala|  40 
 .../runtime/stream/sql/TableScanITCase.scala   |  10 +-
 .../runtime/stream/sql/TableSourceITCase.scala |  18 ++--
 .../planner/runtime/stream/sql/UnnestITCase.scala  |   4 +-
 .../runtime/stream/sql/WindowAggregateITCase.scala |   6 +-
 .../runtime/stream/table/AggregateITCase.scala |   4 +-
 .../planner/runtime/stream/table/JoinITCase.scala  |  14 +--
 .../stream/table/LegacyTableSinkITCase.scala   |  28 +++---
 .../planner/runtime/utils/BatchTableEnvUtil.scala  |   5 +-
 .../flink/table/planner/utils/TableTestBase.scala  |  15 ++-
 .../flink/table/api/internal/TableEnvImpl.scala|  49 ++
 .../table/runtime/batch/JavaTableSourceITCase.java |   5 +-
 .../flink/table/api/TableEnvironmentITCase.scala   |  32 ---
 .../org/apache/flink/table/api/TableITCase.scala   |   8 +-
 .../apache/flink/table/api/TableSourceTest.scala   |  51 +++
 .../apache/flink/table/api/batch/ExplainTest.scala |  17 ++--
 .../sql/validation/InsertIntoValidationTest.scala  |  14 ++-
 .../validation/InsertIntoValidationTest.scala  |   7 +-
 .../flink/table/api/stream/ExplainTest.scala   |  17 ++--
 .../sql/validation/InsertIntoValidationTest.scala  |  11 ++-
 .../table/api/stream/table/TableSourceTest.scala   |  25 ++---
 .../validation/InsertIntoValidationTest.scala  |   8 +-
 .../table/validation/TableSinkValidationTest.scala |  10 +-
 .../validation/TableSourceValidationTest.scala |   6 +-
 .../api/validation/TableSinksValidationTest.scala  |   8 +-
 .../api/validation/TableSourceValidationTest.scala |  26 +++---
 .../batch/sql/PartitionableSinkITCase.scala|   7 +-
 .../runtime/batch/sql/TableEnvironmentITCase.scala |  47 ++
 .../runtime/batch/sql/TableSourceITCase.scala  |   8 +-
 .../batch/table/TableEnvironmentITCase.scala   |   4 +-
 .../runtime/batch/table/TableSinkITCase.scala  |   7 +-
 .../runtime/batch/table/TableSourceITCase.scala|  49 +-
 .../runtime/stream/TimeAttributesITCase.scala

[flink] branch master updated (9fe920f -> 74b8bb5)

2020-05-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 9fe920f  [FLINK-14807][rest] Introduce REST API for communication 
between clients and operator coordinators
 add 74b8bb5  [FLINK-17667][table-planner-blink][hive] Support INSERT 
statement for hive dialect

No new revisions were added by this update.

Summary of changes:
 .../flink/connectors/hive/HiveDialectTest.java |  53 +-
 .../src/main/codegen/data/Parser.tdd   |   2 +
 .../src/main/codegen/includes/parserImpls.ftl  | 193 +
 .../sql/parser/hive/dml/RichSqlHiveInsert.java | 101 +++
 .../parser/hive/FlinkHiveSqlParserImplTest.java|  17 ++
 .../flink/table/planner/calcite/CalciteParser.java |  17 +-
 6 files changed, 373 insertions(+), 10 deletions(-)
 create mode 100644 
flink-table/flink-sql-parser-hive/src/main/java/org/apache/flink/sql/parser/hive/dml/RichSqlHiveInsert.java



[flink] branch master updated: [FLINK-14807][rest] Introduce REST API for communication between clients and operator coordinators

2020-05-14 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 9fe920f  [FLINK-14807][rest] Introduce REST API for communication 
between clients and operator coordinators
9fe920f is described below

commit 9fe920fdc6f7eeaf2d99901099c842cfd0f1380a
Author: TsReaper 
AuthorDate: Fri May 15 07:46:31 2020 +0800

[FLINK-14807][rest] Introduce REST API for communication between clients 
and operator coordinators

This closes #12037
---
 .../deployment/ClusterClientJobClientAdapter.java  |  14 ++-
 .../deployment/application/EmbeddedJobClient.java  |  20 +++-
 .../apache/flink/client/program/ClusterClient.java |  13 +++
 .../flink/client/program/MiniClusterClient.java|  20 
 .../client/program/PerJobMiniClusterFactory.java   |  19 +++-
 .../client/program/rest/RestClusterClient.java |  37 
 .../flink/client/program/TestingClusterClient.java |  11 +++
 .../client/program/rest/RestClusterClientTest.java |  75 +++
 .../src/test/resources/rest_api_v1.snapshot|  33 +++
 .../flink/runtime/dispatcher/Dispatcher.java   |  17 
 .../apache/flink/runtime/jobmaster/JobMaster.java  |  15 +++
 .../flink/runtime/jobmaster/JobMasterGateway.java  |  19 
 .../flink/runtime/minicluster/MiniCluster.java |  14 +++
 .../coordination/CoordinationRequest.java  |  27 ++
 .../coordination/CoordinationRequestGateway.java   |  39 
 .../coordination/CoordinationRequestHandler.java   |  36 +++
 .../coordination/CoordinationResponse.java |  27 ++
 .../coordination/ClientCoordinationHandler.java|  84 +
 .../rest/messages/OperatorIDPathParameter.java |  49 ++
 .../coordination/ClientCoordinationHeaders.java|  80 
 .../ClientCoordinationMessageParameters.java   |  49 ++
 .../ClientCoordinationRequestBody.java |  56 +++
 .../ClientCoordinationResponseBody.java|  56 +++
 .../flink/runtime/scheduler/SchedulerBase.java |  36 ++-
 .../flink/runtime/scheduler/SchedulerNG.java   |  12 +++
 .../flink/runtime/webmonitor/RestfulGateway.java   |  24 +
 .../runtime/webmonitor/WebMonitorEndpoint.java |   9 ++
 .../jobmaster/utils/TestingJobMasterGateway.java   |  14 ++-
 .../utils/TestingJobMasterGatewayBuilder.java  |  11 ++-
 .../OperatorCoordinatorSchedulerTest.java  |  48 ++
 .../TestingCoordinationRequestHandler.java | 104 +
 .../webmonitor/TestingDispatcherGateway.java   |  14 ++-
 .../runtime/webmonitor/TestingRestfulGateway.java  |  34 ++-
 .../environment/RemoteStreamEnvironmentTest.java   |  11 +++
 34 files changed, 1112 insertions(+), 15 deletions(-)

diff --git 
a/flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
 
b/flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
index 3fb0e5c..94c31f2 100644
--- 
a/flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
+++ 
b/flink-clients/src/main/java/org/apache/flink/client/deployment/ClusterClientJobClientAdapter.java
@@ -26,6 +26,10 @@ import org.apache.flink.client.program.ClusterClientProvider;
 import org.apache.flink.client.program.ProgramInvocationException;
 import org.apache.flink.core.execution.JobClient;
 import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import org.apache.flink.runtime.operators.coordination.CoordinationRequest;
+import 
org.apache.flink.runtime.operators.coordination.CoordinationRequestGateway;
+import org.apache.flink.runtime.operators.coordination.CoordinationResponse;
 
 import org.apache.commons.io.IOUtils;
 
@@ -41,7 +45,7 @@ import static 
org.apache.flink.util.Preconditions.checkNotNull;
 /**
  * An implementation of the {@link JobClient} interface that uses a {@link 
ClusterClient} underneath.
  */
-public class ClusterClientJobClientAdapter implements JobClient {
+public class ClusterClientJobClientAdapter implements JobClient, 
CoordinationRequestGateway {
 
private final ClusterClientProvider clusterClientProvider;
 
@@ -115,6 +119,13 @@ public class ClusterClientJobClientAdapter 
implements JobClient {
})));
}
 
+   @Override
+   public CompletableFuture 
sendCoordinationRequest(OperatorID operatorId, CoordinationRequest request) {
+   return bridgeClientRequest(
+   clusterClientProvider,
+   clusterClient -> 
clusterClient.sendCoordinationRequest(jobID, operatorId, request));
+   }
+
private static  CompletableFuture bridgeClientRequ
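
A minimal sketch of the client-side flow this introduces, assuming the gateway interface shown in the diff; the concrete CoordinationRequest/CoordinationResponse types are whatever protocol the target operator coordinator defines:

    import org.apache.flink.core.execution.JobClient;
    import org.apache.flink.runtime.jobgraph.OperatorID;
    import org.apache.flink.runtime.operators.coordination.CoordinationRequest;
    import org.apache.flink.runtime.operators.coordination.CoordinationRequestGateway;
    import org.apache.flink.runtime.operators.coordination.CoordinationResponse;

    static CoordinationResponse ask(
            JobClient client, OperatorID operatorId, CoordinationRequest request) throws Exception {
        // JobClients backed by a ClusterClient implement CoordinationRequestGateway (see diff above).
        CoordinationRequestGateway gateway = (CoordinationRequestGateway) client;
        // The request travels over the new REST endpoint to the operator coordinator.
        return gateway.sendCoordinationRequest(operatorId, request).get();
    }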

[flink] branch master updated: [FLINK-17112][table] Support DESCRIBE statement in Flink SQL

2020-05-12 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c5c6b76  [FLINK-17112][table] Support DESCRIBE statement in Flink SQL
c5c6b76 is described below

commit c5c6b7612d0b5492ec4db8233709bf0a1093a1fe
Author: Zhenghua Gao 
AuthorDate: Tue May 12 20:41:16 2020 +0800

[FLINK-17112][table] Support DESCRIBE statement in Flink SQL

This closes #11892
---
 .../table/api/internal/TableEnvironmentImpl.java   |  69 +++-
 .../flink/table/api/internal/TableResultImpl.java  |  22 ++--
 .../table/operations/DescribeTableOperation.java   |  59 +++
 .../org/apache/flink/table/utils/PrintUtils.java   |  15 ++-
 .../operations/SqlToOperationConverter.java|  12 +++
 .../table/planner/calcite/FlinkPlannerImpl.scala   |   7 +-
 .../flink/table/api/TableEnvironmentTest.scala | 116 -
 .../table/sqlexec/SqlToOperationConverter.java |  12 +++
 .../flink/table/api/internal/TableEnvImpl.scala|  60 ++-
 .../flink/table/calcite/FlinkPlannerImpl.scala |   7 +-
 .../api/batch/BatchTableEnvironmentTest.scala  |  28 +
 11 files changed, 382 insertions(+), 25 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
index 23752c3..de7bd96 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
@@ -38,6 +38,7 @@ import org.apache.flink.table.api.TableException;
 import org.apache.flink.table.api.TableResult;
 import org.apache.flink.table.api.TableSchema;
 import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.api.WatermarkSpec;
 import org.apache.flink.table.catalog.Catalog;
 import org.apache.flink.table.catalog.CatalogBaseTable;
 import org.apache.flink.table.catalog.CatalogFunction;
@@ -76,6 +77,7 @@ import org.apache.flink.table.module.Module;
 import org.apache.flink.table.module.ModuleManager;
 import org.apache.flink.table.operations.CatalogQueryOperation;
 import org.apache.flink.table.operations.CatalogSinkModifyOperation;
+import org.apache.flink.table.operations.DescribeTableOperation;
 import org.apache.flink.table.operations.ExplainOperation;
 import org.apache.flink.table.operations.ModifyOperation;
 import org.apache.flink.table.operations.Operation;
@@ -112,13 +114,17 @@ import org.apache.flink.table.sinks.TableSink;
 import org.apache.flink.table.sources.TableSource;
 import org.apache.flink.table.sources.TableSourceValidation;
 import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.utils.PrintUtils;
 import org.apache.flink.table.utils.TableSchemaUtils;
 import org.apache.flink.types.Row;
 
+import org.apache.commons.lang3.StringUtils;
+
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Optional;
@@ -156,7 +162,7 @@ public class TableEnvironmentImpl implements 
TableEnvironmentInternal {
"CREATE TABLE, DROP TABLE, ALTER TABLE, CREATE 
DATABASE, DROP DATABASE, ALTER DATABASE, " +
"CREATE FUNCTION, DROP FUNCTION, ALTER FUNCTION, CREATE 
CATALOG, USE CATALOG, USE [CATALOG.]DATABASE, " +
"SHOW CATALOGS, SHOW DATABASES, SHOW TABLES, SHOW 
FUNCTIONS, CREATE VIEW, DROP VIEW, SHOW VIEWS, " +
-   "INSERT.";
+   "INSERT, DESCRIBE.";
 
/**
 * Provides necessary methods for {@link ConnectTableDescriptor}.
@@ -712,7 +718,8 @@ public class TableEnvironmentImpl implements 
TableEnvironmentInternal {

.resultKind(ResultKind.SUCCESS_WITH_CONTENT)
.tableSchema(tableSchema)
.data(tableSink.getResultIterator())
-   
.setPrintStyle(TableResultImpl.PrintStyle.tableau(PrintUtils.MAX_COLUMN_WIDTH))
+   
.setPrintStyle(TableResultImpl.PrintStyle.tableau(
+   
PrintUtils.MAX_COLUMN_WIDTH, PrintUtils.NULL_COLUMN))
.build();
} catch (Exception e) {
throw new TableException("Failed to execute sql", e);
@@ -966,6 +973,17 @@ public cla
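
A minimal usage sketch of the new statement (the table definition and
connector options below are illustrative, not taken from the commit):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public final class DescribeSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            tEnv.executeSql(
                    "CREATE TABLE Orders (user_id BIGINT, amount DOUBLE, ts TIMESTAMP(3)) "
                            + "WITH ('connector' = 'datagen')");
            // DESCRIBE is now a regular statement; the schema prints as a table.
            tEnv.executeSql("DESCRIBE Orders").print();
        }
    }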

[flink] branch master updated (7cfcd33 -> 2160c32)

2020-05-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 7cfcd33  [FLINK-17608][web] Add TM log and stdout page/tab back
 add 2160c32  [FLINK-17252][table] Add Table#execute api and support SELECT 
statement in TableEnvironment#executeSql

No new revisions were added by this update.

Summary of changes:
 .../flink/connectors/hive/HiveTableSourceTest.java |  30 ++--
 .../connectors/hive/TableEnvHiveConnectorTest.java |  47 +++---
 .../table/catalog/hive/HiveCatalogITCase.java  |   9 +-
 .../catalog/hive/HiveCatalogUseBlinkITCase.java|   9 +-
 .../flink/table/module/hive/HiveModuleTest.java|  23 ++-
 .../io/jdbc/catalog/PostgresCatalogITCase.java |  27 ++--
 flink-python/pyflink/table/table.py|  15 ++
 flink-python/pyflink/table/table_environment.py|   1 +
 flink-python/pyflink/table/tests/test_table_api.py |  52 +++
 .../java/org/apache/flink/table/api/Table.java |  15 +-
 .../apache/flink/table/api/TableEnvironment.java   |   2 +-
 .../flink/table/api/internal/SelectTableSink.java  |  37 +++--
 .../table/api/internal/TableEnvironmentImpl.java   |  32 +++-
 .../api/internal/TableEnvironmentInternal.java |   9 ++
 .../apache/flink/table/api/internal/TableImpl.java |   5 +
 .../flink/table/api/internal/TableResultImpl.java  |  65 +---
 .../org/apache/flink/table/delegation/Planner.java |  12 +-
 .../org/apache/flink/table/utils/PlannerMock.java  |   7 +
 .../org/apache/flink/table/utils/PrintUtils.java   |  34 -
 .../org/apache/flink/table/api/TableUtils.java | 165 -
 .../table/planner/sinks/BatchSelectTableSink.java  | 112 ++
 .../sinks/SelectTableSinkSchemaConverter.java  |  61 
 .../table/planner/sinks/StreamSelectTableSink.java |  94 
 .../table/planner/delegation/BatchPlanner.scala|   8 +-
 .../table/planner/delegation/StreamPlanner.scala   |   8 +-
 .../flink/table/api/TableUtilsBatchITCase.java |  66 -
 .../flink/table/api/TableUtilsStreamingITCase.java |  75 --
 .../flink/table/api/TableEnvironmentITCase.scala   |  81 +-
 .../flink/table/api/TableEnvironmentTest.scala |  10 --
 .../org/apache/flink/table/api/TableITCase.scala   | 128 
 .../runtime/batch/sql/join/ScalarQueryITCase.scala |   4 +-
 .../runtime/stream/FsStreamingSinkITCaseBase.scala |   6 +-
 .../planner/runtime/utils/BatchTestBase.scala  |   4 +-
 .../flink/table/sinks/BatchSelectTableSink.java| 105 +
 .../sinks/SelectTableSinkSchemaConverter.java  |  56 +++
 .../flink/table/sinks/StreamSelectTableSink.java   |  86 +++
 .../flink/table/api/internal/TableEnvImpl.scala|  32 +++-
 .../apache/flink/table/planner/StreamPlanner.scala |   5 +
 .../flink/table/api/TableEnvironmentITCase.scala   |  77 +-
 .../org/apache/flink/table/api/TableITCase.scala   | 120 +++
 .../api/batch/BatchTableEnvironmentTest.scala  |  11 --
 .../runtime/batch/sql/TableEnvironmentITCase.scala |  33 -
 .../table/runtime/batch/table/TableITCase.scala|  65 
 .../streaming/util/TestStreamEnvironment.java  |   4 +
 44 files changed, 1392 insertions(+), 455 deletions(-)
 create mode 100644 flink-python/pyflink/table/tests/test_table_api.py
 copy 
flink-core/src/main/java/org/apache/flink/api/common/operators/UnaryOperatorInformation.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/SelectTableSink.java
 (53%)
 delete mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/api/TableUtils.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/BatchSelectTableSink.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/SelectTableSinkSchemaConverter.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/StreamSelectTableSink.java
 delete mode 100644 
flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsBatchITCase.java
 delete mode 100644 
flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableITCase.scala
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/BatchSelectTableSink.java
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/SelectTableSinkSchemaConverter.java
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sinks/StreamSelectTableSink.java
 create mode 100644 
flink-table/flink-table-planner/src/test/scala/org/apache/fl
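
A rough sketch of the two new entry points, assuming a TableEnvironment tEnv
with a registered table Orders (the exact result-retrieval API at this
revision may differ slightly):

    import java.util.Iterator;

    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.api.TableResult;
    import org.apache.flink.types.Row;

    public final class SelectSketch {
        static void run(TableEnvironment tEnv) {
            // Execute a SELECT statement and iterate over the result rows.
            TableResult result = tEnv.executeSql("SELECT user_id, amount FROM Orders");
            Iterator<Row> rows = result.collect();
            while (rows.hasNext()) {
                System.out.println(rows.next());
            }
            // Or execute a Table object directly and print in tableau style.
            tEnv.from("Orders").execute().print();
        }
    }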

[flink] branch master updated (25cbce1 -> 51c7d61)

2020-05-10 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 25cbce1  [FLINK-17452][hive] Support creating Hive tables with 
constraints
 add 51c7d61  [FLINK-17601][table-planner-blink] Correct the table scan 
node name in the explain result of TableEnvironmentITCase#testStatementSet

No new revisions were added by this update.

Summary of changes:
 .../src/test/resources/explain/testStatementSet.out   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated (5bf121a -> 55d04d5)

2020-05-10 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5bf121a  [FLINK-17564][checkpointing] Fix buffer disorder issue of 
RemoteInputChannel for unaligned checkpoint
 add 55d04d5  [FLINK-16367][table] Introduce 
TableEnvironment#createStatementSet api

No new revisions were added by this update.

Summary of changes:
 flink-python/pyflink/table/__init__.py |   6 +-
 flink-python/pyflink/table/statement_set.py|  93 +++
 flink-python/pyflink/table/table_environment.py|  13 +++
 .../table/tests/test_table_environment_api.py  |  40 +++
 .../org/apache/flink/table/api/StatementSet.java   |  65 +++
 .../apache/flink/table/api/TableEnvironment.java   |   7 ++
 .../flink/table/api/internal/StatementSetImpl.java | 102 +
 .../table/api/internal/TableEnvironmentImpl.java   |  69 +++-
 .../test/resources/explain/testStatementSet.out|  63 +++
 .../flink/table/api/TableEnvironmentITCase.scala   | 125 +++--
 .../flink/table/api/internal/TableEnvImpl.scala|  66 +++
 .../flink/table/api/TableEnvironmentITCase.scala   | 104 +++--
 .../runtime/batch/sql/TableEnvironmentITCase.scala |  82 --
 .../flink/table/utils/MockTableEnvironment.scala   |   4 +-
 ...treamAndSqlUpdate.out => testStatementSet0.out} |  30 +
 .../src/test/scala/resources/testStatementSet1.out |  81 +
 16 files changed, 864 insertions(+), 86 deletions(-)
 create mode 100644 flink-python/pyflink/table/statement_set.py
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/StatementSet.java
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/StatementSetImpl.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/test/resources/explain/testStatementSet.out
 copy 
flink-table/flink-table-planner/src/test/scala/resources/{testFromToDataStreamAndSqlUpdate.out
 => testStatementSet0.out} (52%)
 create mode 100644 
flink-table/flink-table-planner/src/test/scala/resources/testStatementSet1.out
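
A short usage sketch of the new API, assuming sink1, sink2 and Orders are
registered tables (the names are illustrative):

    import org.apache.flink.table.api.StatementSet;
    import org.apache.flink.table.api.TableEnvironment;

    public final class StatementSetSketch {
        static void run(TableEnvironment tEnv) {
            StatementSet stmtSet = tEnv.createStatementSet();
            // Buffer several INSERTs; nothing is submitted yet.
            stmtSet.addInsertSql("INSERT INTO sink1 SELECT * FROM Orders WHERE amount > 100");
            stmtSet.addInsertSql("INSERT INTO sink2 SELECT * FROM Orders WHERE amount <= 100");
            // explain() covers all buffered statements; execute() submits them
            // together as a single Flink job.
            System.out.println(stmtSet.explain());
            stmtSet.execute();
        }
    }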



[flink] 01/05: [FLINK-17267] [table-planner] legacy stream planner supports explain insert operation

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 59ca355f05b668f1cb41cae02ce182ccc7c4d707
Author: godfreyhe 
AuthorDate: Fri Apr 24 15:22:09 2020 +0800

[FLINK-17267] [table-planner] legacy stream planner supports explain insert 
operation
---
 .../table/calcite/RelTimeIndicatorConverter.scala  |  30 +++
 .../flink/table/plan/nodes/LogicalSink.scala   |  55 +
 .../org/apache/flink/table/plan/nodes/Sink.scala   |  57 +
 .../plan/nodes/datastream/DataStreamSink.scala | 204 
 .../plan/nodes/logical/FlinkLogicalSink.scala  |  73 ++
 .../flink/table/plan/rules/FlinkRuleSets.scala |   6 +-
 .../plan/rules/datastream/DataStreamSinkRule.scala |  53 +
 .../apache/flink/table/planner/StreamPlanner.scala | 263 ++---
 .../flink/table/sinks/DataStreamTableSink.scala|  58 +
 .../flink/table/api/TableEnvironmentITCase.scala   |   4 +-
 .../flink/table/api/stream/ExplainTest.scala   |  75 --
 .../apache/flink/table/utils/TableTestBase.scala   |  10 +
 .../resources/testFromToDataStreamAndSqlUpdate.out |  23 +-
 .../src/test/scala/resources/testInsert.out|  29 +++
 .../test/scala/resources/testMultipleInserts.out   |  55 +
 .../resources/testSqlUpdateAndToDataStream.out |  20 +-
 16 files changed, 785 insertions(+), 230 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/calcite/RelTimeIndicatorConverter.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/calcite/RelTimeIndicatorConverter.scala
index 7f85245..82b3f2b 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/calcite/RelTimeIndicatorConverter.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/calcite/RelTimeIndicatorConverter.scala
@@ -32,6 +32,7 @@ import 
org.apache.flink.table.calcite.FlinkTypeFactory.{isRowtimeIndicatorType,
 import org.apache.flink.table.catalog.BasicOperatorTable
 import org.apache.flink.table.functions.sql.ProctimeSqlFunction
 import org.apache.flink.table.plan.logical.rel._
+import org.apache.flink.table.plan.nodes.LogicalSink
 import org.apache.flink.table.plan.schema.TimeIndicatorRelDataType
 
 import java.util.{Collections => JCollections}
@@ -173,6 +174,30 @@ class RelTimeIndicatorConverter(rexBuilder: RexBuilder) 
extends RelShuttle {
 case temporalTableJoin: LogicalTemporalTableJoin =>
   visit(temporalTableJoin)
 
+case sink: LogicalSink =>
+  var newInput = sink.getInput.accept(this)
+  var needsConversion = false
+
+  val projects = newInput.getRowType.getFieldList.map { field =>
+if (isProctimeIndicatorType(field.getType)) {
+  needsConversion = true
+  rexBuilder.makeCall(ProctimeSqlFunction, new 
RexInputRef(field.getIndex, field.getType))
+} else {
+  new RexInputRef(field.getIndex, field.getType)
+}
+  }
+
+  // add final conversion if necessary
+  if (needsConversion) {
+newInput = LogicalProject.create(newInput, projects, 
newInput.getRowType.getFieldNames)
+  }
+  new LogicalSink(
+sink.getCluster,
+sink.getTraitSet,
+newInput,
+sink.sink,
+sink.sinkName)
+
 case _ =>
   throw new TableException(s"Unsupported logical operator: 
${other.getClass.getSimpleName}")
   }
@@ -392,6 +417,11 @@ object RelTimeIndicatorConverter {
 val converter = new RelTimeIndicatorConverter(rexBuilder)
 val convertedRoot = rootRel.accept(converter)
 
+// the LogicalSink is converted in RelTimeIndicatorConverter before
+if (rootRel.isInstanceOf[LogicalSink]) {
+  return convertedRoot
+}
+
 var needsConversion = false
 
 // materialize remaining proctime indicators
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/LogicalSink.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/LogicalSink.scala
new file mode 100644
index 000..478bb85
--- /dev/null
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/LogicalSink.scala
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS"

[flink] 03/05: [FLINK-17267] [table] TableEnvironment#explainSql supports EXPLAIN statement

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6013a62523bd2fc73b6aa7b3109f6b7c138ac51c
Author: godfreyhe 
AuthorDate: Wed Apr 22 17:08:00 2020 +0800

[FLINK-17267] [table] TableEnvironment#explainSql supports EXPLAIN statement
---
 .../table/api/internal/TableEnvironmentImpl.java   |   9 ++
 .../flink/table/operations/ExplainOperation.java   |  46 
 .../operations/SqlToOperationConverter.java|  19 
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  11 +-
 .../table/planner/delegation/BatchPlanner.scala|  22 +++-
 .../table/planner/delegation/StreamPlanner.scala   |  22 +++-
 .../explain/testExecuteSqlWithExplainInsert.out|  31 +
 .../explain/testExecuteSqlWithExplainSelect.out|  21 
 .../flink/table/api/TableEnvironmentTest.scala | 115 ++-
 .../table/sqlexec/SqlToOperationConverter.java |  19 
 .../table/api/internal/BatchTableEnvImpl.scala |  71 
 .../flink/table/api/internal/TableEnvImpl.scala|  13 ++-
 .../flink/table/calcite/FlinkPlannerImpl.scala |  11 +-
 .../apache/flink/table/planner/StreamPlanner.scala |  17 ++-
 .../flink/table/api/TableEnvironmentITCase.scala   |  13 +--
 .../api/batch/BatchTableEnvironmentTest.scala  | 120 +++-
 .../api/stream/StreamTableEnvironmentTest.scala| 126 -
 .../resources/testExecuteSqlWithExplainInsert0.out |  31 +
 .../resources/testExecuteSqlWithExplainInsert1.out |  36 ++
 .../resources/testExecuteSqlWithExplainSelect0.out |  21 
 .../resources/testExecuteSqlWithExplainSelect1.out |  27 +
 21 files changed, 744 insertions(+), 57 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
index 342ee55..1ca045b 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
@@ -72,6 +72,7 @@ import org.apache.flink.table.module.Module;
 import org.apache.flink.table.module.ModuleManager;
 import org.apache.flink.table.operations.CatalogQueryOperation;
 import org.apache.flink.table.operations.CatalogSinkModifyOperation;
+import org.apache.flink.table.operations.ExplainOperation;
 import org.apache.flink.table.operations.ModifyOperation;
 import org.apache.flink.table.operations.Operation;
 import org.apache.flink.table.operations.QueryOperation;
@@ -852,6 +853,14 @@ public class TableEnvironmentImpl implements 
TableEnvironmentInternal {
return buildShowResult(listFunctions());
} else if (operation instanceof ShowViewsOperation) {
return buildShowResult(listViews());
+   } else if (operation instanceof ExplainOperation) {
+   String explanation = 
planner.explain(Collections.singletonList(((ExplainOperation) 
operation).getChild()), false);
+   return TableResultImpl.builder()
+   
.resultKind(ResultKind.SUCCESS_WITH_CONTENT)
+   
.tableSchema(TableSchema.builder().field("result", DataTypes.STRING()).build())
+   
.data(Collections.singletonList(Row.of(explanation)))
+   .build();
+
} else {
throw new 
TableException(UNSUPPORTED_QUERY_IN_EXECUTE_SQL_MSG);
}
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ExplainOperation.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ExplainOperation.java
new file mode 100644
index 000..c78c78f
--- /dev/null
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ExplainOperation.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for t
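
A minimal sketch of the new statement form (the query is illustrative):

    import org.apache.flink.table.api.TableEnvironment;

    public final class ExplainStatementSketch {
        static void run(TableEnvironment tEnv) {
            // EXPLAIN now runs through executeSql; the plan comes back as a
            // single STRING column named "result".
            tEnv.executeSql("EXPLAIN PLAN FOR "
                    + "SELECT user_id, COUNT(*) FROM Orders GROUP BY user_id").print();
        }
    }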

[flink] branch master updated (0d1738b -> 4959ebf)

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0d1738b  [FLINK-17256][python] Support keyword arguments in the 
PyFlink descriptor API
 new 59ca355  [FLINK-17267] [table-planner] legacy stream planner supports 
explain insert operation
 new 2fc82ad  [FLINK-17267] [table-planner] legacy batch planner supports 
explain insert operation
 new 6013a62  [FLINK-17267] [table] TableEnvironment#explainSql supports 
EXPLAIN statement
 new 1a5fc5f  [FLINK-17267] [table] Introduce TableEnvironment#explainSql 
api
 new 4959ebf  [FLINK-17267] [table] Introduce Table#explain api

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 flink-python/pyflink/table/__init__.py |   4 +-
 flink-python/pyflink/table/explain_detail.py   |  34 +++
 flink-python/pyflink/table/table.py|  14 ++
 flink-python/pyflink/table/table_environment.py|  19 +-
 flink-python/pyflink/table/tests/test_explain.py   |  40 +++
 .../table/tests/test_table_environment_api.py  |  34 ++-
 flink-python/pyflink/util/utils.py |  20 ++
 .../org/apache/flink/table/api/ExplainDetail.java} |  45 ++--
 .../java/org/apache/flink/table/api/Table.java |  10 +
 .../apache/flink/table/api/TableEnvironment.java   |  15 +-
 .../table/api/internal/TableEnvironmentImpl.java   |  48 +++-
 .../api/internal/TableEnvironmentInternal.java |  14 ++
 .../apache/flink/table/api/internal/TableImpl.java |   6 +
 .../flink/table/api/internal/TableResultImpl.java  |  51 +++-
 .../org/apache/flink/table/delegation/Planner.java |   7 +-
 .../flink/table/operations/ExplainOperation.java}  |  39 ++-
 .../org/apache/flink/table/utils/PlannerMock.java  |   3 +-
 .../operations/SqlToOperationConverter.java|  19 ++
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  11 +-
 .../table/planner/delegation/BatchPlanner.scala|  30 ++-
 .../table/planner/delegation/StreamPlanner.scala   |  33 ++-
 .../explain/testExecuteSqlWithExplainInsert.out|  31 +++
 .../explain/testExecuteSqlWithExplainSelect.out|  21 ++
 .../resources/explain/testExplainSqlWithInsert.out |  31 +++
 .../resources/explain/testExplainSqlWithSelect.out |  21 ++
 .../flink/table/api/TableEnvironmentTest.scala | 194 +-
 .../flink/table/planner/utils/TableTestBase.scala  |   2 +-
 .../table/sqlexec/SqlToOperationConverter.java |  19 ++
 .../table/api/internal/BatchTableEnvImpl.scala | 230 ++---
 .../flink/table/api/internal/TableEnvImpl.scala|  88 ++-
 .../flink/table/calcite/FlinkPlannerImpl.scala |  11 +-
 .../table/calcite/RelTimeIndicatorConverter.scala  |  30 +++
 .../flink/table/plan/nodes/LogicalSink.scala   |  55 
 .../org/apache/flink/table/plan/nodes/Sink.scala   |  57 +
 .../table/plan/nodes/dataset/DataSetSink.scala |  57 +
 .../plan/nodes/datastream/DataStreamSink.scala | 204 +++
 .../plan/nodes/logical/FlinkLogicalSink.scala  |  73 ++
 .../flink/table/plan/rules/FlinkRuleSets.scala |   9 +-
 .../table/plan/rules/dataSet/DataSetSinkRule.scala |  55 
 .../plan/rules/datastream/DataStreamSinkRule.scala |  53 
 .../apache/flink/table/planner/StreamPlanner.scala | 280 +++--
 .../flink/table/sinks/DataStreamTableSink.scala|  58 +
 .../flink/table/api/TableEnvironmentITCase.scala   |  17 +-
 .../api/batch/BatchTableEnvironmentTest.scala  | 201 ++-
 .../apache/flink/table/api/batch/ExplainTest.scala |  71 --
 .../sql/validation/InsertIntoValidationTest.scala  |   8 +
 .../validation/InsertIntoValidationTest.scala  |   4 +
 .../flink/table/api/stream/ExplainTest.scala   |  75 --
 .../api/stream/StreamTableEnvironmentTest.scala| 206 ++-
 .../flink/table/utils/MockTableEnvironment.scala   |   4 +-
 .../apache/flink/table/utils/TableTestBase.scala   |  10 +
 .../resources/testExecuteSqlWithExplainInsert0.out |  31 +++
 .../resources/testExecuteSqlWithExplainInsert1.out |  36 +++
 .../resources/testExecuteSqlWithExplainSelect0.out |  21 ++
 .../resources/testExecuteSqlWithExplainSelect1.out |  27 ++
 .../scala/resources/testExplainSqlWithInsert0.out  |  31 +++
 .../scala/resources/testExplainSqlWithInsert1.out  |  43 
 .../scala/resources/testExplainSqlWithSelect0.out  |  21 ++
 .../scala/resources/testExplainSqlWithSelect1.out  |  27 ++
 .../resources/testFromToDataStreamAndSqlUpdate.out |  23 +-
 .../src/test/scala/resources/testInsert.out|  29 +++
 .../src/test/scala/resources/testInsert1.out   |  27 ++
 .../test/scala/resources/testMultipleInserts.out   |  55 
 .../

[flink] 02/05: [FLINK-17267] [table-planner] legacy batch planner supports explain insert operation

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2fc82ad5b92367c9f9a2aad5e6df3fd78108754e
Author: godfreyhe 
AuthorDate: Fri Apr 24 20:45:12 2020 +0800

[FLINK-17267] [table-planner] legacy batch planner supports explain insert 
operation
---
 .../table/tests/test_table_environment_api.py  |   5 +-
 .../table/api/internal/BatchTableEnvImpl.scala | 191 +
 .../flink/table/api/internal/TableEnvImpl.scala|  63 ++-
 .../table/plan/nodes/dataset/DataSetSink.scala |  57 ++
 .../flink/table/plan/rules/FlinkRuleSets.scala |   3 +-
 .../table/plan/rules/dataSet/DataSetSinkRule.scala |  55 ++
 .../apache/flink/table/api/batch/ExplainTest.scala |  71 ++--
 .../sql/validation/InsertIntoValidationTest.scala  |   8 +
 .../validation/InsertIntoValidationTest.scala  |   4 +
 .../src/test/scala/resources/testInsert1.out   |  27 +++
 .../test/scala/resources/testMultipleInserts1.out  |  51 ++
 11 files changed, 477 insertions(+), 58 deletions(-)

diff --git a/flink-python/pyflink/table/tests/test_table_environment_api.py 
b/flink-python/pyflink/table/tests/test_table_environment_api.py
index c1bf04a..87c8023 100644
--- a/flink-python/pyflink/table/tests/test_table_environment_api.py
+++ b/flink-python/pyflink/table/tests/test_table_environment_api.py
@@ -400,8 +400,9 @@ class BatchTableEnvironmentTests(TableEnvironmentTest, 
PyFlinkBatchTableTestCase
 t_env.sql_update("insert into sink1 select * from %s where a > 100" % 
source)
 t_env.sql_update("insert into sink2 select * from %s where a < 100" % 
source)
 
-with self.assertRaises(TableException):
-t_env.explain(extended=True)
+actual = t_env.explain(extended=True)
+
+assert isinstance(actual, str)
 
 def test_create_table_environment(self):
 table_config = TableConfig()
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/BatchTableEnvImpl.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/BatchTableEnvImpl.scala
index 343fc23..e25b8e4 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/BatchTableEnvImpl.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/api/internal/BatchTableEnvImpl.scala
@@ -38,8 +38,9 @@ import 
org.apache.flink.table.expressions.utils.ApiExpressionDefaultVisitor
 import org.apache.flink.table.expressions.{Expression, 
UnresolvedCallExpression}
 import 
org.apache.flink.table.functions.BuiltInFunctionDefinitions.TIME_ATTRIBUTES
 import org.apache.flink.table.module.ModuleManager
-import org.apache.flink.table.operations.{DataSetQueryOperation, 
QueryOperation}
+import org.apache.flink.table.operations.{CatalogSinkModifyOperation, 
DataSetQueryOperation, ModifyOperation, Operation, QueryOperation}
 import org.apache.flink.table.plan.BatchOptimizer
+import org.apache.flink.table.plan.nodes.LogicalSink
 import org.apache.flink.table.plan.nodes.dataset.DataSetRel
 import org.apache.flink.table.planner.Conversions
 import org.apache.flink.table.runtime.MapRunner
@@ -74,7 +75,7 @@ abstract class BatchTableEnvImpl(
 moduleManager: ModuleManager)
   extends TableEnvImpl(config, catalogManager, moduleManager) {
 
-  private val bufferedSinks = new JArrayList[DataSink[_]]
+  private val bufferedModifyOperations = new JArrayList[ModifyOperation]()
 
   private[flink] val optimizer = new BatchOptimizer(
 () => 
config.getPlannerConfig.unwrap(classOf[CalciteConfig]).orElse(CalciteConfig.DEFAULT),
@@ -170,8 +171,8 @@ abstract class BatchTableEnvImpl(
 }
   }
 
-  override protected def addToBuffer(sink: DataSink[_]): Unit = {
-bufferedSinks.add(sink)
+  override protected def addToBuffer[T](modifyOperation: ModifyOperation): 
Unit = {
+bufferedModifyOperations.add(modifyOperation)
   }
 
   /**
@@ -215,32 +216,69 @@ abstract class BatchTableEnvImpl(
 * @param extended Flag to include detailed optimizer estimates.
 */
   private[flink] def explain(table: Table, extended: Boolean): String = {
-val ast = getRelBuilder.tableOperation(table.getQueryOperation).build()
-val optimizedPlan = optimizer.optimize(ast)
-val dataSet = translate[Row](
-  optimizedPlan,
-  getTableSchema(table.getQueryOperation.getTableSchema.getFieldNames, 
optimizedPlan))(
-  new GenericTypeInfo(classOf[Row]))
-dataSet.output(new DiscardingOutputFormat[Row])
-val env = dataSet.getExecutionEnvironment
+
explain(JCollections.singletonList(table.getQueryOperation.asInstanceOf[Operation]),
 extended)
+  }
+
+  override def explain(table: Table): String = explain(table: Table, extended 
= false)
+
+  override def explain(extended: Boolean): String = {
+
explain(bufferedModifyOperatio

[flink] 05/05: [FLINK-17267] [table] Introduce Table#explain api

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4959ebfc349bb80f88582f9ef2d60d09f1140737
Author: godfreyhe 
AuthorDate: Fri Apr 24 23:04:33 2020 +0800

[FLINK-17267] [table] Introduce Table#explain api

This closes #11905
---
 flink-python/pyflink/table/explain_detail.py   |  4 +-
 flink-python/pyflink/table/table.py| 14 ++
 flink-python/pyflink/table/table_environment.py|  5 +--
 .../{explain_detail.py => tests/test_explain.py}   | 30 -
 .../table/tests/test_table_environment_api.py  |  2 +-
 flink-python/pyflink/util/utils.py |  4 +-
 .../org/apache/flink/table/api/ExplainDetail.java  |  4 +-
 .../java/org/apache/flink/table/api/Table.java | 10 +
 .../table/api/internal/TableEnvironmentImpl.java   |  8 +++-
 .../api/internal/TableEnvironmentInternal.java | 14 ++
 .../apache/flink/table/api/internal/TableImpl.java |  6 +++
 .../flink/table/api/internal/TableResultImpl.java  | 51 +++---
 .../table/planner/delegation/StreamPlanner.scala   |  2 +-
 .../flink/table/api/TableEnvironmentTest.scala | 24 +-
 .../table/api/internal/BatchTableEnvImpl.scala |  8 ++--
 .../flink/table/api/internal/TableEnvImpl.scala|  8 ++--
 .../api/batch/BatchTableEnvironmentTest.scala  | 22 ++
 .../api/stream/StreamTableEnvironmentTest.scala| 22 ++
 18 files changed, 202 insertions(+), 36 deletions(-)

diff --git a/flink-python/pyflink/table/explain_detail.py 
b/flink-python/pyflink/table/explain_detail.py
index 48e7ce9..0cbcbe9 100644
--- a/flink-python/pyflink/table/explain_detail.py
+++ b/flink-python/pyflink/table/explain_detail.py
@@ -29,6 +29,6 @@ class ExplainDetail(object):
 # 0.0 memory}
 ESTIMATED_COST = 0
 
-# The changelog traits produced by a physical rel node.
+# The changelog mode produced by a physical rel node.
 # e.g. GroupAggregate(..., changelogMode=[I,UA,D])
-CHANGELOG_TRAITS = 1
+CHANGELOG_MODE = 1
diff --git a/flink-python/pyflink/table/table.py 
b/flink-python/pyflink/table/table.py
index 728d331..b74797e 100644
--- a/flink-python/pyflink/table/table.py
+++ b/flink-python/pyflink/table/table.py
@@ -21,6 +21,7 @@ from pyflink.java_gateway import get_gateway
 from pyflink.table.table_schema import TableSchema
 
 from pyflink.util.utils import to_jarray
+from pyflink.util.utils import to_j_explain_detail_arr
 
 __all__ = ['Table', 'GroupedTable', 'GroupWindowedTable', 'OverWindowedTable', 
'WindowGroupedTable']
 
@@ -718,6 +719,19 @@ class Table(object):
 """
 self._j_table.executeInsert(table_path, overwrite)
 
+def explain(self, *extra_details):
+"""
+Returns the AST of this table and the execution plan.
+
+:param extra_details: The extra explain details which the explain 
result should include,
+  e.g. estimated cost, changelog mode for streaming
+:type extra_details: tuple[ExplainDetail] (variable-length arguments 
of ExplainDetail)
+:return: The statement for which the AST and execution plan will be 
returned.
+:rtype: str
+"""
+j_extra_details = to_j_explain_detail_arr(extra_details)
+return self._j_table.explain(j_extra_details)
+
 def __str__(self):
 return self._j_table.toString()
 
diff --git a/flink-python/pyflink/table/table_environment.py 
b/flink-python/pyflink/table/table_environment.py
index 94ff785..91073d8 100644
--- a/flink-python/pyflink/table/table_environment.py
+++ b/flink-python/pyflink/table/table_environment.py
@@ -471,13 +471,12 @@ class TableEnvironment(object):
 
 def explain_sql(self, stmt, *extra_details):
 """
-Returns the AST of the specified statement and the execution plan to 
compute
-the result of the given statement.
+Returns the AST of the specified statement and the execution plan.
 
 :param stmt: The statement for which the AST and execution plan will 
be returned.
 :type stmt: str
 :param extra_details: The extra explain details which the explain 
result should include,
-  e.g. estimated cost, change log trait for 
streaming
+  e.g. estimated cost, changelog mode for streaming
 :type extra_details: tuple[ExplainDetail] (variable-length arguments 
of ExplainDetail)
 :return: The statement for which the AST and execution plan will be 
returned.
 :rtype: str
diff --git a/flink-python/pyflink/table/explain_detail.py 
b/flink-python/pyflink/table/tests/test_explain.py
similarity index 58%
copy from flink-python/pyflink/table/explain_detail.py
copy to flink-python/pyflink/table/tests/test_explain.py
index 48e7ce9..1dccaba 100644
--- a/flink-py
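
A short sketch of the new method, including the renamed changelog-mode
detail (the table and query are illustrative):

    import org.apache.flink.table.api.ExplainDetail;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;

    public final class TableExplainSketch {
        static void run(TableEnvironment tEnv) {
            Table agg = tEnv.sqlQuery(
                    "SELECT user_id, SUM(amount) AS total FROM Orders GROUP BY user_id");
            // Plain plan, or with extra details such as the changelog mode.
            System.out.println(agg.explain());
            System.out.println(agg.explain(ExplainDetail.CHANGELOG_MODE));
        }
    }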

[flink] 04/05: [FLINK-17267] [table] Introduce TableEnvironment#explainSql api

2020-05-08 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 1a5fc5f28a1c3569da8de953f6cd1fad5371f6a4
Author: godfreyhe 
AuthorDate: Wed Apr 29 10:57:32 2020 +0800

[FLINK-17267] [table] Introduce TableEnvironment#explainSql api
---
 flink-python/pyflink/table/__init__.py |  4 +-
 flink-python/pyflink/table/explain_detail.py   | 34 
 flink-python/pyflink/table/table_environment.py| 20 ++-
 .../table/tests/test_table_environment_api.py  | 29 +-
 flink-python/pyflink/util/utils.py | 20 +++
 .../org/apache/flink/table/api/ExplainDetail.java} | 45 +---
 .../apache/flink/table/api/TableEnvironment.java   | 15 +-
 .../table/api/internal/TableEnvironmentImpl.java   | 35 +++--
 .../org/apache/flink/table/delegation/Planner.java |  7 +--
 .../org/apache/flink/table/utils/PlannerMock.java  |  3 +-
 .../table/planner/delegation/BatchPlanner.scala|  8 +--
 .../table/planner/delegation/StreamPlanner.scala   | 11 ++--
 .../resources/explain/testExplainSqlWithInsert.out | 31 +++
 .../resources/explain/testExplainSqlWithSelect.out | 21 
 .../flink/table/api/TableEnvironmentTest.scala | 57 
 .../flink/table/planner/utils/TableTestBase.scala  |  2 +-
 .../table/api/internal/BatchTableEnvImpl.scala | 20 +--
 .../flink/table/api/internal/TableEnvImpl.scala| 18 +--
 .../apache/flink/table/planner/StreamPlanner.scala |  2 +-
 .../api/batch/BatchTableEnvironmentTest.scala  | 61 +-
 .../api/stream/StreamTableEnvironmentTest.scala| 58 
 .../flink/table/utils/MockTableEnvironment.scala   |  4 +-
 .../scala/resources/testExplainSqlWithInsert0.out  | 31 +++
 .../scala/resources/testExplainSqlWithInsert1.out  | 43 +++
 .../scala/resources/testExplainSqlWithSelect0.out  | 21 
 .../scala/resources/testExplainSqlWithSelect1.out  | 27 ++
 26 files changed, 562 insertions(+), 65 deletions(-)

diff --git a/flink-python/pyflink/table/__init__.py 
b/flink-python/pyflink/table/__init__.py
index 140c6b3..1e367f3 100644
--- a/flink-python/pyflink/table/__init__.py
+++ b/flink-python/pyflink/table/__init__.py
@@ -70,6 +70,7 @@ from pyflink.table.sources import TableSource, CsvTableSource
 from pyflink.table.types import DataTypes, UserDefinedType, Row
 from pyflink.table.table_schema import TableSchema
 from pyflink.table.udf import FunctionContext, ScalarFunction
+from pyflink.table.explain_detail import ExplainDetail
 
 __all__ = [
 'TableEnvironment',
@@ -93,5 +94,6 @@ __all__ = [
 'TableSchema',
 'FunctionContext',
 'ScalarFunction',
-'SqlDialect'
+'SqlDialect',
+'ExplainDetail'
 ]
diff --git a/flink-python/pyflink/table/explain_detail.py 
b/flink-python/pyflink/table/explain_detail.py
new file mode 100644
index 000..48e7ce9
--- /dev/null
+++ b/flink-python/pyflink/table/explain_detail.py
@@ -0,0 +1,34 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+__all__ = ['ExplainDetail']
+
+
+class ExplainDetail(object):
+"""
+ExplainDetail defines the types of details for explain result.
+"""
+
+# The cost information on physical rel node estimated by optimizer.
+# e.g. TableSourceScan(..., cumulative cost = {1.0E8 rows, 1.0E8 cpu, 
2.4E9 io, 0.0 network,
+# 0.0 memory}
+ESTIMATED_COST = 0
+
+# The changelog traits produced by a physical rel node.
+# e.g. GroupAggregate(..., changelogMode=[I,UA,D])
+CHANGELOG_TRAITS = 1
diff --git a/flink-python/pyflink/table/table_environment.py 
b/flink-python/pyflink/table/table_environment.py
index d8e8c51..94ff785 100644
--- a/flink-python/pyflink/table/table_environment.py
+++ b/flink-python/pyflink/table/table_environment.py
@@ -36,7 +36,8 @@ from pyflink.table import Table
 from pyflink.table.types import _to_ja
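
A minimal sketch of the new method (the statement and sink name are
illustrative):

    import org.apache.flink.table.api.ExplainDetail;
    import org.apache.flink.table.api.TableEnvironment;

    public final class ExplainSqlSketch {
        static void run(TableEnvironment tEnv) {
            // Explain a statement directly, optionally with estimated costs.
            String plan = tEnv.explainSql(
                    "INSERT INTO MySink SELECT user_id, amount FROM Orders WHERE amount > 100",
                    ExplainDetail.ESTIMATED_COST);
            System.out.println(plan);
        }
    }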

[flink] branch master updated (967b89c -> 1de54fe)

2020-05-07 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 967b89c  [FLINK-17092][python] Add retry when pip install dependencies 
(#12024)
 add 1de54fe  [FLINK-17251] [table] Introduce Table#executeInsert & add 
insert statement support to TableEnvironment#executeSql

No new revisions were added by this update.

Summary of changes:
 flink-python/pyflink/table/table.py|  21 ++
 .../java/org/apache/flink/table/api/Table.java |  57 +
 .../table/api/internal/TableEnvironmentImpl.java   |  52 -
 .../api/internal/TableEnvironmentInternal.java |  59 ++
 .../apache/flink/table/api/internal/TableImpl.java |  32 ++-
 .../flink/table/api/internal/TableResultImpl.java  |   2 +-
 .../apache/flink/table/delegation/Executor.java|  10 +
 .../org/apache/flink/table/utils/ExecutorMock.java |   6 +
 .../table/planner/delegation/ExecutorBase.java |   6 +
 .../flink/table/api/TableEnvironmentITCase.scala   | 202 +-
 .../flink/table/executor/StreamExecutor.java   |   6 +
 .../table/api/internal/BatchTableEnvImpl.scala | 108 ++
 .../flink/table/api/internal/TableEnvImpl.scala| 168 ++-
 .../flink/table/api/TableEnvironmentITCase.scala   | 152 ++
 .../runtime/batch/sql/TableEnvironmentITCase.scala | 231 -
 .../apache/flink/table/utils/testTableSinks.scala  |  87 
 16 files changed, 1097 insertions(+), 102 deletions(-)
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentInternal.java
 create mode 100644 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/utils/testTableSinks.scala
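
A rough sketch of both additions, assuming Orders and MySink are registered
tables (the names are illustrative):

    import org.apache.flink.table.api.TableEnvironment;

    public final class ExecuteInsertSketch {
        static void run(TableEnvironment tEnv) {
            // Insert a Table into a registered sink; the call submits a job.
            tEnv.from("Orders").executeInsert("MySink");
            // Or submit an INSERT statement through executeSql.
            tEnv.executeSql("INSERT INTO MySink SELECT * FROM Orders");
        }
    }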



[flink] branch master updated (a7b2b5a -> ee9bb64)

2020-05-07 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a7b2b5a  [FLINK-16782][state] Avoid unnecessary check on expired entry 
when ReturnExpiredIfNotCleanedUp
 add fcce072  [FLINK-16029][table] Port CustomConnectorDescriptor to 
flink-table-common module
 add 5511009  [FLINK-16029][table] Refactor CsvTableSink to support 
numFiles and writeMode
 add ee9bb64  [FLINK-16029][table-planner-blink] Remove 
registerTableSource/registerTableSink in test cases of blink planner (use 
tableEnv.connect() instead).

No new revisions were added by this update.

Summary of changes:
 .../org/apache/flink/table/descriptors/OldCsv.java |  25 +
 .../flink/table/descriptors/OldCsvValidator.java   |   4 +
 .../flink/table/sinks/CsvTableSinkFactoryBase.java |  15 +-
 .../descriptors}/CustomConnectorDescriptor.java|   4 +-
 .../planner/catalog/CatalogStatisticsTest.java |   8 +-
 .../org.apache.flink.table.factories.TableFactory  |  15 +
 .../flink/table/api/TableEnvironmentITCase.scala   |  71 +--
 .../flink/table/api/TableEnvironmentTest.scala |  14 +-
 .../table/planner/plan/batch/sql/LimitTest.scala   |  14 +-
 .../planner/plan/batch/sql/TableSourceTest.scala   |  64 +-
 .../plan/batch/sql/join/LookupJoinTest.scala   |   4 +-
 .../PushFilterIntoTableSourceScanRuleTest.scala|  37 +-
 .../PushPartitionIntoTableSourceScanRuleTest.scala |   8 +-
 .../planner/plan/stream/sql/TableSourceTest.scala  |  61 +-
 .../plan/stream/sql/join/LookupJoinTest.scala  | 245 ++--
 .../table/validation/TableSinkValidationTest.scala |   9 +-
 .../planner/runtime/batch/sql/LimitITCase.scala|   6 +-
 .../runtime/batch/sql/TableScanITCase.scala|  34 +-
 .../runtime/batch/sql/TableSourceITCase.scala  |  68 +--
 .../runtime/batch/sql/TimestampITCase.scala|  38 +-
 .../runtime/batch/sql/join/LookupJoinITCase.scala  |  56 +-
 .../runtime/batch/table/TableSinkITCase.scala  |  18 +-
 .../runtime/stream/sql/AsyncLookupJoinITCase.scala |  49 +-
 .../runtime/stream/sql/LookupJoinITCase.scala  | 103 ++--
 .../runtime/stream/sql/TableScanITCase.scala   |  28 +-
 .../runtime/stream/sql/TableSourceITCase.scala |  54 +-
 .../runtime/stream/sql/TimestampITCase.scala   |  30 +-
 .../runtime/stream/table/TableSinkITCase.scala |  40 +-
 .../utils/InMemoryLookupableTableSource.scala  | 177 +++---
 .../planner/utils/MemoryTableSourceSinkUtil.scala  | 156 +++--
 .../flink/table/planner/utils/TableTestBase.scala  |  45 +-
 .../planner/utils/TestLimitableTableSource.scala   |  60 +-
 .../table/planner/utils/testTableSourceSinks.scala | 666 -
 33 files changed, 1320 insertions(+), 906 deletions(-)
 rename {flink-python/src/main/java/org/apache/flink/table/descriptors/python 
=> 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors}/CustomConnectorDescriptor.java
 (92%)



[flink] branch master updated (c54293b -> dca4a77)

2020-04-29 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c54293b  [FLINK-17148][python] Support converting pandas DataFrame to 
Flink Table (#11832)
 add 55e8d1a  [FLINK-15591][sql-parser] Support parsing TEMPORARY in table 
definition
 add dca4a77  [FLINK-15591][table] Support create/drop temporary table in 
both planners

No new revisions were added by this update.

Summary of changes:
 .../src/main/codegen/includes/parserImpls.ftl  |  13 +--
 .../flink/sql/parser/ddl/SqlCreateTable.java   |  16 ++-
 .../apache/flink/sql/parser/ddl/SqlDropTable.java  |  15 ++-
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  35 ++
 .../table/api/internal/TableEnvironmentImpl.java   |  54 +++---
 .../apache/flink/table/catalog/CatalogManager.java |  38 ---
 .../table/operations/ddl/CreateTableOperation.java |  10 +-
 .../table/operations/ddl/DropTableOperation.java   |  12 ++-
 .../operations/SqlToOperationConverter.java|   5 +-
 .../flink/table/api/TableEnvironmentTest.scala | 118 +
 .../table/planner/catalog/CatalogTableITCase.scala |  85 +++
 .../table/sqlexec/SqlToOperationConverter.java |   5 +-
 .../flink/table/api/internal/TableEnvImpl.scala|  55 +++---
 .../flink/table/catalog/CatalogManagerTest.java|  11 +-
 .../api/batch/BatchTableEnvironmentTest.scala  |  25 -
 15 files changed, 424 insertions(+), 73 deletions(-)
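
A usage sketch of the new syntax (the DDL and connector options are
illustrative, and the executeSql entry point is assumed to be available at
this revision):

    import org.apache.flink.table.api.TableEnvironment;

    public final class TemporaryTableSketch {
        static void run(TableEnvironment tEnv) {
            // A temporary table is never persisted into the catalog and
            // shadows a permanent table with the same identifier.
            tEnv.executeSql(
                    "CREATE TEMPORARY TABLE tmp_orders (user_id BIGINT, amount DOUBLE) "
                            + "WITH ('connector' = 'datagen')");
            tEnv.executeSql("DROP TEMPORARY TABLE tmp_orders");
        }
    }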



[flink] branch master updated (eddd91a -> ea51116)

2020-04-28 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from eddd91a  [hotfix] Remove site.baseurl from {% link %} tags
 add ea51116  [FLINK-17111][table] Support SHOW VIEWS in Flink SQL

No new revisions were added by this update.

Summary of changes:
 flink-python/pyflink/table/table_environment.py| 16 +-
 .../src/main/codegen/data/Parser.tdd   |  3 ++
 .../src/main/codegen/includes/parserImpls.ftl  | 14 +
 .../{SqlShowCatalogs.java => SqlShowViews.java}| 15 +++---
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  5 ++
 .../apache/flink/table/api/TableEnvironment.java   |  9 
 .../table/api/internal/TableEnvironmentImpl.java   | 13 -
 .../apache/flink/table/catalog/CatalogManager.java | 40 +-
 ...ablesOperation.java => ShowViewsOperation.java} |  6 +--
 .../operations/SqlToOperationConverter.java|  9 
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  7 ++-
 .../flink/table/api/TableEnvironmentTest.scala | 36 +
 .../table/sqlexec/SqlToOperationConverter.java |  9 
 .../flink/table/api/internal/TableEnvImpl.scala| 16 --
 .../flink/table/calcite/FlinkPlannerImpl.scala |  7 ++-
 .../api/batch/BatchTableEnvironmentTest.scala  | 61 ++
 .../flink/table/utils/MockTableEnvironment.scala   |  2 +
 17 files changed, 240 insertions(+), 28 deletions(-)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/{SqlShowCatalogs.java
 => SqlShowViews.java} (84%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/{ShowTablesOperation.java
 => ShowViewsOperation.java} (87%)
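
A one-line usage sketch, plus the programmatic form added alongside it:

    import org.apache.flink.table.api.TableEnvironment;

    public final class ShowViewsSketch {
        static void run(TableEnvironment tEnv) {
            // Lists the views in the current catalog and database.
            tEnv.executeSql("SHOW VIEWS").print();
            String[] views = tEnv.listViews();
            System.out.println(String.join(", ", views));
        }
    }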



[flink] 03/06: [FLINK-17339][sql-client] Change default planner to blink.

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2421f94bbeed01d661d4c568f9c9370479223467
Author: Kurt Young 
AuthorDate: Thu Apr 23 22:17:34 2020 +0800

[FLINK-17339][sql-client] Change default planner to blink.
---
 .../apache/flink/table/client/config/entries/ExecutionEntry.java| 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/entries/ExecutionEntry.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/entries/ExecutionEntry.java
index 04d81a2..dd7c322 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/entries/ExecutionEntry.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/entries/ExecutionEntry.java
@@ -175,7 +175,7 @@ public class ExecutionEntry extends ConfigEntry {
}
 
final String planner = 
properties.getOptionalString(EXECUTION_PLANNER)
-   .orElse(EXECUTION_PLANNER_VALUE_OLD);
+   .orElse(EXECUTION_PLANNER_VALUE_BLINK);
 
if (planner.equals(EXECUTION_PLANNER_VALUE_OLD)) {
builder.useOldPlanner();
@@ -200,7 +200,7 @@ public class ExecutionEntry extends ConfigEntry {
 
public boolean isStreamingPlanner() {
final String planner = 
properties.getOptionalString(EXECUTION_PLANNER)
-   .orElse(EXECUTION_PLANNER_VALUE_OLD);
+   .orElse(EXECUTION_PLANNER_VALUE_BLINK);
 
// Blink planner is a streaming planner
if (planner.equals(EXECUTION_PLANNER_VALUE_BLINK)) {
@@ -216,7 +216,7 @@ public class ExecutionEntry extends ConfigEntry {
 
public boolean isBatchPlanner() {
final String planner = 
properties.getOptionalString(EXECUTION_PLANNER)
-   .orElse(EXECUTION_PLANNER_VALUE_OLD);
+   .orElse(EXECUTION_PLANNER_VALUE_BLINK);
 
// Blink planner is not a batch planner
if (planner.equals(EXECUTION_PLANNER_VALUE_BLINK)) {



[flink] 04/06: [FLINK-17339][table-planner-blink] Simplify test cases due to default planner changing.

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit cb523d3f0e0bbb5b206b7beeb871f9244e539634
Author: Kurt Young 
AuthorDate: Thu Apr 23 22:18:51 2020 +0800

[FLINK-17339][table-planner-blink] Simplify test cases due to default 
planner changing.
---
 .../test/java/org/apache/flink/table/api/EnvironmentTest.java |  4 +---
 .../org/apache/flink/table/api/TableUtilsStreamingITCase.java |  2 +-
 .../flink/table/planner/catalog/CatalogConstraintTest.java|  2 +-
 .../org/apache/flink/table/planner/catalog/CatalogITCase.java |  2 +-
 .../flink/table/planner/catalog/CatalogStatisticsTest.java|  2 +-
 .../org/apache/flink/table/api/TableEnvironmentITCase.scala   |  4 ++--
 .../org/apache/flink/table/api/TableEnvironmentTest.scala |  2 +-
 .../flink/table/planner/catalog/CatalogTableITCase.scala  |  4 ++--
 .../apache/flink/table/planner/catalog/CatalogTableTest.scala |  2 +-
 .../flink/table/planner/catalog/CatalogViewITCase.scala   |  4 ++--
 .../apache/flink/table/planner/codegen/agg/AggTestBase.scala  |  4 ++--
 .../table/planner/expressions/utils/ExpressionTestBase.scala  |  3 +--
 .../plan/batch/table/validation/JoinValidationTest.scala  |  4 ++--
 .../batch/table/validation/SetOperatorsValidationTest.scala   |  6 +++---
 .../flink/table/planner/plan/utils/FlinkRelOptUtilTest.scala  |  2 +-
 .../planner/runtime/harness/GroupAggregateHarnessTest.scala   |  2 +-
 .../table/planner/runtime/harness/OverWindowHarnessTest.scala |  2 +-
 .../planner/runtime/harness/TableAggregateHarnessTest.scala   |  2 +-
 .../flink/table/planner/runtime/utils/BatchTestBase.scala |  2 +-
 .../flink/table/planner/runtime/utils/StreamingTestBase.scala |  2 +-
 .../org/apache/flink/table/planner/utils/TableTestBase.scala  | 11 +--
 21 files changed, 32 insertions(+), 36 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/EnvironmentTest.java
 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/EnvironmentTest.java
index 387f40d..80801ca 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/EnvironmentTest.java
+++ 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/EnvironmentTest.java
@@ -40,9 +40,7 @@ public class EnvironmentTest {
@Test
public void testPassingExecutionParameters() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-   StreamTableEnvironment tEnv = StreamTableEnvironment.create(
-   env,
-   
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
+   StreamTableEnvironment tEnv = 
StreamTableEnvironment.create(env);
 
tEnv.getConfig().addConfiguration(
new Configuration()
diff --git 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
index 89f134c..0f5c4cf 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
+++ 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/api/TableUtilsStreamingITCase.java
@@ -48,7 +48,7 @@ public class TableUtilsStreamingITCase extends TestLogger {
env.setParallelism(4);
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 
-   EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+   EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
tEnv = StreamTableEnvironment.create(env, settings);
}
 
diff --git 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/catalog/CatalogConstraintTest.java
 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/catalog/CatalogConstraintTest.java
index 273ffd1..25ca83b 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/catalog/CatalogConstraintTest.java
+++ 
b/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/catalog/CatalogConstraintTest.java
@@ -57,7 +57,7 @@ public class CatalogConstraintTest {
 
@Before
public void setup() {
-   EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
+   EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inBatchMode().build();
tEnv = TableEnvironment.create(settings);
catalog

[flink] 01/06: [FLINK-17339][table-api] Change default planner to blink

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f0a4b09319211391dcfd09198e360f48102a9991
Author: Kurt Young 
AuthorDate: Thu Apr 23 21:50:04 2020 +0800

[FLINK-17339][table-api] Change default planner to blink

This closes #11910
---
 .../apache/flink/table/api/EnvironmentSettings.java| 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/EnvironmentSettings.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/EnvironmentSettings.java
index 7eec4a3..bc6e8e0 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/EnvironmentSettings.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/EnvironmentSettings.java
@@ -37,7 +37,7 @@ import java.util.Map;
  * Example:
  * {@code
  *EnvironmentSettings.newInstance()
- *  .useOldPlanner()
+ *  .useBlinkPlanner()
  *  .inStreamingMode()
  *  .withBuiltInCatalogName("default_catalog")
  *  .withBuiltInDatabaseName("default_database")
@@ -159,16 +159,15 @@ public class EnvironmentSettings {
private static final String BLINK_PLANNER_FACTORY = 
"org.apache.flink.table.planner.delegation.BlinkPlannerFactory";
private static final String BLINK_EXECUTOR_FACTORY = 
"org.apache.flink.table.planner.delegation.BlinkExecutorFactory";
 
-   private String plannerClass = OLD_PLANNER_FACTORY;
-   private String executorClass = OLD_EXECUTOR_FACTORY;
+   private String plannerClass = BLINK_PLANNER_FACTORY;
+   private String executorClass = BLINK_EXECUTOR_FACTORY;
private String builtInCatalogName = DEFAULT_BUILTIN_CATALOG;
private String builtInDatabaseName = DEFAULT_BUILTIN_DATABASE;
private boolean isStreamingMode = true;
 
/**
-* Sets the old Flink planner as the required module.
-*
-* This is the default behavior.
+* Sets the old Flink planner as the required module. By 
default, {@link #useBlinkPlanner()}
+* is enabled.
 */
public Builder useOldPlanner() {
this.plannerClass = OLD_PLANNER_FACTORY;
@@ -177,8 +176,9 @@ public class EnvironmentSettings {
}
 
/**
-* Sets the Blink planner as the required module. By default, 
{@link #useOldPlanner()} is
-* enabled.
+* Sets the Blink planner as the required module.
+*
+* This is the default behavior.
 */
public Builder useBlinkPlanner() {
this.plannerClass = BLINK_PLANNER_FACTORY;
@@ -191,7 +191,7 @@ public class EnvironmentSettings {
 *
 * A planner will be discovered automatically, if there is 
only one planner available.
 *
-* By default, {@link #useOldPlanner()} is enabled.
+* By default, {@link #useBlinkPlanner()} is enabled.
 */
public Builder useAnyPlanner() {
this.plannerClass = null;
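
What the new default means for user code, as a minimal sketch (streaming job assumed; sources and job execution omitted):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // After this commit the builder defaults to the Blink planner, so no
    // explicit useBlinkPlanner() call is needed:
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .inStreamingMode()
            .build();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

    // Programs that still need the legacy planner must now opt in explicitly:
    EnvironmentSettings legacy = EnvironmentSettings.newInstance()
            .useOldPlanner()
            .inStreamingMode()
            .build();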



[flink] 05/06: [FLINK-17339][examples] Update examples due to default planner change.

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit aa4e1d64ac26ca973ff953c6837648feb220f760
Author: Kurt Young 
AuthorDate: Thu Apr 23 22:19:44 2020 +0800

[FLINK-17339][examples] Update examples due to default planner change.
---
 .../apache/flink/table/examples/java/StreamSQLExample.java | 14 +-
 .../flink/table/examples/java/StreamWindowSQLExample.java  | 10 +-
 .../flink/table/examples/scala/StreamSQLExample.scala  | 14 +-
 3 files changed, 19 insertions(+), 19 deletions(-)

diff --git 
a/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamSQLExample.java
 
b/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamSQLExample.java
index a4dbbf0..bce8054 100644
--- 
a/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamSQLExample.java
+++ 
b/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamSQLExample.java
@@ -50,19 +50,23 @@ public class StreamSQLExample {
public static void main(String[] args) throws Exception {
 
final ParameterTool params = ParameterTool.fromArgs(args);
-   String planner = params.has("planner") ? params.get("planner") 
: "flink";
+   String planner = params.has("planner") ? params.get("planner") 
: "blink";
 
// set up execution environment
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv;
if (Objects.equals(planner, "blink")) { // use blink planner in 
streaming mode
EnvironmentSettings settings = 
EnvironmentSettings.newInstance()
-   .useBlinkPlanner()
-   .inStreamingMode()
-   .build();
+   .inStreamingMode()
+   .useBlinkPlanner()
+   .build();
tEnv = StreamTableEnvironment.create(env, settings);
} else if (Objects.equals(planner, "flink")) {  // use flink 
planner in streaming mode
-   tEnv = StreamTableEnvironment.create(env);
+   EnvironmentSettings settings = 
EnvironmentSettings.newInstance()
+   .inStreamingMode()
+   .useOldPlanner()
+   .build();
+   tEnv = StreamTableEnvironment.create(env, settings);
} else {
System.err.println("The planner is incorrect. Please 
run 'StreamSQLExample --planner ', " +
"where planner (it is either flink or blink, 
and the default is flink) indicates whether the " +
diff --git 
a/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamWindowSQLExample.java
 
b/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamWindowSQLExample.java
index f86de17..4620a8a5 100644
--- 
a/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamWindowSQLExample.java
+++ 
b/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/StreamWindowSQLExample.java
@@ -19,7 +19,6 @@
 package org.apache.flink.table.examples.java;
 
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.table.api.EnvironmentSettings;
 import org.apache.flink.table.api.Table;
 import org.apache.flink.table.api.java.StreamTableEnvironment;
 import org.apache.flink.types.Row;
@@ -41,16 +40,9 @@ import java.io.IOException;
 public class StreamWindowSQLExample {
 
public static void main(String[] args) throws Exception {
-
// set up execution environment
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-   // use blink planner in streaming mode,
-   // because watermark statement is only available in blink 
planner.
-   EnvironmentSettings settings = EnvironmentSettings.newInstance()
-   .useBlinkPlanner()
-   .inStreamingMode()
-   .build();
-   StreamTableEnvironment tEnv = 
StreamTableEnvironment.create(env, settings);
+   StreamTableEnvironment tEnv = 
StreamTableEnvironment.create(env);
 
// write source data into temporary file and get the absolute 
path
String contents = "1,be

[flink] 06/06: [FLINK-17339][misc] Update tests due to default planner change.

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a9f8a0b4481719fb511436a61e36cb0e26559c79
Author: Kurt Young 
AuthorDate: Thu Apr 23 22:20:05 2020 +0800

[FLINK-17339][misc] Update tests due to default planner change.
---
 flink-connectors/flink-connector-cassandra/pom.xml| 2 +-
 .../java/org/apache/flink/sql/tests/StreamSQLTestProgram.java | 2 +-
 .../src/main/java/org/apache/flink/ml/common/MLEnvironment.java   | 5 -
 .../test/java/org/apache/flink/ml/common/MLEnvironmentTest.java   | 4 +++-
 flink-python/pyflink/table/tests/test_environment_settings.py | 4 ++--
 flink-python/pyflink/table/tests/test_table_environment_api.py| 6 +++---
 flink-python/pyflink/testing/test_case_utils.py   | 8 ++--
 .../src/main/scala/org/apache/flink/api/scala/FlinkILoop.scala| 5 +++--
 8 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/flink-connectors/flink-connector-cassandra/pom.xml 
b/flink-connectors/flink-connector-cassandra/pom.xml
index f66d5ec..b7aa6d1 100644
--- a/flink-connectors/flink-connector-cassandra/pom.xml
+++ b/flink-connectors/flink-connector-cassandra/pom.xml
@@ -172,7 +172,7 @@ under the License.


org.apache.flink
-   
flink-table-planner_${scala.binary.version}
+   
flink-table-planner-blink_${scala.binary.version}
${project.version}
provided
true
diff --git 
a/flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java
 
b/flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java
index 40a4005..8d08466 100644
--- 
a/flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java
+++ 
b/flink-end-to-end-tests/flink-stream-sql-test/src/main/java/org/apache/flink/sql/tests/StreamSQLTestProgram.java
@@ -84,7 +84,7 @@ public class StreamSQLTestProgram {
 
ParameterTool params = ParameterTool.fromArgs(args);
String outputPath = params.getRequired("outputPath");
-   String planner = params.get("planner", "old");
+   String planner = params.get("planner", "blink");
 
final EnvironmentSettings.Builder builder = 
EnvironmentSettings.newInstance();
builder.inStreamingMode();
diff --git 
a/flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironment.java
 
b/flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironment.java
index f9decea..595ac2c 100644
--- 
a/flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironment.java
+++ 
b/flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironment.java
@@ -21,6 +21,7 @@ package org.apache.flink.ml.common;
 
 import org.apache.flink.api.java.ExecutionEnvironment;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.EnvironmentSettings;
 import org.apache.flink.table.api.java.BatchTableEnvironment;
 import org.apache.flink.table.api.java.StreamTableEnvironment;
 
@@ -150,7 +151,9 @@ public class MLEnvironment {
 */
public StreamTableEnvironment getStreamTableEnvironment() {
if (null == streamTableEnv) {
-   streamTableEnv = 
StreamTableEnvironment.create(getStreamExecutionEnvironment());
+   streamTableEnv = StreamTableEnvironment.create(
+   getStreamExecutionEnvironment(),
+   
EnvironmentSettings.newInstance().useOldPlanner().build());
}
return streamTableEnv;
}
diff --git 
a/flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/MLEnvironmentTest.java
 
b/flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/MLEnvironmentTest.java
index 60b18fe..50f87c5 100644
--- 
a/flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/MLEnvironmentTest.java
+++ 
b/flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/MLEnvironmentTest.java
@@ -21,6 +21,7 @@ package org.apache.flink.ml.common;
 
 import org.apache.flink.api.java.ExecutionEnvironment;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.EnvironmentSettings;
 import org.apache.flink.table.api.java.BatchTableEnvironment;
 import org.apache.flink.table.api.java.StreamTableEnvironment;
 
@@ -56,7 +57,8 @@ public class MLEnvironmentTest {
@Test
public void testConstructWithStream

[flink] branch master updated (cbaea29 -> a9f8a0b)

2020-04-26 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cbaea29  Partial Revert "[FLINK-15669] [sql-client] Fix SQL client 
can't cancel flink job"
 new f0a4b09  [FLINK-17339][table-api] Change default planner to blink
 new 04c52f8  [FLINK-17339][table-planner] Update test cases due to default 
planner change.
 new 2421f94  [FLINK-17339][sql-client] Change default planner to blink.
 new cb523d3  [FLINK-17339][table-planner-blink] Simplify test cases due to 
default planner change.
 new aa4e1d6  [FLINK-17339][examples] Update examples due to default 
planner change.
 new a9f8a0b  [FLINK-17339][misc] Update tests due to default planner 
change.

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 flink-connectors/flink-connector-cassandra/pom.xml |   2 +-
 .../flink/sql/tests/StreamSQLTestProgram.java  |   2 +-
 .../table/examples/java/StreamSQLExample.java  |  14 +-
 .../examples/java/StreamWindowSQLExample.java  |  10 +-
 .../table/examples/scala/StreamSQLExample.scala|  14 +-
 .../org/apache/flink/ml/common/MLEnvironment.java  |   5 +-
 .../apache/flink/ml/common/MLEnvironmentTest.java  |   4 +-
 .../table/tests/test_environment_settings.py   |   4 +-
 .../table/tests/test_table_environment_api.py  |   6 +-
 flink-python/pyflink/testing/test_case_utils.py|   8 +-
 .../org/apache/flink/api/scala/FlinkILoop.scala|   5 +-
 .../client/config/entries/ExecutionEntry.java  |   6 +-
 .../flink/table/api/EnvironmentSettings.java   |  18 +--
 .../apache/flink/table/api/EnvironmentTest.java|   4 +-
 .../flink/table/api/TableUtilsStreamingITCase.java |   2 +-
 .../planner/catalog/CatalogConstraintTest.java |   2 +-
 .../flink/table/planner/catalog/CatalogITCase.java |   2 +-
 .../planner/catalog/CatalogStatisticsTest.java |   2 +-
 .../flink/table/api/TableEnvironmentITCase.scala   |   4 +-
 .../flink/table/api/TableEnvironmentTest.scala |   2 +-
 .../table/planner/catalog/CatalogTableITCase.scala |   4 +-
 .../table/planner/catalog/CatalogTableTest.scala   |   2 +-
 .../table/planner/catalog/CatalogViewITCase.scala  |   4 +-
 .../table/planner/codegen/agg/AggTestBase.scala|   4 +-
 .../expressions/utils/ExpressionTestBase.scala |   3 +-
 .../table/validation/JoinValidationTest.scala  |   4 +-
 .../validation/SetOperatorsValidationTest.scala|   6 +-
 .../planner/plan/utils/FlinkRelOptUtilTest.scala   |   2 +-
 .../harness/GroupAggregateHarnessTest.scala|   2 +-
 .../runtime/harness/OverWindowHarnessTest.scala|   2 +-
 .../harness/TableAggregateHarnessTest.scala|   2 +-
 .../planner/runtime/utils/BatchTestBase.scala  |   2 +-
 .../planner/runtime/utils/StreamingTestBase.scala  |   2 +-
 .../flink/table/planner/utils/TableTestBase.scala  |  11 +-
 .../table/api/StreamTableEnvironmentTest.java  |   3 +-
 .../table/runtime/stream/sql/FunctionITCase.java   |   8 +-
 .../table/runtime/stream/sql/JavaSqlITCase.java|  13 +-
 .../table/runtime/stream/table/FunctionITCase.java |   9 +-
 .../table/runtime/stream/table/ValuesITCase.java   |   5 +-
 .../flink/table/api/stream/ExplainTest.scala   |   9 +-
 .../api/stream/StreamTableEnvironmentTest.scala|   5 +-
 .../StreamTableEnvironmentValidationTest.scala |   7 +-
 .../sql/validation/InsertIntoValidationTest.scala  |  16 +--
 .../validation/InsertIntoValidationTest.scala  |   9 +-
 .../table/validation/JoinValidationTest.scala  |  16 ++-
 .../validation/SetOperatorsValidationTest.scala|  14 +-
 .../validation/UnsupportedOpsValidationTest.scala  |  25 +---
 .../api/validation/TableSourceValidationTest.scala |  59 +
 .../flink/table/catalog/CatalogTableITCase.scala   |   8 +-
 .../table/match/PatternTranslatorTestBase.scala|   5 +-
 .../runtime/harness/AggFunctionHarnessTest.scala   |   9 +-
 .../harness/GroupAggregateHarnessTest.scala|  18 ++-
 .../table/runtime/harness/MatchHarnessTest.scala   |   6 +-
 .../harness/TableAggregateHarnessTest.scala|  21 ++-
 .../runtime/stream/TimeAttributesITCase.scala  | 115 +++-
 .../runtime/stream/sql/InsertIntoITCase.scala  |  49 ++-
 .../table/runtime/stream/sql/JoinITCase.scala  | 120 +
 .../runtime/stream/sql/MatchRecognizeITCase.scala  |  81 +---
 .../runtime/stream/sql/OverWindowITCase.scala  | 146 +++--
 .../runtime/stream/sql/SetOperatorsITCase.scala|  12 +-
 .../table/runtime/stream/sql/SortITCase.scala  |   9 +-
 .../flink/table/runtime/stream/sql/SqlITCase.scala | 142 ++--
 .../runtime/stream/sql/Tab

[flink] branch master updated: [FLINK-16935][table-planner-blink] Enable or delete most of the ignored test cases in blink planner.

2020-04-25 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 490e2af  [FLINK-16935][table-planner-blink] Enable or delete most of 
the ignored test cases in blink planner.
490e2af is described below

commit 490e2af0f9cc3021b6423535768e9f3604b27519
Author: Kurt Young 
AuthorDate: Sat Apr 25 20:36:59 2020 +0800

[FLINK-16935][table-planner-blink] Enable or delete most of the ignored 
test cases in blink planner.

This closes #11874
---
 .../flink/table/planner/plan/utils/SortUtil.scala  |   3 +-
 .../batch/table/stringexpr/SetOperatorsTest.xml|  14 +++
 .../plan/stream/table/TemporalTableJoinTest.xml|  90 +++
 .../planner/expressions/DecimalTypeTest.scala  |   3 +-
 .../expressions/NonDeterministicTests.scala|  58 +-
 .../validation/ScalarFunctionsValidationTest.scala |  13 +--
 .../table/planner/plan/batch/table/JoinTest.scala  |  15 +--
 .../batch/table/stringexpr/SetOperatorsTest.scala  |   3 +-
 .../plan/stream/table/TemporalTableJoinTest.scala  |  27 ++---
 .../TemporalTableJoinValidationTest.scala  |  18 ++-
 .../planner/runtime/batch/sql/LimitITCase.scala|   4 +-
 .../planner/runtime/batch/sql/MiscITCase.scala |  12 +-
 .../planner/runtime/batch/sql/RankITCase.scala |   1 -
 .../sql/agg/DistinctAggregateITCaseBase.scala  |  34 +-
 .../batch/sql/agg/WindowAggregateITCase.scala  | 125 +
 .../sql/join/JoinConditionTypeCoerceITCase.scala   |   4 +-
 .../batch/sql/join/JoinWithoutKeyITCase.scala  |   5 +-
 .../runtime/batch/sql/join/SemiJoinITCase.scala|   3 +-
 .../runtime/batch/table/AggregationITCase.scala|   9 +-
 .../planner/runtime/batch/table/CalcITCase.scala   |  16 +--
 .../planner/runtime/stream/sql/SortITCase.scala|   4 +-
 .../runtime/stream/sql/SplitAggregateITCase.scala  |   5 +-
 .../planner/runtime/stream/table/CalcITCase.scala  |   1 -
 .../planner/runtime/stream/table/JoinITCase.scala  |   2 -
 24 files changed, 177 insertions(+), 292 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/SortUtil.scala
 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/SortUtil.scala
index 89f7540..59dec37 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/SortUtil.scala
+++ 
b/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/SortUtil.scala
@@ -46,8 +46,7 @@ object SortUtil {
 if (fetch != null) {
   getLimitStart(offset) + RexLiteral.intValue(fetch)
 } else {
-  // TODO return Long.MaxValue when providing FlinkRelMdRowCount on Sort ?
-  Integer.MAX_VALUE
+  Long.MaxValue
 }
   }
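
The SortUtil hunk above derives the row range selected by a LIMIT/OFFSET pair. A rough sketch of the same arithmetic as standalone helpers (hypothetical, not planner code):

    // start is inclusive, end is exclusive; a missing fetch means "no upper
    // bound", which is why the old Integer.MAX_VALUE cap was too small.
    static long limitStart(Long offset) {
        return offset == null ? 0L : offset;
    }

    static long limitEnd(Long offset, Long fetch) {
        return fetch == null ? Long.MAX_VALUE : limitStart(offset) + fetch;
    }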
 
diff --git 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/table/stringexpr/SetOperatorsTest.xml
 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/table/stringexpr/SetOperatorsTest.xml
index 0ebd86b..c8a528a 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/table/stringexpr/SetOperatorsTest.xml
+++ 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/table/stringexpr/SetOperatorsTest.xml
@@ -16,6 +16,20 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
  [XML hunk bodies omitted: the markup of the added test-case blocks was stripped in the archive rendering]
diff --git 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalTypeTest.scala
 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalTypeTest.scala
index 635f188..2a88898 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalTypeTest.scala
+++ 
b/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalTypeTest.scala
@@ -26,6 +26,7 @@ import 
org.apache.flink.table.planner.expressions.utils.ExpressionTestBase
 import 
org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter.fromLogicalTypeToTypeInfo
 import org.apache.flink.table.types.logical.DecimalType
 import org.apache.flink.types.Row
+
 import org.junit.{Ignore, Test}
 
 class DecimalTypeTest extends ExpressionTestBase {
@@ -132,7 +133,7 @@ class DecimalTypeTest extends ExpressionTestBase {
   @Ignore
   @Test
   def testDefaultDecimalCasting(): Unit = {
-// from String
+//// from Str

[flink] branch master updated (0650e4b -> 322d589)

2020-04-22 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0650e4b  [FLINK-15669] [sql-client] Fix SQL client can't cancel flink 
job
 add 3ce6c09  [FLINK-17106][sql-parser] Support to parse TEMPORARY and IF 
NOT EXISTS in view definition
 add 322d589  [FLINK-17106][table-planner] Support create and drop view in 
both planners

No new revisions were added by this update.

Summary of changes:
 .../src/main/codegen/data/Parser.tdd   |  11 +-
 .../src/main/codegen/includes/parserImpls.ftl  |  86 -
 .../apache/flink/sql/parser/ddl/SqlCreateView.java |  25 +-
 .../apache/flink/sql/parser/ddl/SqlDropView.java   |  15 +-
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  50 +++
 .../table/api/internal/TableEnvironmentImpl.java   |  37 +-
 .../apache/flink/table/catalog/CatalogManager.java |   2 +-
 ...tionOperation.java => CreateViewOperation.java} |  46 ++-
 ...nctionOperation.java => DropViewOperation.java} |  34 +-
 .../operations/SqlToOperationConverter.java|  67 
 .../flink/table/api/TableEnvironmentTest.scala | 376 ++-
 .../table/planner/catalog/CatalogTableITCase.scala |   2 -
 .../table/planner/catalog/CatalogViewITCase.scala  | 333 +
 .../table/sqlexec/SqlToOperationConverter.java |  80 
 .../flink/table/api/internal/TableEnvImpl.scala|  28 ++
 .../flink/table/calcite/FlinkPlannerImpl.scala |   2 +-
 .../flink/table/catalog/CatalogManagerTest.java|   2 +-
 .../flink/table/catalog/CatalogTableITCase.scala   | 403 +
 18 files changed, 1516 insertions(+), 83 deletions(-)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{CreateCatalogFunctionOperation.java
 => CreateViewOperation.java} (62%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/{DropCatalogFunctionOperation.java
 => DropViewOperation.java} (72%)
 create mode 100644 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/catalog/CatalogViewITCase.scala
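
A minimal usage sketch of the newly parsed syntax (table and view names assumed; executeSql is the entry point introduced by FLINK-16366 below):

    TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());
    // IF NOT EXISTS turns re-creation of an existing view into a no-op:
    tEnv.executeSql(
            "CREATE TEMPORARY VIEW IF NOT EXISTS recent_orders AS "
                    + "SELECT * FROM Orders WHERE amount > 10");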



[flink] branch master updated (422aa89 -> 43876ff)

2020-04-20 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 422aa89  [FLINK-15347] Add dedicated termination future executor to 
SupervisorActor
 add 43876ff  [FLINK-17263][table-planner-blink] Replace 
RepeatFamilyOperandTypeChecker with CompositeOperandTypeChecker in planner

No new revisions were added by this update.

Summary of changes:
 .../functions/sql/FlinkSqlOperatorTable.java   |   6 +-
 .../plan/type/RepeatFamilyOperandTypeChecker.java  | 125 -
 2 files changed, 3 insertions(+), 128 deletions(-)
 delete mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/type/RepeatFamilyOperandTypeChecker.java



[flink] branch master updated (6e61339 -> e0d2db7)

2020-04-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6e61339  [FLINK-17068][python][tests] Ensure the permission of scripts 
set correctly before executing the Python tests (#11805)
 add e0d2db7  [FLINK-16366] [table] Introduce executeSql method in 
TableEnvironment

No new revisions were added by this update.

Summary of changes:
 .../test-scripts/test_table_shaded_dependencies.sh |   2 +
 flink-python/pyflink/table/table_environment.py|  14 ++
 flink-table/flink-sql-client/pom.xml   |   8 -
 .../table/client/cli/CliChangelogResultView.java   |   4 +-
 .../flink/table/client/cli/CliTableResultView.java |   3 +-
 .../table/client/cli/CliTableauResultView.java | 119 +---
 .../apache/flink/table/client/cli/CliUtils.java|  46 -
 .../src/main/resources/META-INF/NOTICE |   1 -
 .../table/client/cli/CliTableauResultViewTest.java |   2 +-
 .../flink/table/client/cli/CliUtilsTest.java   |  68 ---
 .../org/apache/flink/table/api/ResultKind.java}|  17 +-
 .../apache/flink/table/api/TableEnvironment.java   |  13 ++
 .../api/{PlannerConfig.java => TableResult.java}   |  45 +++--
 .../table/api/internal/TableEnvironmentImpl.java   | 154 +++
 .../flink/table/api/internal/TableResultImpl.java  | 170 
 .../table/operations/ShowCatalogsOperation.java|  10 +-
 .../table/operations/ShowDatabasesOperation.java   |  10 +-
 .../table/operations/ShowFunctionsOperation.java   |  10 +-
 .../flink/table/operations/ShowOperation.java} |  11 +-
 .../table/operations/ShowTablesOperation.java  |  10 +-
 flink-table/flink-table-common/pom.xml |   9 +
 .../org/apache/flink/table/utils/PrintUtils.java   | 216 +
 .../apache/flink/table/utils/PrintUtilsTest.java   | 201 +++
 flink-table/flink-table-planner-blink/pom.xml  |  10 +
 .../operations/SqlToOperationConverter.java|  36 
 .../src/main/resources/META-INF/NOTICE |   1 +
 .../main/resources/META-INF/licenses/LICENSE.icu4j |   0
 .../table/planner/calcite/FlinkPlannerImpl.scala   |   7 +-
 .../flink/table/api/TableEnvironmentTest.scala | 183 -
 flink-table/flink-table-planner/pom.xml|  10 +
 .../table/sqlexec/SqlToOperationConverter.java |  36 
 .../src/main/resources/META-INF/NOTICE |   1 +
 .../main/resources/META-INF/licenses/LICENSE.icu4j |   0
 .../flink/table/api/internal/TableEnvImpl.scala| 134 ++---
 .../flink/table/calcite/FlinkPlannerImpl.scala |   7 +-
 .../api/batch/BatchTableEnvironmentTest.scala  | 202 ++-
 .../flink/table/utils/MockTableEnvironment.scala   |   4 +-
 flink-table/flink-table-uber-blink/pom.xml |   7 +
 flink-table/flink-table-uber/pom.xml   |   7 +
 39 files changed, 1426 insertions(+), 362 deletions(-)
 delete mode 100644 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliUtilsTest.java
 copy 
flink-table/{flink-table-common/src/main/java/org/apache/flink/table/api/ExpressionParserException.java
 => 
flink-table-api-java/src/main/java/org/apache/flink/table/api/ResultKind.java} 
(71%)
 copy 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/{PlannerConfig.java
 => TableResult.java} (54%)
 create mode 100644 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableResultImpl.java
 copy 
flink-filesystems/flink-azure-fs-hadoop/src/main/java/org/apache/flink/fs/azurefs/AzureFSFactory.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowCatalogsOperation.java
 (79%)
 copy 
flink-filesystems/flink-azure-fs-hadoop/src/main/java/org/apache/flink/fs/azurefs/AzureFSFactory.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowDatabasesOperation.java
 (79%)
 copy 
flink-filesystems/flink-azure-fs-hadoop/src/main/java/org/apache/flink/fs/azurefs/AzureFSFactory.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowFunctionsOperation.java
 (79%)
 copy 
flink-table/{flink-table-common/src/main/java/org/apache/flink/table/functions/python/PythonFunctionKind.java
 => 
flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowOperation.java}
 (80%)
 copy 
flink-filesystems/flink-azure-fs-hadoop/src/main/java/org/apache/flink/fs/azurefs/AzureFSFactory.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowTablesOperation.java
 (79%)
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/PrintUtils.java
 create mode 100644 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/PrintUtilsTest.java
 copy flink-table/{flink-sq
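
A minimal sketch of the new entry point (table contents assumed; accessors beyond executeSql itself may differ slightly in this revision):

    TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());
    TableResult result = tEnv.executeSql("SHOW TABLES");
    // ResultKind.SUCCESS marks statements without a result set (e.g. DDL);
    // ResultKind.SUCCESS_WITH_CONTENT marks statements that return rows.
    ResultKind kind = result.getResultKind();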

[flink] branch master updated (a20ed14 -> 5915277)

2020-04-18 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a20ed14  [FLINK-8864][sql-client] Add cli command history in sql 
client and make it available after restart
 add 5915277  [FLINK-16823][table-planner-blink] Fix 
TIMESTAMPDIFF/TIMESTAMPADD yield incorrect results

No new revisions were added by this update.

Summary of changes:
 .../planner/codegen/calls/BuiltInMethods.scala |  8 ++
 .../planner/codegen/calls/ScalarOperatorGens.scala | 16 ++--
 .../codegen/calls/TimestampDiffCallGen.scala   |  8 +-
 .../planner/expressions/ScalarFunctionsTest.scala  |  6 +-
 .../planner/expressions/TemporalTypesTest.scala| 16 
 .../table/runtime/functions/SqlFunctionUtils.java  | 87 ++
 6 files changed, 126 insertions(+), 15 deletions(-)



[flink] branch master updated: [FLINK-8864][sql-client] Add cli command history in sql client and make it available after restart

2020-04-18 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new a20ed14  [FLINK-8864][sql-client] Add cli command history in sql 
client and make it available after restart
a20ed14 is described below

commit a20ed14374c75d95a0fda5d33bea7f902088d670
Author: Kurt Young 
AuthorDate: Thu Apr 16 13:19:55 2020 +0800

[FLINK-8864][sql-client] Add cli command history in sql client and make it 
available after restart

This closes #11765
---
 .../org/apache/flink/table/client/SqlClient.java   | 12 +-
 .../apache/flink/table/client/cli/CliClient.java   | 18 ++--
 .../apache/flink/table/client/cli/CliOptions.java  |  9 +++-
 .../flink/table/client/cli/CliOptionsParser.java   | 20 -
 .../apache/flink/table/client/cli/CliUtils.java| 21 +
 .../flink/table/client/cli/CliClientTest.java  | 50 +++---
 .../flink/table/client/cli/CliResultViewTest.java  |  7 ++-
 7 files changed, 123 insertions(+), 14 deletions(-)

diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
index 599c580..3f68dc6 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
@@ -26,11 +26,14 @@ import org.apache.flink.table.client.gateway.Executor;
 import org.apache.flink.table.client.gateway.SessionContext;
 import org.apache.flink.table.client.gateway.local.LocalExecutor;
 
+import org.apache.commons.lang3.SystemUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
@@ -119,7 +122,14 @@ public class SqlClient {
private void openCli(String sessionId, Executor executor) {
CliClient cli = null;
try {
-   cli = new CliClient(sessionId, executor);
+   Path historyFilePath;
+   if (options.getHistoryFilePath() != null) {
+   historyFilePath = 
Paths.get(options.getHistoryFilePath());
+   } else {
+   historyFilePath = 
Paths.get(System.getProperty("user.home"),
+   SystemUtils.IS_OS_WINDOWS ? 
"flink-sql-history" : ".flink-sql-history");
+   }
+   cli = new CliClient(sessionId, executor, 
historyFilePath);
// interactive CLI mode
if (options.getUpdateStatement() == null) {
cli.open();
diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
index 3521de6..f2e4732a 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
@@ -83,7 +83,7 @@ public class CliClient {
 * afterwards using {@link #close()}.
 */
@VisibleForTesting
-   public CliClient(Terminal terminal, String sessionId, Executor 
executor) {
+   public CliClient(Terminal terminal, String sessionId, Executor 
executor, Path historyFilePath) {
this.terminal = terminal;
this.sessionId = sessionId;
this.executor = executor;
@@ -106,6 +106,18 @@ public class CliClient {
lineReader.setVariable(LineReader.ERRORS, 1);
// perform code completion case insensitive
lineReader.option(LineReader.Option.CASE_INSENSITIVE, true);
+   // set history file path
+   if (Files.exists(historyFilePath) || 
CliUtils.createFile(historyFilePath)) {
+   String msg = "Command history file path: " + 
historyFilePath;
+   // print it in the command line as well as log file
+   System.out.println(msg);
+   LOG.info(msg);
+   lineReader.setVariable(LineReader.HISTORY_FILE, 
historyFilePath);
+   } else {
+   String msg = "Unable to create history file: " + 
historyFilePath;
+   System.out.println(msg);
+   LOG.warn(msg);
+   }
 
// create prompt
prompt = new AttributedStringBu

[flink] branch master updated (0fa0830 -> 0f83584)

2020-04-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0fa0830  [FLINK-17080][java,table] Fix possible NPE in 
Utils#CollectHelper
 add 0f83584  [FLINK-17103][table-planner-blink] Supports dynamic options 
for Blink planner

No new revisions were added by this update.

Summary of changes:
 ...ection.html => table_config_configuration.html} |  16 +-
 .../src/main/codegen/includes/parserImpls.ftl  |   8 +-
 .../apache/flink/sql/parser/dml/RichSqlInsert.java |  23 ++
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |   6 -
 .../flink/table/api/config/TableConfigOptions.java |  28 +-
 .../flink/table/catalog/CatalogTableImpl.java  |   5 +
 .../flink/table/catalog/ConnectorCatalogTable.java |   5 +
 .../operations/CatalogSinkModifyOperation.java |  13 +-
 .../apache/flink/table/catalog/CatalogTable.java   |   7 +
 .../flink/table/factories/TableSinkFactory.java|   2 +-
 .../table/planner/delegation/PlannerContext.java   |   2 +
 .../table/planner/hint/FlinkHintStrategies.java|  48 
 .../flink/table/planner/hint/FlinkHints.java   |  70 +
 .../operations/SqlToOperationConverter.java|  17 +-
 .../table/planner/delegation/PlannerBase.scala |  18 +-
 .../planner/plan/schema/CatalogSourceTable.scala   |  38 ++-
 .../planner/plan/schema/TableSourceTable.scala |  37 ++-
 .../operations/SqlToOperationConverterTest.java|  34 +++
 .../org.apache.flink.table.factories.TableFactory  |   1 +
 .../table/planner/plan/hint/OptionsHintTest.xml| 303 +
 .../flink/table/api/TableEnvironmentITCase.scala   |   2 +-
 .../flink/table/api/TableEnvironmentTest.scala |   4 +-
 .../table/planner/catalog/CatalogTableITCase.scala |  71 -
 .../table/planner/plan/hint/OptionsHintTest.scala  | 198 ++
 .../runtime/batch/sql/TableSourceITCase.scala  |   8 +-
 .../runtime/stream/sql/TableSourceITCase.scala |   8 +-
 ...bleSources.scala => testTableSourceSinks.scala} | 145 +-
 .../table/sqlexec/SqlToOperationConverter.java |   4 +-
 28 files changed, 1043 insertions(+), 78 deletions(-)
 copy docs/_includes/generated/{security_auth_zk_section.html => 
table_config_configuration.html} (51%)
 copy 
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptionsInternal.java
 => 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java
 (51%)
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/hint/FlinkHintStrategies.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/hint/FlinkHints.java
 create mode 100644 
flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/hint/OptionsHintTest.xml
 create mode 100644 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/hint/OptionsHintTest.scala
 rename 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/utils/{testTableSources.scala
 => testTableSourceSinks.scala} (86%)
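
A minimal sketch of the feature (table name and option key are placeholders; the gating flag is assumed to be the option added to TableConfigOptions in this change):

    tEnv.getConfig().getConfiguration()
            .setBoolean("table.dynamic-table-options.enabled", true);
    Table t = tEnv.sqlQuery(
            "SELECT id, name FROM kafka_source /*+ OPTIONS('k'='v') */");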



[flink] branch master updated: [hotfix][doc] Typo fixups in the concepts/runtime part of the website. (#11740)

2020-04-15 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ae2b82f  [hotfix][doc] Typo fixups in the concepts/runtime part of the 
website. (#11740)
ae2b82f is described below

commit ae2b82f549c7669f41018594d5b7c6f5e56545e2
Author: Etienne Chauchot 
AuthorDate: Wed Apr 15 08:22:20 2020 +0200

[hotfix][doc] Typo fixups in the concepts/runtime part of the website. 
(#11740)
---
 docs/concepts/flink-architecture.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/concepts/flink-architecture.md 
b/docs/concepts/flink-architecture.md
index 3bc14dd..183c170 100644
--- a/docs/concepts/flink-architecture.md
+++ b/docs/concepts/flink-architecture.md
@@ -44,7 +44,7 @@ The Flink runtime consists of two types of processes:
 tasks, coordinates checkpoints, coordinates recovery on failures, etc.
 
 There is always at least one *Flink Master*. A high-availability setup
-might have multiple *Flink Masters*, one of which one is always the
+might have multiple *Flink Masters*, one of which is always the
 *leader*, and the others are *standby*.
 
   - The *TaskManagers* (also called *workers*) execute the *tasks* (or more
@@ -102,7 +102,7 @@ certain amount of reserved managed memory. Note that no CPU 
isolation happens
 here; currently slots only separate the managed memory of tasks.
 
 By adjusting the number of task slots, users can define how subtasks are
-isolated from each other.  Having one slot per TaskManager means each task
+isolated from each other.  Having one slot per TaskManager means that each task
 group runs in a separate JVM (which can be started in a separate container, for
 example). Having multiple slots means more subtasks share the same JVM. Tasks
 in the same JVM share TCP connections (via multiplexing) and heartbeat



[flink] branch master updated (8615625 -> b7c8686)

2020-04-11 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 8615625  [FLINK-16761][python] Return JobExecutionResult for Python 
execute method (#11670)
 add b7c8686  [hotfix][doc] Typo fixups in the concepts part of the 
website. (#11703)

No new revisions were added by this update.

Summary of changes:
 docs/concepts/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated (1063d56 -> 9242daf)

2020-03-17 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1063d56  [FLINK-16413][hive] Reduce hive source parallelism when limit 
push down
 add 9242daf  [FLINK-16623][sql-client] Add the shorthand 'desc' for 
describe in sql client

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/flink/table/client/cli/CliClient.java| 1 +
 .../main/java/org/apache/flink/table/client/cli/SqlCommandParser.java | 4 
 .../java/org/apache/flink/table/client/cli/SqlCommandParserTest.java  | 3 +++
 3 files changed, 8 insertions(+)



[flink] branch master updated (2424301 -> 22915d0)

2020-03-13 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2424301  [FLINK-16538][python][docs] Restructure Python Table API 
documentation (#11375)
 add 22915d0  [FLINK-16464][sql-client] Result-mode tableau may shift when 
content contains Chinese string in SQL CLI (#11334)

No new revisions were added by this update.

Summary of changes:
 flink-table/flink-sql-client/pom.xml   |   8 +
 .../table/client/cli/CliTableauResultView.java |  34 +-
 .../apache/flink/table/client/cli/CliUtils.java|  31 ++
 .../src/main/resources/META-INF/NOTICE |   1 +
 .../main/resources/META-INF/licenses/LICENSE.icu4j | 414 +
 .../table/client/cli/CliTableauResultViewTest.java |  61 ++-
 .../flink/table/client/cli/CliUtilsTest.java   |  25 ++
 7 files changed, 553 insertions(+), 21 deletions(-)
 create mode 100644 
flink-table/flink-sql-client/src/main/resources/META-INF/licenses/LICENSE.icu4j
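
The shift happens because CJK characters occupy two terminal cells while String.length() counts them as one. A sketch of width-aware measurement with the icu4j dependency added here (an approximation of the idea, not the exact CliUtils logic):

    import com.ibm.icu.lang.UCharacter;
    import com.ibm.icu.lang.UProperty;

    // Count terminal cells rather than chars: full-width and wide code points
    // (most CJK) take two cells, everything else one.
    static int displayWidth(String s) {
        int width = 0;
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            int eaw = UCharacter.getIntPropertyValue(cp, UProperty.EAST_ASIAN_WIDTH);
            width += (eaw == UCharacter.EastAsianWidth.FULLWIDTH
                    || eaw == UCharacter.EastAsianWidth.WIDE) ? 2 : 1;
            i += Character.charCount(cp);
        }
        return width;
    }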



[flink] branch master updated (108f528 -> 625da7b)

2020-03-12 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 108f528  [FLINK-16350][e2e] Run half of streaming HA tests against ZK 
3.5
 add 625da7b  [FLINK-16539][sql-client] Trim the value when calling set 
command in sql cli

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/flink/table/client/cli/CliClient.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink] branch master updated (b3f8e33 -> 2b13a41)

2020-03-03 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b3f8e33  [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page 
into Chinese (#11168)
 add 2b13a41  [FLINK-16362][table] Remove deprecated `emitDataStream` 
method in StreamTableSink

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/sourceSinks.md   | 12 ++--
 docs/dev/table/sourceSinks.zh.md| 12 ++--
 .../connectors/cassandra/CassandraAppendTableSink.java  |  4 
 .../elasticsearch/ElasticsearchUpsertTableSinkBase.java |  5 -
 .../Elasticsearch6UpsertTableSinkFactoryTest.java   |  2 +-
 .../Elasticsearch7UpsertTableSinkFactoryTest.java   |  2 +-
 .../streaming/connectors/kafka/KafkaTableSinkBase.java  |  5 -
 .../kafka/KafkaTableSourceSinkFactoryTestBase.java  |  4 ++--
 .../apache/flink/addons/hbase/HBaseUpsertTableSink.java |  5 -
 .../flink/api/java/io/jdbc/JDBCAppendTableSink.java |  5 -
 .../flink/api/java/io/jdbc/JDBCUpsertTableSink.java |  5 -
 .../flink/api/java/io/jdbc/JDBCAppendTableSinkTest.java |  2 +-
 .../client/gateway/local/CollectStreamTableSink.java|  5 -
 .../client/gateway/utils/TestTableSinkFactoryBase.java  |  5 +++--
 .../java/org/apache/flink/table/sinks/CsvTableSink.java |  5 -
 .../apache/flink/table/sinks/OutputFormatTableSink.java |  5 -
 .../org/apache/flink/table/sinks/StreamTableSink.java   | 17 +
 .../java/org/apache/flink/table/api/TableUtils.java |  5 -
 .../flink/table/planner/sinks/CollectTableSink.scala|  4 
 .../factories/utils/TestCollectionTableFactory.scala|  4 
 .../runtime/batch/sql/PartitionableSinkITCase.scala |  5 -
 .../table/planner/runtime/utils/StreamTestSink.scala| 12 
 .../table/planner/utils/MemoryTableSourceSinkUtil.scala |  5 -
 .../factories/utils/TestCollectionTableFactory.scala|  4 ++--
 .../table/runtime/stream/table/TableSinkITCase.scala| 15 ++-
 .../flink/table/utils/MemoryTableSourceSinkUtil.scala   |  4 ++--
 .../walkthrough/common/table/SpendReportTableSink.java  |  6 --
 27 files changed, 35 insertions(+), 129 deletions(-)
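
With emitDataStream removed, a StreamTableSink routes its input through consumeDataStream and returns the resulting DataStreamSink. A sketch of the remaining contract (the schema-related TableSink methods are elided; PrintSinkFunction is a stand-in for a real sink function):

    @Override
    public DataStreamSink<?> consumeDataStream(DataStream<Row> dataStream) {
        return dataStream.addSink(new PrintSinkFunction<>());
    }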



[flink] branch master updated (3e10f0a -> 242efcd)

2020-03-03 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 3e10f0a  [FLINK-16281][jdbc] Parameter 'maxRetryTimes' can not work in 
AppendOnlyWriter (#11223)
 add 242efcd  [FLINK-12814][sql-client] Support a traditional and scrolling 
view of result (tableau format)

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/client/cli/CliClient.java   |  38 +-
 .../table/client/cli/CliTableauResultView.java | 404 +
 .../client/config/entries/ExecutionEntry.java  |   8 +
 .../table/client/gateway/ResultDescriptor.java |  13 +-
 .../table/client/gateway/local/LocalExecutor.java  |   3 +-
 .../table/client/gateway/local/ResultStore.java|  10 +-
 .../flink/table/client/cli/CliResultViewTest.java  |   7 +-
 .../table/client/cli/CliTableauResultViewTest.java | 373 +++
 .../table/client/cli/utils/TerminalUtils.java  |   6 +-
 9 files changed, 840 insertions(+), 22 deletions(-)
 create mode 100644 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java
 create mode 100644 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java
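
The new view is selected through the execution.result-mode entry extended in ExecutionEntry above, e.g. SET execution.result-mode=tableau; in the CLI, after which results are printed as a bordered ASCII table directly in the terminal instead of opening the interactive result view.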



[flink] branch master updated (4ad2c5f -> 0398c30)

2020-02-12 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4ad2c5f  [FLINK-15445][jdbc] Support new type system and check 
unsupported data types for JDBC table source
 add 0398c30  [FLINK-15990][table][python] Remove register source and sink 
in ConnectTableDescriptor

No new revisions were added by this update.

Summary of changes:
 docs/ops/python_shell.md   |  4 +-
 docs/ops/python_shell.zh.md|  4 +-
 flink-python/dev/pip_test_code.py  |  2 +-
 flink-python/pyflink/shell.py  |  4 +-
 flink-python/pyflink/table/descriptors.py  | 45 -
 .../pyflink/table/examples/batch/word_count.py |  2 +-
 flink-python/pyflink/table/table_environment.py| 26 +-
 .../pyflink/table/tests/test_descriptor.py | 53 ++--
 .../pyflink/table/tests/test_shell_example.py  |  4 +-
 .../table/api/java/BatchTableEnvironment.java  |  6 +--
 .../apache/flink/table/api/TableEnvironment.java   |  6 +--
 .../flink/table/api/internal/Registration.java | 22 -
 .../table/api/internal/TableEnvironmentImpl.java   | 11 +
 .../table/descriptors/ConnectTableDescriptor.java  | 56 --
 .../flink/table/factories/TableFactoryUtil.java| 17 ---
 .../flink/table/api/TableEnvironmentTest.java  |  9 +---
 .../org/apache/flink/table/utils/ParserMock.java   |  2 +-
 .../table/api/scala/BatchTableEnvironment.scala|  6 +--
 .../table/api/internal/BatchTableEnvImpl.scala |  9 +---
 .../table/descriptors/TableDescriptorTest.scala|  2 +-
 20 files changed, 40 insertions(+), 250 deletions(-)
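
With the register methods gone, descriptor-built tables are created through createTemporaryTable instead. A minimal sketch (descriptor classes and the file path are assumptions):

    tEnv.connect(new FileSystem().path("/path/to/words.csv"))
            .withFormat(new Csv())
            .withSchema(new Schema().field("word", DataTypes.STRING()))
            .createTemporaryTable("WordsSource");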



[flink] branch release-1.10 updated (4df788a -> ba5b5b7)

2020-02-05 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4df788a  [FLINK-15335] Replace `add-dependencies-for-IDEA` profile
 add ba5b5b7  [FLINK-15858][hive] Store generic table schema as properties

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/hive/hive_catalog.md|  11 +-
 docs/dev/table/hive/hive_catalog.zh.md |  11 +-
 .../flink/table/catalog/hive/HiveCatalog.java  | 124 -
 .../table/catalog/hive/HiveCatalogConfig.java  |   3 +
 .../hive/HiveCatalogGenericMetadataTest.java   |  43 +++
 .../table/catalog/hive/HiveCatalogITCase.java  |  36 ++
 .../src/test/resources/csv/test3.csv   |   5 +
 .../apache/flink/table/catalog/CatalogTest.java|  36 +++---
 8 files changed, 190 insertions(+), 79 deletions(-)
 create mode 100644 
flink-connectors/flink-connector-hive/src/test/resources/csv/test3.csv



[flink] branch master updated (712207a -> 4fcd877)

2020-02-04 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 712207a  [hotfix][docs] Fix missing double quotes in catalog docs
 add 4fcd877  [FLINK-15858][hive] Store generic table schema as properties

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/hive/hive_catalog.md|  11 +-
 docs/dev/table/hive/hive_catalog.zh.md |  11 +-
 .../flink/table/catalog/hive/HiveCatalog.java  | 124 -
 .../table/catalog/hive/HiveCatalogConfig.java  |   3 +
 .../hive/HiveCatalogGenericMetadataTest.java   |  43 +++
 .../table/catalog/hive/HiveCatalogITCase.java  |  36 ++
 .../src/test/resources/csv/test3.csv   |   5 +
 .../apache/flink/table/catalog/CatalogTest.java|  36 +++---
 8 files changed, 190 insertions(+), 79 deletions(-)
 create mode 100644 
flink-connectors/flink-connector-hive/src/test/resources/csv/test3.csv



[flink] branch release-1.9 updated: [FLINK-15577][table-planner] Add Window specs to WindowAggregate nodes' digests

2020-01-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 578a709  [FLINK-15577][table-planner] Add Window specs to 
WindowAggregate nodes' digests
578a709 is described below

commit 578a70901230e83351287ffa6b73b27f5a16d8ad
Author: Benoit Hanotte 
AuthorDate: Mon Jan 13 10:48:25 2020 +0100

[FLINK-15577][table-planner] Add Window specs to WindowAggregate nodes' 
digests

The RelNode's digest is used by the Calcite HepPlanner to avoid adding
duplicate vertices to the graph. If an equivalent vertex was already
present in the graph, then that vertex is used in place of the newly
generated one.
This means that the digest needs to contain all the information
necessary to identify a vertex and distinguish it from similar
- but not equivalent - vertices.

In the case of the `WindowAggregation` nodes, the window specs are
currently not in the digest, meaning that two aggregations with the same
signatures and expressions but different windows are considered
equivalent by the planner, which is not correct and will lead to an
invalid Physical Plan.

This commit fixes this issue and adds tests ensuring that the window
specs are in the digest, and that similar aggregations on two
different windows are not considered equivalent.

This closes #10854

(cherry picked from commit 244718553742c086eefc95f927d7b26af597d40a)
---
 .../plan/logical/rel/LogicalWindowAggregate.scala  |  2 +-
 .../logical/rel/LogicalWindowTableAggregate.scala  |  2 +-
 .../logical/FlinkLogicalWindowAggregate.scala  | 10 +++-
 .../logical/FlinkLogicalWindowTableAggregate.scala | 10 +++-
 .../table/api/batch/sql/GroupWindowTest.scala  | 59 
 .../table/api/stream/sql/GroupWindowTest.scala | 64 ++
 .../table/GroupWindowTableAggregateTest.scala  | 59 
 7 files changed, 202 insertions(+), 4 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
index b87afd6..ee456c4 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
@@ -49,7 +49,7 @@ class LogicalWindowAggregate(
 for (property <- namedProperties) {
   pw.item(property.name, property.property)
 }
-pw
+pw.item("window", window.toString)
   }
 
   override def copy(
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
index 02db874..3c722f5 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
@@ -50,7 +50,7 @@ class LogicalWindowTableAggregate(
 for (property <- namedProperties) {
   pw.item(property.name, property.property)
 }
-pw
+pw.item("window", window.toString)
   }
 
   override def copy(traitSet: RelTraitSet, inputs: util.List[RelNode]): 
TableAggregate = {
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
index 0c289c1..26deb4a 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
@@ -25,7 +25,7 @@ import org.apache.calcite.rel.`type`.RelDataType
 import org.apache.calcite.rel.convert.ConverterRule
 import org.apache.calcite.rel.core.{Aggregate, AggregateCall}
 import org.apache.calcite.rel.metadata.RelMetadataQuery
-import org.apache.calcite.rel.{RelNode, RelShuttle}
+import org.apache.calcite.rel.{RelNode, RelShuttle, RelWriter}
 import org.apache.calcite.sql.SqlKind
 import org.apache.calcite.util.ImmutableBitSet
 import org.apache.flink.table.calcite.FlinkRelBuilder.NamedWindowProperty
@@ -52,6 +52,14 @@ class FlinkLogicalWindowAggregate(
 
   def getNamedProperties: Seq[NamedWindowProperty] = namedProperties
 
+  ov
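
A sketch of the query shape the fix guards against (table and column names assumed): two aggregations identical except for the window size. With the window spec absent from the digest, HepPlanner treated the two WindowAggregate nodes as equivalent and merged them:

    Table shortWindows = tEnv.sqlQuery(
            "SELECT COUNT(*) FROM Orders GROUP BY TUMBLE(rowtime, INTERVAL '2' HOUR)");
    Table longWindows = tEnv.sqlQuery(
            "SELECT COUNT(*) FROM Orders GROUP BY TUMBLE(rowtime, INTERVAL '4' HOUR)");
    // After the fix the two plans keep distinct WindowAggregate vertices.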

[flink] branch release-1.10 updated (cad4134 -> 85d17b1)

2020-01-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cad4134  [FLINK-15657][python][doc] Fix the python table api doc link
 new 98d209e  [FLINK-15577][table-planner] Add Window specs to 
WindowAggregate nodes' digests
 new 85d17b1  [FLINK-15577][table-planner-blink] Add different windows 
tests to blink planner

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../plan/batch/sql/agg/WindowAggregateTest.xml | 148 +
 .../plan/stream/sql/agg/WindowAggregateTest.xml|  89 +
 .../planner/plan/stream/table/GroupWindowTest.xml  |  25 
 .../plan/batch/sql/agg/WindowAggregateTest.scala   |  27 
 .../plan/stream/sql/agg/WindowAggregateTest.scala  |  27 
 .../plan/stream/table/GroupWindowTest.scala|  31 -
 .../plan/logical/rel/LogicalWindowAggregate.scala  |   2 +-
 .../logical/rel/LogicalWindowTableAggregate.scala  |   2 +-
 .../logical/FlinkLogicalWindowAggregate.scala  |  10 +-
 .../logical/FlinkLogicalWindowTableAggregate.scala |  10 +-
 .../table/api/batch/sql/GroupWindowTest.scala  |  59 
 .../table/api/stream/sql/GroupWindowTest.scala |  64 +
 .../table/GroupWindowTableAggregateTest.scala  |  59 
 13 files changed, 547 insertions(+), 6 deletions(-)



[flink] 02/02: [FLINK-15577][table-planner-blink] Add different windows tests to blink planner

2020-01-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 85d17b13b5da3a6627f81096ff382580134516ac
Author: Benoit Hanotte 
AuthorDate: Tue Jan 14 11:19:09 2020 +0100

[FLINK-15577][table-planner-blink] Add different windows tests to blink 
planner

The Blink planner does not appear to be subject to the bug described in
FLINK-15577. For safety, we add the tests anyway, to guard against a
regression that would introduce the issue in the Blink planner.

(cherry picked from commit fdc10141418205c520ee4667285ed857d92c3740)
---
 .../plan/batch/sql/agg/WindowAggregateTest.xml | 148 +
 .../plan/stream/sql/agg/WindowAggregateTest.xml|  89 +
 .../planner/plan/stream/table/GroupWindowTest.xml  |  25 
 .../plan/batch/sql/agg/WindowAggregateTest.scala   |  27 
 .../plan/stream/sql/agg/WindowAggregateTest.scala  |  27 
 .../plan/stream/table/GroupWindowTest.scala|  31 -
 6 files changed, 345 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/agg/WindowAggregateTest.xml
 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/agg/WindowAggregateTest.xml
index b91d909..07b7b71 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/agg/WindowAggregateTest.xml
+++ 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/agg/WindowAggregateTest.xml
@@ -111,6 +111,154 @@ Calc(select=[CAST(/(-($f0, /(*($f1, $f1), $f2)), $f2)) AS 
EXPR$0, CAST(/(-($f0,
  [XML hunk bodies omitted: the markup of the added test-case blocks was stripped in the archive rendering]
diff --git 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/table/GroupWindowTest.xml
 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/table/GroupWindowTest.xml
index 165c523..36a9d63 100644
--- 
a/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/table/GroupWindowTest.xml
+++ 
b/flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/table/GroupWindowTest.xml
@@ -194,6 +194,31 @@ GroupWindowAggregate(groupBy=[string], 
window=[SessionGroupWindow('w, rowtime, 7
  [XML hunk bodies omitted: the markup of the added test-case blocks was stripped in the archive rendering]

[flink] 01/02: [FLINK-15577][table-planner] Add Window specs to WindowAggregate nodes' digests

2020-01-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 98d209eab2f6e6cad0bc678876a36090c774082b
Author: Benoit Hanotte 
AuthorDate: Mon Jan 13 10:48:25 2020 +0100

[FLINK-15577][table-planner] Add Window specs to WindowAggregate nodes' 
digests

The RelNode's digest is used by the Calcite HepPlanner to avoid adding
duplicate vertices to the graph. If an equivalent vertex was already
present in the graph, then that vertex is used in place of the newly
generated one.
This means that the digest needs to contain all the information
necessary to identify a vertex and distinguish it from similar
- but not equivalent - vertices.

In the case of the `WindowAggregate` nodes, the window specs are
currently not in the digest, meaning that two aggregations with the same
signatures and expressions but different windows are considered
equivalent by the planner. This is incorrect and leads to an invalid
physical plan.

This commit fixes the issue and adds tests ensuring that the window
specs are part of the digest and that similar aggregations on two
different windows are no longer considered equivalent.

This closes #10854

(cherry picked from commit 244718553742c086eefc95f927d7b26af597d40a)
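
To make the digest problem concrete, here is a minimal, self-contained
sketch; the class, digest strings and window literals are illustrative
stand-ins, not the actual Calcite/Flink classes or output:

import java.util.HashSet;
import java.util.Set;

public class DigestDemo {

    // Digest as produced before the fix: the window spec is missing.
    static String digestBefore(String groupBy, String select) {
        return "WindowAggregate(groupBy=[" + groupBy + "], select=[" + select + "])";
    }

    // Digest after the fix: explainTerms appends the window spec.
    static String digestAfter(String groupBy, String select, String window) {
        return digestBefore(groupBy, select) + ", window=[" + window + "]";
    }

    public static void main(String[] args) {
        String w5 = "TumblingGroupWindow('w, 'rowtime, 300000.millis)";
        String w10 = "TumblingGroupWindow('w, 'rowtime, 600000.millis)";

        // The HepPlanner deduplicates vertices by digest, so two window
        // aggregates whose digests collide get merged into one vertex.
        Set<String> before = new HashSet<>();
        before.add(digestBefore("string", "COUNT(int1)")); // 5-minute window
        before.add(digestBefore("string", "COUNT(int1)")); // 10-minute window
        System.out.println(before.size()); // 1 -> the two windows are wrongly merged

        Set<String> after = new HashSet<>();
        after.add(digestAfter("string", "COUNT(int1)", w5));
        after.add(digestAfter("string", "COUNT(int1)", w10));
        System.out.println(after.size()); // 2 -> kept distinct
    }
}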
---
 .../plan/logical/rel/LogicalWindowAggregate.scala  |  2 +-
 .../logical/rel/LogicalWindowTableAggregate.scala  |  2 +-
 .../logical/FlinkLogicalWindowAggregate.scala  | 10 +++-
 .../logical/FlinkLogicalWindowTableAggregate.scala | 10 +++-
 .../table/api/batch/sql/GroupWindowTest.scala  | 59 
 .../table/api/stream/sql/GroupWindowTest.scala | 64 ++
 .../table/GroupWindowTableAggregateTest.scala  | 59 
 7 files changed, 202 insertions(+), 4 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
index b87afd6..ee456c4 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowAggregate.scala
@@ -49,7 +49,7 @@ class LogicalWindowAggregate(
 for (property <- namedProperties) {
   pw.item(property.name, property.property)
 }
-pw
+pw.item("window", window.toString)
   }
 
   override def copy(
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
index 02db874..3c722f5 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/logical/rel/LogicalWindowTableAggregate.scala
@@ -50,7 +50,7 @@ class LogicalWindowTableAggregate(
 for (property <- namedProperties) {
   pw.item(property.name, property.property)
 }
-pw
+pw.item("window", window.toString)
   }
 
   override def copy(traitSet: RelTraitSet, inputs: util.List[RelNode]): 
TableAggregate = {
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
index 0c289c1..26deb4a 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalWindowAggregate.scala
@@ -25,7 +25,7 @@ import org.apache.calcite.rel.`type`.RelDataType
 import org.apache.calcite.rel.convert.ConverterRule
 import org.apache.calcite.rel.core.{Aggregate, AggregateCall}
 import org.apache.calcite.rel.metadata.RelMetadataQuery
-import org.apache.calcite.rel.{RelNode, RelShuttle}
+import org.apache.calcite.rel.{RelNode, RelShuttle, RelWriter}
 import org.apache.calcite.sql.SqlKind
 import org.apache.calcite.util.ImmutableBitSet
 import org.apache.flink.table.calcite.FlinkRelBuilder.NamedWindowProperty
@@ -52,6 +52,14 @@ class FlinkLogicalWindowAggregate(
 
   def getNamedProperties: Seq[NamedWindowProperty] = namedProperties
 
+  override def explainTerms(pw: RelWriter): RelWriter = {
+super.explainTerms(pw)
+for (property <- namedProperties) {
+  pw.item(property.name, property.property)
+}
+pw.item("window", window.toString)
+  }

[flink] branch master updated (2ef9f47 -> fdc1014)

2020-01-19 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2ef9f47  [FLINK-15657][python][doc] Fix the python table api doc link
 add 2447185  [FLINK-15577][table-planner] Add Window specs to 
WindowAggregate nodes' digests
 add fdc1014  [FLINK-15577][table-planner-blink] Add different windows 
tests to blink planner

No new revisions were added by this update.

Summary of changes:
 .../plan/batch/sql/agg/WindowAggregateTest.xml | 148 +
 .../plan/stream/sql/agg/WindowAggregateTest.xml|  89 +
 .../planner/plan/stream/table/GroupWindowTest.xml  |  25 
 .../plan/batch/sql/agg/WindowAggregateTest.scala   |  27 
 .../plan/stream/sql/agg/WindowAggregateTest.scala  |  27 
 .../plan/stream/table/GroupWindowTest.scala|  31 -
 .../plan/logical/rel/LogicalWindowAggregate.scala  |   2 +-
 .../logical/rel/LogicalWindowTableAggregate.scala  |   2 +-
 .../logical/FlinkLogicalWindowAggregate.scala  |  10 +-
 .../logical/FlinkLogicalWindowTableAggregate.scala |  10 +-
 .../table/api/batch/sql/GroupWindowTest.scala  |  59 
 .../table/api/stream/sql/GroupWindowTest.scala |  64 +
 .../table/GroupWindowTableAggregateTest.scala  |  59 
 13 files changed, 547 insertions(+), 6 deletions(-)



[flink] branch release-1.10 updated: [FLINK-15466][table-planner-blink] Fix the wrong plan when using distinct aggregations with filter

2020-01-09 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new ce7968e  [FLINK-15466][table-planner-blink] Fix the wrong plan when 
using distinct aggregations with filter
ce7968e is described below

commit ce7968ecafcde69596897a2243235161cd347b13
Author: Shuo Cheng 
AuthorDate: Thu Jan 9 17:04:24 2020 +0800

[FLINK-15466][table-planner-blink] Fix the wrong plan when using distinct 
aggregations with filter

This closes #10760

(cherry picked from commit 6f2e9abffb0b1ef68e4f2cf058a24524b61e88a1)
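
As a rough illustration of the fix (a simplified stand-in, not the
Calcite RexBuilder API): before this change, a row was routed to a
distinct aggregate by the group-marker comparison $g = v alone; the fix
additionally ANDs in the distinct call's own FILTER predicate:

import java.util.function.IntPredicate;

public class DistinctFilterDemo {

    public static void main(String[] args) {
        long groupValue = 1L;                  // literal compared against the $g marker
        IntPredicate callFilter = c -> c > 0;  // stands in for FILTER (WHERE c > 0)

        // rows as (gMarker, c) pairs
        long[][] rows = {{1L, 5L}, {1L, -3L}, {0L, 7L}};

        for (long[] row : rows) {
            boolean before = row[0] == groupValue; // the call's filter is ignored
            boolean after = row[0] == groupValue && callFilter.test((int) row[1]);
            System.out.println("row(" + row[0] + "," + row[1] + "): before=" + before + ", after=" + after);
        }
        // The (1, -3) row shows the bug: it used to feed the distinct
        // aggregate even though the call's own FILTER clause excludes it.
    }
}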
---
 ...FlinkAggregateExpandDistinctAggregatesRule.java |  26 +-
 .../plan/batch/sql/agg/DistinctAggregateTest.xml   | 561 ++---
 ...nkAggregateExpandDistinctAggregatesRuleTest.xml | 195 +--
 .../plan/batch/sql/agg/DistinctAggregateTest.scala |  64 +--
 .../DistinctAggregateTestBase.scala}   |  81 ++-
 ...AggregateExpandDistinctAggregatesRuleTest.scala | 143 +-
 .../sql/agg/DistinctAggregateITCaseBase.scala  |  59 ++-
 7 files changed, 775 insertions(+), 354 deletions(-)

diff --git 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateExpandDistinctAggregatesRule.java
 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateExpandDistinctAggregatesRule.java
index 8023100..4b44eec 100644
--- 
a/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateExpandDistinctAggregatesRule.java
+++ 
b/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateExpandDistinctAggregatesRule.java
@@ -417,14 +417,17 @@ public final class 
FlinkAggregateExpandDistinctAggregatesRule extends RelOptRule
Aggregate aggregate) {
	final Set<ImmutableBitSet> groupSetTreeSet =
new TreeSet<>(ImmutableBitSet.ORDERING);
+   final Map<ImmutableBitSet, Integer> 
groupSetToDistinctAggCallFilterArg = new HashMap<>();
for (AggregateCall aggCall : aggregate.getAggCallList()) {
if (!aggCall.isDistinct()) {
groupSetTreeSet.add(aggregate.getGroupSet());
} else {
-   groupSetTreeSet.add(
+   ImmutableBitSet groupSet =

ImmutableBitSet.of(aggCall.getArgList())

.setIf(aggCall.filterArg, aggCall.filterArg >= 0)
-   
.union(aggregate.getGroupSet()));
+   
.union(aggregate.getGroupSet());
+   
groupSetToDistinctAggCallFilterArg.put(groupSet, aggCall.filterArg);
+   groupSetTreeSet.add(groupSet);
}
}
 
@@ -471,10 +474,21 @@ public final class 
FlinkAggregateExpandDistinctAggregatesRule extends RelOptRule
final RexNode nodeZ = nodes.remove(nodes.size() - 1);
for (Map.Entry<ImmutableBitSet, Integer> entry : 
filters.entrySet()) {
final long v = groupValue(fullGroupSet, 
entry.getKey());
-   nodes.add(
-   relBuilder.alias(
-   
relBuilder.equals(nodeZ, relBuilder.literal(v)),
-   "$g_" + v));
+   // Get and remap the filterArg of the distinct 
aggregate call.
+   int distinctAggCallFilterArg = 
remap(fullGroupSet,
+   
groupSetToDistinctAggCallFilterArg.getOrDefault(entry.getKey(), -1));
+   RexNode expr;
+   if (distinctAggCallFilterArg < 0) {
+   expr = relBuilder.equals(nodeZ, 
relBuilder.literal(v));
+   } else {
+   RexBuilder rexBuilder = 
aggregate.getCluster().getRexBuilder();
+   // merge the filter of the distinct 
aggregate call itself.
+   expr = relBuilder.and(
+   relBuilder.equals(nodeZ, 
relBuilder.literal(v)),
+   
rexBuilder.makeCall(SqlStdOperatorTable.IS_TRUE,
+   
relBuilder.field(distinctAggCallFilterArg)));

[flink] branch master updated (d31bd5a -> 6f2e9ab)

2020-01-09 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d31bd5a  [hotfix][javadocs] Fix class reference
 add 6f2e9ab  [FLINK-15466][table-planner-blink] Fix the wrong plan when 
using distinct aggregations with filter

No new revisions were added by this update.

Summary of changes:
 ...FlinkAggregateExpandDistinctAggregatesRule.java |  26 +-
 .../plan/batch/sql/agg/DistinctAggregateTest.xml   | 561 ++---
 ...nkAggregateExpandDistinctAggregatesRuleTest.xml | 195 +--
 .../plan/batch/sql/agg/DistinctAggregateTest.scala |  64 +--
 .../DistinctAggregateTestBase.scala}   |  81 ++-
 ...AggregateExpandDistinctAggregatesRuleTest.scala | 143 +-
 .../sql/agg/DistinctAggregateITCaseBase.scala  |  59 ++-
 7 files changed, 775 insertions(+), 354 deletions(-)
 copy 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/{rules/logical/FlinkAggregateExpandDistinctAggregatesRuleTest.scala
 => common/DistinctAggregateTestBase.scala} (69%)



[flink] branch release-1.10 updated: [FLINK-15231][table-runtime-blink] Remove useless methods in AbstractHeapVector

2019-12-30 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new 62f0303  [FLINK-15231][table-runtime-blink] Remove useless methods in 
AbstractHeapVector
62f0303 is described below

commit 62f0303f07120c51317aff171f105ecb0c65a2be
Author: Zhenghua Gao 
AuthorDate: Tue Dec 31 11:49:43 2019 +0800

[FLINK-15231][table-runtime-blink] Remove useless methods in 
AbstractHeapVector

This closes #10562
---
 .../dataformat/vector/heap/AbstractHeapVector.java | 50 --
 1 file changed, 50 deletions(-)

diff --git 
a/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/AbstractHeapVector.java
 
b/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/AbstractHeapVector.java
index d64bebc..af864df 100644
--- 
a/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/AbstractHeapVector.java
+++ 
b/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/AbstractHeapVector.java
@@ -18,10 +18,7 @@
 
 package org.apache.flink.table.dataformat.vector.heap;
 
-import org.apache.flink.table.dataformat.Decimal;
 import org.apache.flink.table.dataformat.vector.AbstractColumnVector;
-import org.apache.flink.table.types.logical.DecimalType;
-import org.apache.flink.table.types.logical.LogicalType;
 
 import java.util.Arrays;
 
@@ -85,51 +82,4 @@ public abstract class AbstractHeapVector extends 
AbstractColumnVector {
public HeapIntVector getDictionaryIds() {
return dictionaryIds;
}
-
-   public static AbstractHeapVector[] allocateHeapVectors(LogicalType[] 
fieldTypes, int maxRows) {
-   AbstractHeapVector[] columns = new 
AbstractHeapVector[fieldTypes.length];
-   for (int i = 0; i < fieldTypes.length; i++) {
-   columns[i] = createHeapColumn(fieldTypes[i], maxRows);
-   }
-   return columns;
-   }
-
-   public static AbstractHeapVector createHeapColumn(LogicalType 
fieldType, int maxRows) {
-   switch (fieldType.getTypeRoot()) {
-   case BOOLEAN:
-   return new HeapBooleanVector(maxRows);
-   case TINYINT:
-   return new HeapByteVector(maxRows);
-   case DOUBLE:
-   return new HeapDoubleVector(maxRows);
-   case FLOAT:
-   return new HeapFloatVector(maxRows);
-   case INTEGER:
-   case DATE:
-   case TIME_WITHOUT_TIME_ZONE:
-   return new HeapIntVector(maxRows);
-   case BIGINT:
-   case TIMESTAMP_WITHOUT_TIME_ZONE:
-   case TIMESTAMP_WITH_LOCAL_TIME_ZONE:
-   return new HeapLongVector(maxRows);
-   case DECIMAL:
-   DecimalType decimalType = (DecimalType) 
fieldType;
-   if 
(Decimal.is32BitDecimal(decimalType.getPrecision())) {
-   return new HeapIntVector(maxRows);
-   } else if 
(Decimal.is64BitDecimal(decimalType.getPrecision())) {
-   return new HeapLongVector(maxRows);
-   } else {
-   return new HeapBytesVector(maxRows);
-   }
-   case SMALLINT:
-   return new HeapShortVector(maxRows);
-   case CHAR:
-   case VARCHAR:
-   case BINARY:
-   case VARBINARY:
-   return new HeapBytesVector(maxRows);
-   default:
-   throw new 
UnsupportedOperationException(fieldType  + " is not supported now.");
-   }
-   }
 }



[flink] branch master updated (b2d7571 -> 9dc2528)

2019-12-30 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b2d7571  [FLINK-15439][table][docs] Fix incorrect description about 
unsupported DDL in "Queries" page
 add 9dc2528  [FLINK-15231][table-runtime-blink] Remove useless methods in 
AbstractHeapVector

No new revisions were added by this update.

Summary of changes:
 .../dataformat/vector/heap/AbstractHeapVector.java | 50 --
 1 file changed, 50 deletions(-)



[flink] branch release-1.10 updated: [FLINK-15175][sql client] Fix SqlClient not supporting with..select statements

2019-12-23 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new 3af8e1e  [FLINK-15175][sql client] Fix SqlClient not supporting 
with..select statements
3af8e1e is described below

commit 3af8e1ef31aa61cf08ed910df32a1a26dd26f892
Author: Liupengcheng 
AuthorDate: Wed Dec 18 15:13:41 2019 +0800

[FLINK-15175][sql client] Fix SqlClient not supporting with..select statements

This closes #10619
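
A quick standalone check of the regex change; the CASE_INSENSITIVE and
DOTALL flags are an assumption about how SqlCommandParser compiles its
patterns:

import java.util.regex.Pattern;

public class WithSelectRegexDemo {

    public static void main(String[] args) {
        int flags = Pattern.CASE_INSENSITIVE | Pattern.DOTALL; // assumed parser flags
        Pattern oldSelect = Pattern.compile("(SELECT.*)", flags);
        Pattern newSelect = Pattern.compile("(WITH.*SELECT.*|SELECT.*)", flags);

        String stmt = "WITH t AS (SELECT 1) SELECT * FROM t";

        // The old pattern requires the statement to start with SELECT,
        // so a leading WITH clause is not recognized as a query.
        System.out.println(oldSelect.matcher(stmt).matches()); // false
        // The added alternative accepts WITH ... SELECT as well.
        System.out.println(newSelect.matcher(stmt).matches()); // true
    }
}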
---
 .../java/org/apache/flink/table/client/cli/SqlCommandParser.java| 2 +-
 .../org/apache/flink/table/client/cli/SqlCommandParserTest.java | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
index da1fd94..a01b3c0 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
@@ -120,7 +120,7 @@ public final class SqlCommandParser {
SINGLE_OPERAND),
 
SELECT(
-   "(SELECT.*)",
+   "(WITH.*SELECT.*|SELECT.*)",
SINGLE_OPERAND),
 
INSERT_INTO(
diff --git 
a/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
 
b/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
index b5bad48..718a5cb 100644
--- 
a/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
+++ 
b/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
@@ -56,6 +56,12 @@ public class SqlCommandParserTest {
"   SELECT  complicated FROM table",
new SqlCommandCall(SqlCommand.SELECT, new 
String[]{"SELECT  complicated FROM table"}));
testValidSqlCommand(
+   "WITH t as (select complicated from table) select 
complicated from t",
+   new SqlCommandCall(SqlCommand.SELECT, new 
String[]{"WITH t as (select complicated from table) select complicated from 
t"}));
+   testValidSqlCommand(
+   "   WITH t as (select complicated from table) select 
complicated from t",
+   new SqlCommandCall(SqlCommand.SELECT, new 
String[]{"WITH t as (select complicated from table) select complicated from 
t"}));
+   testValidSqlCommand(
"INSERT INTO other SELECT 1+1",
new SqlCommandCall(SqlCommand.INSERT_INTO, new 
String[]{"INSERT INTO other SELECT 1+1"}));
testValidSqlCommand(



[flink] branch master updated (5ec3fea -> fc4927e)

2019-12-23 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5ec3fea  [FLINK-15344][documentation] Update limitations in hive udf 
document
 add fc4927e  [FLINK-15175][sql client] Fix SqlClient not supporting 
with..select statements

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/flink/table/client/cli/SqlCommandParser.java| 2 +-
 .../org/apache/flink/table/client/cli/SqlCommandParserTest.java | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)


