[flink] 05/09: [FLINK-25391][connector-kinesis] Forward catalog table options

2022-01-25 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2cb86ff03747499bfceda74cb8cc1ea48c385452
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 16:21:52 2022 +0100

[FLINK-25391][connector-kinesis] Forward catalog table options
---
 docs/content/docs/connectors/table/kinesis.md  | 79 --
 .../kinesis/table/KinesisDynamicTableFactory.java  | 16 -
 2 files changed, 88 insertions(+), 7 deletions(-)

diff --git a/docs/content/docs/connectors/table/kinesis.md 
b/docs/content/docs/connectors/table/kinesis.md
index f26b1e9..1840862 100644
--- a/docs/content/docs/connectors/table/kinesis.md
+++ b/docs/content/docs/connectors/table/kinesis.md
@@ -122,11 +122,12 @@ Connector Options
 
 
 
-  Option
-  Required
-  Default
-  Type
-  Description
+Option
+Required
+Forwarded
+Default
+Type
+Description
 
 
   Common Options
@@ -136,6 +137,7 @@ Connector Options
 
   connector
   required
+  no
   (none)
   String
   Specify what connector to use. For Kinesis use 
'kinesis'.
@@ -143,6 +145,7 @@ Connector Options
 
   stream
   required
+  yes
   (none)
   String
   Name of the Kinesis data stream backing this table.
@@ -150,6 +153,7 @@ Connector Options
 
   format
   required
+  no
   (none)
   String
   The format used to deserialize and serialize Kinesis data stream 
records. See Data Type Mapping for 
details.
@@ -157,6 +161,7 @@ Connector Options
 
   aws.region
   optional
+  no
   (none)
   String
   The AWS region where the stream is defined. Either this or 
aws.endpoint are required.
@@ -164,6 +169,7 @@ Connector Options
 
   aws.endpoint
   optional
+  no
   (none)
   String
   The AWS endpoint for Kinesis (derived from the AWS region setting if 
not set). Either this or aws.region are required.
@@ -185,6 +191,7 @@ Connector Options
 
   aws.credentials.provider
   optional
+  no
   AUTO
   String
   A credentials provider to use when authenticating against the 
Kinesis endpoint. See Authentication for 
details.
@@ -192,6 +199,7 @@ Connector Options
 
  aws.credentials.basic.accesskeyid
  optional
+  no
  (none)
  String
  The AWS access key ID to use when setting credentials provider 
type to BASIC.
@@ -199,6 +207,7 @@ Connector Options
 
  aws.credentials.basic.secretkey
  optional
+  no
  (none)
  String
  The AWS secret key to use when setting credentials provider type 
to BASIC.
@@ -206,6 +215,7 @@ Connector Options
 
  aws.credentials.profile.path
  optional
+  no
  (none)
  String
  Optional configuration for profile path if credential provider 
type is set to be PROFILE.
@@ -213,6 +223,7 @@ Connector Options
 
  aws.credentials.profile.name
  optional
+  no
  (none)
  String
  Optional configuration for profile name if credential provider 
type is set to be PROFILE.
@@ -220,6 +231,7 @@ Connector Options
 
  aws.credentials.role.arn
  optional
+  no
  (none)
  String
  The role ARN to use when credential provider type is set to 
ASSUME_ROLE or WEB_IDENTITY_TOKEN.
@@ -227,6 +239,7 @@ Connector Options
 
  aws.credentials.role.sessionName
  optional
+  no
  (none)
  String
  The role session name to use when credential provider type is set 
to ASSUME_ROLE or WEB_IDENTITY_TOKEN.
@@ -234,6 +247,7 @@ Connector Options
 
  aws.credentials.role.externalId
  optional
+  no
  (none)
  String
  The external ID to use when credential provider type is set to 
ASSUME_ROLE.
@@ -241,6 +255,7 @@ Connector Options
 
  aws.credentials.role.provider
  optional
+  no
  (none)
  String
  The credentials provider that provides credentials for assuming 
the role when credential provider type is set to ASSUME_ROLE. Roles can be 
nested, so this value can again be set to ASSUME_ROLE
@@ -248,6 +263,7 @@ Connector Options
 
  aws.credentials.webIdentityToken.file
  optional
+  no
  (none)
  String
  The absolute path to the web identity token file that should be 
used if provider type is set to WEB_IDENTITY_TOKEN.
@@ -262,6 +278,7 @@ Connector Options
 
   scan.stream.initpos
   optional
+  no
   LATEST
   String
   Initial position to be used when reading from the table. See Start Reading Position for details.
@@ -269,6 +286,7 @@ Connector Options

[flink] 07/09: [FLINK-25391][format-avro] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8f77862fa5ecbec5ee26b7b2b68478ad50943a3e
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 16:37:17 2022 +0100

[FLINK-25391][format-avro] Forward catalog table options
---
 .../docs/connectors/table/formats/avro-confluent.md   | 15 ++-
 docs/content/docs/connectors/table/formats/avro.md|  5 -
 .../registry/confluent/RegistryAvroFormatFactory.java | 19 +++
 .../flink/formats/avro/AvroFileFormatFactory.java |  5 +
 4 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/docs/content/docs/connectors/table/formats/avro-confluent.md 
b/docs/content/docs/connectors/table/formats/avro-confluent.md
index cf2fe70..28b33da 100644
--- a/docs/content/docs/connectors/table/formats/avro-confluent.md
+++ b/docs/content/docs/connectors/table/formats/avro-confluent.md
@@ -176,15 +176,17 @@ Format Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
 format
 required
+no
 (none)
 String
 Specify what format to use, here should be 
'avro-confluent'.
@@ -192,6 +194,7 @@ Format Options
 
 avro-confluent.basic-auth.credentials-source
 optional
+yes
 (none)
 String
 Basic auth credentials source for Schema Registry
@@ -199,6 +202,7 @@ Format Options
 
 avro-confluent.basic-auth.user-info
 optional
+yes
 (none)
 String
 Basic auth user info for schema registry
@@ -206,6 +210,7 @@ Format Options
 
 avro-confluent.bearer-auth.credentials-source
 optional
+yes
 (none)
 String
 Bearer auth credentials source for Schema Registry
@@ -213,6 +218,7 @@ Format Options
 
 avro-confluent.bearer-auth.token
 optional
+yes
 (none)
 String
 Bearer auth token for Schema Registry
@@ -220,6 +226,7 @@ Format Options
 
 avro-confluent.properties
 optional
+yes
 (none)
 Map
 Properties map that is forwarded to the underlying Schema 
Registry. This is useful for options that are not officially exposed via Flink 
config options. However, note that Flink options have higher precedence.
@@ -227,6 +234,7 @@ Format Options
 
 avro-confluent.ssl.keystore.location
 optional
+yes
 (none)
 String
 Location / File of SSL keystore
@@ -234,6 +242,7 @@ Format Options
 
 avro-confluent.ssl.keystore.password
 optional
+yes
 (none)
 String
 Password for SSL keystore
@@ -241,6 +250,7 @@ Format Options
 
 avro-confluent.ssl.truststore.location
 optional
+yes
 (none)
 String
 Location / File of SSL truststore
@@ -248,6 +258,7 @@ Format Options
 
 avro-confluent.ssl.truststore.password
 optional
+yes
 (none)
 String
 Password for SSL truststore
@@ -255,6 +266,7 @@ Format Options
 
 avro-confluent.subject
 optional
+yes
 (none)
 String
 The Confluent Schema Registry subject under which to register 
the schema used by this format during serialization. By default, 'kafka' and 
'upsert-kafka' connectors use 'topic_name-value' or 
'topic_name-key' as the default subject name if this format is used as 
the value or key format. But for other connectors (e.g. 'filesystem'), the 
subject option is required when used as sink.
@@ -262,6 +274,7 @@ Format Options
 
 avro-confluent.url
 required
+yes
 (none)
 String
 The URL of the Confluent Schema Registry to fetch/register 
schemas.
diff --git a/docs/content/docs/connectors/table/formats/avro.md 
b/docs/content/docs/connectors/table/formats/avro.md
index 341ca0a..601a9dc 100644
--- a/docs/content/docs/connectors/table/formats/avro.md
+++ b/docs/content/docs/connectors/table/formats/avro.md
@@ -65,15 +65,17 @@ Format Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   format
   required
+  no
   (none)
   String
   Specify what format to use, here should be 'avro'.
@@ -81,6 +83,7 @@ Format Options

[flink] 06/09: [FLINK-25391][connector-hbase] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 5175ed0d48835344c1cd4282372d6b01571d914b
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 16:35:09 2022 +0100

[FLINK-25391][connector-hbase] Forward catalog table options
---
 docs/content/docs/connectors/table/hbase.md| 17 -
 .../hbase1/HBase1DynamicTableFactory.java  | 29 --
 .../hbase2/HBase2DynamicTableFactory.java  | 27 ++--
 .../hbase/table/HBaseConnectorOptionsUtil.java | 18 ++
 4 files changed, 65 insertions(+), 26 deletions(-)

diff --git a/docs/content/docs/connectors/table/hbase.md 
b/docs/content/docs/connectors/table/hbase.md
index 86d45e5..21436cd 100644
--- a/docs/content/docs/connectors/table/hbase.md
+++ b/docs/content/docs/connectors/table/hbase.md
@@ -82,15 +82,17 @@ Connector Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   connector
   required
+  no
   (none)
   String
   Specify what connector to use, valid values are:
@@ -103,6 +105,7 @@ Connector Options
 
   table-name
   required
+  yes
   (none)
   String
   The name of HBase table to connect. By default, the table is in 
'default' namespace. To assign the table a specified namespace you need to use 
'namespace:table'.
@@ -110,6 +113,7 @@ Connector Options
 
   zookeeper.quorum
   required
+  yes
   (none)
   String
   The HBase Zookeeper quorum.
@@ -117,6 +121,7 @@ Connector Options
 
   zookeeper.znode.parent
   optional
+  yes
   /hbase
   String
   The root dir in Zookeeper for HBase cluster.
@@ -124,6 +129,7 @@ Connector Options
 
   null-string-literal
   optional
+  yes
   null
   String
   Representation for null values for string fields. HBase source and 
sink encodes/decodes empty bytes as null values for all types except string 
type.
@@ -131,6 +137,7 @@ Connector Options
 
   sink.buffer-flush.max-size
   optional
+  yes
   2mb
   MemorySize
   Writing option, maximum size in memory of buffered rows for each 
writing request.
@@ -141,6 +148,7 @@ Connector Options
 
   sink.buffer-flush.max-rows
   optional
+  yes
   1000
   Integer
   Writing option, maximum number of rows to buffer for each writing 
request.
@@ -151,6 +159,7 @@ Connector Options
 
   sink.buffer-flush.interval
   optional
+  yes
   1s
   Duration
   Writing option, the interval to flush any buffered rows.
@@ -162,6 +171,7 @@ Connector Options
 
   sink.parallelism
   optional
+  no
   (none)
   Integer
   Defines the parallelism of the HBase sink operator. By default, the 
parallelism is determined by the framework using the same parallelism of the 
upstream chained operator.
@@ -169,6 +179,7 @@ Connector Options
 
   lookup.async
   optional
+  no
   false
   Boolean
  Whether async lookup is enabled. If true, the lookup will be async. 
Note, async only supports hbase-2.2 connector.
@@ -176,6 +187,7 @@ Connector Options
 
   lookup.cache.max-rows
   optional
+  yes
   -1
   Long
   The max number of rows of lookup cache, over this value, the oldest 
rows will be expired. Note, "lookup.cache.max-rows" and "lookup.cache.ttl" 
options must all be specified if any of them is specified. Lookup cache is 
disabled by default.
@@ -183,6 +195,7 @@ Connector Options
 
   lookup.cache.ttl
   optional
+  yes
   0 s
   Duration
  The max time to live for each row in lookup cache, over this time, 
the oldest rows will be expired. Note, "cache.max-rows" and "cache.ttl" options 
must all be specified if any of them is specified. Lookup cache is disabled by 
default.
@@ -190,6 +203,7 @@ Connector Options
 
   lookup.max-retries
   optional
+  yes
   3
   Integer
   The max retry times if lookup database failed.
@@ -197,6 +211,7 @@ Connector Options
 
   properties.*
   optional
+  no
   (none)
   String
   
diff --git 
a/flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java
 
b/flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java
index 3454064..6a3e6ba 100644
--- 
a/flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java
+++ 
b/flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTab

[flink] 04/09: [FLINK-25391][connector-kafka] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 0c34c994f9906e58963f85739fc951221b11d26a
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 16:20:12 2022 +0100

[FLINK-25391][connector-kafka] Forward catalog table options
---
 docs/content/docs/connectors/table/kafka.md| 24 +-
 .../kafka/table/KafkaDynamicTableFactory.java  | 23 +
 2 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/docs/content/docs/connectors/table/kafka.md 
b/docs/content/docs/connectors/table/kafka.md
index 6b728f6..34d73c4 100644
--- a/docs/content/docs/connectors/table/kafka.md
+++ b/docs/content/docs/connectors/table/kafka.md
@@ -179,15 +179,17 @@ Connector Options
 
   Option
   Required
+  Forwarded
   Default
   Type
-  Description
+  Description
 
 
 
 
   connector
   required
+  no
   (none)
   String
   Specify what connector to use, for Kafka use 
'kafka'.
@@ -195,6 +197,7 @@ Connector Options
 
   topic
   required for sink
+  yes
   (none)
   String
   Topic name(s) to read data from when the table is used as source. It 
also supports topic list for source by separating topic by semicolon like 
'topic-1;topic-2'. Note, only one of "topic-pattern" and "topic" 
can be specified for sources. When the table is used as sink, the topic name is 
the topic to write data to. Note topic list is not supported for sinks.
@@ -202,6 +205,7 @@ Connector Options
 
   topic-pattern
   optional
+  yes
   (none)
   String
   The regular expression for a pattern of topic names to read from. 
All topics with names that match the specified regular expression will be 
subscribed by the consumer when the job starts running. Note, only one of 
"topic-pattern" and "topic" can be specified for sources.
@@ -209,6 +213,7 @@ Connector Options
 
   properties.bootstrap.servers
   required
+  yes
   (none)
   String
   Comma separated list of Kafka brokers.
@@ -216,6 +221,7 @@ Connector Options
 
   properties.group.id
   optional for source, not applicable for sink
+  yes
   (none)
   String
   The id of the consumer group for Kafka source. If group ID is not 
specified, an automatically generated id "KafkaSource-{tableIdentifier}" will 
be used.
@@ -223,6 +229,7 @@ Connector Options
 
   properties.*
   optional
+  no
   (none)
   String
   
@@ -232,6 +239,7 @@ Connector Options
 
   format
   required
+  no
   (none)
   String
   The format used to deserialize and serialize the value part of Kafka 
messages.
@@ -243,6 +251,7 @@ Connector Options
 
   key.format
   optional
+  no
   (none)
   String
   The format used to deserialize and serialize the key part of Kafka 
messages.
@@ -254,6 +263,7 @@ Connector Options
 
   key.fields
   optional
+  no
   []
  List&lt;String&gt;
   Defines an explicit list of physical columns from the table schema 
that configure the data
@@ -264,6 +274,7 @@ Connector Options
 
   key.fields-prefix
   optional
+  no
   (none)
   String
   Defines a custom prefix for all fields of the key format to avoid 
name clashes with fields
@@ -277,6 +288,7 @@ Connector Options
 
   value.format
   required
+  no
   (none)
   String
   The format used to deserialize and serialize the value part of Kafka 
messages.
@@ -288,6 +300,7 @@ Connector Options
 
   value.fields-include
   optional
+  no
   ALL
  Enum. Possible values: [ALL, EXCEPT_KEY]
   Defines a strategy how to deal with key columns in the data type of 
the value format. By
@@ -298,6 +311,7 @@ Connector Options
 
   scan.startup.mode
   optional
+  yes
   group-offsets
   String
   Startup mode for Kafka consumer, valid values are 
'earliest-offset', 'latest-offset', 
'group-offsets', 'timestamp' and 
'specific-offsets'.
@@ -306,6 +320,7 @@ Connector Options
 
   scan.startup.specific-offsets
   optional
+  yes
   (none)
   String
   Specify offsets for each partition in case of 
'specific-offsets' startup mode, e.g. 
'partition:0,offset:42;partition:1,offset:300'.
@@ -314,6 +329,7 @@ Connector Options
 
   scan.startup.timestamp-millis
   optional
+  yes
   (none)
   Long
   Start from the specified epoch timestamp (milliseconds) used in case 
of 'timestamp' startup mode.
@@ -321,6 +337,7 @@ Connector Options
 
   scan.topic-partition-discovery.interval
   optional
+  yes
   (none)
   Duration
   Interval for consumer to discover dynamically created Ka

[flink] 09/09: [FLINK-25391][format-json] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fa161d3c5636370f5129320a9ca464e38f88fc6f
Author: slinkydeveloper 
AuthorDate: Wed Jan 12 18:57:48 2022 +0100

[FLINK-25391][format-json] Forward catalog table options

This closes #18290.
---
 docs/content/docs/connectors/table/formats/json.md | 10 +-
 .../java/org/apache/flink/formats/json/JsonFormatFactory.java  |  1 -
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/docs/content/docs/connectors/table/formats/json.md 
b/docs/content/docs/connectors/table/formats/json.md
index 56b8c84..d9c5e5c 100644
--- a/docs/content/docs/connectors/table/formats/json.md
+++ b/docs/content/docs/connectors/table/formats/json.md
@@ -69,15 +69,17 @@ Format Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   format
   required
+  no
   (none)
   String
   Specify what format to use, here should be 'json'.
@@ -85,6 +87,7 @@ Format Options
 
   json.fail-on-missing-field
   optional
+  no
   false
   Boolean
   Whether to fail if a field is missing or not.
@@ -92,6 +95,7 @@ Format Options
 
   json.ignore-parse-errors
   optional
+  no
   false
   Boolean
   Skip fields and rows with parse errors instead of failing.
@@ -100,6 +104,7 @@ Format Options
 
   json.timestamp-format.standard
   optional
+  yes
   'SQL'
   String
   Specify the input and output timestamp format for 
TIMESTAMP and TIMESTAMP_LTZ type. Currently supported 
values are 'SQL' and 'ISO-8601':
@@ -114,6 +119,7 @@ Format Options
 
   json.map-null-key.mode
   optional
+  yes
   'FAIL'
   String
   Specify the handling mode when serializing null keys for map data. 
Currently supported values are 'FAIL', 'DROP' and 
'LITERAL':
@@ -127,6 +133,7 @@ Format Options
 
   json.map-null-key.literal
   optional
+  yes
   'null'
   String
   Specify string literal to replace null key when 
'json.map-null-key.mode' is LITERAL.
@@ -134,6 +141,7 @@ Format Options
 
   json.encode.decimal-as-plain-number
   optional
+  yes
   false
   Boolean
   Encode all decimals as plain numbers instead of possible scientific 
notations. By default, decimals may be written using scientific notation. For 
example, 0.00027 is encoded as 2.7E-8 by default, 
and will be written as 0.00027 if set this option to true.
diff --git 
a/flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonFormatFactory.java
 
b/flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonFormatFactory.java
index bf2e287..74d8c53 100644
--- 
a/flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonFormatFactory.java
+++ 
b/flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonFormatFactory.java
@@ -155,7 +155,6 @@ public class JsonFormatFactory implements 
DeserializationFormatFactory, Serializ
 @Override
public Set&lt;ConfigOption&lt;?&gt;&gt; forwardOptions() {
Set&lt;ConfigOption&lt;?&gt;&gt; options = new HashSet&lt;&gt;();
-options.add(IGNORE_PARSE_ERRORS);
 options.add(TIMESTAMP_FORMAT);
 options.add(MAP_NULL_KEY_MODE);
 options.add(MAP_NULL_KEY_LITERAL);
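
A minimal, self-contained sketch of what removing a key from forwardOptions() implies. Everything below is an illustrative stand-in, not Flink code (ForwardFilterSketch and filterForwardable are hypothetical names): an option can only be copied through from the catalog table if the factory declares it forwardable, so dropping IGNORE_PARSE_ERRORS from the set means that key no longer survives the filtering step.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative stand-in, not Flink code: keep only the catalog table
// options whose keys a factory declared as forwardable.
public class ForwardFilterSketch {

    static Map<String, String> filterForwardable(
            Map<String, String> catalogOptions, Set<String> forwardableKeys) {
        Map<String, String> forwarded = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : catalogOptions.entrySet()) {
            if (forwardableKeys.contains(e.getKey())) {
                forwarded.put(e.getKey(), e.getValue());
            }
        }
        return forwarded;
    }

    public static void main(String[] args) {
        Map<String, String> catalog = Map.of(
                "json.timestamp-format.standard", "SQL",
                "json.ignore-parse-errors", "true");
        // json.ignore-parse-errors is not in the forwardable set, mirroring
        // its removal from JsonFormatFactory#forwardOptions in the diff above.
        Map<String, String> forwarded = filterForwardable(
                catalog, Set.of("json.timestamp-format.standard"));
        System.out.println(forwarded);
    }
}
```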


[flink] 08/09: [FLINK-25391][format-csv] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 0370c36eff86a0af9485405ca3a51663c33cbadf
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 16:42:25 2022 +0100

[FLINK-25391][format-csv] Forward catalog table options
---
 docs/content/docs/connectors/table/formats/csv.md   | 12 +++-
 .../java/org/apache/flink/formats/csv/CsvFormatFactory.java | 13 +
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/docs/content/docs/connectors/table/formats/csv.md 
b/docs/content/docs/connectors/table/formats/csv.md
index 1e9918c..e7cc549 100644
--- a/docs/content/docs/connectors/table/formats/csv.md
+++ b/docs/content/docs/connectors/table/formats/csv.md
@@ -67,15 +67,17 @@ Format Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   format
   required
+  no
   (none)
   String
   Specify what format to use, here should be 'csv'.
@@ -83,6 +85,7 @@ Format Options
 
   csv.field-delimiter
   optional
+  yes
   ,
   String
   Field delimiter character (',' by default), must be 
single character. You can use backslash to specify special characters, e.g. 
'\t' represents the tab character.
@@ -92,6 +95,7 @@ Format Options
 
   csv.disable-quote-character
   optional
+  yes
   false
   Boolean
   Disabled quote character for enclosing field values (false by 
default).
@@ -100,6 +104,7 @@ Format Options
 
   csv.quote-character
   optional
+  yes
   "
   String
   Quote character for enclosing field values (" by 
default).
@@ -107,6 +112,7 @@ Format Options
 
   csv.allow-comments
   optional
+  yes
   false
   Boolean
   Ignore comment lines that start with '#' (disabled by 
default).
@@ -115,6 +121,7 @@ Format Options
 
   csv.ignore-parse-errors
   optional
+  no
   false
   Boolean
   Skip fields and rows with parse errors instead of failing.
@@ -123,6 +130,7 @@ Format Options
 
   csv.array-element-delimiter
   optional
+  yes
   ;
   String
   Array element delimiter string for separating
@@ -131,6 +139,7 @@ Format Options
 
   csv.escape-character
   optional
+  yes
   (none)
   String
   Escape character for escaping values (disabled by default).
@@ -138,6 +147,7 @@ Format Options
 
   csv.null-literal
   optional
+  yes
   (none)
   String
   Null literal string that is interpreted as a null value (disabled by 
default).
diff --git 
a/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFormatFactory.java
 
b/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFormatFactory.java
index 124f4a2..ddfd685 100644
--- 
a/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFormatFactory.java
+++ 
b/flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvFormatFactory.java
@@ -137,6 +137,19 @@ public final class CsvFormatFactory
 return options;
 }
 
+@Override
+public Set&lt;ConfigOption&lt;?&gt;&gt; forwardOptions() {
+Set&lt;ConfigOption&lt;?&gt;&gt; options = new HashSet&lt;&gt;();
+options.add(FIELD_DELIMITER);
+options.add(DISABLE_QUOTE_CHARACTER);
+options.add(QUOTE_CHARACTER);
+options.add(ALLOW_COMMENTS);
+options.add(ARRAY_ELEMENT_DELIMITER);
+options.add(ESCAPE_CHARACTER);
+options.add(NULL_LITERAL);
+return options;
+}
+
 // 
 //  Validation
 // 
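
For readers outside the Flink codebase, the shape of the CsvFormatFactory change can be reproduced with plain JDK types. ForwardOptionsSketch and its nested ConfigOption record are hypothetical stand-ins for Flink's ConfigOption, reduced to just the option key; the chosen keys mirror a few of the CSV options the docs above mark as forwardable.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative stand-in, not Flink code: a factory publishes the subset
// of its options that are safe to forward from the catalog table.
public class ForwardOptionsSketch {

    // Reduced stand-in for Flink's ConfigOption<?>: only the key matters here.
    record ConfigOption(String key) {}

    static Set<ConfigOption> forwardOptions() {
        Set<ConfigOption> options = new LinkedHashSet<>();
        options.add(new ConfigOption("csv.field-delimiter"));
        options.add(new ConfigOption("csv.quote-character"));
        options.add(new ConfigOption("csv.null-literal"));
        // csv.ignore-parse-errors is deliberately absent: the docs above
        // mark it as Forwarded: no.
        return options;
    }

    public static void main(String[] args) {
        for (ConfigOption option : forwardOptions()) {
            System.out.println(option.key());
        }
    }
}
```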


[flink] 02/09: [FLINK-25391][connector-jdbc] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit c61162b30f4b5567ecc2ee29481fcc87e5016428
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 15:01:39 2022 +0100

[FLINK-25391][connector-jdbc] Forward catalog table options
---
 docs/content/docs/connectors/table/jdbc.md | 24 +-
 .../jdbc/table/JdbcDynamicTableFactory.java| 23 +
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/docs/content/docs/connectors/table/jdbc.md 
b/docs/content/docs/connectors/table/jdbc.md
index b289831..81a179a 100644
--- a/docs/content/docs/connectors/table/jdbc.md
+++ b/docs/content/docs/connectors/table/jdbc.md
@@ -93,15 +93,17 @@ Connector Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   connector
   required
+  no
   (none)
   String
   Specify what connector to use, here should be 
'jdbc'.
@@ -109,6 +111,7 @@ Connector Options
 
   url
   required
+  yes
   (none)
   String
   The JDBC database url.
@@ -116,6 +119,7 @@ Connector Options
 
   table-name
   required
+  yes
   (none)
   String
   The name of JDBC table to connect.
@@ -123,6 +127,7 @@ Connector Options
 
   driver
   optional
+  yes
   (none)
   String
   The class name of the JDBC driver to use to connect to this URL, if 
not set, it will automatically be derived from the URL.
@@ -130,6 +135,7 @@ Connector Options
 
   username
   optional
+  yes
   (none)
   String
   The JDBC user name. 'username' and 
'password' must both be specified if any of them is specified.
@@ -137,6 +143,7 @@ Connector Options
 
   password
   optional
+  yes
   (none)
   String
   The JDBC password.
@@ -144,6 +151,7 @@ Connector Options
 
   connection.max-retry-timeout
   optional
+  yes
   60s
   Duration
   Maximum timeout between retries. The timeout should be in second 
granularity and shouldn't be smaller than 1 second.
@@ -151,6 +159,7 @@ Connector Options
 
   scan.partition.column
   optional
+  no
   (none)
   String
   The column name used for partitioning the input. See the following 
Partitioned Scan section for more details.
@@ -158,6 +167,7 @@ Connector Options
 
   scan.partition.num
   optional
+  no
   (none)
   Integer
   The number of partitions.
@@ -165,6 +175,7 @@ Connector Options
 
   scan.partition.lower-bound
   optional
+  no
   (none)
   Integer
   The smallest value of the first partition.
@@ -172,6 +183,7 @@ Connector Options
 
   scan.partition.upper-bound
   optional
+  no
   (none)
   Integer
   The largest value of the last partition.
@@ -179,6 +191,7 @@ Connector Options
 
   scan.fetch-size
   optional
+  yes
   0
   Integer
   The number of rows that should be fetched from the database when 
reading per round trip. If the value specified is zero, then the hint is 
ignored.
@@ -186,6 +199,7 @@ Connector Options
 
   scan.auto-commit
   optional
+  yes
   true
   Boolean
   Sets the auto-commit flag on the JDBC driver (see 
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html#commit_transactions),
@@ -195,6 +209,7 @@ Connector Options
 
   lookup.cache.max-rows
   optional
+  yes
   (none)
   Integer
   The max number of rows of lookup cache, over this value, the oldest 
rows will be expired.
@@ -203,6 +218,7 @@ Connector Options
 
   lookup.cache.ttl
   optional
+  yes
   (none)
   Duration
   The max time to live for each row in lookup cache, over this time, 
the oldest rows will be expired.
@@ -211,6 +227,7 @@ Connector Options
 
   lookup.cache.caching-missing-key
   optional
+  yes
   true
   Boolean
   Flag to cache missing key, true by default
@@ -218,6 +235,7 @@ Connector Options
 
   lookup.max-retries
   optional
+  yes
   3
   Integer
   The max retry times if lookup database failed.
@@ -225,6 +243,7 @@ Connector Options
 
   sink.buffer-flush.max-rows
   optional
+  yes
   100
   Integer
   The max size of buffered records before flush. Can be set to zero to 
disable it.
@@ -232,6 +251,7 @@ Connector Options
 
   sink.buffer-flush.interval
   optional
+  yes
   1s
   Duration
   The flush interval mills, over this time, asynchronous threads will 
flush data. Can be set to '0' to disable it. Note, 
'sink.buffer-flush.max-rows' can be set to '0' with 
the flush interval set allow

[flink] branch master updated (2160735 -> fa161d3)

2022-01-25 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2160735  [FLINK-25739][dist] Include Changelog to flink-dist jar
 new 5e28f66  [FLINK-25391][connector-elasticsearch] Forward catalog table 
options
 new c61162b  [FLINK-25391][connector-jdbc] Forward catalog table options
 new c926031  [FLINK-25391][connector-files] Forward catalog table options
 new 0c34c99  [FLINK-25391][connector-kafka] Forward catalog table options
 new 2cb86ff  [FLINK-25391][connector-kinesis] Forward catalog table options
 new 5175ed0  [FLINK-25391][connector-hbase] Forward catalog table options
 new 8f77862  [FLINK-25391][format-avro] Forward catalog table options
 new 0370c36e [FLINK-25391][format-csv] Forward catalog table options
 new fa161d3  [FLINK-25391][format-json] Forward catalog table options

The 9 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../content/docs/connectors/table/elasticsearch.md | 23 ++-
 docs/content/docs/connectors/table/filesystem.md   | 80 +-
 .../connectors/table/formats/avro-confluent.md | 15 +++-
 docs/content/docs/connectors/table/formats/avro.md |  5 +-
 docs/content/docs/connectors/table/formats/csv.md  | 12 +++-
 docs/content/docs/connectors/table/formats/json.md | 10 ++-
 docs/content/docs/connectors/table/hbase.md| 17 -
 docs/content/docs/connectors/table/jdbc.md | 24 ++-
 docs/content/docs/connectors/table/kafka.md| 24 ++-
 docs/content/docs/connectors/table/kinesis.md  | 79 +++--
 .../table/ElasticsearchDynamicSinkFactoryBase.java | 50 +-
 .../table/Elasticsearch6DynamicSinkFactory.java| 21 +++---
 .../file/table/AbstractFileSystemTable.java| 24 +++
 .../file/table/FileSystemTableFactory.java | 29 +++-
 .../connector/file/table/FileSystemTableSink.java  | 21 --
 .../file/table/FileSystemTableSource.java  | 25 ---
 .../hbase1/HBase1DynamicTableFactory.java  | 29 ++--
 .../hbase2/HBase2DynamicTableFactory.java  | 27 ++--
 .../hbase/table/HBaseConnectorOptionsUtil.java | 18 ++---
 .../jdbc/table/JdbcDynamicTableFactory.java| 23 +++
 .../kafka/table/KafkaDynamicTableFactory.java  | 23 ---
 .../kinesis/table/KinesisDynamicTableFactory.java  | 16 -
 .../confluent/RegistryAvroFormatFactory.java   | 19 +
 .../flink/formats/avro/AvroFileFormatFactory.java  |  5 ++
 .../apache/flink/formats/csv/CsvFormatFactory.java | 13 
 .../flink/formats/json/JsonFormatFactory.java  |  1 -
 26 files changed, 509 insertions(+), 124 deletions(-)


[flink] 03/09: [FLINK-25391][connector-files] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit c9260311637ad47a6e67f154c629ddd49d9f262a
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 14:52:50 2022 +0100

[FLINK-25391][connector-files] Forward catalog table options
---
 docs/content/docs/connectors/table/filesystem.md   | 80 +-
 .../file/table/AbstractFileSystemTable.java| 24 +++
 .../file/table/FileSystemTableFactory.java | 29 +++-
 .../connector/file/table/FileSystemTableSink.java  | 21 --
 .../file/table/FileSystemTableSource.java  | 25 ---
 5 files changed, 131 insertions(+), 48 deletions(-)

diff --git a/docs/content/docs/connectors/table/filesystem.md 
b/docs/content/docs/connectors/table/filesystem.md
index 24dfa11..e2c0aeb 100644
--- a/docs/content/docs/connectors/table/filesystem.md
+++ b/docs/content/docs/connectors/table/filesystem.md
@@ -208,21 +208,27 @@ a timeout that specifies the maximum duration for which a 
file can be open.
 
   
 
-Key
-Default
+Option
+Required
+Forwarded
+Default
 Type
-Description
+Description
 
   
   
 
 sink.rolling-policy.file-size
+optional
+yes
 128MB
 MemorySize
 The maximum part file size before rolling.
 
 
 sink.rolling-policy.rollover-interval
+optional
+yes
 30 min
 Duration
 The maximum time duration a part file can stay open before rolling 
(by default 30 min to avoid too many small files).
@@ -230,6 +236,8 @@ a timeout that specifies the maximum duration for which a 
file can be open.
 
 
 sink.rolling-policy.check-interval
+optional
+yes
 1 min
 Duration
 The interval for checking time based rolling policies. This 
controls the frequency to check whether a part file should rollover based on 
'sink.rolling-policy.rollover-interval'.
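The three rolling-policy options above combine into a simple decision: roll the part file when it exceeds the size threshold or has been open longer than the rollover interval, checked at the configured interval. A minimal stand-in sketch of that decision (the class and method names are invented for illustration; the real policy lives in Flink's filesystem connector):

```java
import java.time.Duration;

// Sketch of the rolling decision driven by sink.rolling-policy.file-size and
// sink.rolling-policy.rollover-interval. Thresholds are the documented
// defaults (128MB, 30 min); this is not Flink's actual implementation.
class RollingPolicySketch {
    static final long MAX_FILE_SIZE = 128L * 1024 * 1024; // 128MB default
    static final Duration ROLLOVER_INTERVAL = Duration.ofMinutes(30);

    static boolean shouldRoll(long currentSizeBytes, Duration openFor) {
        // Roll on either condition: file too large, or open too long.
        return currentSizeBytes >= MAX_FILE_SIZE
                || openFor.compareTo(ROLLOVER_INTERVAL) >= 0;
    }
}
```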
@@ -250,21 +258,27 @@ The file sink supports file compactions, which allows 
applications to have small
 
   
 
-Key
-Default
+Option
+Required
+Forwarded
+Default
 Type
-Description
+Description
 
   
   
 
 auto-compaction
+optional
+no
 false
 Boolean
 Whether to enable automatic compaction in streaming sink or not. 
The data will be written to temporary files. After the checkpoint is completed, 
the temporary files generated by a checkpoint will be compacted. The temporary 
files are invisible before compaction.
 
 
 compaction.file-size
+optional
+yes
 (none)
 MemorySize
 The compaction target file size, the default value is the rolling 
file size.
@@ -294,27 +308,35 @@ To define when to commit a partition, providing partition 
commit trigger:
 
   
 
-Key
-Default
+Option
+Required
+Forwarded
+Default
 Type
-Description
+Description
 
   
   
 
 sink.partition-commit.trigger
+optional
+yes
 process-time
 String
 Trigger type for partition commit: 'process-time': based on the 
time of the machine, it neither requires partition time extraction nor 
watermark generation. Commit partition once the 'current system time' passes 
'partition creation system time' plus 'delay'. 'partition-time': based on the 
time extracted from partition values, it requires watermark generation. 
Commit partition once the 'watermark' passes 'time extracted from partition 
values' plus 'delay'.
 
 
 sink.partition-commit.delay
+optional
+yes
 0 s
 Duration
 The partition will not commit until the delay time. If it is a 
daily partition, should be '1 d'; if it is an hourly partition, should be '1 
h'.
 
 
 sink.partition-commit.watermark-time-zone
+optional
+yes
 UTC
 String
 The time zone to parse the long watermark value to TIMESTAMP 
value, the parsed watermark timestamp is used to compare with partition time to 
decide whether the partition should commit or not. This option only takes effect when 
`sink.partition-commit.trigger` is set to 'partition-time'. If this option is 
not configured correctly, e.g. source rowtime is defined on TIMESTAMP_LTZ 
column, but this config is not configured, then users may see the partition 
committed after a few hours. Th [...]
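The two trigger modes described above reduce to a timestamp comparison: 'process-time' commits once system time passes partition creation time plus the delay, while 'partition-time' commits once the watermark passes the time extracted from partition values plus the delay. A simplified stand-in sketch (names invented for illustration, not Flink's actual trigger classes):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of sink.partition-commit.trigger semantics: each mode is a simple
// "has the reference clock passed partition time + delay" check.
class PartitionCommitTrigger {
    // 'process-time': reference clock is the machine's current system time.
    static boolean processTimeReady(Instant now, Instant partitionCreated, Duration delay) {
        return !now.isBefore(partitionCreated.plus(delay));
    }

    // 'partition-time': reference clock is the watermark.
    static boolean partitionTimeReady(Instant watermark, Instant partitionTime, Duration delay) {
        return !watermark.isBefore(partitionTime.plus(delay));
    }
}
```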
@@ -356,33 +378,43 @@ Time extractors define extracting time from partition 
values.
 
   
 
-Key
-Default
+Option
+Required
+Forwarded
+Default
 Type
-Description
+Description
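The "Forwarded" column added throughout these option tables marks which catalog table options may be forwarded into the runtime instance of the connector. Conceptually this is a filter over the option map against a set of forwardable keys; a minimal stand-in sketch of that idea (the class and key names here are invented for illustration, not Flink's actual API):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Sketch of option forwarding: keep only the options whose keys the factory
// declared as forwardable, dropping everything else.
class OptionForwarder {
    static Map<String, String> forward(Map<String, String> options, Set<String> forwardable) {
        Map<String, String> result = new TreeMap<>();
        options.forEach((k, v) -> {
            if (forwardable.contains(k)) {
                result.put(k, v);
            }
        });
        return result;
    }
}
```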

[flink] 01/09: [FLINK-25391][connector-elasticsearch] Forward catalog table options

2022-01-25 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 5e28f66f6ef2ed03f9ee69148fe5079ae5e358c4
Author: slinkydeveloper 
AuthorDate: Thu Jan 6 14:52:38 2022 +0100

[FLINK-25391][connector-elasticsearch] Forward catalog table options
---
 .../content/docs/connectors/table/elasticsearch.md | 23 +-
 .../table/ElasticsearchDynamicSinkFactoryBase.java | 50 ++
 .../table/Elasticsearch6DynamicSinkFactory.java| 21 +
 3 files changed, 66 insertions(+), 28 deletions(-)

diff --git a/docs/content/docs/connectors/table/elasticsearch.md 
b/docs/content/docs/connectors/table/elasticsearch.md
index 22f0b60..b5ae31d 100644
--- a/docs/content/docs/connectors/table/elasticsearch.md
+++ b/docs/content/docs/connectors/table/elasticsearch.md
@@ -67,15 +67,17 @@ Connector Options
   
 Option
 Required
+Forwarded
 Default
 Type
-Description
+Description
   
 
 
 
   connector
   required
+  no
   (none)
   String
   Specify what connector to use, valid values are:
@@ -87,6 +89,7 @@ Connector Options
 
   hosts
   required
+  yes
   (none)
   String
   One or more Elasticsearch hosts to connect to, e.g. 
'http://host_name:9092;http://host_name:9093'.
@@ -94,6 +97,7 @@ Connector Options
 
   index
   required
+  yes
   (none)
   String
   Elasticsearch index for every record. Can be a static index (e.g. 
'myIndex') or
@@ -103,6 +107,7 @@ Connector Options
 
   document-type
   required in 6.x
+  yes in 6.x
   (none)
   String
   Elasticsearch document type. Not necessary anymore in 
elasticsearch-7.
@@ -110,6 +115,7 @@ Connector Options
 
   document-id.key-delimiter
   optional
+  yes
   _
   String
   Delimiter for composite keys ("_" by default), e.g., "$" would 
result in IDs "KEY1$KEY2$KEY3".
@@ -117,6 +123,7 @@ Connector Options
 
   username
   optional
+  yes
   (none)
   String
   Username used to connect to Elasticsearch instance. Please notice 
that Elasticsearch doesn't come with a pre-bundled security feature, but you can enable it 
by following the https://www.elastic.co/guide/en/elasticsearch/reference/master/configuring-security.html;>guideline
 to secure an Elasticsearch cluster.
@@ -124,6 +131,7 @@ Connector Options
 
   password
   optional
+  yes
   (none)
   String
   Password used to connect to Elasticsearch instance. If 
username is configured, this option must be configured with 
non-empty string as well.
@@ -131,6 +139,7 @@ Connector Options
 
   sink.delivery-guarantee
   optional
+  no
   NONE
   String
   Optional delivery guarantee when committing. Valid values are 
NONE or AT_LEAST_ONCE.
@@ -138,6 +147,7 @@ Connector Options
 
   sink.bulk-flush.max-actions
   optional
+  yes
   1000
   Integer
   Maximum number of buffered actions per bulk request.
@@ -147,6 +157,7 @@ Connector Options
 
   sink.bulk-flush.max-size
   optional
+  yes
   2mb
   MemorySize
   Maximum size in memory of buffered actions per bulk request. Must be 
in MB granularity.
@@ -156,6 +167,7 @@ Connector Options
 
   sink.bulk-flush.interval
   optional
+  yes
   1s
   Duration
   The interval to flush buffered actions.
@@ -166,6 +178,7 @@ Connector Options
 
   sink.bulk-flush.backoff.strategy
   optional
+  yes
   NONE
   String
   Specify how to perform retries if any flush actions failed due to a 
temporary request error. Valid strategies are:
@@ -179,6 +192,7 @@ Connector Options
 
   sink.bulk-flush.backoff.max-retries
   optional
+  yes
   (none)
   Integer
   Maximum number of backoff retries.
@@ -186,6 +200,7 @@ Connector Options
 
   sink.bulk-flush.backoff.delay
   optional
+  yes
   (none)
   Duration
   Delay between each backoff attempt. For CONSTANT 
backoff, this is simply the delay between each retry. For 
EXPONENTIAL backoff, this is the initial base delay.
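The relationship between the two backoff strategies and the configured delay can be sketched in a few lines: CONSTANT reuses the delay for every retry, while EXPONENTIAL doubles an initial base delay per attempt. A stand-in sketch (not the connector's actual code):

```java
// Sketch of sink.bulk-flush.backoff.strategy semantics: compute the delay
// before the given retry attempt (attempt 0 is the first retry).
class BackoffSketch {
    static long delayMillis(String strategy, long baseDelayMillis, int attempt) {
        if ("EXPONENTIAL".equals(strategy)) {
            return baseDelayMillis << attempt; // base * 2^attempt
        }
        return baseDelayMillis; // CONSTANT: same delay every time
    }
}
```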
@@ -193,6 +208,7 @@ Connector Options
 
   sink.parallelism
   optional
+  no
   (none)
   Integer
   Defines the parallelism of the Elasticsearch sink operator. By 
default, the parallelism is determined by the framework using the same 
parallelism of the upstream chained operator.
@@ -200,6 +216,7 @@ Connector Options
 
   connection.path-prefix
   optional
+  yes
   (none)
   String
   Prefix string to be added to every REST communication, e.g., 
'/v1'.
@@ -207,6 +224,7 @@ Connector Options
 
   connection.request-timeout
   optional
+  

[flink] branch release-1.14 updated (e358ac6 -> 891ea2a)

2022-01-24 Thread twalthr

twalthr pushed a change to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git.


from e358ac6  [FLINK-25732][coordination] Pass serializable collection
 add 891ea2a  [hotfix][docs] Fixing multiple internal and external 404 links

No new revisions were added by this update.

Summary of changes:
 .../content.zh/docs/connectors/datastream/kafka.md |   2 +-
 .../docs/deployment/advanced/external_resources.md |   2 +-
 .../resource-providers/standalone/kubernetes.md|   2 +-
 .../serialization/custom_serialization.md  |   2 +-
 docs/content.zh/docs/dev/datastream/overview.md|   2 +-
 .../content.zh/docs/dev/table/concepts/overview.md | 101 +
 docs/content.zh/docs/dev/table/sqlClient.md|   2 +-
 .../content.zh/docs/libs/gelly/graph_generators.md |   2 +-
 .../docs/libs/gelly/iterative_graph_processing.md  |   2 +-
 docs/content.zh/release-notes/flink-1.10.md|   2 +-
 .../datastream/formats/azure_table_storage.md  |   2 +-
 docs/content/docs/connectors/table/downloads.md|   2 +-
 docs/content/docs/connectors/table/filesystem.md   |   2 +-
 .../docs/deployment/advanced/external_resources.md |   2 +-
 docs/content/docs/dev/dataset/examples.md  |   4 +-
 docs/content/docs/dev/dataset/hadoop_map_reduce.md |  14 +--
 docs/content/docs/dev/dataset/iterations.md|   2 +-
 docs/content/docs/dev/dataset/transformations.md   |   4 +-
 docs/content/docs/dev/table/concepts/overview.md   |  26 ++
 docs/content/docs/dev/table/config.md  |   2 -
 docs/content/docs/libs/gelly/graph_generators.md   |   2 +-
 .../docs/libs/gelly/iterative_graph_processing.md  |   2 +-
 docs/content/release-notes/flink-1.10.md   |   2 +-
 docs/layouts/shortcodes/query_state_warning.html   |   6 +-
 .../shortcodes/sql_optional_connectors.html|  20 ++--
 docs/layouts/shortcodes/sql_optional_formats.html  |  10 +-
 26 files changed, 180 insertions(+), 41 deletions(-)


[flink] branch master updated: [hotfix][annotations] Add v1.15 as the next Flink version to master

2022-01-24 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 06be5cd  [hotfix][annotations] Add v1.15 as the next Flink version to 
master
06be5cd is described below

commit 06be5cd1c220e7bf4a0c90d6f08388835b7181fd
Author: Marios Trivyzas 
AuthorDate: Mon Jan 17 14:56:40 2022 +0200

[hotfix][annotations] Add v1.15 as the next Flink version to master

The upcoming version to be released need to exist already in master so that
when the new release branch is created from master the version of the 
release
is already there.

Follows: #18340

This closes #18381.
---
 .../src/main/java/org/apache/flink/FlinkVersion.java | 9 +
 .../api/common/typeutils/TypeSerializerUpgradeTestBase.java  | 7 ---
 .../api/java/typeutils/runtime/PojoSerializerUpgradeTest.java| 3 +--
 .../api/java/typeutils/runtime/RowSerializerUpgradeTest.java | 3 +--
 .../api/scala/typeutils/EnumValueSerializerUpgradeTest.scala | 5 ++---
 .../scala/typeutils/ScalaCaseClassSerializerUpgradeTest.scala| 6 +++---
 .../api/scala/typeutils/TraversableSerializerUpgradeTest.scala   | 4 ++--
 .../table/runtime/typeutils/LinkedListSerializerUpgradeTest.java | 2 +-
 8 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java 
b/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java
index 5271607..69e6fc7 100644
--- a/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java
+++ b/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java
@@ -51,7 +51,8 @@ public enum FlinkVersion {
 v1_11("1.11"),
 v1_12("1.12"),
 v1_13("1.13"),
-v1_14("1.14");
+v1_14("1.14"),
+v1_15("1.15");
 
 private final String versionStr;
 
@@ -68,10 +69,10 @@ public enum FlinkVersion {
 return this.ordinal() > otherVersion.ordinal();
 }
 
-/** Returns all versions equal to or higher than the selected version. */
-public Set orHigher() {
+/** Returns all versions within the defined range, inclusive both start 
and end. */
+public static Set rangeOf(FlinkVersion start, FlinkVersion 
end) {
 return Stream.of(FlinkVersion.values())
-.filter(v -> this.ordinal() <= v.ordinal())
+.filter(v -> v.ordinal() >= start.ordinal() && v.ordinal() <= 
end.ordinal())
 .collect(Collectors.toCollection(LinkedHashSet::new));
 }
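The diff above replaces the instance method `orHigher()` with a static, inclusive `rangeOf(start, end)`. Its semantics can be reproduced with a small stand-in enum (simplified stand-in for `org.apache.flink.FlinkVersion`, for illustration only):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Stand-in enum demonstrating rangeOf(start, end): all versions whose ordinal
// lies in [start, end], preserving declaration order via LinkedHashSet.
enum Version {
    v1_11, v1_12, v1_13, v1_14, v1_15;

    static Set<Version> rangeOf(Version start, Version end) {
        return Stream.of(Version.values())
                .filter(v -> v.ordinal() >= start.ordinal() && v.ordinal() <= end.ordinal())
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }
}
```

This lets `MIGRATION_VERSIONS` be pinned to `rangeOf(v1_11, CURRENT_VERSION)` instead of an open-ended "this version or higher" set.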
 
diff --git 
a/flink-core/src/test/java/org/apache/flink/api/common/typeutils/TypeSerializerUpgradeTestBase.java
 
b/flink-core/src/test/java/org/apache/flink/api/common/typeutils/TypeSerializerUpgradeTestBase.java
index cb886d5..c67c29c 100644
--- 
a/flink-core/src/test/java/org/apache/flink/api/common/typeutils/TypeSerializerUpgradeTestBase.java
+++ 
b/flink-core/src/test/java/org/apache/flink/api/common/typeutils/TypeSerializerUpgradeTestBase.java
@@ -36,6 +36,7 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.Collections;
 import java.util.List;
+import java.util.Set;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 import static org.hamcrest.CoreMatchers.not;
@@ -51,11 +52,11 @@ import static org.junit.Assume.assumeThat;
 public abstract class TypeSerializerUpgradeTestBase
 extends TestLogger {
 
-public static final FlinkVersion[] MIGRATION_VERSIONS =
-FlinkVersion.v1_11.orHigher().toArray(new FlinkVersion[0]);
-
 public static final FlinkVersion CURRENT_VERSION = FlinkVersion.v1_14;
 
+public static final Set MIGRATION_VERSIONS =
+FlinkVersion.rangeOf(FlinkVersion.v1_11, CURRENT_VERSION);
+
 private final TestSpecification 
testSpecification;
 
 protected TypeSerializerUpgradeTestBase(
diff --git 
a/flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerUpgradeTest.java
 
b/flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerUpgradeTest.java
index 31d4c0f..6cef885 100644
--- 
a/flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerUpgradeTest.java
+++ 
b/flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerUpgradeTest.java
@@ -25,7 +25,6 @@ import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 
@@ -48,7 +47,7 @@ public class PojoSerializerUpgradeTest extends 
TypeSerializerUpgradeTestBase(
diff --git 
a/flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/RowSerializerUpgradeTest.java
 
b/flink-core/src/test/java/org/apache/flink/api/java/typeutils/run

[flink] branch master updated (74efa09 -> 98861d7)

2022-01-20 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 74efa09  [hotfix][metrics][docs] Note that new job status metrics may 
evolve
 add 98861d7  [hotfix][docs] Fixing multiple internal and external 404 links

No new revisions were added by this update.

Summary of changes:
 .../content.zh/docs/connectors/datastream/kafka.md |   2 +-
 .../docs/deployment/advanced/external_resources.md |   2 +-
 .../resource-providers/standalone/kubernetes.md|   2 +-
 .../serialization/custom_serialization.md  |   2 +-
 docs/content.zh/docs/dev/datastream/overview.md|   2 +-
 .../content.zh/docs/dev/table/concepts/overview.md | 101 +
 docs/content.zh/docs/dev/table/sqlClient.md|   2 +-
 .../content.zh/docs/libs/gelly/graph_generators.md |   2 +-
 .../docs/libs/gelly/iterative_graph_processing.md  |   2 +-
 docs/content.zh/release-notes/flink-1.10.md|   2 +-
 .../datastream/formats/azure_table_storage.md  |   2 +-
 docs/content/docs/connectors/table/downloads.md|   2 +-
 docs/content/docs/connectors/table/filesystem.md   |   2 +-
 .../docs/deployment/advanced/external_resources.md |   2 +-
 docs/content/docs/dev/dataset/examples.md  |   4 +-
 docs/content/docs/dev/dataset/hadoop_map_reduce.md |  14 +--
 docs/content/docs/dev/dataset/iterations.md|   2 +-
 docs/content/docs/dev/dataset/transformations.md   |   4 +-
 docs/content/docs/dev/table/concepts/overview.md   |  26 ++
 docs/content/docs/dev/table/config.md  |   2 -
 docs/content/docs/libs/gelly/graph_generators.md   |   2 +-
 .../docs/libs/gelly/iterative_graph_processing.md  |   2 +-
 docs/content/release-notes/flink-1.10.md   |   2 +-
 docs/layouts/shortcodes/query_state_warning.html   |   6 +-
 .../shortcodes/sql_optional_connectors.html|  20 ++--
 docs/layouts/shortcodes/sql_optional_formats.html  |  10 +-
 26 files changed, 180 insertions(+), 41 deletions(-)


[flink] branch master updated (01c8150 -> 2e340e0)

2022-01-20 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 01c8150  [hotfix][connectors] Fix: Infinite loop can arise when 
prepareCommit(flush=false) is called in AsyncSinkWriter with buffered elements
 add 2e340e0  [FLINK-25677][table-planner] Update ReplicateRows to the new 
type inference

No new revisions were added by this update.

Summary of changes:
 .../functions/BuiltInFunctionDefinitions.java  | 10 +++
 ...java => InternalReplicateRowsTypeStrategy.java} | 20 +++---
 .../strategies/SpecificTypeStrategies.java |  4 ++
 .../planner/plan/utils/SetOpRewriteUtil.scala  | 72 +++---
 .../planner/plan/batch/sql/SetOperatorsTest.xml|  8 +--
 .../planner/plan/batch/table/SetOperatorsTest.xml  |  8 +--
 .../planner/plan/common/PartialInsertTest.xml  | 16 ++---
 .../rules/logical/RewriteIntersectAllRuleTest.xml  | 12 ++--
 .../plan/rules/logical/RewriteMinusAllRuleTest.xml | 12 ++--
 .../planner/plan/stream/sql/SetOperatorsTest.xml   |  8 +--
 .../runtime/functions/table/ReplicateRows.java | 70 -
 .../ReplicateRowsFunction.java}| 36 +++
 12 files changed, 102 insertions(+), 174 deletions(-)
 copy 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/{ArrayTypeStrategy.java
 => InternalReplicateRowsTypeStrategy.java} (72%)
 delete mode 100644 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/table/ReplicateRows.java
 copy 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/{scalar/IfNullFunction.java
 => table/ReplicateRowsFunction.java} (50%)
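ReplicateRows, which this commit migrates to the new type inference, is the helper used when rewriting INTERSECT ALL / MINUS ALL: a multiplicity argument says how many copies of each row to emit. A standalone sketch of that semantics (the real function is a Flink table function operating on rows, not strings):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of ReplicateRows semantics: emit the given row 'multiplicity' times.
class ReplicateRowsSketch {
    static List<String> replicate(long multiplicity, String row) {
        List<String> out = new ArrayList<>();
        for (long i = 0; i < multiplicity; i++) {
            out.add(row);
        }
        return out;
    }
}
```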


[flink] branch master updated (3e862a3 -> de7cccc)

2022-01-19 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 3e862a3  [FLINK-25436] Let Dispatcher only retain recovered jobs in 
BlobServer
 add de7cccc  [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect 
Flink with MySQL tables and ecosystem.

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/connectors/table/jdbc.md  | 126 ++--
 docs/content.zh/docs/dev/table/catalogs.md |   2 +-
 docs/content/docs/connectors/table/jdbc.md |  92 --
 docs/content/docs/dev/table/catalogs.md|   2 +-
 .../jdbc/catalog/AbstractJdbcCatalog.java  | 133 +++-
 .../connector/jdbc/catalog/JdbcCatalogUtils.java   |   3 +
 .../flink/connector/jdbc/catalog/MySqlCatalog.java | 162 ++
 .../connector/jdbc/catalog/PostgresCatalog.java| 338 
 ...lectFactory.java => JdbcDialectTypeMapper.java} |  25 +-
 .../mysql/{MySQLDialect.java => MySqlDialect.java} |   6 +-
 ...ialectFactory.java => MySqlDialectFactory.java} |   6 +-
 .../jdbc/dialect/mysql/MySqlTypeMapper.java| 223 +
 .../jdbc/dialect/psql/PostgresTypeMapper.java  | 174 +++
 ...flink.connector.jdbc.dialect.JdbcDialectFactory |   2 +-
 .../flink/connector/jdbc/JdbcDataTypeTest.java |   2 +-
 .../connector/jdbc/catalog/MySqlCatalogITCase.java | 346 +
 .../jdbc/catalog/MySqlCatalogTestBase.java | 136 
 .../mysql-scripts/catalog-init-for-test.sql| 107 +++
 18 files changed, 1527 insertions(+), 358 deletions(-)
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/MySqlCatalog.java
 copy 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/{mysql/MySQLDialectFactory.java
 => JdbcDialectTypeMapper.java} (60%)
 rename 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/mysql/{MySQLDialect.java
 => MySqlDialect.java} (96%)
 rename 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/mysql/{MySQLDialectFactory.java
 => MySqlDialectFactory.java} (89%)
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/mysql/MySqlTypeMapper.java
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/psql/PostgresTypeMapper.java
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/catalog/MySqlCatalogITCase.java
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/catalog/MySqlCatalogTestBase.java
 create mode 100644 
flink-connectors/flink-connector-jdbc/src/test/resources/mysql-scripts/catalog-init-for-test.sql
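The new `MySqlTypeMapper` in this commit translates MySQL column types into Flink SQL types when the catalog lists tables. The table below is an illustrative subset only, with invented class and fallback names; the committed mapper covers many more types and handles precision/length:

```java
import java.util.Map;

// Illustrative sketch of a MySQL -> Flink SQL type mapping; the "RAW" fallback
// is an assumption for this sketch, not necessarily what Flink does for
// unsupported types.
class MySqlTypeMapperSketch {
    static final Map<String, String> TYPES = Map.of(
            "VARCHAR", "STRING",
            "INT", "INT",
            "BIGINT", "BIGINT",
            "DATETIME", "TIMESTAMP",
            "DECIMAL", "DECIMAL");

    static String toFlinkType(String mysqlType) {
        return TYPES.getOrDefault(mysqlType.toUpperCase(), "RAW");
    }
}
```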


[flink] branch master updated (4a75605 -> 50ff508)

2022-01-19 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4a75605  [FLINK-25633] Set locale to en-US to avoid ambiguous decimal 
formattings
 add 50ff508  [FLINK-25609][table] Anonymous/inline tables don't require 
ObjectIdentifier anymore

No new revisions were added by this update.

Summary of changes:
 .../delegation/hive/HiveParserDMLHelper.java   |  47 +++---
 .../hive/parse/HiveParserDDLSemanticAnalyzer.java  |  29 ++--
 .../kafka/table/KafkaDynamicTableFactoryTest.java  |   3 +-
 .../table/tests/test_table_environment_api.py  |  11 +-
 .../apache/flink/table/client/cli/CliClient.java   |  10 +-
 .../AbstractStreamTableEnvironmentImpl.java|  59 ---
 .../table/operations/ExternalQueryOperation.java   |  31 ++--
 .../org/apache/flink/table/api/StatementSet.java   |  16 +-
 .../java/org/apache/flink/table/api/Table.java |  14 +-
 .../apache/flink/table/api/TableEnvironment.java   |  11 +-
 .../flink/table/api/internal/StatementSetImpl.java |  28 ++-
 .../table/api/internal/TableDescriptorUtil.java|  51 --
 .../table/api/internal/TableEnvironmentImpl.java   | 139 ---
 .../apache/flink/table/api/internal/TableImpl.java |  39 +++--
 .../apache/flink/table/catalog/CatalogManager.java |  73 +++-
 .../flink/table/catalog/ContextResolvedTable.java  | 188 +
 .../flink/table/catalog/ExternalCatalogTable.java} |  18 +-
 .../table/operations/CollectModifyOperation.java   |  19 +--
 .../table/operations/ExternalModifyOperation.java  |  33 +---
 .../table/operations/ModifyOperationVisitor.java   |   2 +-
 .../table/operations/QueryOperationVisitor.java|   2 +-
 ...difyOperation.java => SinkModifyOperation.java} |  32 ++--
 ...eryOperation.java => SourceQueryOperation.java} |  31 ++--
 .../operations/ddl/CreateTableASOperation.java |  31 +++-
 .../utils/QueryOperationDefaultVisitor.java|   4 +-
 .../flink/table/api/TableEnvironmentTest.java  |  37 ++--
 .../resolver/ExpressionResolverTest.java   |  28 ++-
 .../flink/table/operations/QueryOperationTest.java |  34 +++-
 .../table/planner/catalog/CatalogSchemaTable.java  |  47 ++
 .../planner/catalog/DatabaseCalciteSchema.java |  17 +-
 .../table/planner/connectors/DynamicSinkUtils.java | 137 +++
 .../planner/connectors/DynamicSourceUtils.java | 100 +--
 .../InternalDataStreamQueryOperation.java  |   1 +
 .../operations/SqlCreateTableConverter.java|   5 +-
 .../operations/SqlToOperationConverter.java|  14 +-
 .../planner/plan/FlinkCalciteCatalogReader.java|  39 ++---
 .../planner/plan/QueryOperationConverter.java  |  35 ++--
 .../nodes/exec/common/CommonExecLookupJoin.java|   5 +-
 .../nodes/exec/spec/TemporalTableSourceSpec.java   |   8 +-
 .../PushPartitionIntoTableSourceScanRule.java  |  47 --
 .../PushProjectIntoTableSourceScanRule.java|  14 +-
 .../PushWatermarkIntoTableSourceScanRuleBase.java  |   2 +-
 .../planner/plan/schema/CatalogSourceTable.java|  80 ++---
 .../table/planner/calcite/FlinkRelBuilder.scala|   2 +-
 .../table/planner/delegation/BatchPlanner.scala|  34 +---
 .../table/planner/delegation/PlannerBase.scala |  90 +++---
 .../table/planner/delegation/StreamPlanner.scala   |  34 +---
 .../plan/metadata/FlinkRelMdUniqueKeys.scala   |   5 +-
 .../planner/plan/nodes/calcite/LogicalSink.scala   |  16 +-
 .../table/planner/plan/nodes/calcite/Sink.scala|   9 +-
 .../plan/nodes/logical/FlinkLogicalSink.scala  |  19 +--
 .../nodes/physical/batch/BatchPhysicalSink.scala   |  14 +-
 .../batch/BatchPhysicalTableSourceScan.scala   |   5 +-
 .../physical/common/CommonPhysicalLookupJoin.scala |   2 +-
 .../stream/StreamPhysicalChangelogNormalize.scala  |   9 +-
 .../nodes/physical/stream/StreamPhysicalSink.scala |  17 +-
 .../stream/StreamPhysicalTableSourceScan.scala |   5 +-
 .../FlinkChangelogModeInferenceProgram.scala   |  27 +--
 .../physical/batch/BatchPhysicalSinkRule.scala |  18 +-
 .../physical/stream/StreamPhysicalSinkRule.scala   |  18 +-
 .../stream/StreamPhysicalTableSourceScanRule.scala |  11 +-
 .../plan/schema/LegacyCatalogSourceTable.scala |   9 +-
 .../planner/plan/schema/TableSourceTable.scala |  26 ++-
 .../table/planner/plan/stats/FlinkStatistic.scala  |   7 +-
 .../flink/table/planner/sinks/TableSinkUtils.scala |   4 +-
 .../operations/SqlToOperationConverterTest.java|  12 +-
 .../plan/FlinkCalciteCatalogReaderTest.java|  10 +-
 .../serde/TemporalTableSourceSpecSerdeTest.java|   9 +-
 .../flink/table/api/TableEnvironmentITCase.scala   |   4 +-
 .../plan/metadata/FlinkRelMdHandlerTestBase.scala  |   3 +-
 .../planner/plan/metadata/MetadataTestUtil.scala   |  47 +++---
 .../planner/plan/stream/sql/TableSinkTest.scala|   2 +-
 .../vali
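The core idea of FLINK-25609 is that a resolved table no longer has to carry a catalog `ObjectIdentifier`: anonymous/inline tables simply have none. A stand-in sketch of that distinction (field and method names invented for illustration; this mirrors the spirit of `ContextResolvedTable`, not its actual API):

```java
import java.util.Optional;

// Sketch: a resolved table is either permanent (has a full catalog identifier)
// or anonymous (no identifier; a placeholder name is shown instead).
class ResolvedTableSketch {
    private final Optional<String> identifier; // e.g. "catalog.db.table" when permanent

    private ResolvedTableSketch(Optional<String> identifier) {
        this.identifier = identifier;
    }

    static ResolvedTableSketch permanent(String identifier) {
        return new ResolvedTableSketch(Optional.of(identifier));
    }

    static ResolvedTableSketch anonymous() {
        return new ResolvedTableSketch(Optional.empty());
    }

    boolean isAnonymous() {
        return identifier.isEmpty();
    }

    String displayName() {
        return identifier.orElse("*anonymous*");
    }
}
```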

[flink] branch master updated (09cd1ff -> 6512214)

2022-01-18 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 09cd1ff  [FLINK-25650][docs] Added "Interplay with long-running record 
processing" limit in unaligned checkpoint documentation
 add dce4e45  [hotfix][table-api-scala-bridge] Deprecate toAppendStream and 
toRetractStream
 add 6512214  [hotfix][table-common] Add default implementation for 
DynamicTableFactory.Context#getEnrichmentOptions to alleviate breaking change

No new revisions were added by this update.

Summary of changes:
 flink-examples/flink-examples-table/pom.xml |  1 +
 .../table/examples/scala/basics/StreamTableExample.scala|  2 +-
 .../flink/table/api/bridge/scala/TableConversions.scala | 13 +++--
 .../apache/flink/table/factories/DynamicTableFactory.java   |  4 +++-
 4 files changed, 16 insertions(+), 4 deletions(-)
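The second hotfix above relies on a standard interface-evolution pattern: a newly added interface method gets a default body so that existing implementations keep compiling. A simplified stand-in (names loosely mirror `DynamicTableFactory.Context` but this is not the actual Flink interface):

```java
import java.util.Collections;
import java.util.Map;

// Sketch of adding a method to a widely implemented interface without breaking
// existing implementors: the default body keeps them source-compatible.
interface FactoryContext {
    Map<String, String> getCatalogOptions();

    // Method added later; old implementations inherit this default.
    default Map<String, String> getEnrichmentOptions() {
        return Collections.emptyMap();
    }
}

// Compiled against the old interface; still works unchanged.
class LegacyContext implements FactoryContext {
    @Override
    public Map<String, String> getCatalogOptions() {
        return Map.of("connector", "kafka");
    }
}
```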


[flink] 02/02: [FLINK-17321][table] Add support casting of map to map and multiset to multiset

2022-01-18 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f5c99c6f2612bc2ae437e85f5c44cae50f631e4e
Author: Sergey Nuyanzin 
AuthorDate: Wed Dec 15 18:21:28 2021 +0100

[FLINK-17321][table] Add support casting of map to map and multiset to 
multiset

This closes #18287.
---
 .../functions/casting/CastRuleProvider.java|   1 +
 .../MapToMapAndMultisetToMultisetCastRule.java | 198 +
 .../planner/functions/CastFunctionITCase.java  |  45 -
 .../planner/functions/casting/CastRulesTest.java   |  59 ++
 4 files changed, 297 insertions(+), 6 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRuleProvider.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRuleProvider.java
index 5083519..961e81f 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRuleProvider.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRuleProvider.java
@@ -81,6 +81,7 @@ public class CastRuleProvider {
 .addRule(RawToBinaryCastRule.INSTANCE)
 // Collection rules
 .addRule(ArrayToArrayCastRule.INSTANCE)
+.addRule(MapToMapAndMultisetToMultisetCastRule.INSTANCE)
 .addRule(RowToRowCastRule.INSTANCE)
 // Special rules
 .addRule(CharVarCharTrimPadCastRule.INSTANCE)
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/MapToMapAndMultisetToMultisetCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/MapToMapAndMultisetToMultisetCastRule.java
new file mode 100644
index 000..89e0351
--- /dev/null
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/MapToMapAndMultisetToMultisetCastRule.java
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.casting;
+
+import org.apache.flink.table.data.GenericMapData;
+import org.apache.flink.table.data.MapData;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.MapType;
+import org.apache.flink.table.types.logical.MultisetType;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static 
org.apache.flink.table.planner.codegen.CodeGenUtils.boxedTypeTermForType;
+import static org.apache.flink.table.planner.codegen.CodeGenUtils.className;
+import static org.apache.flink.table.planner.codegen.CodeGenUtils.newName;
+import static 
org.apache.flink.table.planner.codegen.CodeGenUtils.rowFieldReadAccess;
+import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.constructorCall;
+import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.methodCall;
+
+/**
+ * {@link LogicalTypeRoot#MAP} to {@link LogicalTypeRoot#MAP} and {@link 
LogicalTypeRoot#MULTISET}
+ * to {@link LogicalTypeRoot#MULTISET} cast rule.
+ */
+class MapToMapAndMultisetToMultisetCastRule
+extends AbstractNullAwareCodeGeneratorCastRule {
+
+static final MapToMapAndMultisetToMultisetCastRule INSTANCE =
+new MapToMapAndMultisetToMultisetCastRule();
+
+private MapToMapAndMultisetToMultisetCastRule() {
+super(
+CastRulePredicate.builder()
+.predicate(
+MapToMapAndMultisetToMultisetCastRule
+
::isValidMapToMapOrMultisetToMultisetCasting)
+.build());
+}
+
+private static boolean isValidMapToMapOrMultisetToMultisetCasting(
+LogicalType input, LogicalType target) {
+return input.is
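Element-wise, a MAP-to-MAP (or MULTISET-to-MULTISET) cast applies a key cast and a value cast to every entry. A plain-Java sketch of that semantics (the actual rule, as the diff shows, generates code against Flink's `MapData` rather than `java.util.Map`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: cast MAP<K1, V1> to MAP<K2, V2> by casting each key and value.
class MapCastSketch {
    static <K1, V1, K2, V2> Map<K2, V2> castMap(
            Map<K1, V1> input, Function<K1, K2> keyCast, Function<V1, V2> valueCast) {
        Map<K2, V2> result = new HashMap<>();
        input.forEach((k, v) -> result.put(keyCast.apply(k), valueCast.apply(v)));
        return result;
    }
}
```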

[flink] branch master updated (ed699b6 -> f5c99c6)

2022-01-18 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ed699b6  [FLINK-25637][network] Make sort-shuffle the default shuffle 
implementation for batch jobs
 new 745cfec  [hotfix][table-common] Fix InternalDataUtils for MapData tests
 new f5c99c6  [FLINK-17321][table] Add support casting of map to map and 
multiset to multiset

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/flink/table/test/InternalDataUtils.java |   2 +-
 .../functions/casting/CastRuleProvider.java|   1 +
 .../MapToMapAndMultisetToMultisetCastRule.java | 198 +
 .../planner/functions/CastFunctionITCase.java  |  45 -
 .../planner/functions/casting/CastRulesTest.java   |  59 ++
 5 files changed, 298 insertions(+), 7 deletions(-)
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/MapToMapAndMultisetToMultisetCastRule.java


[flink] 01/02: [hotfix][table-common] Fix InternalDataUtils for MapData tests

2022-01-18 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 745cfec705b4a884a7ccd4240ea817704aaf7267
Author: Timo Walther 
AuthorDate: Tue Jan 18 13:51:39 2022 +0100

[hotfix][table-common] Fix InternalDataUtils for MapData tests
---
 .../src/test/java/org/apache/flink/table/test/InternalDataUtils.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/InternalDataUtils.java
 
b/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/InternalDataUtils.java
index 8abd316..5250d57 100644
--- 
a/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/InternalDataUtils.java
+++ 
b/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/InternalDataUtils.java
@@ -103,7 +103,7 @@ class InternalDataUtils {
 : ((MapType) logicalType).getValueType();
 
final ArrayData.ElementGetter keyGetter = ArrayData.createElementGetter(keyType);
-final ArrayData.ElementGetter valueGetter = ArrayData.createElementGetter(keyType);
+final ArrayData.ElementGetter valueGetter = ArrayData.createElementGetter(valueType);
 
 final ArrayData keys = mapData.keyArray();
 final ArrayData values = mapData.valueArray();


[flink] branch master updated (97403ac -> eeec246)

2022-01-18 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 97403ac  [FLINK-15585][table-planner] Update plan tests
 add eeec246  [FLINK-20286][connector-files] Table source now supports 
monitor continuously

No new revisions were added by this update.

Summary of changes:
 docs/content/docs/connectors/table/filesystem.md   | 31 
 .../file/table/FileSystemConnectorOptions.java | 13 
 .../file/table/FileSystemTableFactory.java |  1 +
 .../file/table/FileSystemTableSource.java  |  9 ++-
 .../table/FileSystemTableSinkStreamingITCase.java  | 91 ++
 5 files changed, 144 insertions(+), 1 deletion(-)
 create mode 100644 
flink-table/flink-table-planner/src/test/java/org/apache/flink/connector/file/table/FileSystemTableSinkStreamingITCase.java


[flink] branch master updated (ad7952a -> 97403ac)

2022-01-18 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ad7952a  [FLINK-25611][core] Remove CoordinatorExecutorThreadFactory 
thread creation guards
 add 625fcf7  [FLINK-15585][table-common] Declare TableFunction collector 
transient
 add 5a23dd4  [FLINK-15585][table] Improve function identifier string in 
plan digest
 add 97403ac  [FLINK-15585][table-planner] Update plan tests

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/dev/table/functions/udfs.md   |   3 +
 docs/content/docs/dev/table/functions/udfs.md  |   4 +-
 .../flink/table/functions/AggregateFunction.java   |   3 +-
 .../flink/table/functions/AsyncTableFunction.java  |   3 +-
 .../flink/table/functions/ScalarFunction.java  |   3 +-
 .../table/functions/TableAggregateFunction.java|   3 +-
 .../flink/table/functions/TableFunction.java   |   5 +-
 .../flink/table/functions/UserDefinedFunction.java |   8 +-
 .../table/functions/UserDefinedFunctionHelper.java |  31 
 .../functions/UserDefinedFunctionHelperTest.java   | 180 +++--
 .../planner/expressions/SqlAggFunctionVisitor.java |   5 +-
 .../converter/LegacyScalarFunctionConvertRule.java |   3 +-
 .../planner/functions/bridging/BridgingUtils.java  |   3 +-
 .../functions/inference/LookupCallContext.java |   3 +-
 .../flink/table/planner/codegen/CodeGenUtils.scala |   4 +-
 .../planner/functions/utils/TableSqlFunction.scala |   5 +-
 .../planner/plan/utils/SetOpRewriteUtil.scala  |   5 +-
 .../planner/plan/batch/sql/SetOperatorsTest.xml|   4 +-
 .../table/planner/plan/batch/table/CalcTest.xml|   8 +-
 .../planner/plan/batch/table/CorrelateTest.xml |  28 ++--
 .../planner/plan/batch/table/GroupWindowTest.xml   |   6 +-
 .../table/planner/plan/batch/table/JoinTest.xml|   8 +-
 .../batch/table/PythonOverWindowAggregateTest.xml  |   4 +-
 .../planner/plan/batch/table/SetOperatorsTest.xml  |   4 +-
 .../stringexpr/CorrelateStringExpressionTest.xml   |  32 ++--
 .../planner/plan/common/PartialInsertTest.xml  |   8 +-
 .../nodes/exec/operator/StreamOperatorNameTest.xml |  18 +--
 .../plan/rules/logical/PythonMapMergeRuleTest.xml  |  14 +-
 .../rules/logical/RewriteIntersectAllRuleTest.xml  |   8 +-
 .../plan/rules/logical/RewriteMinusAllRuleTest.xml |   8 +-
 .../planner/plan/stream/sql/SetOperatorsTest.xml   |   4 +-
 .../planner/plan/stream/table/AggregateTest.xml|   4 +-
 .../table/planner/plan/stream/table/CalcTest.xml   |  12 +-
 .../plan/stream/table/ColumnFunctionsTest.xml  |   4 +-
 .../planner/plan/stream/table/CorrelateTest.xml|  48 +++---
 .../planner/plan/stream/table/GroupWindowTest.xml  |  16 +-
 .../plan/stream/table/OverAggregateTest.xml|  38 ++---
 .../stream/table/PythonOverWindowAggregateTest.xml |   8 +-
 .../plan/stream/table/TableAggregateTest.xml   |  12 +-
 .../stream/table/TemporalTableFunctionJoinTest.xml |   8 +-
 40 files changed, 322 insertions(+), 253 deletions(-)


[flink] branch master updated (1cd8801 -> 07f668a)

2022-01-17 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1cd8801  [hotfix] Add 1.15 to FlinkVersion
 add 07f668a  [hotfix][table-common] Introduce ObjectIdentifier.ofAnonymous 
to allow storing anonymous, but still uniquely identified, names

No new revisions were added by this update.

Summary of changes:
 .../flink/table/catalog/ObjectIdentifier.java  | 84 ++
 .../flink/table/catalog/ObjectIdentifierTest.java  | 47 
 2 files changed, 117 insertions(+), 14 deletions(-)
 create mode 100644 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/catalog/ObjectIdentifierTest.java


[flink] branch master updated (c385772 -> 8f84515)

2022-01-14 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c385772  [FLINK-25167][API / DataStream]mark StreamOperatorFactory as 
@PublicEvolving and add the new volation
 add 8f84515  [hotfix][table-planner] Ignore 
LogicalRelDataTypeConverterTest temporarily

No new revisions were added by this update.

Summary of changes:
 .../flink/table/planner/typeutils/LogicalRelDataTypeConverterTest.java  | 2 ++
 1 file changed, 2 insertions(+)


[flink] branch master updated (ac32c9f -> 6def8d7)

2022-01-14 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ac32c9f  [FLINK-25230][table-planner] Regenerate JSON plans
 add 6def8d7  [FLINK-25392][table-planner] Support new EXECUTE STATEMENT 
SET BEGIN ... END; syntax

No new revisions were added by this update.

Summary of changes:
 .../src/main/codegen/data/Parser.tdd   |   3 +
 .../src/main/codegen/includes/parserImpls.ftl  |  49 ++
 .../SqlRichExplain.java => dml/SqlExecute.java}|  50 +-
 .../SqlStatementSet.java}  |  67 ++---
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  45 +
 .../table/api/internal/TableEnvironmentImpl.java   |  19 +++-
 .../operations/BeginStatementSetOperation.java |   5 +-
 .../table/operations/EndStatementSetOperation.java |   5 +-
 .../flink/table/operations/ExplainOperation.java   |   5 +-
 .../table/operations/StatementSetOperation.java|  31 +-
 .../planner/calcite/FlinkCalciteSqlValidator.java  |  14 ---
 .../operations/SqlToOperationConverter.java|  38 ++--
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  37 ---
 .../calcite/FlinkCalciteSqlValidatorTest.java  |  19 +++-
 .../operations/SqlToOperationConverterTest.java|  90 +
 .../explain/testStatementSetExecutionExplain.out   |  71 ++
 .../flink/table/api/TableEnvironmentITCase.scala   |  46 -
 .../flink/table/api/TableEnvironmentTest.scala | 107 -
 18 files changed, 567 insertions(+), 134 deletions(-)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/{dql/SqlRichExplain.java
 => dml/SqlExecute.java} (64%)
 copy 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/{dql/SqlRichExplain.java
 => dml/SqlStatementSet.java} (54%)


[flink] 01/02: [FLINK-25230][table-planner] Replace RelDataType with LogicalType serialization

2022-01-14 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 75992f495192b84cfe406d7e24a1188c7cb284b0
Author: Timo Walther 
AuthorDate: Wed Jan 12 14:34:43 2022 +0100

[FLINK-25230][table-planner] Replace RelDataType with LogicalType 
serialization
---
 .../exec/serde/AggregateCallJsonSerializer.java|   2 +-
 .../exec/serde/RelDataTypeJsonDeserializer.java| 173 +-
 .../exec/serde/RelDataTypeJsonSerializer.java  | 146 +
 .../nodes/exec/serde/RexNodeJsonSerializer.java|  74 ++-
 .../exec/serde/RexWindowBoundJsonSerializer.java   |   3 +-
 .../planner/plan/schema/StructuredRelDataType.java |   2 +-
 .../typeutils/LogicalRelDataTypeConverter.java | 649 +
 .../nodes/exec/serde/DataTypeJsonSerdeTest.java|  51 +-
 .../serde/DynamicTableSourceSpecSerdeTest.java |   3 +
 .../plan/nodes/exec/serde/JsonSerdeMocks.java  |  76 +++
 .../nodes/exec/serde/LogicalTypeJsonSerdeTest.java |  45 +-
 .../nodes/exec/serde/RelDataTypeJsonSerdeTest.java | 219 +++
 .../plan/nodes/exec/serde/RexNodeSerdeTest.java| 103 +---
 .../nodes/exec/serde/RexWindowBoundSerdeTest.java  |   3 +-
 .../serde/TemporalTableSourceSpecSerdeTest.java|   8 +-
 .../typeutils/LogicalRelDataTypeConverterTest.java | 215 +++
 16 files changed, 1157 insertions(+), 615 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/AggregateCallJsonSerializer.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/AggregateCallJsonSerializer.java
index 92c85b4..27f2549 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/AggregateCallJsonSerializer.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/AggregateCallJsonSerializer.java
@@ -83,7 +83,7 @@ public class AggregateCallJsonSerializer extends StdSerializer<AggregateCall> {
 jsonGenerator.writeBooleanField(FIELD_NAME_DISTINCT, aggCall.isDistinct());
 jsonGenerator.writeBooleanField(FIELD_NAME_APPROXIMATE, aggCall.isApproximate());
 jsonGenerator.writeBooleanField(FIELD_NAME_IGNORE_NULLS, aggCall.ignoreNulls());
-jsonGenerator.writeObjectField(FIELD_NAME_TYPE, aggCall.getType());
+serializerProvider.defaultSerializeField(FIELD_NAME_TYPE, aggCall.getType(), jsonGenerator);
 jsonGenerator.writeEndObject();
 }
 
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonDeserializer.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonDeserializer.java
index 1476e41..6b35780 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonDeserializer.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonDeserializer.java
@@ -18,57 +18,26 @@
 
 package org.apache.flink.table.planner.plan.nodes.exec.serde;
 
-import org.apache.flink.api.common.typeinfo.TypeInformation;
-import org.apache.flink.table.api.TableException;
+import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.typeutils.LogicalRelDataTypeConverter;
 import org.apache.flink.table.types.logical.LogicalType;
-import org.apache.flink.table.types.logical.RawType;
-import org.apache.flink.table.types.logical.StructuredType;
-import org.apache.flink.table.types.logical.TimestampKind;
-import org.apache.flink.table.types.logical.TypeInformationRawType;
-import org.apache.flink.table.types.logical.utils.LogicalTypeParser;
-import org.apache.flink.table.utils.EncodingUtils;
 
 import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser;
-import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.ObjectCodec;
 import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext;
 import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
 import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer;
-import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ArrayNode;
-import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
-import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.TextNode;
 
-import org.apache.calcite.avatica.util.TimeUnit;
 import org.apache.calcite.rel.type.RelDataType;
-import org.apache.calcite.rel.type.RelDataTypeFactory;
-import org.apache.calcite.rel.type.StructKind;
-import org.apache.calcite.sql.SqlIntervalQualifier;
-import

[flink] branch master updated (b8d1a48 -> ac32c9f)

2022-01-14 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b8d1a48  [hotfix] Fix typing errors for 
SortMergeResultPartitionReadScheduler#createSubpartitionReader
 new 75992f4  [FLINK-25230][table-planner] Replace RelDataType with 
LogicalType serialization
 new ac32c9f  [FLINK-25230][table-planner] Regenerate JSON plans

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../exec/serde/AggregateCallJsonSerializer.java|   2 +-
 .../exec/serde/RelDataTypeJsonDeserializer.java| 173 +-
 .../exec/serde/RelDataTypeJsonSerializer.java  | 146 +
 .../nodes/exec/serde/RexNodeJsonSerializer.java|  74 ++-
 .../exec/serde/RexWindowBoundJsonSerializer.java   |   3 +-
 .../planner/plan/schema/StructuredRelDataType.java |   2 +-
 .../typeutils/LogicalRelDataTypeConverter.java | 649 +
 .../nodes/exec/serde/DataTypeJsonSerdeTest.java|  51 +-
 .../serde/DynamicTableSourceSpecSerdeTest.java |   3 +
 .../plan/nodes/exec/serde/JsonSerdeMocks.java  |  76 +++
 .../nodes/exec/serde/LogicalTypeJsonSerdeTest.java |  45 +-
 .../nodes/exec/serde/RelDataTypeJsonSerdeTest.java | 219 +++
 .../plan/nodes/exec/serde/RexNodeSerdeTest.java| 103 +---
 .../nodes/exec/serde/RexWindowBoundSerdeTest.java  |   3 +-
 .../serde/TemporalTableSourceSpecSerdeTest.java|   8 +-
 .../typeutils/LogicalRelDataTypeConverterTest.java | 215 +++
 .../CalcJsonPlanTest_jsonplan/testComplexCalc.out  | 175 ++
 .../CalcJsonPlanTest_jsonplan/testSimpleFilter.out |  37 +-
 .../testCrossJoin.out  |  52 +-
 .../testCrossJoinOverrideParameters.out|  58 +-
 .../testJoinWithFilter.out |  69 +--
 .../testLeftOuterJoinWithLiteralTrue.out   |  52 +-
 .../testDeduplication.out  |  53 +-
 .../ExpandJsonPlanTest_jsonplan/testExpand.out | 197 ++-
 ...tDistinctAggCalls[isMiniBatchEnabled=false].out | 126 +---
 ...stDistinctAggCalls[isMiniBatchEnabled=true].out | 156 +
 ...gCallsWithGroupBy[isMiniBatchEnabled=false].out |  83 +--
 ...ggCallsWithGroupBy[isMiniBatchEnabled=true].out |  99 +---
 ...AggWithoutGroupBy[isMiniBatchEnabled=false].out | 103 +---
 ...eAggWithoutGroupBy[isMiniBatchEnabled=true].out | 124 +---
 ...erDefinedAggCalls[isMiniBatchEnabled=false].out |  81 +--
 ...serDefinedAggCalls[isMiniBatchEnabled=true].out |  81 +--
 .../testEventTimeHopWindow.out |  57 +-
 .../testEventTimeSessionWindow.out |  57 +-
 .../testEventTimeTumbleWindow.out  | 106 +---
 .../testProcTimeHopWindow.out  |  76 +--
 .../testProcTimeSessionWindow.out  |  76 +--
 .../testProcTimeTumbleWindow.out   |  76 +--
 .../testIncrementalAggregate.out   |  47 +-
 ...lAggregateWithSumCountDistinctAndRetraction.out |  95 +--
 .../testProcessingTimeInnerJoinWithOnClause.out| 148 ++---
 .../testRowTimeInnerJoinWithOnClause.out   |  99 +---
 .../testInnerJoinWithPk.out|  50 +-
 .../testLeftJoinNonEqui.out|  25 +-
 .../LimitJsonPlanTest_jsonplan/testLimit.out   |  10 +-
 .../testJoinTemporalTable.out  | 141 +
 ...testJoinTemporalTableWithProjectionPushDown.out | 135 +
 .../testMatch.out  | 162 ++---
 .../testProcTimeBoundedNonPartitionedRangeOver.out |  85 +--
 .../testProcTimeBoundedPartitionedRangeOver.out| 130 +
 ...undedPartitionedRowsOverWithBuiltinProctime.out |  98 +---
 .../testProcTimeUnboundedPartitionedRangeOver.out  | 106 +---
 ...stProctimeBoundedDistinctPartitionedRowOver.out | 116 +---
 ...edDistinctWithNonDistinctPartitionedRowOver.out | 184 ++
 .../testRowTimeBoundedPartitionedRowsOver.out  |  31 +-
 .../testPythonCalc.out |  20 +-
 .../testPythonFunctionInWhereClause.out|  65 +--
 .../testJoinWithFilter.out | 135 +
 .../testPythonTableFunction.out|  80 +--
 .../tesPythonAggCallsWithGroupBy.out   |  50 +-
 .../testEventTimeHopWindow.out |  77 +--
 .../testEventTimeSessionWindow.out |  77 +--
 .../testEventTimeTumbleWindow.out  |  99 +---
 .../testProcTimeHopWindow.out  |  91 +--
 .../testProcTimeSessionWindow.out  |  91 +--
 .../testProcTimeTumbleWindow.out   | 107 +---
 .../testProcTimeBoundedNonPartitionedRangeOver.out |  80 +--
 .../testProcTimeBoundedPartitionedRa

[flink] branch master updated (149f4fd -> 0157aa3)

2022-01-13 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 149f4fd  [FLINK-24803][table-planner] Fix cast BINARY/VARBINARY to 
STRING
 add 0157aa3  [FLINK-25341][table-planner] Add StructuredToStringCastRule

No new revisions were added by this update.

Summary of changes:
 .../functions/casting/ArrayToArrayCastRule.java|   7 +-
 .../functions/casting/ArrayToStringCastRule.java   |   8 +-
 .../functions/casting/CastRuleProvider.java|  21 ++
 .../functions/casting/CodeGeneratorCastRule.java   |   4 +-
 .../casting/MapAndMultisetToStringCastRule.java|  22 +-
 .../functions/casting/RowToRowCastRule.java|  10 +-
 .../functions/casting/RowToStringCastRule.java |  15 +-
 .../casting/StructuredToStringCastRule.java| 233 +
 .../planner/functions/casting/CastRulesTest.java   |  54 -
 9 files changed, 328 insertions(+), 46 deletions(-)
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/StructuredToStringCastRule.java


[flink] branch master updated: [FLINK-24803][table-planner] Fix cast BINARY/VARBINARY to STRING

2022-01-13 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 149f4fd  [FLINK-24803][table-planner] Fix cast BINARY/VARBINARY to 
STRING
149f4fd is described below

commit 149f4fd3009641ee081ea5c6c05ddc281e84ba2e
Author: Marios Trivyzas 
AuthorDate: Tue Dec 28 12:23:11 2021 +0200

[FLINK-24803][table-planner] Fix cast BINARY/VARBINARY to STRING

Use an hex string representation when casting any kind of
`BINARY`, `VARBINARY` or `BYTES` to `CHAR`/`VARCHAR`/`STRING`, e.g.:

```
SELECT CAST(CAST(x'68656C6C6F20636F6465' AS BINARY(10)) AS VARCHAR)
```
gives:
```
68656c6c6f20636f6465
```

Apply padding or trimming if needed, and also implement the inverse
cast, from the hex string to a `BINARY`/`VARBINARY`/`BYTES` type.

With legacy behaviour enabled, each byte is converted to a UTF-8
char, and the opposite for the inverse cast.

This closes #18221.
---
 .../apache/flink/table/utils/EncodingUtils.java|  56 +
 .../functions/casting/ArrayToStringCastRule.java   |   2 +-
 .../functions/casting/BinaryToStringCastRule.java  |  77 -
 .../planner/functions/casting/CastRuleUtils.java   |  13 ++-
 .../casting/MapAndMultisetToStringCastRule.java|   2 +-
 .../functions/casting/StringToBinaryCastRule.java  |  29 +++--
 .../planner/functions/CastFunctionITCase.java  |  32 +++---
 .../planner/functions/CastFunctionMiscITCase.java  |   4 +-
 .../planner/functions/casting/CastRulesTest.java   | 125 -
 .../planner/expressions/ScalarFunctionsTest.scala  |  19 +---
 10 files changed, 252 insertions(+), 107 deletions(-)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/EncodingUtils.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/EncodingUtils.java
index c114062..d47779e 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/EncodingUtils.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/EncodingUtils.java
@@ -182,6 +182,62 @@ public abstract class EncodingUtils {
 return new String(hexChars);
 }
 
+/**
+ * Converts an array of characters representing hexadecimal values into an array of bytes of
+ * those same values. The returned array will be half the length of the passed array, as it
+ * takes two characters to represent any given byte. An exception is thrown if the passed char
+ * array has an odd number of elements.
+ *
+ * Copied from
+ * https://github.com/apache/commons-codec/blob/master/src/main/java/org/apache/commons/codec/binary/Hex.java.
+ *
+ * @param str An array of characters containing hexadecimal digits
+ * @return A byte array to contain the binary data decoded from the supplied char array.
+ * @throws TableException Thrown if an odd number of characters or illegal characters are
+ * supplied
+ */
+public static byte[] decodeHex(final String str) throws TableException {
+final int len = str.length();
+
+if ((len & 0x01) != 0) {
+throw new TableException("Odd number of characters.");
+}
+
+final int outLen = len >> 1;
+final byte[] out = new byte[outLen];
+
+// two characters form the hex value.
+for (int i = 0, j = 0; j < len; i++) {
+int f = toDigit(str.charAt(j), j) << 4;
+j++;
+f = f | toDigit(str.charAt(j), j);
+j++;
+out[i] = (byte) (f & 0xFF);
+}
+
+return out;
+}
+
+/**
+ * Converts a hexadecimal character to an integer.
+ *
+ * Copied from
+ * https://github.com/apache/commons-codec/blob/master/src/main/java/org/apache/commons/codec/binary/Hex.java.
+ *
+ * @param ch A character to convert to an integer digit
+ * @param idx The index of the character in the source
+ * @return An integer
+ * @throws TableException Thrown if ch is an illegal hex character
+ */
+private static int toDigit(final char ch, final int idx) throws TableException {
+final int digit = Character.digit(ch, 16);
+if (digit == -1) {
+throw new TableException(
+"Illegal hexadecimal character: [" + ch + "] at index: [" + idx + "]");
+}
+return digit;
+}
+
 // 

 // Java String Repetition
 //
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/ArrayToStringCastRule.java
 
b/flink-table/flink-table-planner
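The `EncodingUtils` diff above is truncated by the archive. As a standalone sketch of the hex-decoding logic it adds (simplified: a plain `IllegalArgumentException` in place of Flink's `TableException`), the round trip on the commit message's own example looks like this:

```java
import java.nio.charset.StandardCharsets;

// Self-contained sketch of the decodeHex/toDigit pair shown in the diff above.
final class HexSketch {
    static byte[] decodeHex(String str) {
        final int len = str.length();
        if ((len & 0x01) != 0) {
            throw new IllegalArgumentException("Odd number of characters.");
        }
        final byte[] out = new byte[len >> 1];
        // Two characters form one hex byte.
        for (int i = 0, j = 0; j < len; i++) {
            int f = toDigit(str.charAt(j), j++) << 4;
            f |= toDigit(str.charAt(j), j++);
            out[i] = (byte) (f & 0xFF);
        }
        return out;
    }

    private static int toDigit(char ch, int idx) {
        final int digit = Character.digit(ch, 16);
        if (digit == -1) {
            throw new IllegalArgumentException(
                    "Illegal hexadecimal character: [" + ch + "] at index: [" + idx + "]");
        }
        return digit;
    }

    public static void main(String[] args) {
        // "68656c6c6f20636f6465" is the hex encoding of "hello code",
        // matching the SELECT CAST example in the commit message.
        byte[] decoded = decodeHex("68656c6c6f20636f6465");
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // prints "hello code"
    }
}
```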

[flink] branch master updated (1ea2a7a -> 4518a45)

2022-01-13 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 1ea2a7a  [FLINK-25407][network] Fix the issues caused by FLINK-24035
 add 4518a45  [FLINK-25351][annotations] Introduce `FlinkVersion` as a 
global enum

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/flink/FlinkVersion.java   | 31 +++-
 .../connector/jdbc/xa/JdbcXaSinkMigrationTest.java | 19 ---
 .../kafka/FlinkKafkaConsumerBaseMigrationTest.java | 34 ++---
 .../FlinkKafkaProducerMigrationOperatorTest.java   | 17 +++
 .../kafka/FlinkKafkaProducerMigrationTest.java | 20 
 .../connectors/kafka/KafkaMigrationTestBase.java   | 12 ++---
 .../kafka/KafkaSerializerUpgradeTest.java  | 12 ++---
 .../kinesis/FlinkKinesisConsumerMigrationTest.java | 36 +++---
 .../runtime/WritableSerializerUpgradeTest.java |  8 +--
 .../CompositeTypeSerializerUpgradeTest.java| 12 ++---
 .../typeutils/TypeSerializerUpgradeTestBase.java   | 38 +++---
 .../base/BasicTypeSerializerUpgradeTest.java   | 54 ++--
 ...sicTypeSerializerUpgradeTestSpecifications.java | 52 +--
 .../typeutils/base/EnumSerializerUpgradeTest.java  | 12 ++---
 .../typeutils/base/ListSerializerUpgradeTest.java  |  8 +--
 .../typeutils/base/MapSerializerUpgradeTest.java   |  8 +--
 .../array/PrimitiveArraySerializerUpgradeTest.java | 22 
 ...veArraySerializerUpgradeTestSpecifications.java | 20 
 .../runtime/CopyableSerializerUpgradeTest.java |  8 +--
 .../runtime/NullableSerializerUpgradeTest.java | 12 ++---
 .../runtime/PojoSerializerUpgradeTest.java | 32 ++--
 .../PojoSerializerUpgradeTestSpecifications.java   | 22 
 .../runtime/RowSerializerUpgradeTest.java  | 14 +++---
 .../runtime/TupleSerializerUpgradeTest.java|  8 +--
 .../runtime/ValueSerializerUpgradeTest.java|  8 +--
 .../runtime/kryo/KryoSerializerUpgradeTest.java| 20 
 .../avro/typeutils/AvroSerializerUpgradeTest.java  | 12 ++---
 .../ContinuousFileProcessingMigrationTest.java | 36 +++---
 .../apache/flink/cep/NFASerializerUpgradeTest.java | 26 +-
 .../LockableTypeSerializerUpgradeTest.java |  8 +--
 .../flink/cep/operator/CEPMigrationTest.java   | 30 +--
 ...lueWithProperHashCodeSerializerUpgradeTest.java |  8 +--
 .../ValueArraySerializerUpgradeTest.java   | 40 +++
 .../state/ArrayListSerializerUpgradeTest.java  |  8 +--
 .../runtime/state/JavaSerializerUpgradeTest.java   |  8 +--
 .../state/VoidNamespaceSerializerUpgradeTest.java  |  8 +--
 .../state/ttl/TtlSerializerUpgradeTest.java|  8 +--
 .../typeutils/OptionSerializerUpgradeTest.java |  8 +--
 .../ScalaEitherSerializerUpgradeTest.java  |  8 +--
 .../typeutils/ScalaTrySerializerUpgradeTest.java   |  8 +--
 .../typeutils/EnumValueSerializerUpgradeTest.scala |  7 +--
 .../ScalaCaseClassSerializerUpgradeTest.scala  |  7 +--
 .../TraversableSerializerUpgradeTest.scala | 23 +
 .../api/datastream/UnionSerializerUpgradeTest.java | 12 ++---
 ...oPhaseCommitSinkStateSerializerUpgradeTest.java | 10 ++--
 .../api/operators/TimerSerializerUpgradeTest.java  |  8 +--
 .../co/BufferEntrySerializerUpgradeTest.java   |  8 +--
 .../windowing/WindowOperatorMigrationTest.java | 36 +++---
 .../windowing/WindowSerializerUpgradeTest.java | 12 ++---
 .../StreamElementSerializerUpgradeTest.java|  8 +--
 .../typeutils/LinkedListSerializerUpgradeTest.java |  8 +--
 .../LegacyStatefulJobSavepointMigrationITCase.java | 22 
 .../utils/StatefulJobSavepointMigrationITCase.java | 54 ++--
 .../StatefulJobWBroadcastStateMigrationITCase.java | 51 ++-
 .../TypeSerializerSnapshotMigrationITCase.java | 58 +++---
 .../AbstractKeyedOperatorRestoreTestBase.java  | 36 +++---
 .../restore/keyed/KeyedComplexChainTest.java   |  6 +--
 .../AbstractNonKeyedOperatorRestoreTestBase.java   | 40 +++
 .../operator/restore/unkeyed/ChainBreakTest.java   |  6 +--
 .../restore/unkeyed/ChainLengthDecreaseTest.java   |  6 +--
 .../restore/unkeyed/ChainLengthIncreaseTest.java   |  6 +--
 .../unkeyed/ChainLengthStatelessDecreaseTest.java  |  6 +--
 .../operator/restore/unkeyed/ChainOrderTest.java   |  6 +--
 .../operator/restore/unkeyed/ChainUnionTest.java   |  6 +--
 .../StatefulJobSavepointMigrationITCase.scala  | 57 ++---
 ...StatefulJobWBroadcastStateMigrationITCase.scala | 53 ++--
 66 files changed, 656 insertions(+), 651 deletions(-)
 rename 
flink-core/src/test/java/org/apache/flink/testutils/migration/MigrationVersion.java
 => flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java (71%)


[flink] branch master updated (602db48 -> 7601bd3)

2022-01-11 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 602db48   [FLINK-25190][coordination][metrics] Add 
"numPendingTaskManagers" metrics
 add 7601bd3  [FLINK-25488][table] Clarify delimiters usage in STR_TO_MAP 
function

No new revisions were added by this update.

Summary of changes:
 docs/data/sql_functions.yml | 6 --
 docs/data/sql_functions_zh.yml  | 3 ++-
 .../flink/table/planner/expressions/ScalarFunctionsTest.scala   | 3 +++
 .../org/apache/flink/table/runtime/functions/SqlFunctionUtils.java  | 4 +++-
 4 files changed, 12 insertions(+), 4 deletions(-)
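For context on what the delimiters do, here is a minimal sketch of STR_TO_MAP-style splitting (a hypothetical `strToMap` helper, not Flink's `SqlFunctionUtils`): splitting via `String.split` means both delimiters behave as regular expressions, which is the kind of subtlety this docs change addresses.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of STR_TO_MAP-style splitting.
// Note: String.split treats both delimiters as regular expressions.
final class StrToMapSketch {
    static Map<String, String> strToMap(String text, String listDelimiter, String keyValueDelimiter) {
        Map<String, String> map = new LinkedHashMap<>();
        for (String entry : text.split(listDelimiter)) {
            // Limit 2 so the value may itself contain the key-value delimiter.
            String[] kv = entry.split(keyValueDelimiter, 2);
            map.put(kv[0], kv.length > 1 ? kv[1] : null);
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(strToMap("k1=v1;k2=v2", ";", "=")); // prints {k1=v1, k2=v2}
    }
}
```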


[flink] branch master updated (23c477f -> 423b710)

2022-01-11 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 23c477f  [FLINK-25280][connector/kafka] Disable log deletion in 
KafkaTestEnvironmentImpl to prevent records from being deleted during test run
 add ca3a3cf  [FLINK-25390][table-common] Introduce forwardOptions for 
table and format factories
 add 423b710  [FLINK-25390][connector-kafka][json] Show usage of helper to 
forward and merge options

No new revisions were added by this update.

Summary of changes:
 .../connector/elasticsearch/table/TestContext.java |   1 +
 .../kafka/table/KafkaConnectorOptionsUtil.java |   1 +
 .../kafka/table/KafkaDynamicTableFactory.java  |  20 ++-
 .../flink/formats/json/JsonFormatFactory.java  |  11 ++
 .../flink/formats/json/JsonFormatFactoryTest.java  | 137 ---
 .../flink/table/catalog/ManagedTableListener.java  |   3 +-
 .../table/factories/DecodingFormatFactory.java |   2 +-
 .../flink/table/factories/DynamicTableFactory.java |  67 +++-
 .../table/factories/EncodingFormatFactory.java |   2 +-
 .../apache/flink/table/factories/FactoryUtil.java  | 189 +++--
 .../flink/table/factories/FormatFactory.java   |  52 ++
 .../flink/table/factories/FactoryUtilTest.java | 165 ++
 .../table/factories/TestDynamicTableFactory.java   |   8 +
 .../flink/table/factories/TestFormatFactory.java   |   7 +
 .../flink/table/factories/utils/FactoryMocks.java  |   9 +
 15 files changed, 595 insertions(+), 79 deletions(-)
 create mode 100644 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FormatFactory.java


[flink] branch master updated (0b0a76a -> 562344b)

2022-01-10 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0b0a76a  [FLINK-25369][table] Provide tables of specified 
catalog/database
 add e4a3272  [hotfix][table-api-java] Make DataTypeFactory configurable 
for CatalogManager
 add 6296f6b  [hotfix][table-common] Check for invalid unresolved types in 
DataTypes.of(LogicalType)
 add d7f5c51  [hotfix][table-common] Create UnresolvedIdentifier from 
ObjectIdentifier
 add 3d19ea5  [hotfix][table-api-java] Expose LogicalType creation in 
DataTypeFactory
 add 39d3b84  [hotfix][table-common] Fix LogicalType to DataType conversion 
for DistinctType
 add f28ba66  [hotfix][table-common] Allow disabling autoboxing for 
DataTypeUtils.isInternal
 add 6ed132b  [hotfix][table-planner] Make JSON plan test base more lenient 
for differently configured IDEs
 add 65a4657  [FLINK-25230][table-planner] Harden type serialization for 
LogicalType and DataType
 add 562344b  [FLINK-25230][table-planner] Regenerate JSON plans

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/catalog/CatalogManager.java |  11 +-
 .../flink/table/catalog/DataTypeFactoryImpl.java   |  28 +-
 .../java/org/apache/flink/table/api/DataTypes.java |   2 +-
 .../flink/table/catalog/DataTypeFactory.java   |  25 +-
 .../flink/table/catalog/UnresolvedIdentifier.java  |   9 +
 .../apache/flink/table/types/FieldsDataType.java   |   2 +-
 .../table/types/extraction/ExtractionUtils.java|  27 +
 .../flink/table/types/utils/DataTypeUtils.java |  13 +-
 .../types/utils/LogicalTypeDataTypeConverter.java  |  36 +-
 .../table/types/utils/DataTypeFactoryMock.java |  17 +-
 .../nodes/exec/serde/DataTypeJsonDeserializer.java | 190 ++
 .../nodes/exec/serde/DataTypeJsonSerializer.java   | 171 +
 .../exec/serde/ExecNodeGraphJsonPlanGenerator.java |  16 +-
 .../plan/nodes/exec/serde/JsonSerdeUtil.java   |  13 +-
 .../exec/serde/LogicalTypeJsonDeserializer.java| 555 
 .../exec/serde/LogicalTypeJsonSerializer.java  | 723 -
 .../exec/serde/LogicalWindowJsonDeserializer.java  |   6 +-
 .../exec/serde/LogicalWindowJsonSerializer.java|  24 +-
 .../exec/serde/RelDataTypeJsonSerializer.java  |  19 +-
 .../plan/nodes/exec/serde/SerdeContext.java|  14 +-
 ...r.java => WindowReferenceJsonDeserializer.java} |  46 +-
 ...zer.java => WindowReferenceJsonSerializer.java} |  32 +-
 .../table/planner/typeutils/DataViewUtils.java |  52 +-
 .../nodes/exec/serde/DataTypeJsonSerdeTest.java| 127 
 .../exec/serde/DynamicTableSinkSpecSerdeTest.java  |   8 +-
 .../serde/DynamicTableSourceSpecSerdeTest.java |   8 +-
 ...erdeTest.java => LogicalTypeJsonSerdeTest.java} | 296 ++---
 .../exec/serde/LogicalTypeSerdeCoverageTest.java   |  60 --
 .../nodes/exec/serde/LogicalWindowSerdeTest.java   |   3 +-
 ...erdeTest.java => RelDataTypeJsonSerdeTest.java} |  15 +-
 .../test/resources/jsonplan/testGetJsonPlan.out|  34 +-
 .../CalcJsonPlanTest_jsonplan/testComplexCalc.out  |  69 +-
 .../CalcJsonPlanTest_jsonplan/testSimpleFilter.out |  57 +-
 .../testSimpleProject.out  |  40 +-
 .../testChangelogSource.out|  70 +-
 .../testUpsertSource.out   |  60 +-
 .../testCrossJoin.out  |  60 +-
 .../testCrossJoinOverrideParameters.out|  60 +-
 .../testJoinWithFilter.out |  60 +-
 .../testLeftOuterJoinWithLiteralTrue.out   |  60 +-
 .../testDeduplication.out  | 120 +---
 .../ExpandJsonPlanTest_jsonplan/testExpand.out | 142 +---
 ...tDistinctAggCalls[isMiniBatchEnabled=false].out | 106 +--
 ...stDistinctAggCalls[isMiniBatchEnabled=true].out | 332 --
 ...gCallsWithGroupBy[isMiniBatchEnabled=false].out | 106 +--
 ...ggCallsWithGroupBy[isMiniBatchEnabled=true].out | 146 +
 ...AggWithoutGroupBy[isMiniBatchEnabled=false].out | 110 +---
 ...eAggWithoutGroupBy[isMiniBatchEnabled=true].out | 154 +
 ...erDefinedAggCalls[isMiniBatchEnabled=false].out |  98 +--
 ...serDefinedAggCalls[isMiniBatchEnabled=true].out | 112 +---
 .../testEventTimeHopWindow.out |  85 +--
 .../testEventTimeSessionWindow.out |  85 +--
 .../testEventTimeTumbleWindow.out  | 165 ++---
 .../testProcTimeHopWindow.out  |  90 +--
 .../testProcTimeSessionWindow.out  |  90 +--
 .../testProcTimeTumbleWindow.out   | 131 +---
 .../testIncrementalAggregate.out   | 190 ++
 ...lAggregateWithSumCountDistinctAndRetraction.out | 278 +++-
 .../testProcessingTimeInnerJoinWithOnClause.out| 176 ++---
 .../testRowTimeInnerJoinWithOnClause.out   | 150 ++---
 .../JoinJsonPlanTest_jsonplan/testI

[flink] branch master updated: [FLINK-25228][table-test-utils] Introduce flink-table-test-utils

2022-01-07 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c38ffac  [FLINK-25228][table-test-utils] Introduce 
flink-table-test-utils
c38ffac is described below

commit c38ffac7c5ac5f7ea3ff2511ef71ae898d74c554
Author: slinkydeveloper 
AuthorDate: Wed Dec 15 15:10:07 2021 +0100

[FLINK-25228][table-test-utils] Introduce flink-table-test-utils

This closes #18255.
---
 flink-table/README.md  |   4 +
 .../apache/flink/table/test/ArrayDataAssert.java   |  65 
 .../apache/flink/table/test/DataTypeAssert.java|   2 +
 .../flink/table/test/DataTypeConditions.java   |   2 +
 .../apache/flink/table/test/InternalDataUtils.java | 165 +
 .../apache/flink/table/test/LogicalTypeAssert.java |   2 +
 .../flink/table/test/LogicalTypeConditions.java|   2 +
 .../org/apache/flink/table/test/MapDataAssert.java |  65 
 .../test/{StringDataAssert.java => RowAssert.java} |  27 ++--
 .../org/apache/flink/table/test/RowDataAssert.java |  32 
 .../apache/flink/table/test/RowDataListAssert.java |  80 ++
 .../apache/flink/table/test/StringDataAssert.java  |   2 +
 .../apache/flink/table/test/TableAssertions.java   |  82 ++
 .../planner/functions/casting/CastRulesTest.java   |  26 +---
 flink-table/flink-table-test-utils/pom.xml | 124 
 .../src/test/java/TableAssertionTest.java  |  75 ++
 flink-table/pom.xml|   1 +
 tools/ci/stage.sh  |   3 +-
 18 files changed, 726 insertions(+), 33 deletions(-)

diff --git a/flink-table/README.md b/flink-table/README.md
index b676673..92026fc 100644
--- a/flink-table/README.md
+++ b/flink-table/README.md
@@ -59,6 +59,10 @@ If you want to use Table API & SQL, check out the 
[documentation](https://nightl
 
 * `flink-sql-client`: CLI tool to submit queries to a Flink cluster
 
+### Testing
+
+* `flink-table-test-utils`: Brings in transitively all the dependencies you 
need to execute Table pipelines and provides some test utilities such as 
assertions, mocks and test harnesses.
+
 ### Notes
 
 No module except `flink-table-planner` should depend on `flink-table-runtime` 
in production classpath, 
diff --git 
a/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/ArrayDataAssert.java
 
b/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/ArrayDataAssert.java
new file mode 100644
index 000..ee69280
--- /dev/null
+++ 
b/flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/ArrayDataAssert.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.test;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.assertj.core.api.AbstractAssert;
+
+import java.util.Objects;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Assertions for {@link ArrayData}. */
+@Experimental
+public class ArrayDataAssert extends AbstractAssert<ArrayDataAssert, ArrayData> {
+
+public ArrayDataAssert(ArrayData arrayData) {
+super(arrayData, ArrayDataAssert.class);
+}
+
+public ArrayDataAssert hasSize(int size) {
+isNotNull();
+assertThat(this.actual.size()).isEqualTo(size);
+return this;
+}
+
+public ArrayDataAssert asGeneric(DataType dataType) {
+return asGeneric(dataType.getLogicalType());
+}
+
+public ArrayDataAssert asGeneric(LogicalType logicalType) {
+GenericArrayData actual = 
InternalDataUtils.toGenericArray(this.actual, logicalType);
+return new ArrayDataAssert(actual)
+.usingComparator(
+(x, y) -> {
+// Avoid converting actual again
+ 
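The assertion classes introduced by this commit follow AssertJ's fluent `AbstractAssert` pattern, as the `ArrayDataAssert` diff above shows. A dependency-free sketch of the same shape (illustrative only; the real classes operate on Flink's `ArrayData` and build on AssertJ):

```java
// Minimal, self-contained sketch of a fluent assertion class: the check
// methods validate and return `this` so calls can be chained.
public class ArrayAssertSketch {
    private final int[] actual;

    ArrayAssertSketch(int[] actual) {
        this.actual = actual;
    }

    static ArrayAssertSketch assertThatArray(int[] actual) {
        return new ArrayAssertSketch(actual);
    }

    // Mirrors ArrayDataAssert#hasSize from the diff above.
    ArrayAssertSketch hasSize(int expected) {
        if (actual.length != expected) {
            throw new AssertionError("expected size " + expected + " but was " + actual.length);
        }
        return this;
    }
}
```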

[flink] branch master updated (6d52d10 -> 137b65c)

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6d52d10  [FLINK-25518][table-planner] Harden JSON utilities for serde 
of the persisted plan
 new 5e4d13b  [hotfix][table-runtime][table-planner] Fix the shading 
package name of com.jayway and remove bad transitive deps
 new 64b9b3a  [FLINK-25525][examples-table] Fixed regression which doesn't 
allow to run the examples from IDEA
 new 137b65c  [FLINK-25487][core][table-planner-loader] Improve verbosity 
of classloading errors

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../core/classloading/ComponentClassLoader.java| 56 +++---
 .../core/classloading/SubmoduleClassLoader.java|  4 +-
 .../org/apache/flink/core/plugin/PluginLoader.java |  8 +++-
 .../classloading/ComponentClassLoaderTest.java | 45 +
 flink-examples/flink-examples-table/pom.xml| 27 +++
 .../env/beam/ProcessPythonEnvironmentManager.java  | 17 ---
 .../flink/table/planner/loader/PlannerModule.java  | 50 ---
 flink-table/flink-table-planner/pom.xml| 32 ++---
 flink-table/flink-table-runtime/pom.xml|  3 +-
 9 files changed, 160 insertions(+), 82 deletions(-)


[flink] 02/03: [FLINK-25525][examples-table] Fixed regression which doesn't allow to run the examples from IDEA

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 64b9b3a4b8b8f2032318360445db2f4cd6c36a57
Author: slinkydeveloper 
AuthorDate: Wed Jan 5 16:29:25 2022 +0100

[FLINK-25525][examples-table] Fixed regression which doesn't allow to run 
the examples from IDEA
---
 flink-examples/flink-examples-table/pom.xml | 27 ++-
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/flink-examples/flink-examples-table/pom.xml 
b/flink-examples/flink-examples-table/pom.xml
index bb45932..6791577 100644
--- a/flink-examples/flink-examples-table/pom.xml
+++ b/flink-examples/flink-examples-table/pom.xml
@@ -52,35 +52,44 @@ under the License.

flink-table-api-scala-bridge_${scala.binary.version}
${project.version}

+   <!--
+   In particular, here we're forced to use 
flink-table-planner_${scala.binary.version} instead of
+   flink-table-planner-loader, because otherwise we hit this bug 
https://youtrack.jetbrains.com/issue/IDEA-93855
+   when trying to run the examples from within Intellij IDEA. This 
is only relevant to this specific
+   examples project, as it's in the same build tree of 
flink-parent.
+
+   In a real environment, you need flink-table-runtime and 
flink-table-planner-loader either
+   at test scope, for executing tests, or at provided scope, to 
run the main directly.
+-->

org.apache.flink
-   flink-connector-files
+   flink-table-runtime
${project.version}


org.apache.flink
-   flink-csv
+   
flink-table-planner_${scala.binary.version}
${project.version}

 
-   
+   

org.apache.flink
-   flink-test-utils
+   flink-connector-files
${project.version}
-   test


org.apache.flink
-   flink-table-runtime
+   flink-csv
${project.version}
-   test

+
+   

org.apache.flink
-   flink-table-planner-loader
+   flink-test-utils
${project.version}
test



[flink] 01/03: [hotfix][table-runtime][table-planner] Fix the shading package name of com.jayway and remove bad transitive deps

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 5e4d13bcaacf6079d99df678ee9ec4934253b9c5
Author: slinkydeveloper 
AuthorDate: Wed Jan 5 16:28:27 2022 +0100

[hotfix][table-runtime][table-planner] Fix the shading package name of 
com.jayway and remove bad transitive deps
---
 .../env/beam/ProcessPythonEnvironmentManager.java  | 17 ++--
 flink-table/flink-table-planner/pom.xml| 32 ++
 flink-table/flink-table-runtime/pom.xml|  3 +-
 3 files changed, 24 insertions(+), 28 deletions(-)

diff --git 
a/flink-python/src/main/java/org/apache/flink/python/env/beam/ProcessPythonEnvironmentManager.java
 
b/flink-python/src/main/java/org/apache/flink/python/env/beam/ProcessPythonEnvironmentManager.java
index ab8413a..ac63769 100644
--- 
a/flink-python/src/main/java/org/apache/flink/python/env/beam/ProcessPythonEnvironmentManager.java
+++ 
b/flink-python/src/main/java/org/apache/flink/python/env/beam/ProcessPythonEnvironmentManager.java
@@ -35,7 +35,6 @@ import org.apache.flink.util.function.FunctionWithException;
 
 import org.apache.flink.shaded.guava30.com.google.common.base.Strings;
 
-import org.codehaus.commons.nullanalysis.NotNull;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -96,16 +95,16 @@ public final class ProcessPythonEnvironmentManager 
implements PythonEnvironmentM
 
 private transient PythonEnvResources.PythonLeasedResource resource;
 
-@NotNull private final PythonDependencyInfo dependencyInfo;
-@NotNull private final Map<String, String> systemEnv;
-@NotNull private final String[] tmpDirectories;
-@NotNull private final JobID jobID;
+private final PythonDependencyInfo dependencyInfo;
+private final Map<String, String> systemEnv;
+private final String[] tmpDirectories;
+private final JobID jobID;
 
 public ProcessPythonEnvironmentManager(
-@NotNull PythonDependencyInfo dependencyInfo,
-@NotNull String[] tmpDirectories,
-@NotNull Map<String, String> systemEnv,
-@NotNull JobID jobID) {
+PythonDependencyInfo dependencyInfo,
+String[] tmpDirectories,
+Map<String, String> systemEnv,
+JobID jobID) {
 this.dependencyInfo = Objects.requireNonNull(dependencyInfo);
 this.tmpDirectories = Objects.requireNonNull(tmpDirectories);
 this.systemEnv = Objects.requireNonNull(systemEnv);
diff --git a/flink-table/flink-table-planner/pom.xml 
b/flink-table/flink-table-planner/pom.xml
index f57850d..58d95ae 100644
--- a/flink-table/flink-table-planner/pom.xml
+++ b/flink-table/flink-table-planner/pom.xml
@@ -102,15 +102,6 @@ under the License.
${project.version}

 
-   
-
-   
-   org.apache.flink
-   flink-cep
-   ${project.version}
-   provided
-   
-

 

@@ -341,6 +332,7 @@ under the License.



true
+   
false

${project.basedir}/target/dependency-reduced-pom.xml


@@ -398,14 +390,6 @@ under the License.

org.apache.flink.calcite.shaded.com.google


-   
com.jayway
-   
org.apache.flink.calcite.shaded.com.jayway
-   
-   
-   
com.fasterxml
-   
org.apache.flink.shaded.jackson2.com.fasterxml
-   
-   

org.apache.commons.codec

org.apache.flink.calcite.shaded.org.apache.commons.codec

@@ -414,11 +398,23 @@ under the License.

org.apache.flink.calcite.shaded.org.apache.commons.io

 
+   
+   
+   
com.fasterxml
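The `@NotNull` removal in the diff above follows a standard pattern: replace annotation-based null checks (the annotation came from an unwanted transitive dependency of Janino) with explicit `Objects.requireNonNull` at construction time. A minimal standalone sketch:

```java
import java.util.Objects;

// Sketch of the pattern the commit switches to: fail fast in the constructor
// with Objects.requireNonNull instead of relying on a @NotNull annotation.
public class Holder {
    private final String value;

    public Holder(String value) {
        this.value = Objects.requireNonNull(value, "value must not be null");
    }

    public String value() {
        return value;
    }
}
```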

[flink] 03/03: [FLINK-25487][core][table-planner-loader] Improve verbosity of classloading errors

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 137b65c5f23e5ba546ce52bd15fa7aa528ab95de
Author: slinkydeveloper 
AuthorDate: Wed Jan 5 16:30:11 2022 +0100

[FLINK-25487][core][table-planner-loader] Improve verbosity of classloading 
errors

This closes #18283.
---
 .../core/classloading/ComponentClassLoader.java| 56 +++---
 .../core/classloading/SubmoduleClassLoader.java|  4 +-
 .../org/apache/flink/core/plugin/PluginLoader.java |  8 +++-
 .../classloading/ComponentClassLoaderTest.java | 45 +
 .../flink/table/planner/loader/PlannerModule.java  | 50 ---
 5 files changed, 118 insertions(+), 45 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/core/classloading/ComponentClassLoader.java
 
b/flink-core/src/main/java/org/apache/flink/core/classloading/ComponentClassLoader.java
index 52d886b..76e6b32 100644
--- 
a/flink-core/src/main/java/org/apache/flink/core/classloading/ComponentClassLoader.java
+++ 
b/flink-core/src/main/java/org/apache/flink/core/classloading/ComponentClassLoader.java
@@ -28,6 +28,8 @@ import java.net.URLClassLoader;
 import java.util.Arrays;
 import java.util.Enumeration;
 import java.util.Iterator;
+import java.util.Map;
+import java.util.Optional;
 
 /**
  * A {@link URLClassLoader} that restricts which classes can be loaded to 
those contained within the
@@ -62,17 +64,22 @@ public class ComponentClassLoader extends URLClassLoader {
 private final String[] ownerFirstResourcePrefixes;
 private final String[] componentFirstResourcePrefixes;
 
+private final Map<String, String> knownPackagePrefixesModuleAssociation;
+
 public ComponentClassLoader(
 URL[] classpath,
 ClassLoader ownerClassLoader,
 String[] ownerFirstPackages,
-String[] componentFirstPackages) {
+String[] componentFirstPackages,
+Map<String, String> knownPackagePrefixesModuleAssociation) {
 super(classpath, PLATFORM_OR_BOOTSTRAP_LOADER);
 this.ownerClassLoader = ownerClassLoader;
 
 this.ownerFirstPackages = ownerFirstPackages;
 this.componentFirstPackages = componentFirstPackages;
 
+this.knownPackagePrefixesModuleAssociation = 
knownPackagePrefixesModuleAssociation;
+
 ownerFirstResourcePrefixes = 
convertPackagePrefixesToPathPrefixes(ownerFirstPackages);
 componentFirstResourcePrefixes =
 convertPackagePrefixesToPathPrefixes(componentFirstPackages);
@@ -86,22 +93,39 @@ public class ComponentClassLoader extends URLClassLoader {
protected Class<?> loadClass(final String name, final boolean resolve)
 throws ClassNotFoundException {
 synchronized (getClassLoadingLock(name)) {
-final Class<?> loadedClass = findLoadedClass(name);
-if (loadedClass != null) {
-return resolveIfNeeded(resolve, loadedClass);
-}
-
-if (isComponentFirstClass(name)) {
-return loadClassFromComponentFirst(name, resolve);
+try {
+final Class<?> loadedClass = findLoadedClass(name);
+if (loadedClass != null) {
+return resolveIfNeeded(resolve, loadedClass);
+}
+
+if (isComponentFirstClass(name)) {
+return loadClassFromComponentFirst(name, resolve);
+}
+if (isOwnerFirstClass(name)) {
+return loadClassFromOwnerFirst(name, resolve);
+}
+
+// making this behavior configurable 
(component-only/component-first/owner-first)
+// would allow this class to subsume the 
FlinkUserCodeClassLoader (with an added
+// exception handler)
+return loadClassFromComponentOnly(name, resolve);
+} catch (ClassNotFoundException e) {
+// If we know the package of this class
+Optional<String> foundAssociatedModule =
+
knownPackagePrefixesModuleAssociation.entrySet().stream()
+.filter(entry -> 
name.startsWith(entry.getKey()))
+.map(Map.Entry::getValue)
+.findFirst();
+if (foundAssociatedModule.isPresent()) {
+throw new ClassNotFoundException(
+String.format(
+"Class '%s' not found. Perhaps you forgot 
to add the module '%s' to the classpath?",
+name, foundAssociatedModule.get()),
+e);
+}
+throw e;
 }
-if (isOwnerFirstClass(name)) {
-return loadClassFromOwnerFirst(name, resolve);
-}
-
-
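The error-message enrichment shown in the diff above can be illustrated outside of any classloader: map known package prefixes to the module that provides them, and suggest that module when a class cannot be found. A self-contained sketch (the prefix/module entry below is an illustrative assumption, not the table the commit actually wires up):

```java
import java.util.Map;
import java.util.Optional;

// Standalone sketch of the hint logic: given a class name that failed to
// load, look up a known package prefix and suggest the missing module.
public class ClassLoadHint {
    static final Map<String, String> KNOWN_PREFIXES =
            Map.of("org.apache.flink.table.planner.", "flink-table-planner-loader");

    static String hint(String className) {
        Optional<String> module = KNOWN_PREFIXES.entrySet().stream()
                .filter(e -> className.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
        return module
                .map(m -> String.format(
                        "Class '%s' not found. Perhaps you forgot to add the module '%s' to the classpath?",
                        className, m))
                .orElse("Class '" + className + "' not found.");
    }
}
```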

[flink] branch master updated (c5581b8 -> 6d52d10)

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c5581b8  [FLINK-25526][table-common] Deprecate the classes of the old 
factory stack
 add 6d52d10  [FLINK-25518][table-planner] Harden JSON utilities for serde 
of the persisted plan

No new revisions were added by this update.

Summary of changes:
 .../exec/serde/AggregateCallJsonDeserializer.java  |   7 +-
 .../exec/serde/CatalogTableJsonDeserializer.java   |   7 +-
 .../nodes/exec/serde/DurationJsonDeserializer.java |   3 +-
 .../exec/serde/ExecNodeGraphJsonPlanGenerator.java |  60 +--
 .../exec/serde/FlinkDeserializationContext.java|  93 -
 .../plan/nodes/exec/serde/JsonSerdeUtil.java   | 115 +
 .../exec/serde/LogicalTypeJsonDeserializer.java|   2 +-
 .../exec/serde/LogicalWindowJsonDeserializer.java  |  40 ---
 .../exec/serde/RelDataTypeJsonDeserializer.java|  41 
 .../exec/serde/RexLiteralJsonDeserializer.java |  46 -
 .../nodes/exec/serde/RexNodeJsonDeserializer.java  | 104 ++-
 .../exec/serde/RexWindowBoundJsonDeserializer.java |  10 +-
 .../plan/nodes/exec/serde/SerdeContext.java|   8 ++
 .../exec/serde/ChangelogModeJsonSerdeTest.java |   7 +-
 .../nodes/exec/serde/DurationJsonSerdeTest.java|   8 +-
 .../exec/serde/DynamicTableSinkSpecSerdeTest.java  |  11 +-
 .../serde/DynamicTableSourceSpecSerdeTest.java |  19 ++--
 .../nodes/exec/serde/InputPropertySerdeTest.java   |   3 +-
 .../exec/serde/IntervalJoinSpecJsonSerdeTest.java  |  13 +--
 .../nodes/exec/serde/JoinSpecJsonSerdeTest.java|  13 +--
 .../nodes/exec/serde/LogicalTypeSerdeTest.java |  16 ++-
 .../nodes/exec/serde/LogicalWindowSerdeTest.java   |  27 ++---
 .../plan/nodes/exec/serde/LookupKeySerdeTest.java  |  28 ++---
 .../nodes/exec/serde/RelDataTypeSerdeTest.java |  23 ++---
 .../plan/nodes/exec/serde/RexNodeSerdeTest.java|  17 ++-
 .../nodes/exec/serde/RexWindowBoundSerdeTest.java  |  40 +++
 .../serde/TemporalTableSourceSpecSerdeTest.java|  19 ++--
 .../table/runtime/groupwindow/WindowReference.java |   2 +-
 28 files changed, 303 insertions(+), 479 deletions(-)
 delete mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/FlinkDeserializationContext.java
 delete mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RexLiteralJsonDeserializer.java


[flink] branch master updated (ed814c1 -> c5581b8)

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ed814c1  [FLINK-25513][python] Handle properly for None result in 
flat_map and map of ConnectedStream
 add c5581b8  [FLINK-25526][table-common] Deprecate the classes of the old 
factory stack

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/factories/DeserializationSchemaFactory.java| 3 +++
 .../org/apache/flink/table/factories/SerializationSchemaFactory.java  | 3 +++
 .../src/main/java/org/apache/flink/table/factories/TableFactory.java  | 3 ++-
 .../java/org/apache/flink/table/factories/TableFactoryService.java| 1 +
 .../java/org/apache/flink/table/factories/TableFormatFactory.java | 3 +++
 .../java/org/apache/flink/table/factories/TableFormatFactoryBase.java | 4 
 .../main/java/org/apache/flink/table/factories/TableSinkFactory.java  | 3 +++
 .../org/apache/flink/table/factories/TableSinkFactoryContextImpl.java | 1 +
 .../java/org/apache/flink/table/factories/TableSourceFactory.java | 3 +++
 .../apache/flink/table/factories/TableSourceFactoryContextImpl.java   | 1 +
 .../src/main/java/org/apache/flink/table/sinks/TableSink.java | 1 +
 .../src/main/java/org/apache/flink/table/sinks/TableSinkBase.java | 4 
 12 files changed, 29 insertions(+), 1 deletion(-)


[flink] branch master updated (2824c90 -> 76407f2)

2022-01-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2824c90  [FLINK-25010][hive] Speed up hive's createMRSplits by multi 
thread (#17988)
 add 76407f2  [hotfix][connector-files][connector-hive] Move Hive specific 
configs out of FileSystemConnectorOptions

No new revisions were added by this update.

Summary of changes:
 .../file/table/FileSystemConnectorOptions.java | 100 ---
 .../connectors/hive/HiveDynamicTableFactory.java   |   4 +-
 .../connectors/hive/HiveLookupTableSource.java |   8 +-
 .../apache/flink/connectors/hive/HiveOptions.java  | 110 +
 .../flink/connectors/hive/HiveSourceBuilder.java   |  15 ++-
 .../flink/connectors/hive/HiveTableSource.java |   4 +-
 .../hive/read/HivePartitionFetcherContextBase.java |   4 +-
 .../hive/HiveDynamicTableFactoryTest.java  |  26 ++---
 .../connectors/hive/HiveLookupJoinITCase.java  |  19 ++--
 .../connectors/hive/HiveTemporalJoinITCase.java|   5 +-
 .../hive/read/HivePartitionFetcherTest.java|  10 +-
 11 files changed, 157 insertions(+), 148 deletions(-)


[flink] branch master updated: [FLINK-25516][table-api-java] Add catalog object compile/restore options

2022-01-05 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 50ffe42  [FLINK-25516][table-api-java] Add catalog object 
compile/restore options
50ffe42 is described below

commit 50ffe425bd2070cd4a7f976e2360b8742cc83d8a
Author: Timo Walther 
AuthorDate: Tue Jan 4 11:29:17 2022 +0100

[FLINK-25516][table-api-java] Add catalog object compile/restore options

This closes #18262.
---
 .../generated/table_config_configuration.html  |  12 +++
 .../flink/table/api/config/TableConfigOptions.java | 120 +
 2 files changed, 132 insertions(+)

diff --git a/docs/layouts/shortcodes/generated/table_config_configuration.html 
b/docs/layouts/shortcodes/generated/table_config_configuration.html
index ca94bf0..c5e2be9 100644
--- a/docs/layouts/shortcodes/generated/table_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/table_config_configuration.html
@@ -33,6 +33,18 @@
 The local time zone defines current session time zone id. It 
is used when converting to/from <code>TIMESTAMP WITH LOCAL TIME 
ZONE</code>. Internally, timestamps with local time zone are always 
represented in the UTC time zone. However, when converting to data types that 
don't include a time zone (e.g. TIMESTAMP, TIME, or simply STRING), the session 
time zone is used during conversion. The input of option is either a full name 
such as "America/Los_Angeles", or  [...]
 
 
+table.plan.compile.catalog-objects Batch Streaming
+ALL
+Enum
+Strategy how to persist catalog objects such as tables, 
functions, or data types into a plan during compilation. It influences the need 
for catalog metadata to be present during a restore operation and affects the 
plan size. Possible values: "ALL": All metadata about catalog 
tables, functions, or data types will be persisted into the plan during 
compilation. For catalog tables, this includes the table's identifier, schema, 
and options. For catalog functi [...]
+
+
+table.plan.restore.catalog-objects Batch Streaming
+ALL
+Enum
+Strategy how to restore catalog objects such as tables, 
functions, or data types using a given plan and performing catalog lookups if 
necessary. It influences the need for catalog metadata to be present and enables 
partial enrichment of plan information. Possible 
values: "ALL": Reads all metadata about catalog tables, functions, or 
data types that has been persisted in the plan. The strategy performs a catalog 
lookup by identifier to fill in missing infor [...]
+
+
 table.sql-dialect Batch Streaming
 "default"
 String
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java
index 22b5b26..50c5dcb 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java
@@ -18,13 +18,18 @@
 
 package org.apache.flink.table.api.config;
 
+import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.PublicEvolving;
 import org.apache.flink.annotation.docs.Documentation;
 import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
 import org.apache.flink.table.api.PlannerType;
 import org.apache.flink.table.api.SqlDialect;
+import org.apache.flink.table.catalog.Catalog;
 
 import static org.apache.flink.configuration.ConfigOptions.key;
+import static org.apache.flink.configuration.description.TextElement.text;
 
 /**
  * This class holds {@link org.apache.flink.configuration.ConfigOption}s used 
by table planner.
@@ -90,6 +95,36 @@ public class TableConfigOptions {
 + "the session time zone is used during 
conversion. The input of option is either a full name "
 + "such as \"America/Los_Angeles\", or a 
custom timezone id such as \"GMT-08:00\".");
 
+// 
--
+// Options for plan handling
+// 
--
+
+@Documentation.TableOption(execMode = 
Documentation.ExecMode.BATCH_STREAMING)
+public static final ConfigOption 
PLAN_COMPILE_CATALOG_OBJECTS =
+key("table.plan.

[flink] branch master updated (6681a47 -> 612fa11)

2022-01-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6681a47  [hotfix][docs] Remove duplicate dot in 
generating_watermarks.md
 add be75c0d  Revert "[FLINK-25085][runtime] Add a scheduled thread pool in 
MainThreadExecutor and close it when the endpoint is stopped"
 add d6f1298  Revert "[FLINK-25427] Disable 
SavepointITCase.testTriggerSavepointAndResumeWithNoClaim because it is unstable"
 add 612fa11  Revert "[FLINK-25426] Disable 
UnalignedCheckpointRescaleITCase because it fails regularly"

No new revisions were added by this update.

Summary of changes:
 .../concurrent/ComponentMainThreadExecutor.java|  9 +--
 .../concurrent/ThrowingScheduledFuture.java| 76 
 .../flink/runtime/rpc/FencedRpcEndpoint.java   | 25 +--
 .../org/apache/flink/runtime/rpc/RpcEndpoint.java  | 81 +-
 .../ComponentMainThreadExecutorServiceAdapter.java |  3 -
 ...nuallyTriggeredComponentMainThreadExecutor.java |  3 -
 .../apache/flink/runtime/rpc/RpcEndpointTest.java  | 66 +++---
 .../flink/test/checkpointing/SavepointITCase.java  |  2 -
 .../UnalignedCheckpointRescaleITCase.java  |  2 -
 9 files changed, 31 insertions(+), 236 deletions(-)
 delete mode 100644 
flink-rpc/flink-rpc-core/src/main/java/org/apache/flink/runtime/concurrent/ThrowingScheduledFuture.java


[flink] 02/02: [FLINK-25187][table-planner] Apply padding when CASTing to BINARY()

2021-12-31 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 49acb2723eda8ebd3fc59af19d4bc0abb9f1a318
Author: Marios Trivyzas 
AuthorDate: Tue Dec 21 16:42:23 2021 +0200

[FLINK-25187][table-planner] Apply padding when CASTing to BINARY()

Similarly to `CHAR()`, when casting to a `BINARY()`
apply padding with 0 bytes to the right so that the resulting `byte[]`
matches exactly the specified length.

This closes #18162.
---
 .../functions/casting/BinaryToBinaryCastRule.java  | 41 +-
 .../functions/casting/RawToBinaryCastRule.java | 35 +--
 .../functions/casting/StringToBinaryCastRule.java  | 34 +--
 .../planner/functions/CastFunctionITCase.java  |  5 +++
 .../planner/functions/CastFunctionMiscITCase.java  | 10 +
 .../planner/functions/casting/CastRulesTest.java   | 50 +-
 6 files changed, 116 insertions(+), 59 deletions(-)
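The zero-byte right-padding this commit describes can be sketched with plain JDK arrays. This is an illustrative standalone snippet, not Flink code; `BinaryPadExample` and `padOrTrim` are hypothetical names, and the actual generated cast code in `BinaryToBinaryCastRule` is more involved:

```java
import java.util.Arrays;

public class BinaryPadExample {
    // Hypothetical helper mirroring the described behavior for BINARY(n):
    // trim when the value is too long, right-pad with 0 bytes when too short.
    static byte[] padOrTrim(byte[] input, int targetLength) {
        if (input.length == targetLength) {
            return input;
        }
        // Arrays.copyOf both truncates and zero-pads to the requested length
        return Arrays.copyOf(input, targetLength);
    }

    public static void main(String[] args) {
        byte[] result = padOrTrim(new byte[] {0x01, 0x02}, 4);
        System.out.println(Arrays.toString(result)); // [1, 2, 0, 0]
    }
}
```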

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
index 9887818..72fbcfc 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
@@ -18,8 +18,10 @@
 
 package org.apache.flink.table.planner.functions.casting;
 
+import org.apache.flink.table.types.logical.BinaryType;
 import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 import org.apache.flink.table.types.logical.utils.LogicalTypeChecks;
 
 import java.util.Arrays;
@@ -47,7 +49,7 @@ class BinaryToBinaryCastRule extends 
AbstractExpressionCodeGeneratorCastRule {
@@ -61,7 +61,7 @@ class RawToBinaryCastRule extends 
AbstractNullAwareCodeGeneratorCastRule

[flink] 01/02: [hotfix][table-planner] Assume that length of source type is respected for CAST

2021-12-31 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 1fa98436adf0bc2f88021c63e3bf1c166deb1ae7
Author: Marios Trivyzas 
AuthorDate: Tue Dec 21 13:54:25 2021 +0200

[hotfix][table-planner] Assume that length of source type is respected for 
CAST

When casting to CHAR/VARCHAR/BINARY/VARBINARY, we assume that
the length of the source type CHAR/VARCHAR/BINARY/VARBINARY is
respected, to avoid performance overhead by applying checks and trimming
at runtime. i.e. if input type is `VARCHAR(3)`, input value is 'foobar' and 
target
type is `VARCHAR(4)`, no trimming is applied and the result value remains:
`foobar`.
---
 .../functions/casting/BinaryToBinaryCastRule.java  |  1 +
 .../casting/CharVarCharTrimPadCastRule.java| 23 -
 .../planner/functions/casting/CastRulesTest.java   | 30 --
 3 files changed, 45 insertions(+), 9 deletions(-)
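The optimization in this hotfix boils down to a planning-time decision: runtime trimming code is only generated when the declared source length might exceed the target length (compare the `inputLength != null && inputLength <= targetLength` condition in the diff below). A minimal sketch of that decision, with hypothetical names outside Flink:

```java
public class CastTrimDecision {
    // Hypothetical sketch: decide whether trimming code must be generated.
    // A null sourceLength means the source length is unknown.
    static boolean needsTrimCode(Integer sourceLength, int targetLength) {
        // If the declared source length is known and already fits, trust it
        // and skip runtime trimming entirely (the assumption this hotfix adds).
        return sourceLength == null || sourceLength > targetLength;
    }

    public static void main(String[] args) {
        // VARCHAR(3) -> VARCHAR(4): declared length fits, no trim code emitted,
        // so an oversized value like 'foobar' would pass through unchanged.
        System.out.println(needsTrimCode(3, 4)); // false
        // VARCHAR(8) -> VARCHAR(4): trimming may actually be required.
        System.out.println(needsTrimCode(8, 4)); // true
    }
}
```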

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
index bd21f4e..9887818 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BinaryToBinaryCastRule.java
@@ -63,6 +63,7 @@ class BinaryToBinaryCastRule extends 
AbstractExpressionCodeGeneratorCastRule castRule =
 CastRuleProvider.resolve(inputLogicalType, 
VarCharType.STRING_TYPE);
 
@@ -103,27 +107,32 @@ class CharVarCharTrimPadCastRule
 
 final CastRuleUtils.CodeWriter writer = new 
CastRuleUtils.CodeWriter();
 if (context.legacyBehaviour()
-|| !(couldTrim(length) || couldPad(targetLogicalType, 
length))) {
+|| ((!couldTrim(targetLength)
+// Assume input length is respected by the 
source
+|| (inputLength != null && inputLength <= 
targetLength))
+&& !couldPad(targetLogicalType, targetLength))) {
 return writer.assignStmt(returnVariable, 
stringExpr).toString();
 }
 return writer.ifStmt(
-methodCall(stringExpr, "numChars") + " > " + 
length,
+methodCall(stringExpr, "numChars") + " > " + 
targetLength,
 thenWriter ->
 thenWriter.assignStmt(
 returnVariable,
-methodCall(stringExpr, 
"substring", 0, length)),
+methodCall(stringExpr, 
"substring", 0, targetLength)),
 elseWriter -> {
-if (couldPad(targetLogicalType, length)) {
+if (couldPad(targetLogicalType, targetLength)) 
{
 final String padLength = 
newName("padLength");
 final String padString = 
newName("padString");
 elseWriter.ifStmt(
-methodCall(stringExpr, "numChars") 
+ " < " + length,
+methodCall(stringExpr, "numChars")
++ " < "
++ targetLength,
 thenInnerWriter ->
 thenInnerWriter
 
.declStmt(int.class, padLength)
 .assignStmt(
 padLength,
-length
+
targetLength
 + 
" - "
 + 
methodCall(

 stringExpr,
diff --git 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRulesTest.java
 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting

[flink] branch master updated (ef839ff -> 49acb27)

2021-12-31 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ef839ff  [FLINK-21186][network] Wrap IOException in 
UncheckedIOException in RecordWriterOutput
 new 1fa9843  [hotfix][table-planner] Assume that length of source type is 
respected for CAST
 new 49acb27  [FLINK-25187][table-planner] Apply padding when CASTing to 
BINARY()

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../functions/casting/BinaryToBinaryCastRule.java  | 40 +
 .../casting/CharVarCharTrimPadCastRule.java| 23 +---
 .../functions/casting/RawToBinaryCastRule.java | 35 ++-
 .../functions/casting/StringToBinaryCastRule.java  | 34 ++-
 .../planner/functions/CastFunctionITCase.java  |  5 ++
 .../planner/functions/CastFunctionMiscITCase.java  | 10 
 .../planner/functions/casting/CastRulesTest.java   | 68 --
 7 files changed, 154 insertions(+), 61 deletions(-)


[flink] branch master updated (35d3d31 -> fdc53e7)

2021-12-30 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 35d3d31  [FLINK-18295][runtime] Change IntermediateDataSet to 
explicitly have exactly one consumer vertex
 add ccfd13a  [FLINK-25128][table-planner][table-runtime] Move aggregate 
and table functions with runtime logic in runtime
 add 3e93060  [FLINK-25128][table-planner] Fix usage of avatica core 
DateTimeUtils class
 add 749bb77  [FLINK-25128][table] Reorganize table modules and introduce 
flink-table-planner-loader
 add fdc53e7  [FLINK-25128][e2e] Update tests to replace the planner jars 
whenever necessary to check both planners

No new revisions were added by this update.

Summary of changes:
 flink-architecture-tests/pom.xml   |   6 +-
 .../5b9eed8a-5fb6-4373-98ac-3be2a71941b8   |   3 +
 .../7602816f-5c01-4b7a-9e3e-235dfedec245   |   2 +-
 .../e5126cae-f3fe-48aa-b6fb-60ae6cc3fcd5   |  27 +-
 flink-dist/pom.xml |  53 +++-
 flink-dist/src/main/assemblies/bin.xml |  26 +-
 flink-dist/src/main/assemblies/opt.xml |  19 +-
 flink-docs/pom.xml |   2 +-
 .../flink-sql-client-test/pom.xml  |   2 +-
 .../flink-stream-sql-test/pom.xml  |  11 +-
 flink-end-to-end-tests/flink-tpcds-test/pom.xml|  14 +-
 flink-end-to-end-tests/run-nightly-tests.sh|   3 +-
 flink-end-to-end-tests/test-scripts/common.sh  |  10 +
 .../test-scripts/test_streaming_sql.sh |   6 +
 flink-examples/flink-examples-table/pom.xml|  23 +-
 flink-python/apache-flink-libraries/setup.py   |   2 +-
 flink-table/README.md  |  67 +
 flink-table/flink-sql-client/pom.xml   |  57 +---
 .../pom.xml|  63 ++---
 .../src/main/resources/META-INF/NOTICE |  10 +
 .../main/resources/META-INF/licenses/LICENSE.icu4j |   0
 flink-table/flink-table-common/pom.xml |   2 -
 flink-table/flink-table-planner-loader/pom.xml | 163 +++
 .../table/planner/loader/BaseDelegateFactory.java  |  49 
 .../planner/loader/DelegateExecutorFactory.java|  45 +++
 .../loader/DelegateExpressionParserFactory.java|  38 +++
 .../planner/loader/DelegatePlannerFactory.java |  38 +++
 .../flink/table/planner/loader/PlannerModule.java  | 144 ++
 .../org.apache.flink.table.factories.Factory   |  18 ++
 .../flink/table/planner/loader/LoaderITCase.java   | 100 +++
 flink-table/flink-table-planner/pom.xml| 313 +++--
 .../casting/TimestampToStringCastRule.java |   3 +-
 .../plan/nodes/exec/serde/JsonSerdeUtil.java   |   2 +
 .../plan/nodes/exec/utils/CommonPythonUtil.java|  16 +-
 .../src/main/resources/META-INF/NOTICE |  11 -
 .../codegen/agg/batch/WindowCodeGenerator.scala|   2 +-
 .../table/planner/delegation/PlannerBase.scala |   6 +-
 .../planner/plan/utils/AggFunctionFactory.scala|   2 +-
 .../planner/plan/utils/SetOpRewriteUtil.scala  |   2 +-
 .../FirstValueAggFunctionWithOrderTest.java|   1 +
 .../FirstValueAggFunctionWithoutOrderTest.java |   1 +
 ...stValueWithRetractAggFunctionWithOrderTest.java |   3 +-
 ...alueWithRetractAggFunctionWithoutOrderTest.java |   3 +-
 .../functions/aggfunctions/LagAggFunctionTest.java |   1 +
 .../LastValueAggFunctionWithOrderTest.java |   1 +
 .../LastValueAggFunctionWithoutOrderTest.java  |   1 +
 ...stValueWithRetractAggFunctionWithOrderTest.java |   3 +-
 ...alueWithRetractAggFunctionWithoutOrderTest.java |   3 +-
 .../ListAggWithRetractAggFunctionTest.java |   3 +-
 .../ListAggWsWithRetractAggFunctionTest.java   |   3 +-
 .../MaxWithRetractAggFunctionTest.java |   3 +-
 .../MinWithRetractAggFunctionTest.java |   3 +-
 .../PushLocalAggIntoTableSourceScanRuleTest.java   |   2 +-
 .../planner/plan/batch/sql/SetOperatorsTest.xml|   4 +-
 .../planner/plan/batch/table/SetOperatorsTest.xml  |   4 +-
 .../planner/plan/common/PartialInsertTest.xml  |   8 +-
 .../rules/logical/RewriteIntersectAllRuleTest.xml  |   8 +-
 .../plan/rules/logical/RewriteMinusAllRuleTest.xml |   8 +-
 .../planner/plan/stream/sql/SetOperatorsTest.xml   |   4 +-
 .../planner/plan/batch/sql/SubplanReuseTest.scala  |   2 +-
 .../planner/plan/stream/sql/SubplanReuseTest.scala |   2 +-
 .../runtime/stream/sql/AggregateITCase.scala   |   2 +-
 flink-table/flink-table-runtime/pom.xml|  27 +-
 .../functions/aggregate}/CollectAggFunction.java   |   3 +-
 .../aggregate}/FirstValueAggFunction.java  |   3 +-
 .../FirstValueWithRetractAggFunction.java  |   3 +-
 .../functions/aggregate}/JsonArrayAggFunction.java |   8 +-
 .../aggregate}/JsonObjectAggFunction.java  |   6

[flink] branch master updated (2298fae -> ec893d2)

2021-12-30 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2298fae  [hotfix][connectors][docs] Use big M letters for month in 
date format
 add 071f036  [hotfix][table-planner] Code fixes for Aggregate Functions
 add 9c419fd  [hotfix][table-planner][tests] Fix test for SumWithRetractAgg
 add ec893d2  [FLINK-24809][table-common][table-planner] Fix precision for 
aggs on DECIMAL types

No new revisions were added by this update.

Summary of changes:
 .../functions/BuiltInFunctionDefinitions.java  |  26 +++-
 .../strategies/AggDecimalPlusTypeStrategy.java |   9 +-
 .../planner/expressions/ExpressionBuilder.java |  18 ++-
 .../functions/aggfunctions/AvgAggFunction.java |  52 +--
 .../functions/aggfunctions/CollectAggFunction.java |   2 +-
 .../functions/aggfunctions/Count1AggFunction.java  |   3 +-
 .../functions/aggfunctions/CountAggFunction.java   |   3 +-
 .../aggfunctions/FirstValueAggFunction.java|   2 +-
 .../FirstValueWithRetractAggFunction.java  |   2 +-
 .../functions/aggfunctions/LagAggFunction.java |   1 -
 .../aggfunctions/LastValueAggFunction.java |   2 +-
 .../LastValueWithRetractAggFunction.java   |   2 +-
 .../functions/aggfunctions/LeadLagAggFunction.java |   6 +-
 .../functions/aggfunctions/ListAggFunction.java|  11 +-
 .../functions/aggfunctions/MaxAggFunction.java |   9 +-
 .../aggfunctions/MaxWithRetractAggFunction.java|   2 +-
 .../functions/aggfunctions/MinAggFunction.java |   9 +-
 .../aggfunctions/MinWithRetractAggFunction.java|   2 +-
 .../functions/aggfunctions/RankAggFunction.java|   2 +-
 .../aggfunctions/RankLikeAggFunctionBase.java  |   1 +
 .../aggfunctions/RowNumberAggFunction.java |   3 +-
 .../aggfunctions/SingleValueAggFunction.java   |  22 ++-
 .../functions/aggfunctions/Sum0AggFunction.java|  44 +-
 .../functions/aggfunctions/SumAggFunction.java |  21 ++-
 .../aggfunctions/SumWithRetractAggFunction.java|  74 +
 .../table/planner/codegen/ExprCodeGenerator.scala  |   5 +
 .../runtime/stream/sql/AggregateITCase.scala   | 169 +
 .../runtime/stream/table/AggregateITCase.scala |  50 ++
 28 files changed, 408 insertions(+), 144 deletions(-)


[flink] branch release-1.14 updated: [hotfix][connectors][docs] Use big M letters for month in date format

2021-12-30 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new a73d1b2  [hotfix][connectors][docs] Use big M letters for month in 
date format
a73d1b2 is described below

commit a73d1b2d5a06c209cc87574862b3278b00f350f3
Author: Sergey Nuyanzin 
AuthorDate: Mon Dec 20 18:01:49 2021 +0100

[hotfix][connectors][docs] Use big M letters for month in date format
---
 docs/content.zh/docs/connectors/table/filesystem.md | 2 +-
 docs/content/docs/connectors/table/filesystem.md| 2 +-
 .../apache/flink/table/planner/expressions/TemporalTypesTest.scala  | 6 +++---
 .../apache/flink/table/filesystem/FileSystemConnectorOptions.java   | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)
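The distinction this fix makes matters because in Java date patterns lowercase `mm` means minute-of-hour while uppercase `MM` means month-of-year, so the wrong case silently formats the minutes into the date. A small JDK-only illustration (class name is hypothetical):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class MonthPatternExample {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2021, 12, 20, 18, 5, 0);
        // 'mm' is minute-of-hour, 'MM' is month-of-year.
        String wrong = ts.format(DateTimeFormatter.ofPattern("yyyy-mm-dd"));
        String right = ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
        System.out.println(wrong); // 2021-05-20  (minutes formatted as "month"!)
        System.out.println(right); // 2021-12-20
    }
}
```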

diff --git a/docs/content.zh/docs/connectors/table/filesystem.md 
b/docs/content.zh/docs/connectors/table/filesystem.md
index de9438f..4efcc89 100644
--- a/docs/content.zh/docs/connectors/table/filesystem.md
+++ b/docs/content.zh/docs/connectors/table/filesystem.md
@@ -280,7 +280,7 @@ file sink 支持文件合并,以允许应用程序可以使用较小的检查
 partition.time-extractor.timestamp-pattern
 (none)
 String
- 'default' 时间提取器允许用户从分区字段中提取合法的时间戳模式。默认支持从第一个字段按 'yyyy-mm-dd 
hh:mm:ss' 时间戳模式提取。
+ 'default' 时间提取器允许用户从分区字段中提取合法的时间戳模式。默认支持从第一个字段按 'yyyy-MM-dd 
hh:mm:ss' 时间戳模式提取。
 如果需要从一个分区字段比如 ‘dt’ 提取时间戳,可以配置为: '$dt';
 如果需要从多个分区字段,比如 'year', 'month', 'day' 和 
'hour'提取时间戳,可以配置为:'$year-$month-$day $hour:00:00';
 如果需要从两字分区字段,比如 'dt' 和 'hour' 提取时间戳,可以配置为:'$dt $hour:00:00'.
diff --git a/docs/content/docs/connectors/table/filesystem.md 
b/docs/content/docs/connectors/table/filesystem.md
index 7f143b3..3f2b455 100644
--- a/docs/content/docs/connectors/table/filesystem.md
+++ b/docs/content/docs/connectors/table/filesystem.md
@@ -297,7 +297,7 @@ Time extractors define extracting time from partition 
values.
 partition.time-extractor.timestamp-pattern
 (none)
 String
-The 'default' construction way allows users to use partition 
fields to get a legal timestamp pattern. Default support 'yyyy-mm-dd hh:mm:ss' 
from first field. If timestamp should be extracted from a single partition 
field 'dt', can configure: '$dt'. If timestamp should be extracted from 
multiple partition fields, say 'year', 'month', 'day' and 'hour', can 
configure: '$year-$month-$day $hour:00:00'. If timestamp should be extracted 
from two partition fields 'dt' and 'hour', can [...]
+The 'default' construction way allows users to use partition 
fields to get a legal timestamp pattern. Default support 'yyyy-MM-dd hh:mm:ss' 
from first field. If timestamp should be extracted from a single partition 
field 'dt', can configure: '$dt'. If timestamp should be extracted from 
multiple partition fields, say 'year', 'month', 'day' and 'hour', can 
configure: '$year-$month-$day $hour:00:00'. If timestamp should be extracted 
from two partition fields 'dt' and 'hour', can [...]
 
   
 
diff --git 
a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
 
b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
index 70bd5ca..6cc9d6f 100644
--- 
a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
+++ 
b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
@@ -737,7 +737,7 @@ class TemporalTypesTest extends ExpressionTestBase {
 )
 
 testSqlApi(
-  "TO_TIMESTAMP(f14, 'yyyy-mm-dd')",
+  "TO_TIMESTAMP(f14, 'yyyy-MM-dd')",
   "null"
 )
   }
@@ -1002,7 +1002,7 @@ class TemporalTypesTest extends ExpressionTestBase {
   def testInvalidInputCase(): Unit = {
 val invalidStr = "invalid value"
testSqlApi(s"DATE_FORMAT('$invalidStr', 'yyyy/MM/dd HH:mm:ss')", nullable)
-testSqlApi(s"TO_TIMESTAMP('$invalidStr', 'yyyy-mm-dd')", nullable)
+testSqlApi(s"TO_TIMESTAMP('$invalidStr', 'yyyy-MM-dd')", nullable)
 testSqlApi(s"TO_DATE('$invalidStr')", nullable)
 testSqlApi(
   s"CONVERT_TZ('$invalidStr', 'UTC', 'Asia/Shanghai')",
@@ -1014,7 +1014,7 @@ class TemporalTypesTest extends ExpressionTestBase {
 val invalidStr = "invalid value"
 val cases = Seq(
  s"DATE_FORMAT('$invalidStr', 'yyyy/MM/dd HH:mm:ss')",
-  s"TO_TIMESTAMP('$invalidStr', 'yyyy-mm-dd')",
+  s"TO_TIMESTAMP('$invalidStr', 'yyyy-MM-dd')",
   s"TO_DATE('$invalidStr')",
   s"CONVERT_TZ('$invalidStr', 'UTC', 'Asia/Shanghai')")
 
diff --git 
a/flink-table/flink-table-runtime/sr

[flink] branch master updated (3071f5c -> 2298fae)

2021-12-29 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 3071f5c  [FLINK-25415] Add retries to CasandraConnectorITCase
 add 2298fae  [hotfix][connectors][docs] Use big M letters for month in 
date format

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/connectors/table/filesystem.md | 2 +-
 docs/content/docs/connectors/table/filesystem.md| 2 +-
 .../flink/connector/file/table/FileSystemConnectorOptions.java  | 2 +-
 .../apache/flink/table/planner/expressions/TemporalTypesTest.scala  | 6 +++---
 4 files changed, 6 insertions(+), 6 deletions(-)


[flink] branch master updated (2b1a9de -> 74ed032)

2021-12-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2b1a9de  [FLINK-25132][connector/kafka] Move record deserializing from 
SplitFetcher to RecordEmitter to support object-reusing deserializer
 add 74ed032  [FLINK-25365][python] Remove remaining references to planner 
from Python

No new revisions were added by this update.

Summary of changes:
 flink-python/pom.xml   |  13 +-
 .../datastream/stream_execution_environment.py |   2 +-
 flink-python/pyflink/table/table_environment.py|   4 +-
 .../flink/api/common/python/PythonBridgeUtils.java |  63 +--
 .../flink/streaming/api/utils/ProtoUtils.java  |  15 +-
 .../AbstractPythonStreamAggregateOperator.java |   8 +-
 ...AbstractPythonStreamGroupAggregateOperator.java |   4 +-
 .../PythonStreamGroupAggregateOperator.java|   4 +-
 .../PythonStreamGroupTableAggregateOperator.java   |   4 +-
 .../PythonStreamGroupWindowAggregateOperator.java  |  10 +-
 .../utils/python/PythonInputFormatTableSource.java |  68 +++
 .../flink/table/utils/python/PythonTableUtils.java | 535 +
 ...ghPythonStreamGroupWindowAggregateOperator.java |   4 +-
 .../PythonStreamGroupAggregateOperatorTest.java|   4 +-
 ...ythonStreamGroupTableAggregateOperatorTest.java |   4 +-
 .../stream/StreamExecPythonGroupAggregate.java |  13 +-
 .../StreamExecPythonGroupTableAggregate.java   |  13 +-
 .../StreamExecPythonGroupWindowAggregate.java  |  17 +-
 .../plan/nodes/exec/utils/CommonPythonUtil.java| 111 -
 .../table/planner/typeutils/DataViewUtils.java | 117 +
 .../codegen/agg/AggsHandlerCodeGenerator.scala |   3 +-
 .../planner/codegen/agg/ImperativeAggCodeGen.scala |   2 +-
 .../table/planner/plan/utils/AggregateUtil.scala   |   2 +-
 .../table/planner/plan/utils/aggregation.scala |   3 +-
 .../planner/typeutils/LegacyDataViewUtils.scala|   2 +-
 .../planner/utils/python/PythonTableUtils.scala| 476 --
 .../table/planner/codegen/agg/AggTestBase.scala|   2 +-
 .../plan/stream/table/PythonAggregateTest.scala|   2 +-
 .../flink/table/runtime/dataview/DataViewSpec.java |  35 +-
 .../flink/table/runtime/dataview/ListViewSpec.java |  59 +++
 .../flink/table/runtime/dataview/MapViewSpec.java  |  80 +++
 31 files changed, 923 insertions(+), 756 deletions(-)
 create mode 100644 
flink-python/src/main/java/org/apache/flink/table/utils/python/PythonInputFormatTableSource.java
 create mode 100644 
flink-python/src/main/java/org/apache/flink/table/utils/python/PythonTableUtils.java
 delete mode 100644 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/utils/python/PythonTableUtils.scala
 copy 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/schema/Column.java
 => 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/dataview/DataViewSpec.java
 (59%)
 create mode 100644 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/dataview/ListViewSpec.java
 create mode 100644 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/dataview/MapViewSpec.java


[flink] branch master updated (19bc181 -> 3822612)

2021-12-20 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 19bc181  [FLINK-25366][table-planner][table-runtime] Implement 
BINARY/VARBINARY length validation for sinks
 add 3822612  [FLINK-25215][table] ISODOW, ISOYEAR fail and DECADE gives 
wrong result for timestamps with timezones

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/utils/DateTimeUtils.java|  6 ++--
 .../planner/expressions/TemporalTypesTest.scala| 35 ++
 2 files changed, 38 insertions(+), 3 deletions(-)


[flink] branch master updated: [FLINK-25366][table-planner][table-runtime] Implement BINARY/VARBINARY length validation for sinks

2021-12-20 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 19bc181  [FLINK-25366][table-planner][table-runtime] Implement 
BINARY/VARBINARY length validation for sinks
19bc181 is described below

commit 19bc18100802e8e5a56c5ce08e985d589db81838
Author: Marios Trivyzas 
AuthorDate: Fri Dec 17 15:57:05 2021 +0200

[FLINK-25366][table-planner][table-runtime] Implement BINARY/VARBINARY 
length validation for sinks

Similar to the length validation for CHAR/VARCHAR, implement the same logic 
for
BINARY/VARBINARY and apply any necessary trimming or padding to match the 
length
specified in the corresponding type.

This closes #18142.
---
 .../generated/execution_config_configuration.html  |  12 +-
 .../table/api/config/ExecutionConfigOptions.java   |  31 ++---
 .../plan/nodes/exec/common/CommonExecSink.java |  75 ---
 .../nodes/exec/common/CommonExecSinkITCase.java| 128 +-
 .../runtime/operators/sink/ConstraintEnforcer.java | 146 +++--
 5 files changed, 307 insertions(+), 85 deletions(-)
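The enforcer behavior this commit generalizes to BINARY/VARBINARY can be sketched per field: with `TRIM_PAD`, oversized values are trimmed and undersized `BINARY(n)` values padded with zero bytes; with `IGNORE`, the value passes through untouched. A standalone illustration with hypothetical names (the real `ConstraintEnforcer` operates on Flink's internal row data, not raw arrays):

```java
import java.util.Arrays;

public class LengthEnforcerExample {
    enum Mode { IGNORE, TRIM_PAD }

    // Hypothetical per-value enforcement for a BINARY(declaredLength) column.
    static byte[] enforce(byte[] value, int declaredLength, Mode mode) {
        if (mode == Mode.IGNORE || value.length == declaredLength) {
            return value;
        }
        // Trims when too long, zero-pads when too short
        return Arrays.copyOf(value, declaredLength);
    }

    public static void main(String[] args) {
        byte[] v = {0x0A, 0x0B, 0x0C};
        System.out.println(Arrays.toString(enforce(v, 2, Mode.TRIM_PAD))); // [10, 11]
        System.out.println(Arrays.toString(enforce(v, 2, Mode.IGNORE)));   // [10, 11, 12]
    }
}
```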

diff --git 
a/docs/layouts/shortcodes/generated/execution_config_configuration.html 
b/docs/layouts/shortcodes/generated/execution_config_configuration.html
index 099a9f9..3d3163d 100644
--- a/docs/layouts/shortcodes/generated/execution_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/execution_config_configuration.html
@@ -53,12 +53,6 @@ By default no operator is disabled.
 Sets default parallelism for all operators (such as aggregate, 
join, filter) to run with parallel instances. This config has a higher priority 
than parallelism of StreamExecutionEnvironment (actually, this config overrides 
the parallelism of StreamExecutionEnvironment). A value of -1 indicates that no 
default parallelism is set, then it will fallback to use the parallelism of 
StreamExecutionEnvironment.
 
 
-table.exec.sink.char-length-enforcer Batch Streaming
-IGNORE
-Enum
-Determines whether string values for columns with 
CHAR(length)/VARCHAR(length) types will be trimmed or padded 
(only for CHAR(length)), so that their length will match the one 
defined by the length of their respective CHAR/VARCHAR column type.Possible values:"IGNORE": Don't apply any trimming and padding, and 
instead ignore the CHAR/VARCHAR length directive."TRIM_PAD": Trim and 
pad string values to match the length defi [...]
-
-
 table.exec.sink.keyed-shuffle Streaming
 AUTO
 Enum
@@ -77,6 +71,12 @@ By default no operator is disabled.
 Determines how Flink enforces NOT NULL column constraints when 
inserting null values.Possible values:"ERROR": Throw a 
runtime exception when writing null values into NOT NULL 
column."DROP": Drop records silently if a null value would have to be 
inserted into a NOT NULL column.
 
 
+table.exec.sink.type-length-enforcer Batch Streaming
+IGNORE
+Enum
+Determines whether values for columns with 
CHAR(length)/VARCHAR(length)/BINARY(length)/VARBINARY(length)
 types will be trimmed or padded (only for 
CHAR(length)/BINARY(length)), so that their length will match 
the one defined by the length of their respective CHAR/VARCHAR/BINARY/VARBINARY 
column type.Possible values:"IGNORE": Don't apply any 
trimming and padding, and instead ignore the CHAR/VARCHAR/BINARY/ [...]
+
+
 table.exec.sink.upsert-materialize Streaming
 AUTO
 Enum
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
index ba9a6c3..43a9b46 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
@@ -120,15 +120,16 @@ public class ExecutionConfigOptions {
 "Determines how Flink enforces NOT NULL column 
constraints when inserting null values.");
 
 @Documentation.TableOption(execMode = 
Documentation.ExecMode.BATCH_STREAMING)
-public static final ConfigOption 
TABLE_EXEC_SINK_CHAR_LENGTH_ENFORCER =
-key("table.exec.sink.char-length-enforcer")
-.enumType(CharLengthEnforcer.class)
-.defaultValue(CharLengthEnforcer.IGNORE)
+public static final ConfigOption 
TABLE_EXEC_SINK_TYPE_LENGTH_ENFORCER =
+key("table.exec.sink.typ

[flink] branch master updated (cdf3d48 -> 2e355d9)

2021-12-20 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cdf3d48  [FLINK-21406][parquet] Add AvroParquetReaders to read parquet 
files into Avro types.
 add 2e355d9  [FLINK-25364][python][table-planner] Remove dependency on 
planner for Python code generation

No new revisions were added by this update.

Summary of changes:
 .../python/AbstractStatelessFunctionOperator.java  | 34 ++---
 .../AbstractPythonStreamAggregateOperator.java |  8 +--
 ...stractArrowPythonAggregateFunctionOperator.java | 36 --
 ...tBatchArrowPythonAggregateFunctionOperator.java | 55 ++-
 ...hArrowPythonGroupAggregateFunctionOperator.java | 25 +++
 ...PythonGroupWindowAggregateFunctionOperator.java | 29 +++-
 ...wPythonOverWindowAggregateFunctionOperator.java | 29 
 ...tractStreamArrowPythonBoundedRangeOperator.java | 13 ++--
 ...stractStreamArrowPythonBoundedRowsOperator.java | 13 ++--
 ...wPythonOverWindowAggregateFunctionOperator.java | 23 +++---
 ...PythonGroupWindowAggregateFunctionOperator.java | 27 
 ...eamArrowPythonProcTimeBoundedRangeOperator.java | 13 ++--
 ...reamArrowPythonProcTimeBoundedRowsOperator.java | 13 ++--
 ...reamArrowPythonRowTimeBoundedRangeOperator.java | 13 ++--
 ...treamArrowPythonRowTimeBoundedRowsOperator.java | 13 ++--
 .../AbstractPythonScalarFunctionOperator.java  | 67 +-
 .../scalar/PythonScalarFunctionOperator.java   | 22 --
 .../arrow/ArrowPythonScalarFunctionOperator.java   | 20 --
 .../python/table/PythonTableFunctionOperator.java  | 54 ---
 ...owPythonGroupAggregateFunctionOperatorTest.java | 61 +++-
 ...onGroupWindowAggregateFunctionOperatorTest.java | 65 -
 ...honOverWindowAggregateFunctionOperatorTest.java | 70 +--
 ...onGroupWindowAggregateFunctionOperatorTest.java | 47 +
 ...rrowPythonProcTimeBoundedRangeOperatorTest.java | 38 +++---
 ...ArrowPythonProcTimeBoundedRowsOperatorTest.java | 38 +++---
 ...ArrowPythonRowTimeBoundedRangeOperatorTest.java | 39 ---
 ...mArrowPythonRowTimeBoundedRowsOperatorTest.java | 38 +++---
 .../scalar/PythonScalarFunctionOperatorTest.java   | 51 --
 .../ArrowPythonScalarFunctionOperatorTest.java | 51 --
 .../table/PythonTableFunctionOperatorTest.java | 43 +---
 .../apache/flink/table/connector/Projection.java   |  5 ++
 .../exec/batch/BatchExecPythonGroupAggregate.java  | 64 +
 .../batch/BatchExecPythonGroupWindowAggregate.java | 70 ++-
 .../exec/batch/BatchExecPythonOverAggregate.java   | 81 --
 .../nodes/exec/common/CommonExecPythonCalc.java| 62 +
 .../exec/common/CommonExecPythonCorrelate.java | 52 ++
 .../StreamExecPythonGroupWindowAggregate.java  | 58 +++-
 .../exec/stream/StreamExecPythonOverAggregate.java | 66 --
 .../plan/nodes/exec/utils/CommonPythonUtil.java|  3 +-
 39 files changed, 953 insertions(+), 556 deletions(-)


[flink] branch master updated: [FLINK-25282][table-planner][table-runtime] Move runtime code from table-planner to table-runtime

2021-12-16 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new a54c2e7  [FLINK-25282][table-planner][table-runtime] Move runtime code 
from table-planner to table-runtime
a54c2e7 is described below

commit a54c2e75eb8f214552643adbdc2a5ce2abc2506c
Author: slinkydeveloper 
AuthorDate: Thu Dec 9 12:30:18 2021 +0100

[FLINK-25282][table-planner][table-runtime] Move runtime code from 
table-planner to table-runtime

- Removes the dependency on SqlFunctions from Calcite
- Move DefaultWatermarkGeneratorSupplier to runtime and rename to 
GeneratedWatermarkGeneratorSupplier
- Remove dependency on BuiltInMethod from Calcite for floor, ceil and abs
- Copy from Calcite json functions in SqlJsonUtils. Now jackson and 
jsonpath are shipped by runtime.
- Move various Flink functions

This closes #18108.
---
 .../apache/flink/table/functions/SqlLikeUtils.java |  24 +-
 .../apache/flink/table/utils/DateTimeUtils.java|  89 
 flink-table/flink-table-planner/pom.xml|   8 +-
 .../abilities/source/WatermarkPushDownSpec.java|  92 +---
 .../nodes/exec/stream/StreamExecIntervalJoin.java  |  75 +--
 .../stream/StreamExecLegacyTableSourceScan.java|  82 +---
 .../src/main/resources/META-INF/NOTICE |   4 -
 .../table/planner/codegen/ExprCodeGenerator.scala  |  18 +-
 .../table/planner/codegen/GenerateUtils.scala  |  32 +-
 .../planner/codegen/calls/BuiltInMethods.scala |  86 ++--
 .../planner/codegen/calls/FloorCeilCallGen.scala   |  14 +-
 .../planner/codegen/calls/FunctionGenerator.scala  |  81 ++--
 .../planner/codegen/calls/JsonValueCallGen.scala   |  17 +-
 .../table/planner/codegen/calls/LikeCallGen.scala  |   9 +-
 .../planner/codegen/calls/StringCallGen.scala  |   7 +-
 flink-table/flink-table-runtime/pom.xml|  38 ++
 .../table/runtime/functions/SqlFunctionUtils.java  | 148 ++
 .../table/runtime/functions/SqlJsonUtils.java  | 517 -
 .../GeneratedWatermarkGeneratorSupplier.java   | 109 +
 .../join/interval/FilterAllFlatMapFunction.java|  48 ++
 .../join/interval/PaddingLeftMapFunction.java  |  53 +++
 .../join/interval/PaddingRightMapFunction.java |  53 +++
 .../PeriodicWatermarkAssignerWrapper.java  |  57 +++
 .../PunctuatedWatermarkAssignerWrapper.java|  74 +++
 .../src/main/resources/META-INF/NOTICE |   9 +
 flink-table/pom.xml|   5 +-
 26 files changed, 1393 insertions(+), 356 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/functions/SqlLikeUtils.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/functions/SqlLikeUtils.java
index aa22466..5c3efaf 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/functions/SqlLikeUtils.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/functions/SqlLikeUtils.java
@@ -46,12 +46,30 @@ public class SqlLikeUtils {
 
 private SqlLikeUtils() {}
 
-/** SQL like function with escape. */
+/** SQL {@code LIKE} function. */
+public static boolean like(String s, String pattern) {
+final String regex = sqlToRegexLike(pattern, null);
+return Pattern.matches(regex, s);
+}
+
+/** SQL {@code LIKE} function with escape. */
 public static boolean like(String s, String pattern, String escape) {
 final String regex = sqlToRegexLike(pattern, escape);
 return Pattern.matches(regex, s);
 }
 
+/** SQL {@code SIMILAR} function. */
+public static boolean similar(String s, String pattern) {
+final String regex = sqlToRegexSimilar(pattern, null);
+return Pattern.matches(regex, s);
+}
+
+/** SQL {@code SIMILAR} function with escape. */
+public static boolean similar(String s, String pattern, String escape) {
+final String regex = sqlToRegexSimilar(pattern, escape);
+return Pattern.matches(regex, s);
+}
+
 /** Translates a SQL LIKE pattern to Java regex pattern, with optional 
escape string. */
 public static String sqlToRegexLike(String sqlPattern, CharSequence 
escapeStr) {
 final char escapeChar;
@@ -192,7 +210,7 @@ public class SqlLikeUtils {
 }
 
 /** Translates a SQL SIMILAR pattern to Java regex pattern, with optional 
escape string. */
-static String sqlToRegexSimilar(String sqlPattern, CharSequence escapeStr) 
{
+public static String sqlToRegexSimilar(String sqlPattern, CharSequence 
escapeStr) {
 final char escapeChar;
 if (escapeStr != null) {
 if (escapeStr.length() != 1) {
@@ -206,7 +224,7 @@ public class SqlLikeUtils {
 }
 
 /** Translates SQL SIMILAR pattern to Java regex pattern
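The LIKE-to-regex translation added above can be illustrated with a standalone sketch. This is a deliberately simplified version (no escape handling, no SIMILAR TO extensions) and `LikeSketch` is a made-up class name, not Flink's `SqlLikeUtils`:

```java
import java.util.regex.Pattern;

public final class LikeSketch {
    // Translate a SQL LIKE pattern to a Java regex:
    // '%' matches any sequence, '_' matches one character,
    // everything else is treated literally.
    static String sqlToRegexLike(String sqlPattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : sqlPattern.toCharArray()) {
            if (c == '%') {
                regex.append(".*");
            } else if (c == '_') {
                regex.append('.');
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return regex.toString();
    }

    // Mirrors the spirit of the new like(String, String) overload.
    static boolean like(String s, String pattern) {
        return Pattern.matches(sqlToRegexLike(pattern), s);
    }

    public static void main(String[] args) {
        System.out.println(like("Hello", "He%"));   // true
        System.out.println(like("Hello", "H_llo")); // true
        System.out.println(like("Hello", "h%"));    // false (case-sensitive)
    }
}
```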

[flink] branch master updated (f14ff3d -> 3dda816)

2021-12-16 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from f14ff3d  [FLINK-25158][table-planner][table-runtime] Fix NULL, TRUE 
and FALSE string representation to uppercase
 add 3dda816  [hotfix][table-planner] Fix compilation issues in 
CastRulesTest

No new revisions were added by this update.

Summary of changes:
 .../planner/functions/casting/CastRulesTest.java   | 43 +-
 1 file changed, 17 insertions(+), 26 deletions(-)


[flink] branch master updated (28eb197 -> a79e004)

2021-12-15 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 28eb197  [FLINK-25326][connectors/kafka] Fix application of log levels 
in KafkaUtils.createKafkacontainer
 add a79e004  [FLINK-17151][table] Align Calcite's and Flink's SYMBOL types

No new revisions were added by this update.

Summary of changes:
 .../strategies/SymbolArgumentTypeStrategy.java | 14 +++--
 .../flink/table/types/logical/SymbolType.java  | 64 --
 .../table/types/utils/ClassDataTypeConverter.java  |  2 +-
 .../table/types/ClassDataTypeConverterTest.java|  3 +-
 .../apache/flink/table/types/LogicalTypesTest.java |  6 +-
 .../table/types/ValueDataTypeConverterTest.java|  5 +-
 .../strategies/SymbolArgumentTypeStrategyTest.java | 10 ++--
 .../table/planner/calcite/FlinkTypeFactory.scala   |  6 +-
 .../flink/table/planner/codegen/CodeGenUtils.scala | 12 ++--
 .../table/planner/codegen/GenerateUtils.scala  | 10 +---
 .../planner/codegen/calls/FunctionGenerator.scala  | 58 
 11 files changed, 87 insertions(+), 103 deletions(-)


[flink] branch master updated (ad066f5 -> 3ab9802)

2021-12-15 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ad066f5  [FLINK-24580][Connectors/Kinesis] Make 
ConnectTimeoutException recoverable (#17785)
 add 419eb1e  [FLINK-25051][table-planner] Port raw <-> binary logic to 
CastRule
 add 3ab9802  [FLINK-24419][table-planner] Trim to length when casting to 
BINARY/VARBINARY

No new revisions were added by this update.

Summary of changes:
 ...ngCastRule.java => BinaryToBinaryCastRule.java} |  53 ++
 .../functions/casting/CastRuleProvider.java|   3 +
 .../planner/functions/casting/CastRuleUtils.java   |   4 +
 .../functions/casting/RawToBinaryCastRule.java | 113 +
 .../functions/casting/StringToBinaryCastRule.java  |  64 +++-
 .../planner/codegen/calls/ScalarOperatorGens.scala |   7 --
 .../planner/functions/CastFunctionITCase.java  |  21 ++--
 .../planner/functions/CastFunctionMiscITCase.java  |  18 +++-
 .../planner/functions/casting/CastRulesTest.java   |  32 --
 9 files changed, 259 insertions(+), 56 deletions(-)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{TimestampToStringCastRule.java
 => BinaryToBinaryCastRule.java} (50%)
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/RawToBinaryCastRule.java
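The trimming behaviour named in the FLINK-24419 commit title (trim to length when casting to BINARY/VARBINARY) can be sketched in plain JDK code. `BinaryTrimSketch` is a hypothetical illustration, not the generated cast-rule code, and it only shows trimming (padding semantics are not covered by this commit's title):

```java
import java.util.Arrays;

public final class BinaryTrimSketch {
    // CAST to VARBINARY(n): keep at most n bytes; shorter inputs pass through.
    static byte[] castToVarBinary(byte[] bytes, int length) {
        return bytes.length > length ? Arrays.copyOf(bytes, length) : bytes;
    }

    public static void main(String[] args) {
        byte[] in = {1, 2, 3, 4, 5};
        System.out.println(Arrays.toString(castToVarBinary(in, 3))); // [1, 2, 3]
        System.out.println(Arrays.toString(castToVarBinary(in, 8))); // [1, 2, 3, 4, 5]
    }
}
```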


[flink] branch master updated: [FLINK-25304][table-planner][tests] Add tests for padding of fractional seconds

2021-12-14 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 458e798  [FLINK-25304][table-planner][tests] Add tests for padding of 
fractional seconds
458e798 is described below

commit 458e798f4fb6afe00bbb17d4c4530846fc5de112
Author: Marios Trivyzas 
AuthorDate: Tue Dec 14 15:49:12 2021 +0200

[FLINK-25304][table-planner][tests] Add tests for padding of fractional 
seconds

Add Unit and IT tests to validate the `0` padding of the fractional seconds
when casting a `TIMESTAMP` or `TIMESTAMP_LTZ` to string, so that the length
of the fractional seconds in the resulting string matches the `precision`
specified on the source type.

This closes #18106.
---
 .../planner/functions/CastFunctionITCase.java  | 14 ++-
 .../planner/functions/casting/CastRulesTest.java   | 27 ++
 2 files changed, 40 insertions(+), 1 deletion(-)
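The zero-padding behaviour these tests pin down can be reproduced with a small standalone sketch (plain JDK, not Flink code; `FractionPadSketch` and its method names are made up for illustration): the fractional part of the string is always exactly `precision` digits long, right-padded with `0`.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public final class FractionPadSketch {
    // Render a timestamp so the fraction has exactly `precision` digits,
    // matching the expectation CAST(ts AS STRING) has for TIMESTAMP(p).
    static String format(LocalDateTime ts, int precision) {
        String base = ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        if (precision == 0) {
            return base;
        }
        // getNano() is always 0..999_999_999; print all nine digits,
        // then keep the `precision` most significant ones.
        String nineDigits = String.format("%09d", ts.getNano());
        return base + "." + nineDigits.substring(0, precision);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.parse("2021-09-24T12:34:56.1");
        System.out.println(format(t, 3)); // 2021-09-24 12:34:56.100
        System.out.println(format(t, 9)); // 2021-09-24 12:34:56.100000000
    }
}
```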

diff --git 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
index eceea3c..36450b0 100644
--- 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
+++ 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
@@ -211,8 +211,12 @@ public class CastFunctionITCase extends 
BuiltInFunctionTestBase {
 // seconds are lost
 .fromCase(TIME(5), DEFAULT_TIME, "12:34:56")
 .fromCase(TIMESTAMP(), DEFAULT_TIMESTAMP, "2021-09-24 
12:34:56.123456")
-.fromCase(TIMESTAMP(8), DEFAULT_TIMESTAMP, "2021-09-24 
12:34:56.12345670")
+.fromCase(TIMESTAMP(9), DEFAULT_TIMESTAMP, "2021-09-24 
12:34:56.123456700")
 .fromCase(TIMESTAMP(4), DEFAULT_TIMESTAMP, "2021-09-24 
12:34:56.1234")
+.fromCase(
+TIMESTAMP(3),
+LocalDateTime.parse("2021-09-24T12:34:56.1"),
+"2021-09-24 12:34:56.100")
 .fromCase(TIMESTAMP(4).nullable(), null, null)
 
 // https://issues.apache.org/jira/browse/FLINK-20869
@@ -222,6 +226,14 @@ public class CastFunctionITCase extends 
BuiltInFunctionTestBase {
 TIMESTAMP_LTZ(5),
 DEFAULT_TIMESTAMP_LTZ,
 "2021-09-25 07:54:56.12345")
+.fromCase(
+TIMESTAMP_LTZ(9),
+DEFAULT_TIMESTAMP_LTZ,
+"2021-09-25 07:54:56.123456700")
+.fromCase(
+TIMESTAMP_LTZ(3),
+fromLocalTZ("2021-09-24T22:34:56.1"),
+"2021-09-25 07:54:56.100")
 .fromCase(INTERVAL(YEAR()), 84, "+7-00")
 .fromCase(INTERVAL(MONTH()), 5, "+0-05")
 .fromCase(INTERVAL(MONTH()), 123, "+10-03")
diff --git 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRulesTest.java
 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRulesTest.java
index 76cd6d9..3d943ab 100644
--- 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRulesTest.java
+++ 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRulesTest.java
@@ -527,7 +527,34 @@ class CastRulesTest {
 fromString(String.valueOf(Double.MAX_VALUE)))
 .fromCase(STRING(), fromString("Hello"), 
fromString("Hello"))
 .fromCase(TIMESTAMP(), TIMESTAMP, TIMESTAMP_STRING)
+.fromCase(
+TIMESTAMP(9),
+TIMESTAMP,
+fromString("2021-09-24 12:34:56.123456000"))
+.fromCase(
+TIMESTAMP(7), TIMESTAMP, 
fromString("2021-09-24 12:34:56.1234560"))
+.fromCase(
+TIMESTAMP(3),
+TimestampData.fromLocalDateTime(
+
LocalDateTime

[flink] 01/04: [FLINK-24413][table] Apply trimming & padding when CASTing to CHAR/VARCHAR

2021-12-14 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 09aad58943a1597466a78ebb8d543e6baa3f5092
Author: Marios Trivyzas 
AuthorDate: Thu Dec 9 13:52:38 2021 +0100

[FLINK-24413][table] Apply trimming & padding when CASTing to CHAR/VARCHAR

Apply trimming when CASTing to `CHAR()` or `VARCHAR()`
and the length of the result string exceeds the length specified.
Apply padding to the right with spaces when CASTing to `CHAR()`
and the result string's length is less than the specified length, so
that the length of result string matches exactly the length.

This closes #18063.
---
 .../flink/table/types/logical/VarCharType.java |   2 +
 .../functions/casting/ArrayToStringCastRule.java   | 187 +++-
 .../functions/casting/BinaryToStringCastRule.java  |   3 +-
 .../functions/casting/BooleanToStringCastRule.java |   3 +-
 .../functions/casting/CastRulePredicate.java   |  52 ++--
 .../functions/casting/CastRuleProvider.java|  23 +-
 .../casting/CharVarCharTrimPadCastRule.java| 252 
 .../functions/casting/DateToStringCastRule.java|   7 +-
 .../casting/IntervalToStringCastRule.java  |   3 +-
 .../casting/MapAndMultisetToStringCastRule.java| 300 +++
 .../functions/casting/NumericToStringCastRule.java |   3 +-
 .../functions/casting/RawToStringCastRule.java |  54 +++-
 .../functions/casting/RowToStringCastRule.java |  78 +++--
 .../functions/casting/TimeToStringCastRule.java|   3 +-
 .../casting/TimestampToStringCastRule.java |   3 +-
 .../table/planner/codegen/calls/IfCallGen.scala|  23 +-
 .../planner/functions/CastFunctionITCase.java  |  29 +-
 .../functions/casting/CastRuleProviderTest.java|  19 ++
 .../planner/functions/casting/CastRulesTest.java   | 332 +
 .../planner/expressions/ScalarFunctionsTest.scala  |  16 +-
 20 files changed, 1117 insertions(+), 275 deletions(-)
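The semantics described in the commit message, trim when too long, pad with spaces (CHAR only) when too short, can be sketched independently of the code-generated cast rules. `CharTrimPadSketch` is a hypothetical illustration, not `CharVarCharTrimPadCastRule` itself:

```java
public final class CharTrimPadSketch {
    // CAST to CHAR(n): trim to n if longer, right-pad with spaces if shorter,
    // so the result is always exactly n characters long.
    static String castToChar(String s, int length) {
        if (s.length() > length) {
            return s.substring(0, length);
        }
        StringBuilder sb = new StringBuilder(s);
        while (sb.length() < length) {
            sb.append(' ');
        }
        return sb.toString();
    }

    // CAST to VARCHAR(n): only trim; no padding is applied.
    static String castToVarChar(String s, int length) {
        return s.length() > length ? s.substring(0, length) : s;
    }

    public static void main(String[] args) {
        System.out.println("[" + castToChar("Flink", 8) + "]");        // [Flink   ]
        System.out.println("[" + castToChar("Apache Flink", 6) + "]"); // [Apache]
        System.out.println("[" + castToVarChar("Flink", 8) + "]");     // [Flink]
    }
}
```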

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/VarCharType.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/VarCharType.java
index 5a71b21..7c73b6c 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/VarCharType.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/VarCharType.java
@@ -54,6 +54,8 @@ public final class VarCharType extends LogicalType {
 
 public static final int DEFAULT_LENGTH = 1;
 
+public static final VarCharType STRING_TYPE = new VarCharType(MAX_LENGTH);
+
 private static final String FORMAT = "VARCHAR(%d)";
 
 private static final String MAX_FORMAT = "STRING";
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/ArrayToStringCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/ArrayToStringCastRule.java
index e470739..57f9e48 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/ArrayToStringCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/ArrayToStringCastRule.java
@@ -23,6 +23,7 @@ import org.apache.flink.table.types.logical.ArrayType;
 import org.apache.flink.table.types.logical.LogicalType;
 import org.apache.flink.table.types.logical.LogicalTypeFamily;
 import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeChecks;
 
 import static org.apache.flink.table.planner.codegen.CodeGenUtils.className;
 import static org.apache.flink.table.planner.codegen.CodeGenUtils.newName;
@@ -32,6 +33,9 @@ import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.NUL
 import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.constructorCall;
 import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.methodCall;
 import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.strLiteral;
+import static 
org.apache.flink.table.planner.functions.casting.CharVarCharTrimPadCastRule.couldTrim;
+import static 
org.apache.flink.table.planner.functions.casting.CharVarCharTrimPadCastRule.stringExceedsLength;
+import static org.apache.flink.table.types.logical.VarCharType.STRING_TYPE;
 
 /** {@link LogicalTypeRoot#ARRAY} to {@link 
LogicalTypeFamily#CHARACTER_STRING} cast rule. */
 class ArrayToStringCastRule extends 
AbstractNullAwareCodeGeneratorCastRule {
@@ -51,28 +55,54 @@ class ArrayToStringCastRule extends 
AbstractNullAwareCodeGeneratorCastRule:
+/* Example generated code for ARRAY -> CHAR(10)
 
 isNull$0 = _myInputIsNull;
 if (!isNull$0) {
 builder$1.set

[flink] branch master updated (efa3362 -> 4b1df49)

2021-12-14 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from efa3362  [FLINK-24232][coordination] Skip history server archiving for 
suspended jobs
 new 09aad58  [FLINK-24413][table] Apply trimming & padding when CASTing to 
CHAR/VARCHAR
 new b6ca017  [hotfix][table] Make use of VarCharType.STRING_TYPE
 new b0b68f1  [hotfix][table-planner][tests] Minor fixes to remove IDE 
warnings.
 new 4b1df49  [hotfix][table] Rename precision to length for CHAR/VARCHAR 
sink enforcer

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../generated/execution_config_configuration.html  |   4 +-
 .../table/api/config/ExecutionConfigOptions.java   |  31 +-
 .../flink/table/types/logical/VarCharType.java |   2 +
 .../types/logical/utils/LogicalTypeParser.java |   2 +-
 .../apache/flink/table/types/DataTypesTest.java|   2 +-
 .../flink/table/types/LogicalCommonTypeTest.java   |   4 +-
 .../flink/table/types/LogicalTypeParserTest.java   |   2 +-
 .../types/extraction/DataTypeExtractorTest.java|   9 +-
 .../functions/casting/ArrayToStringCastRule.java   | 187 +++-
 .../functions/casting/BinaryToStringCastRule.java  |   3 +-
 .../functions/casting/BooleanToStringCastRule.java |   3 +-
 .../functions/casting/CastRulePredicate.java   |  52 ++--
 .../functions/casting/CastRuleProvider.java|  23 +-
 .../casting/CharVarCharTrimPadCastRule.java| 252 
 .../functions/casting/DateToStringCastRule.java|   7 +-
 .../casting/IntervalToStringCastRule.java  |   3 +-
 .../casting/MapAndMultisetToStringCastRule.java| 300 +++
 .../functions/casting/NumericToStringCastRule.java |   3 +-
 .../functions/casting/RawToStringCastRule.java |  54 +++-
 .../functions/casting/RowToStringCastRule.java |  78 +++--
 .../functions/casting/TimeToStringCastRule.java|   3 +-
 .../casting/TimestampToStringCastRule.java |   3 +-
 .../plan/nodes/exec/common/CommonExecSink.java |  10 +-
 .../table/planner/plan/type/FlinkReturnTypes.java  |   4 +-
 .../table/planner/codegen/calls/IfCallGen.scala|  23 +-
 .../planner/codegen/calls/StringCallGen.scala  |   2 +-
 .../planner/codegen/SortCodeGeneratorTest.java |   2 +-
 .../planner/functions/CastFunctionITCase.java  |  90 +++---
 .../functions/casting/CastRuleProviderTest.java|  19 ++
 .../planner/functions/casting/CastRulesTest.java   | 332 +
 .../nodes/exec/common/CommonExecSinkITCase.java|  18 +-
 .../apache/flink/table/api/batch/ExplainTest.scala |   2 +-
 .../flink/table/api/stream/ExplainTest.scala   |   2 +-
 .../planner/calcite/FlinkTypeFactoryTest.scala |   6 +-
 .../table/planner/codegen/agg/AggTestBase.scala|   4 +-
 .../codegen/agg/batch/BatchAggTestBase.scala   |   2 +-
 .../agg/batch/HashAggCodeGeneratorTest.scala   |   2 +-
 .../agg/batch/SortAggCodeGeneratorTest.scala   |   4 +-
 .../planner/expressions/ScalarFunctionsTest.scala  |  16 +-
 .../expressions/utils/ExpressionTestBase.scala |   2 +-
 .../plan/batch/sql/DagOptimizationTest.scala   |   2 +-
 .../planner/plan/metadata/MetadataTestUtil.scala   |   6 +-
 .../plan/stream/sql/DagOptimizationTest.scala  |   2 +-
 .../planner/plan/stream/sql/LegacySinkTest.scala   |   2 +-
 .../stream/sql/MiniBatchIntervalInferTest.scala|   2 +-
 .../batch/sql/PartitionableSinkITCase.scala|   2 +-
 .../planner/runtime/batch/sql/UnionITCase.scala|   2 +-
 .../planner/runtime/stream/sql/CalcITCase.scala|   4 +-
 .../runtime/operators/sink/ConstraintEnforcer.java |  60 ++--
 .../flink/table/data/BinaryArrayDataTest.java  |   3 +-
 .../apache/flink/table/data/BinaryRowDataTest.java |   3 +-
 .../flink/table/data/DataFormatConvertersTest.java |   4 +-
 .../window/SlicingWindowAggOperatorTest.java   |   3 +-
 .../ProcTimeDeduplicateFunctionTestBase.java   |   3 +-
 .../RowTimeDeduplicateFunctionTestBase.java|   3 +-
 .../RowTimeWindowDeduplicateOperatorTest.java  |   3 +-
 .../join/RandomSortMergeInnerJoinTest.java |   6 +-
 .../join/String2HashJoinOperatorTest.java  |  14 +-
 .../join/String2SortMergeJoinOperatorTest.java |  12 +-
 .../interval/TimeIntervalStreamJoinTestBase.java   |   6 +-
 .../TemporalProcessTimeJoinOperatorTest.java   |   6 +-
 .../temporal/TemporalTimeJoinOperatorTestBase.java |  12 +-
 .../join/window/WindowJoinOperatorTest.java|   6 +-
 .../ProcTimeRangeBoundedPrecedingFunctionTest.java |   2 +-
 .../operators/over/RowTimeOverWindowTestBase.java  |   4 +-
 .../operators/rank/TopNFunctionTestBase.java   |

[flink] 04/04: [hotfix][table] Rename precision to length for CHAR/VARCHAR sink enforcer

2021-12-14 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4b1df4945141907022a3c5ddae21723a4d5a42f4
Author: Marios Trivyzas 
AuthorDate: Mon Dec 13 14:58:13 2021 +0200

[hotfix][table] Rename precision to length for CHAR/VARCHAR sink enforcer

Rename all `precision` references in code and docs to `length`
which were introduced with: 
https://github.com/apache/flink/commit/1151071b67b866bc18225fc7f522d29e819a6238
---
 .../generated/execution_config_configuration.html  |  4 +-
 .../table/api/config/ExecutionConfigOptions.java   | 31 ++-
 .../plan/nodes/exec/common/CommonExecSink.java | 10 ++--
 .../nodes/exec/common/CommonExecSinkITCase.java| 18 +++
 .../runtime/operators/sink/ConstraintEnforcer.java | 60 +++---
 5 files changed, 61 insertions(+), 62 deletions(-)
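The renamed option chooses between leaving sink values untouched and trimming/padding them to the declared length. A standalone sketch of that choice (simplified, hypothetical types; the real logic lives in `ConstraintEnforcer` and is driven by `table.exec.sink.char-length-enforcer`):

```java
public final class CharLengthEnforcerSketch {
    enum CharLengthEnforcer { IGNORE, TRIM_PAD }

    // Apply the enforcer mode to a value destined for a CHAR(length) column.
    static String enforce(String value, int length, CharLengthEnforcer mode) {
        if (mode == CharLengthEnforcer.IGNORE) {
            return value; // length directive of the column type is ignored
        }
        if (value.length() > length) {
            return value.substring(0, length); // trim
        }
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < length) {
            sb.append(' '); // pad (applies to CHAR, not VARCHAR)
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("[" + enforce("Flink", 3, CharLengthEnforcer.TRIM_PAD) + "]"); // [Fli]
        System.out.println("[" + enforce("Flink", 8, CharLengthEnforcer.TRIM_PAD) + "]"); // [Flink   ]
        System.out.println("[" + enforce("Flink", 3, CharLengthEnforcer.IGNORE) + "]");   // [Flink]
    }
}
```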

diff --git 
a/docs/layouts/shortcodes/generated/execution_config_configuration.html 
b/docs/layouts/shortcodes/generated/execution_config_configuration.html
index 2a35fc8..099a9f9 100644
--- a/docs/layouts/shortcodes/generated/execution_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/execution_config_configuration.html
@@ -53,10 +53,10 @@ By default no operator is disabled.
 Sets default parallelism for all operators (such as aggregate, 
join, filter) to run with parallel instances. This config has a higher priority 
than parallelism of StreamExecutionEnvironment (actually, this config overrides 
the parallelism of StreamExecutionEnvironment). A value of -1 indicates that no 
default parallelism is set, then it will fallback to use the parallelism of 
StreamExecutionEnvironment.
 
 
-table.exec.sink.char-precision-enforcer Batch Streaming
+table.exec.sink.char-length-enforcer Batch Streaming
 IGNORE
 Enum
-Determines whether string values for columns with 
CHAR(precision)/VARCHAR(precision) types will be trimmed or 
padded (only for CHAR(precision)), so that their length will match the 
one defined by the precision of their respective CHAR/VARCHAR column type.Possible values:"IGNORE": Don't apply any trimming and padding, 
and instead ignore the CHAR/VARCHAR precision directive."TRIM_PAD": 
Trim and pad string values to match  [...]
+Determines whether string values for columns with 
CHAR(length)/VARCHAR(length) types will be trimmed or padded 
(only for CHAR(length)), so that their length will match the one 
defined by the length of their respective CHAR/VARCHAR column type.Possible values:"IGNORE": Don't apply any trimming and padding, and 
instead ignore the CHAR/VARCHAR length directive."TRIM_PAD": Trim and 
pad string values to match the length defi [...]
 
 
 table.exec.sink.keyed-shuffle Streaming
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
index 6f655b2..ba9a6c3 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
@@ -120,16 +120,15 @@ public class ExecutionConfigOptions {
 "Determines how Flink enforces NOT NULL column 
constraints when inserting null values.");
 
 @Documentation.TableOption(execMode = 
Documentation.ExecMode.BATCH_STREAMING)
-public static final ConfigOption&lt;CharPrecisionEnforcer&gt;
-TABLE_EXEC_SINK_CHAR_PRECISION_ENFORCER =
-key("table.exec.sink.char-precision-enforcer")
-.enumType(CharPrecisionEnforcer.class)
-.defaultValue(CharPrecisionEnforcer.IGNORE)
-.withDescription(
-"Determines whether string values for 
columns with CHAR(&lt;precision&gt;)/VARCHAR(&lt;precision&gt;) "
-+ "types will be trimmed or padded 
(only for CHAR(&lt;precision&gt;)), so that their "
-+ "length will match the one 
defined by the precision of their respective "
-+ "CHAR/VARCHAR column type.");
+public static final ConfigOption&lt;CharLengthEnforcer&gt; 
TABLE_EXEC_SINK_CHAR_LENGTH_ENFORCER =
+key("table.exec.sink.char-length-enforcer")
+.enumType(CharLengthEnforcer.class)
+.defaultValue(CharLengthEnforcer.IGNORE)
+.withDescription(
+"Determines whether string values for columns with

[flink] 03/04: [hotfix][table-planner][tests] Minor fixes to remove IDE warnings.

2021-12-14 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b0b68f1176888137c615902e0aa5dadf13c75ef9
Author: Marios Trivyzas 
AuthorDate: Fri Dec 10 09:49:40 2021 +0100

[hotfix][table-planner][tests] Minor fixes to remove IDE warnings.
---
 .../planner/functions/CastFunctionITCase.java  | 61 ++
 1 file changed, 27 insertions(+), 34 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
index ae5481b..eceea3c 100644
--- 
a/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
+++ 
b/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/CastFunctionITCase.java
@@ -549,19 +549,19 @@ public class CastFunctionITCase extends 
BuiltInFunctionTestBase {
 .fromCase(
 TINYINT(),
 DEFAULT_POSITIVE_TINY_INT,
-Integer.valueOf(DEFAULT_POSITIVE_TINY_INT))
+(int) DEFAULT_POSITIVE_TINY_INT)
 .fromCase(
 TINYINT(),
 DEFAULT_NEGATIVE_TINY_INT,
-Integer.valueOf(DEFAULT_NEGATIVE_TINY_INT))
+(int) DEFAULT_NEGATIVE_TINY_INT)
 .fromCase(
 SMALLINT(),
 DEFAULT_POSITIVE_SMALL_INT,
-Integer.valueOf(DEFAULT_POSITIVE_SMALL_INT))
+(int) DEFAULT_POSITIVE_SMALL_INT)
 .fromCase(
 SMALLINT(),
 DEFAULT_NEGATIVE_SMALL_INT,
-Integer.valueOf(DEFAULT_NEGATIVE_SMALL_INT))
+(int) DEFAULT_NEGATIVE_SMALL_INT)
 .fromCase(INT(), DEFAULT_POSITIVE_INT, 
DEFAULT_POSITIVE_INT)
 .fromCase(INT(), DEFAULT_NEGATIVE_INT, 
DEFAULT_NEGATIVE_INT)
 .fromCase(BIGINT(), 123, 123)
@@ -607,21 +607,21 @@ public class CastFunctionITCase extends 
BuiltInFunctionTestBase {
 .fromCase(
 TINYINT(),
 DEFAULT_POSITIVE_TINY_INT,
-Long.valueOf(DEFAULT_POSITIVE_TINY_INT))
+(long) DEFAULT_POSITIVE_TINY_INT)
 .fromCase(
 TINYINT(),
 DEFAULT_NEGATIVE_TINY_INT,
-Long.valueOf(DEFAULT_NEGATIVE_TINY_INT))
+(long) DEFAULT_NEGATIVE_TINY_INT)
 .fromCase(
 SMALLINT(),
 DEFAULT_POSITIVE_SMALL_INT,
-Long.valueOf(DEFAULT_POSITIVE_SMALL_INT))
+(long) DEFAULT_POSITIVE_SMALL_INT)
 .fromCase(
 SMALLINT(),
 DEFAULT_NEGATIVE_SMALL_INT,
-Long.valueOf(DEFAULT_NEGATIVE_SMALL_INT))
-.fromCase(INT(), DEFAULT_POSITIVE_INT, 
Long.valueOf(DEFAULT_POSITIVE_INT))
-.fromCase(INT(), DEFAULT_NEGATIVE_INT, 
Long.valueOf(DEFAULT_NEGATIVE_INT))
+(long) DEFAULT_NEGATIVE_SMALL_INT)
+.fromCase(INT(), DEFAULT_POSITIVE_INT, (long) 
DEFAULT_POSITIVE_INT)
+.fromCase(INT(), DEFAULT_NEGATIVE_INT, (long) 
DEFAULT_NEGATIVE_INT)
 .fromCase(BIGINT(), DEFAULT_POSITIVE_BIGINT, 
DEFAULT_POSITIVE_BIGINT)
 .fromCase(BIGINT(), DEFAULT_NEGATIVE_BIGINT, 
DEFAULT_NEGATIVE_BIGINT)
 .fromCase(FLOAT(), DEFAULT_POSITIVE_FLOAT, 123L)
@@ -667,29 +667,25 @@ public class CastFunctionITCase extends 
BuiltInFunctionTestBase {
 .fromCase(
 TINYINT(),
 DEFAULT_POSITIVE_TINY_INT,
-Float.valueOf(DEFAULT_POSITIVE_TINY_INT))
+(float) DEFAULT_POSITIVE_TINY_INT)
 .fromCase(
 TINYINT(),
 DEFAULT_NEGATIVE_TINY_INT,
-Float.valueOf(DEFAULT_NEGATIVE_TINY_INT))
+(float) DEFAULT_NEGATIVE_TINY_INT

[flink] 02/04: [hotfix][table] Make use of VarCharType.STRING_TYPE

2021-12-14 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b6ca017e095de4b055e58af195e1f0a00e312d6e
Author: Marios Trivyzas 
AuthorDate: Thu Dec 9 13:50:10 2021 +0100

[hotfix][table] Make use of VarCharType.STRING_TYPE

Replace occurrences of `new VarCharType(MAX_LENGTH)` with the new constant
`VarCharType.STRING_TYPE`.
---
 .../flink/table/types/logical/utils/LogicalTypeParser.java |  2 +-
 .../java/org/apache/flink/table/types/DataTypesTest.java   |  2 +-
 .../apache/flink/table/types/LogicalCommonTypeTest.java|  4 ++--
 .../apache/flink/table/types/LogicalTypeParserTest.java|  2 +-
 .../table/types/extraction/DataTypeExtractorTest.java  |  9 +++--
 .../flink/table/planner/plan/type/FlinkReturnTypes.java|  4 ++--
 .../flink/table/planner/codegen/calls/StringCallGen.scala  |  2 +-
 .../flink/table/planner/codegen/SortCodeGeneratorTest.java |  2 +-
 .../org/apache/flink/table/api/batch/ExplainTest.scala |  2 +-
 .../org/apache/flink/table/api/stream/ExplainTest.scala|  2 +-
 .../flink/table/planner/calcite/FlinkTypeFactoryTest.scala |  6 +++---
 .../flink/table/planner/codegen/agg/AggTestBase.scala  |  4 ++--
 .../table/planner/codegen/agg/batch/BatchAggTestBase.scala |  2 +-
 .../codegen/agg/batch/HashAggCodeGeneratorTest.scala   |  2 +-
 .../codegen/agg/batch/SortAggCodeGeneratorTest.scala   |  4 ++--
 .../planner/expressions/utils/ExpressionTestBase.scala |  2 +-
 .../table/planner/plan/batch/sql/DagOptimizationTest.scala |  2 +-
 .../table/planner/plan/metadata/MetadataTestUtil.scala |  6 +++---
 .../planner/plan/stream/sql/DagOptimizationTest.scala  |  2 +-
 .../table/planner/plan/stream/sql/LegacySinkTest.scala |  2 +-
 .../plan/stream/sql/MiniBatchIntervalInferTest.scala   |  2 +-
 .../runtime/batch/sql/PartitionableSinkITCase.scala|  2 +-
 .../table/planner/runtime/batch/sql/UnionITCase.scala  |  2 +-
 .../table/planner/runtime/stream/sql/CalcITCase.scala  |  4 ++--
 .../org/apache/flink/table/data/BinaryArrayDataTest.java   |  3 +--
 .../org/apache/flink/table/data/BinaryRowDataTest.java |  3 +--
 .../apache/flink/table/data/DataFormatConvertersTest.java  |  4 ++--
 .../aggregate/window/SlicingWindowAggOperatorTest.java |  3 +--
 .../deduplicate/ProcTimeDeduplicateFunctionTestBase.java   |  3 +--
 .../deduplicate/RowTimeDeduplicateFunctionTestBase.java|  3 +--
 .../window/RowTimeWindowDeduplicateOperatorTest.java   |  3 +--
 .../operators/join/RandomSortMergeInnerJoinTest.java   |  6 +++---
 .../operators/join/String2HashJoinOperatorTest.java| 14 ++
 .../operators/join/String2SortMergeJoinOperatorTest.java   | 12 +---
 .../join/interval/TimeIntervalStreamJoinTestBase.java  |  6 +++---
 .../join/temporal/TemporalProcessTimeJoinOperatorTest.java |  6 +++---
 .../join/temporal/TemporalTimeJoinOperatorTestBase.java| 12 +---
 .../operators/join/window/WindowJoinOperatorTest.java  |  6 +++---
 .../over/ProcTimeRangeBoundedPrecedingFunctionTest.java|  2 +-
 .../runtime/operators/over/RowTimeOverWindowTestBase.java  |  4 +---
 .../table/runtime/operators/rank/TopNFunctionTestBase.java |  8 ++--
 .../operators/rank/window/WindowRankOperatorTest.java  |  5 ++---
 .../runtime/operators/sort/ProcTimeSortOperatorTest.java   |  5 +
 .../runtime/operators/sort/RowTimeSortOperatorTest.java| 10 ++
 .../runtime/operators/sort/StreamSortOperatorTest.java |  2 +-
 .../operators/window/WindowOperatorContractTest.java   |  3 +--
 .../table/runtime/operators/window/WindowOperatorTest.java | 11 ---
 .../table/runtime/types/DataTypePrecisionFixerTest.java|  2 +-
 .../table/runtime/typeutils/RowDataSerializerTest.java |  6 +++---
 .../util/collections/binary/BytesHashMapTestBase.java  |  2 +-
 .../util/collections/binary/BytesMultiMapTestBase.java |  4 ++--
 51 files changed, 93 insertions(+), 128 deletions(-)
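The refactor replaces repeated `new VarCharType(VarCharType.MAX_LENGTH)` allocations with one shared constant. A minimal sketch of the pattern (`VarCharTypeSketch` is illustrative; the real `VarCharType` also carries nullability and formatting logic):

```java
public final class VarCharTypeSketch {
    public static final int MAX_LENGTH = Integer.MAX_VALUE;

    // Shared instance for the common "STRING" case, mirroring the
    // VarCharType.STRING_TYPE constant introduced in the parent commit.
    public static final VarCharTypeSketch STRING_TYPE =
            new VarCharTypeSketch(MAX_LENGTH);

    private final int length;

    public VarCharTypeSketch(int length) {
        if (length < 1) {
            throw new IllegalArgumentException("length must be >= 1");
        }
        this.length = length;
    }

    public int getLength() {
        return length;
    }

    public static void main(String[] args) {
        // Call sites now reuse one instance instead of allocating a new one.
        VarCharTypeSketch a = STRING_TYPE;
        VarCharTypeSketch b = STRING_TYPE;
        System.out.println(a == b);                    // true: same instance
        System.out.println(a.getLength() == MAX_LENGTH); // true
    }
}
```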

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeParser.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeParser.java
index 1c69d77..15b5daa 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeParser.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeParser.java
@@ -528,7 +528,7 @@ public final class LogicalTypeParser {
 case VARCHAR:
 return parseVarCharType();
 case STRING:
-return new VarCharType(VarCharType.MAX_LENGTH);
+return VarCharType.STRING_TYPE;
 case BOOLEAN:
 return new BooleanType();
 case

[flink] branch master updated: [FLINK-25229][table] Introduce flink-table-api-bridge-base

2021-12-12 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e4ae2ef  [FLINK-25229][table] Introduce flink-table-api-bridge-base
e4ae2ef is described below

commit e4ae2ef81e9ecbda10c4dcc5776584b07c2f5e6b
Author: slinkydeveloper 
AuthorDate: Thu Dec 9 14:55:05 2021 +0100

[FLINK-25229][table] Introduce flink-table-api-bridge-base

This closes #18065.
---
 flink-connectors/flink-connector-hbase-1.4/pom.xml |   6 +
 flink-connectors/flink-connector-hbase-2.2/pom.xml |   6 +
 flink-connectors/flink-connector-hive/pom.xml  |   6 +
 flink-connectors/flink-connector-jdbc/pom.xml  |   6 +
 flink-connectors/flink-connector-kafka/pom.xml |   6 +
 flink-formats/flink-avro/pom.xml   |   6 +
 flink-formats/flink-csv/pom.xml|   6 +
 flink-formats/flink-json/pom.xml   |   6 +
 flink-formats/flink-orc/pom.xml|   6 +
 flink-formats/flink-parquet/pom.xml|   6 +
 flink-python/pom.xml   |   6 +
 flink-table/flink-sql-client/pom.xml   |   6 +
 .../pom.xml|  46 +--
 .../AbstractStreamTableEnvironmentImpl.java| 329 +
 .../table/delegation/StreamExecutorFactory.java|  37 +++
 .../operations/DataStreamQueryOperation.java}  |  25 +-
 .../table/operations/ExternalQueryOperation.java}  |   4 +-
 flink-table/flink-table-api-java-bridge/pom.xml|   6 +
 .../java/internal/StreamTableEnvironmentImpl.java  | 270 +
 .../operations/JavaDataStreamQueryOperation.java   | 116 
 .../table/api/internal/TableEnvironmentImpl.java   |   3 +-
 .../flink/table/delegation/ExecutorFactory.java|  11 +-
 flink-table/flink-table-api-scala-bridge/pom.xml   |   6 +
 .../operations/ScalaExternalQueryOperation.java| 121 
 .../internal/StreamTableEnvironmentImpl.scala  | 233 ++-
 flink-table/flink-table-planner/pom.xml|  24 +-
 .../planner/delegation/DefaultExecutorFactory.java |   4 +-
 java => InternalDataStreamQueryOperation.java} |   8 +-
 .../planner/plan/QueryOperationConverter.java  |  44 +--
 .../flink/table/planner/utils/TableTestBase.scala  |  33 +--
 flink-table/flink-table-uber/pom.xml   |   6 +
 flink-table/pom.xml|   1 +
 tools/ci/stage.sh  |   2 +
 33 files changed, 549 insertions(+), 852 deletions(-)

diff --git a/flink-connectors/flink-connector-hbase-1.4/pom.xml 
b/flink-connectors/flink-connector-hbase-1.4/pom.xml
index 99bac72..cd4b56c 100644
--- a/flink-connectors/flink-connector-hbase-1.4/pom.xml
+++ b/flink-connectors/flink-connector-hbase-1.4/pom.xml
@@ -135,6 +135,12 @@ under the License.
 

org.apache.flink
+   
flink-table-api-scala-bridge_${scala.binary.version}
+   ${project.version}
+   test
+   
+   
+   org.apache.flink

flink-table-planner_${scala.binary.version}
${project.version}
test-jar
diff --git a/flink-connectors/flink-connector-hbase-2.2/pom.xml 
b/flink-connectors/flink-connector-hbase-2.2/pom.xml
index ba86049..ed520ce 100644
--- a/flink-connectors/flink-connector-hbase-2.2/pom.xml
+++ b/flink-connectors/flink-connector-hbase-2.2/pom.xml
@@ -260,6 +260,12 @@ under the License.
 

org.apache.flink
+   
flink-table-api-scala-bridge_${scala.binary.version}
+   ${project.version}
+   test
+   
+   
+   org.apache.flink

flink-table-planner_${scala.binary.version}
${project.version}
test-jar
diff --git a/flink-connectors/flink-connector-hive/pom.xml 
b/flink-connectors/flink-connector-hive/pom.xml
index 37f3dea..f807b12 100644
--- a/flink-connectors/flink-connector-hive/pom.xml
+++ b/flink-connectors/flink-connector-hive/pom.xml
@@ -531,6 +531,12 @@ under the License.
 

org.apache.flink
+   
flink-table-api-scala-bridge_${scala.binary.version}
+   ${project.version}
+   test
+   
+   
+   org.apache.flink

flink-table-planner_${scala.binary.version}
${project.version}
test-jar
diff --git a/flink-connectors/flink-connector-jdbc/pom.xml 
b/flink-connectors/flink-connector-jdbc/pom.xml
index 3ccecc6..4edd126 100

[flink] branch master updated: [FLINK-24186][table-planner] Allow multiple rowtime attributes for collect() and print()

2021-12-10 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f27e53a  [FLINK-24186][table-planner] Allow multiple rowtime 
attributes for collect() and print()
f27e53a is described below

commit f27e53a03516ca7de7ec6c86a905f7d8a88b1271
Author: Timo Walther 
AuthorDate: Thu Dec 9 13:35:06 2021 +0100

[FLINK-24186][table-planner] Allow multiple rowtime attributes for 
collect() and print()

This closes #17217.
---
 .../planner/connectors/CollectDynamicSink.java |  2 +-
 .../plan/nodes/exec/batch/BatchExecSink.java   |  3 +-
 .../plan/nodes/exec/common/CommonExecSink.java |  2 +-
 .../plan/nodes/exec/stream/StreamExecSink.java | 13 
 .../org/apache/flink/table/api/TableITCase.scala   | 35 +-
 .../runtime/stream/table/TableSinkITCase.scala |  5 ++--
 6 files changed, 49 insertions(+), 11 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
index 98fcf8b..be59089 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
@@ -49,7 +49,7 @@ import java.util.function.Function;
 
 /** Table sink for {@link TableResult#collect()}. */
 @Internal
-final class CollectDynamicSink implements DynamicTableSink {
+public final class CollectDynamicSink implements DynamicTableSink {
 
 private final ObjectIdentifier tableIdentifier;
 private final DataType consumedDataType;
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java
index 3633628..64a1c0c 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/batch/BatchExecSink.java
@@ -56,6 +56,7 @@ public class BatchExecSink extends CommonExecSink implements BatchExecNode<RowData>
     protected Transformation<RowData> translateToPlanInternal(PlannerBase planner) {
         final Transformation<RowData> inputTransform =
                 (Transformation<RowData>) getInputEdges().get(0).translateToPlan(planner);
-        return createSinkTransformation(planner, inputTransform, -1, false);
+        final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext());
+        return createSinkTransformation(planner, inputTransform, tableSink, -1, false);
     }
 }
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java
index 9c1870f..65500b9 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSink.java
@@ -117,9 +117,9 @@ public abstract class CommonExecSink extends ExecNodeBase<RowData>
     protected Transformation<Object> createSinkTransformation(
             PlannerBase planner,
             Transformation<RowData> inputTransform,
+            DynamicTableSink tableSink,
             int rowtimeFieldIndex,
             boolean upsertMaterialize) {
-        final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext());
         final ResolvedSchema schema = tableSinkSpec.getCatalogTable().getResolvedSchema();
         final SinkRuntimeProvider runtimeProvider =
                 tableSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded));
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java
index c145b59..848779c 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecSink.java
@@ -23,6 +23,7 @@ import org.apache.flink.table.api.TableException;
 import org.apache.flink.table.connector.ChangelogMode;
 import org.apache.flink.table.connector.sink.DynamicTableSink;
 import org.apache.flink.table.data.RowData

[flink] branch master updated (fca04c3 -> 2a2e72d)

2021-12-09 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from fca04c3  [FLINK-24077][HBase/IT] use MiniClusterWithClientResource as 
@ClassRule.
 add 2a2e72d  [FLINK-25157][table-planner] Introduce NullToStringCastRule

No new revisions were added by this update.

Summary of changes:
 .../AbstractNullAwareCodeGeneratorCastRule.java |  2 +-
 .../planner/functions/casting/CastCodeBlock.java| 10 +-
 .../planner/functions/casting/CastRuleProvider.java |  1 +
 .../planner/functions/casting/IdentityCastRule.java |  2 +-
 ...tringCastRule.java => NullToStringCastRule.java} | 21 -
 .../planner/functions/casting/CastRulesTest.java|  3 +++
 .../table/data/binary/BinaryStringDataUtil.java |  2 ++
 7 files changed, 29 insertions(+), 12 deletions(-)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{DateToStringCastRule.java
 => NullToStringCastRule.java} (69%)


[flink] branch master updated (4b42936 -> 34c74e1)

2021-12-08 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 4b42936  [FLINK-25052][table-planner] Port row to row casting to 
CastRule
 add 34c74e1  [FLINK-25156][table-planner] Support distinct type in 
CastRules

No new revisions were added by this update.

Summary of changes:
 .../functions/casting/CastRuleProvider.java| 13 -
 .../functions/casting/CastRuleProviderTest.java| 61 ++
 2 files changed, 73 insertions(+), 1 deletion(-)
 create mode 100644 
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/casting/CastRuleProviderTest.java


[flink] branch master updated (d8d3779 -> 4b42936)

2021-12-08 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d8d3779  [FLINK-25189][docs][connectors/elasticsearch] Update 
supported versions on Elasticsearch connector page
 add 4b42936  [FLINK-25052][table-planner] Port row to row casting to 
CastRule

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/test/TableAssertions.java   |   4 +
 .../casting/AbstractCodeGeneratorCastRule.java |  19 +-
 .../functions/casting/CastRuleProvider.java|   1 +
 .../planner/functions/casting/CastRuleUtils.java   |  19 +
 .../functions/casting/RowToRowCastRule.java| 231 +
 .../functions/casting/RowToStringCastRule.java |  22 +-
 .../flink/table/planner/codegen/CodeGenUtils.scala |  26 +-
 .../planner/codegen/calls/ScalarOperatorGens.scala |  35 --
 .../planner/functions/CastFunctionITCase.java  |  86 ++--
 .../planner/functions/casting/CastRulesTest.java   | 566 +++--
 10 files changed, 639 insertions(+), 370 deletions(-)
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/RowToRowCastRule.java


[flink] branch master updated (2b167ae -> a4299a2)

2021-12-08 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2b167ae  [FLINK-23532] Remove unnecessary StreamTask#finishTask
 add a4299a2  [hotfix][table-common] Fix typo in ProjectableDecodingFormat

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/connector/format/ProjectableDecodingFormat.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[flink] 02/02: [hotfix][table-api-java] Migrate SchemaTranslatorTest to assertj

2021-12-07 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a0c74c1e07030e7bca0c93e24a1e7643937371a7
Author: Timo Walther 
AuthorDate: Mon Dec 6 17:05:36 2021 +0100

[hotfix][table-api-java] Migrate SchemaTranslatorTest to assertj
---
 .../flink/table/catalog/SchemaTranslatorTest.java  | 372 ++---
 1 file changed, 176 insertions(+), 196 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
index c2c2e9f..3bfe128 100644
--- 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
+++ 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
@@ -20,7 +20,6 @@ package org.apache.flink.table.catalog;
 
 import org.apache.flink.api.common.typeinfo.TypeInformation;
 import org.apache.flink.api.common.typeinfo.Types;
-import org.apache.flink.table.api.DataTypes;
 import org.apache.flink.table.api.Schema;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.catalog.SchemaTranslator.ConsumingResult;
@@ -37,12 +36,18 @@ import java.time.DayOfWeek;
 import java.util.Arrays;
 import java.util.Optional;
 
-import static org.apache.flink.core.testutils.FlinkMatchers.containsMessage;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertThat;
-import static org.junit.Assert.assertTrue;
+import static org.apache.flink.core.testutils.FlinkAssertions.anyCauseMatches;
+import static org.apache.flink.table.api.DataTypes.BIGINT;
+import static org.apache.flink.table.api.DataTypes.BOOLEAN;
+import static org.apache.flink.table.api.DataTypes.DECIMAL;
+import static org.apache.flink.table.api.DataTypes.DOUBLE;
+import static org.apache.flink.table.api.DataTypes.FIELD;
+import static org.apache.flink.table.api.DataTypes.INT;
+import static org.apache.flink.table.api.DataTypes.ROW;
+import static org.apache.flink.table.api.DataTypes.STRING;
+import static org.apache.flink.table.api.DataTypes.TIMESTAMP_LTZ;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
 /** Tests for {@link SchemaTranslator}. */
 public class SchemaTranslatorTest {
@@ -56,63 +61,53 @@ public class SchemaTranslatorTest {
 SchemaTranslator.createConsumingResult(
 dataTypeFactoryWithRawType(DayOfWeek.class), 
inputTypeInfo, null);
 
-        assertEquals(
-                DataTypes.ROW(
-                                DataTypes.FIELD(
-                                        "f0",
-                                        DataTypes.ROW(
-                                                DataTypes.FIELD("f0", DataTypes.INT()),
-                                                DataTypes.FIELD("f1", DataTypes.BOOLEAN()))),
-                                DataTypes.FIELD(
-                                        "f1", DataTypeFactoryMock.dummyRaw(DayOfWeek.class)))
-                        .notNull(),
-                result.getPhysicalDataType());
-
-        assertTrue(result.isTopLevelRecord());
-
-        assertEquals(
-                Schema.newBuilder()
-                        .column(
-                                "f0",
-                                DataTypes.ROW(
-                                        DataTypes.FIELD("f0", DataTypes.INT()),
-                                        DataTypes.FIELD("f1", DataTypes.BOOLEAN())))
-                        .column("f1", DataTypeFactoryMock.dummyRaw(DayOfWeek.class))
-                        .build(),
-                result.getSchema());
-
-        assertNull(result.getProjections());
+        assertThat(result.getPhysicalDataType())
+                .isEqualTo(
+                        ROW(
+                                        FIELD(
+                                                "f0",
+                                                ROW(FIELD("f0", INT()), FIELD("f1", BOOLEAN()))),
+                                        FIELD("f1", DataTypeFactoryMock.dummyRaw(DayOfWeek.class)))
+                                .notNull());
+
+        assertThat(result.isTopLevelRecord()).isTrue();
+
+        assertThat(result.getSchema())
+                .isEqualTo(
+                        Schema.newBuilder()
+                                .column("f0", ROW(FIELD("f0", INT()), FIELD("f1", BOOLEAN())))
+                                .column("f1", DataTypeFactoryMock

[flink] branch master updated (d946302 -> a0c74c1)

2021-12-07 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d946302  [FLINK-25114][table-runtime] Remove flink-scala dependency 
and scala suffix
 new 57a6f02  [FLINK-25014][table-api-java] Perform toDataStream projection 
case-insensitive
 new a0c74c1  [hotfix][table-api-java] Migrate SchemaTranslatorTest to 
assertj

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../flink/table/catalog/SchemaTranslator.java  |  38 ++-
 .../flink/table/catalog/SchemaTranslatorTest.java  | 372 ++---
 2 files changed, 207 insertions(+), 203 deletions(-)


[flink] 01/02: [FLINK-25014][table-api-java] Perform toDataStream projection case-insensitive

2021-12-07 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 57a6f02357fd2eb7e3c59e9903f9ec33a655ead6
Author: Timo Walther 
AuthorDate: Mon Dec 6 15:21:46 2021 +0100

[FLINK-25014][table-api-java] Perform toDataStream projection 
case-insensitive

This closes #18029.
---
 .../flink/table/catalog/SchemaTranslator.java  | 38 ++
 .../flink/table/catalog/SchemaTranslatorTest.java  |  4 +--
 2 files changed, 33 insertions(+), 9 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/SchemaTranslator.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/SchemaTranslator.java
index 4f9fad4..467840c 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/SchemaTranslator.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/SchemaTranslator.java
@@ -43,6 +43,7 @@ import javax.annotation.Nullable;
 
 import java.util.Collections;
 import java.util.List;
+import java.util.Locale;
 import java.util.Optional;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
@@ -113,19 +114,42 @@ public final class SchemaTranslator {
             ResolvedSchema inputSchema,
             AbstractDataType<?> targetDataType) {
         final List<String> inputFieldNames = inputSchema.getColumnNames();
+        final List<String> inputFieldNamesNormalized =
+                inputFieldNames.stream()
+                        .map(n -> n.toLowerCase(Locale.ROOT))
+                        .collect(Collectors.toList());
         final DataType resolvedDataType = dataTypeFactory.createDataType(targetDataType);
         final List<String> targetFieldNames = flattenToNames(resolvedDataType);
+        final List<String> targetFieldNamesNormalized =
+                targetFieldNames.stream()
+                        .map(n -> n.toLowerCase(Locale.ROOT))
+                        .collect(Collectors.toList());
         final List<DataType> targetFieldDataTypes = flattenToDataTypes(resolvedDataType);
 
         // help in reorder fields for POJOs if all field names are present but out of order,
         // otherwise let the sink validation fail later
-        final List<String> projections;
-        if (targetFieldNames.size() == inputFieldNames.size()
-                && !targetFieldNames.equals(inputFieldNames)
-                && targetFieldNames.containsAll(inputFieldNames)) {
-            projections = targetFieldNames;
-        } else {
-            projections = null;
+        List<String> projections = null;
+        if (targetFieldNames.size() == inputFieldNames.size()) {
+            // reordering by name (case-sensitive)
+            if (targetFieldNames.containsAll(inputFieldNames)) {
+                projections = targetFieldNames;
+            }
+            // reordering by name (case-insensitive) but fields must be unique
+            else if (targetFieldNamesNormalized.containsAll(inputFieldNamesNormalized)
+                    && targetFieldNamesNormalized.stream().distinct().count()
+                            == targetFieldNames.size()
+                    && inputFieldNamesNormalized.stream().distinct().count()
+                            == inputFieldNames.size()) {
+                projections =
+                        targetFieldNamesNormalized.stream()
+                                .map(
+                                        targetName -> {
+                                            final int inputFieldPos =
+                                                    inputFieldNamesNormalized.indexOf(targetName);
+                                            return inputFieldNames.get(inputFieldPos);
+                                        })
+                                .collect(Collectors.toList());
+            }
         }
 
 final Schema schema =
diff --git 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
index 46bab61..c2c2e9f 100644
--- 
a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
+++ 
b/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaTranslatorTest.java
@@ -90,7 +90,7 @@ public class SchemaTranslatorTest {
 ResolvedSchema.of(
 Column.physical("c", DataTypes.INT()),
 Column.physical("a", DataTypes.BOOLEAN()),
-                        Column.physical("b", DataTypes.DOUBLE()));
+                        Column.physical("B", DataTypes.DOUBLE())); // case-insensitive mapping
 
 final DataType physicalDataType =
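The reordering rule this commit adds (same field count, all names match case-insensitively, and the normalized names are unique on both sides) can be sketched as a standalone helper. The class and method names below are illustrative, not the actual `SchemaTranslator` API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class CaseInsensitiveProjection {

    // Returns the input field names reordered to match the target order,
    // or null when no safe mapping exists (sizes differ, names missing,
    // or normalized names are ambiguous).
    static List<String> project(List<String> inputNames, List<String> targetNames) {
        if (targetNames.size() != inputNames.size()) {
            return null;
        }
        // exact (case-sensitive) reordering
        if (targetNames.containsAll(inputNames)) {
            return targetNames;
        }
        List<String> inputNorm =
                inputNames.stream().map(n -> n.toLowerCase(Locale.ROOT)).collect(Collectors.toList());
        List<String> targetNorm =
                targetNames.stream().map(n -> n.toLowerCase(Locale.ROOT)).collect(Collectors.toList());
        // case-insensitive reordering is only safe when names are unique on both sides
        if (targetNorm.containsAll(inputNorm)
                && targetNorm.stream().distinct().count() == targetNames.size()
                && inputNorm.stream().distinct().count() == inputNames.size()) {
            return targetNorm.stream()
                    .map(t -> inputNames.get(inputNorm.indexOf(t)))
                    .collect(Collectors.toList());
        }
        return null;
    }

    public static void main(String[] args) {
        // "B" matches target "b" case-insensitively; the result keeps the input spelling.
        System.out.println(project(Arrays.asList("c", "a", "B"), Arrays.asList("a", "b", "c")));
        // prints: [a, B, c]
    }
}
```

Note how the result adopts the target order while preserving the original input spellings, mirroring the test change above where column `b` became `B`.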

[flink] branch master updated: [FLINK-25114][table-runtime] Remove flink-scala dependency and scala suffix

2021-12-07 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d946302  [FLINK-25114][table-runtime] Remove flink-scala dependency 
and scala suffix
d946302 is described below

commit d9463022504a6bccad30d681c71f46658c073041
Author: slinkydeveloper 
AuthorDate: Wed Dec 1 14:54:36 2021 +0100

[FLINK-25114][table-runtime] Remove flink-scala dependency and scala suffix

This closes #18011.
---
 flink-architecture-tests/pom.xml   |  2 +-
 flink-connectors/flink-connector-hive/pom.xml  |  2 +-
 .../flink-avro-confluent-registry/pom.xml  |  2 +-
 flink-python/pom.xml   |  4 +-
 flink-table/flink-sql-client/pom.xml   |  2 +-
 flink-table/flink-table-planner/pom.xml|  4 +-
 flink-table/flink-table-runtime/pom.xml| 10 +---
 .../table/data/util/DataFormatConverters.java  | 66 +++---
 flink-table/flink-table-uber/pom.xml   |  4 +-
 9 files changed, 68 insertions(+), 28 deletions(-)

diff --git a/flink-architecture-tests/pom.xml b/flink-architecture-tests/pom.xml
index 23330f9..d981444 100644
--- a/flink-architecture-tests/pom.xml
+++ b/flink-architecture-tests/pom.xml
@@ -116,7 +116,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
test

diff --git a/flink-connectors/flink-connector-hive/pom.xml 
b/flink-connectors/flink-connector-hive/pom.xml
index 7b022ab..37f3dea 100644
--- a/flink-connectors/flink-connector-hive/pom.xml
+++ b/flink-connectors/flink-connector-hive/pom.xml
@@ -147,7 +147,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
provided

diff --git a/flink-formats/flink-avro-confluent-registry/pom.xml 
b/flink-formats/flink-avro-confluent-registry/pom.xml
index 59fc1d1..6dda873 100644
--- a/flink-formats/flink-avro-confluent-registry/pom.xml
+++ b/flink-formats/flink-avro-confluent-registry/pom.xml
@@ -113,7 +113,7 @@ under the License.


org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
test

diff --git a/flink-python/pom.xml b/flink-python/pom.xml
index 8007026..53d0acd 100644
--- a/flink-python/pom.xml
+++ b/flink-python/pom.xml
@@ -76,7 +76,7 @@ under the License.


org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
provided

@@ -190,7 +190,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
test-jar
test
diff --git a/flink-table/flink-sql-client/pom.xml 
b/flink-table/flink-sql-client/pom.xml
index 6254ae9..44ec8ad 100644
--- a/flink-table/flink-sql-client/pom.xml
+++ b/flink-table/flink-sql-client/pom.xml
@@ -86,7 +86,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}

 
diff --git a/flink-table/flink-table-planner/pom.xml 
b/flink-table/flink-table-planner/pom.xml
index 26912eb..c971579 100644
--- a/flink-table/flink-table-planner/pom.xml
+++ b/flink-table/flink-table-planner/pom.xml
@@ -113,7 +113,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}

 
@@ -269,7 +269,7 @@ under the License.
 

org.apache.flink
-   
flink-table-runtime_${scala.binary.version}
+   flink-table-runtime
${project.version}
test-jar
test
diff --git a/flink-table/flink-table-runtime/pom.xml 
b/flink-table/flink-table-runtime/pom.xml
index

[flink] branch master updated: [FLINK-25186][table-common] Fix ServiceLoaderUtil#load to work with Java 11

2021-12-06 Thread twalthr

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new dd446c9  [FLINK-25186][table-common] Fix ServiceLoaderUtil#load to 
work with Java 11
dd446c9 is described below

commit dd446c9d56be5f33c683611102ec7026cf95e395
Author: slinkydeveloper 
AuthorDate: Mon Dec 6 12:30:05 2021 +0100

[FLINK-25186][table-common] Fix ServiceLoaderUtil#load to work with Java 11

This closes #18020.
---
 .../apache/flink/table/factories/FactoryUtil.java  |  2 +-
 .../flink/table/factories/ServiceLoaderUtil.java   | 61 --
 2 files changed, 23 insertions(+), 40 deletions(-)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
index 5430b3a..cd828de 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
@@ -690,7 +690,7 @@ public final class FactoryUtil {
     static List<Factory> discoverFactories(ClassLoader classLoader) {
         final List<Factory> result = new LinkedList<>();
         ServiceLoaderUtil.load(Factory.class, classLoader)
-                .forEachRemaining(
+                .forEach(
                         loadResult -> {
                             if (loadResult.hasFailed()) {
                                 if (loadResult.getError() instanceof NoClassDefFoundError) {
diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java
index 313ae5c..620e9c3 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/ServiceLoaderUtil.java
@@ -18,7 +18,9 @@
 
 package org.apache.flink.table.factories;
 
+import java.util.ArrayList;
 import java.util.Iterator;
+import java.util.List;
 import java.util.NoSuchElementException;
 import java.util.ServiceLoader;
 
@@ -26,12 +28,27 @@ import java.util.ServiceLoader;
 class ServiceLoaderUtil {
 
    /**
-     * This method behaves similarly to {@link ServiceLoader#load(Class, ClassLoader)} and it also
-     * wraps the returned {@link Iterator} to iterate safely through the loaded services, eventually
-     * catching load failures like {@link NoClassDefFoundError}.
+     * This method behaves similarly to {@link ServiceLoader#load(Class, ClassLoader)}, but it
+     * returns a list with the results of the iteration, wrapping the iteration failures such as
+     * {@link NoClassDefFoundError}.
      */
-    static <T> Iterator<LoadResult<T>> load(Class<T> clazz, ClassLoader classLoader) {
-        return new SafeIterator<>(ServiceLoader.load(clazz, classLoader).iterator());
+    static <T> List<LoadResult<T>> load(Class<T> clazz, ClassLoader classLoader) {
+        List<LoadResult<T>> loadResults = new ArrayList<>();
+
+        Iterator<T> serviceLoaderIterator = ServiceLoader.load(clazz, classLoader).iterator();
+
+        while (true) {
+            try {
+                T next = serviceLoaderIterator.next();
+                loadResults.add(new LoadResult<>(next));
+            } catch (NoSuchElementException e) {
+                break;
+            } catch (Throwable t) {
+                loadResults.add(new LoadResult<>(t));
+            }
+        }
+
+        return loadResults;
     }
 
    static class LoadResult<T> {
@@ -63,38 +80,4 @@ class ServiceLoaderUtil {
 return service;
 }
 }
-
-    /**
-     * This iterator wraps {@link Iterator#hasNext()} and {@link Iterator#next()} in try-catch, and
-     * returns {@link LoadResult} to handle such failures.
-     */
-    private static class SafeIterator<T> implements Iterator<LoadResult<T>> {
-
-        private final Iterator<T> iterator;
-
-        public SafeIterator(Iterator<T> iterator) {
-            this.iterator = iterator;
-        }
-
-        @Override
-        public boolean hasNext() {
-            try {
-                return iterator.hasNext();
-            } catch (Throwable t) {
-                return true;
-            }
-        }
-
-        @Override
-        public LoadResult<T> next() {
-            try {
-                if (iterator.hasNext()) {
-                    return new LoadResult<>(iterator.next());
-                }
-            } catch (Throwable t) {
-                return new LoadResult<>(t);
-            }
-            throw new NoSuchElementException();
-        }
-    }
 }
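The fix replaces the lazy `SafeIterator` wrapper with an eager loop that drains the `ServiceLoader` iterator up front, recording each per-element failure instead of aborting the whole iteration (which, per the commit message, is what broke under Java 11). A minimal standalone sketch of that drain pattern — the names here are illustrative, not Flink's API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class SafeIteration {

    /** Minimal stand-in for the commit's LoadResult: a value, or the error hit while loading it. */
    public static final class Result<T> {
        public final T value;
        public final Throwable error;

        Result(T value, Throwable error) {
            this.value = value;
            this.error = error;
        }

        public boolean hasFailed() {
            return error != null;
        }
    }

    /**
     * Eagerly drains an iterator, converting per-element failures (e.g. a
     * NoClassDefFoundError thrown while instantiating a service) into Results
     * instead of letting them abort the whole iteration.
     */
    public static <T> List<Result<T>> drain(Iterator<T> it) {
        List<Result<T>> results = new ArrayList<>();
        while (true) {
            try {
                results.add(new Result<>(it.next(), null));
            } catch (NoSuchElementException e) {
                break; // normal end of iteration
            } catch (Throwable t) {
                results.add(new Result<>(null, t)); // record the failure and keep going
            }
        }
        return results;
    }

    /** An iterator whose second element fails to "load". */
    public static Iterator<String> faultyIterator() {
        return new Iterator<String>() {
            private int i = 0;

            public boolean hasNext() {
                return i < 3;
            }

            public String next() {
                i++;
                if (i > 3) throw new NoSuchElementException();
                if (i == 2) throw new NoClassDefFoundError("some/optional/Dependency");
                return "service" + i;
            }
        };
    }

    public static void main(String[] args) {
        List<Result<String>> rs = drain(faultyIterator());
        System.out.println(rs.size() + " results, second failed: " + rs.get(1).hasFailed());
        // prints: 3 results, second failed: true
    }
}
```

The key property is that one broken service implementation no longer hides the services loaded after it.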


[flink] branch master updated (5ee7357 -> dd6a8f1)

2021-12-06 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5ee7357  [hotfix][build] Remove ES5 connector dependency from 
flink-architecture-tests
 add dd6a8f1  [FLINK-24290][table] Support structured types in JSON 
functions

No new revisions were added by this update.

Summary of changes:
 .../strategies/SpecificInputTypeStrategies.java|  2 +
 .../table/planner/codegen/JsonGenerateUtils.scala  | 26 +
 .../planner/functions/JsonFunctionsITCase.java | 44 ++
 3 files changed, 57 insertions(+), 15 deletions(-)


[flink] branch master updated (dac6425 -> 76b47b2)

2021-12-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from dac6425  [FLINK-17782] Add array,map,row types support for parquet row 
writer
 new a332f8f  [FLINK-25111][table-api][table-planner] Add config option to 
determine CAST behaviour
 new 76b47b2  [hotfix][table-planner] Add class header comment to generated 
code

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../generated/execution_config_configuration.html  |  5 
 .../functions/AdvancedFunctionsExampleITCase.java  | 18 ++--
 .../table/api/config/ExecutionConfigOptions.java   | 34 ++
 .../planner/connectors/CollectDynamicSink.java | 11 +--
 .../table/planner/connectors/DynamicSinkUtils.java |  6 +++-
 .../casting/AbstractCodeGeneratorCastRule.java |  5 
 .../AbstractExpressionCodeGeneratorCastRule.java   |  5 
 .../table/planner/functions/casting/CastRule.java  | 10 ++-
 .../functions/casting/CodeGeneratorCastRule.java   |  4 +++
 .../casting/RowDataToStringConverterImpl.java  |  8 +++--
 .../functions/casting/RowToStringCastRule.java | 13 -
 .../flink/table/planner/codegen/CodeGenUtils.scala |  2 ++
 .../planner/codegen/CodeGeneratorContext.scala | 25 
 .../planner/codegen/FunctionCodeGenerator.scala|  1 +
 .../planner/codegen/calls/ScalarOperatorGens.scala | 14 +
 .../planner/functions/casting/CastRulesTest.java   | 18 ++--
 .../expressions/utils/ExpressionTestBase.scala |  6 
 17 files changed, 165 insertions(+), 20 deletions(-)


[flink] 01/02: [FLINK-25111][table-api][table-planner] Add config option to determine CAST behaviour

2021-12-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a332f8f51cfaf6fc3c4d88719ab000eef548e14f
Author: Marios Trivyzas 
AuthorDate: Fri Dec 3 10:23:30 2021 +0100

[FLINK-25111][table-api][table-planner] Add config option to determine CAST 
behaviour

Add a new `ExecutionConfigOption` so that users can choose between the legacy
behaviour of CAST and the new one, which includes improvements and fixes.

This closes #17985.
---
 .../generated/execution_config_configuration.html  |  5 
 .../functions/AdvancedFunctionsExampleITCase.java  | 18 ++--
 .../table/api/config/ExecutionConfigOptions.java   | 34 ++
 .../planner/connectors/CollectDynamicSink.java | 11 +--
 .../table/planner/connectors/DynamicSinkUtils.java |  6 +++-
 .../casting/AbstractCodeGeneratorCastRule.java |  5 
 .../AbstractExpressionCodeGeneratorCastRule.java   |  5 
 .../table/planner/functions/casting/CastRule.java  | 10 ++-
 .../functions/casting/CodeGeneratorCastRule.java   |  4 +++
 .../casting/RowDataToStringConverterImpl.java  |  8 +++--
 .../functions/casting/RowToStringCastRule.java | 13 -
 .../flink/table/planner/codegen/CodeGenUtils.scala |  2 ++
 .../planner/codegen/calls/ScalarOperatorGens.scala |  8 +
 .../planner/functions/casting/CastRulesTest.java   | 18 ++--
 .../expressions/utils/ExpressionTestBase.scala |  6 
 15 files changed, 133 insertions(+), 20 deletions(-)
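[Editorial note] The behavioural difference this option toggles can be sketched in plain Java. This is an illustrative model only, not planner code: the class and method names are invented, and the exact failure semantics (legacy CAST yielding NULL on a failed cast, the new behaviour raising an error) are an assumption drawn from the option description in the generated docs above.

```java
// Illustrative sketch only -- not Flink planner code. Models the difference
// toggled by the legacy-cast-behaviour option: in legacy mode a failed cast
// is swallowed and yields NULL, in the new mode it raises an error.
public class CastBehaviourSketch {

    enum LegacyCastBehaviour { ENABLED, DISABLED }

    /** Casts a string to Integer under the selected behaviour. */
    static Integer castToInt(String value, LegacyCastBehaviour behaviour) {
        try {
            return Integer.valueOf(value.trim());
        } catch (NumberFormatException e) {
            if (behaviour == LegacyCastBehaviour.ENABLED) {
                return null; // legacy: failed casts become NULL
            }
            throw new IllegalStateException(
                    "Could not cast '" + value + "' to INT", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(castToInt("42", LegacyCastBehaviour.ENABLED));   // 42
        System.out.println(castToInt("oops", LegacyCastBehaviour.ENABLED)); // null
        try {
            castToInt("oops", LegacyCastBehaviour.DISABLED);
        } catch (IllegalStateException e) {
            System.out.println("new behaviour: " + e.getMessage());
        }
    }
}
```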

diff --git 
a/docs/layouts/shortcodes/generated/execution_config_configuration.html 
b/docs/layouts/shortcodes/generated/execution_config_configuration.html
index e0c9c53..e809eb0 100644
--- a/docs/layouts/shortcodes/generated/execution_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/execution_config_configuration.html
@@ -58,6 +58,11 @@ By default no operator is disabled.
 Enum
 Determines whether string values for columns with 
CHAR(precision)/VARCHAR(precision) types will be trimmed or 
padded (only for CHAR(precision)), so that their length will match the 
one defined by the precision of their respective CHAR/VARCHAR column type.Possible values:"IGNORE": Don't apply any trimming and padding, 
and instead ignore the CHAR/VARCHAR precision directive."TRIM_PAD": 
Trim and pad string values to match  [...]
 
+table.exec.sink.legacy-cast-behaviour Batch Streaming
+ENABLED
+Enum
+Determines whether CAST will operate following the legacy 
behaviour or the new one that introduces various fixes and improvements.Possible values:"ENABLED": CAST will operate following the 
legacy behaviour."DISABLED": CAST will operate following the new 
correct behaviour.
+
 
 table.exec.sink.not-null-enforcer Batch Streaming
 ERROR
diff --git 
a/flink-examples/flink-examples-table/src/test/java/org/apache/flink/table/examples/java/functions/AdvancedFunctionsExampleITCase.java
 
b/flink-examples/flink-examples-table/src/test/java/org/apache/flink/table/examples/java/functions/AdvancedFunctionsExampleITCase.java
index 3306911..4bc98c2 100644
--- 
a/flink-examples/flink-examples-table/src/test/java/org/apache/flink/table/examples/java/functions/AdvancedFunctionsExampleITCase.java
+++ 
b/flink-examples/flink-examples-table/src/test/java/org/apache/flink/table/examples/java/functions/AdvancedFunctionsExampleITCase.java
@@ -41,41 +41,41 @@ public class AdvancedFunctionsExampleITCase extends 
ExampleOutputTestBase {
 assertThat(
 consoleOutput,
 containsString(
-"|Guillermo Smith |(5, 
2020-12-05) |"));
+"|Guillermo Smith | 
(5,2020-12-05) |"));
 assertThat(
 consoleOutput,
 containsString(
-"|John Turner |   (12, 
2020-10-02) |"));
+"|John Turner |
(12,2020-10-02) |"));
 assertThat(
 consoleOutput,
 containsString(
-"| Brandy Sanders |(1, 
2020-10-14) |"));
+"| Brandy Sanders | 
(1,2020-10-14) |"));
 assertThat(
 consoleOutput,
 containsString(
-"|Valeria Mendoza |   (10, 
2020-06-02) |"));
+"|Valeria Mendoza |
(10,2020-06-02) |"));
 assertThat(
 consoleOutput,

[flink] 02/02: [hotfix][table-planner] Add class header comment to generated code

2021-12-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 76b47b219dab18ddb41beb29fa8664110bdb891d
Author: Marios Trivyzas 
AuthorDate: Thu Dec 2 12:25:27 2021 +0100

[hotfix][table-planner] Add class header comment to generated code

Use a class header comment to record useful configuration values that can
help with debugging generated class code. Add timezone and legacy-behaviour
info to this comment on the generated class implementing CAST.
---
 .../planner/codegen/CodeGeneratorContext.scala | 25 ++
 .../planner/codegen/FunctionCodeGenerator.scala|  1 +
 .../planner/codegen/calls/ScalarOperatorGens.scala |  6 ++
 3 files changed, 32 insertions(+)
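[Editorial note] The mechanism this commit adds to `CodeGeneratorContext` can be distilled into a few lines of plain Java. This is a minimal standalone sketch (the class name `HeaderCommentSketch` is invented): comment lines are collected in a `LinkedHashSet`, so duplicates are dropped while insertion order is preserved, and then rendered as one block comment placed at the top of the generated class.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Minimal sketch of the reusable-header-comment mechanism: a LinkedHashSet
// deduplicates comment lines while keeping insertion order; rendering joins
// them into a single /* ... */ block for the generated class header.
public class HeaderCommentSketch {

    private final Set<String> reusableHeaderComments = new LinkedHashSet<>();

    public void addReusableHeaderComment(String comment) {
        reusableHeaderComments.add(comment);
    }

    public String getClassHeaderComment() {
        return "/*\n * " + String.join("\n * ", reusableHeaderComments) + "\n */";
    }

    public static void main(String[] args) {
        HeaderCommentSketch ctx = new HeaderCommentSketch();
        ctx.addReusableHeaderComment("Using option 'legacy-cast-behaviour': 'true'");
        ctx.addReusableHeaderComment("Timezone: UTC");
        ctx.addReusableHeaderComment("Timezone: UTC"); // duplicate, kept once
        System.out.println(ctx.getClassHeaderComment());
    }
}
```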

diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
index 3be8032..6ddcbe7 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
@@ -51,6 +51,10 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
   // holding a list of objects that could be used passed into generated class
   val references: mutable.ArrayBuffer[AnyRef] = new 
mutable.ArrayBuffer[AnyRef]()
 
+  // set of strings (lines) that will be concatenated into a single class 
header comment
+  private val reusableHeaderComments: mutable.LinkedHashSet[String] =
+mutable.LinkedHashSet[String]()
+
   // set of member statements that will be added only once
   // we use a LinkedHashSet to keep the insertion order
   private val reusableMemberStatements: mutable.LinkedHashSet[String] =
@@ -143,6 +147,16 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
 
   def nullCheck: Boolean = tableConfig.getNullCheck
 
+
+  /**
+* Add a line comment to [[reusableHeaderComments]] list which will be 
concatenated
+* into a single class header comment.
+* @param comment The comment to add for class header
+*/
+  def addReusableHeaderComment(comment: String): Unit = {
+reusableHeaderComments.add(comment)
+  }
+
   // 
-
   // Local Variables for Code Split
   // 
-
@@ -197,6 +211,17 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
   // 
-
 
   /**
+* @return Comment to be added as a header comment on the generated class
+*/
+  def getClassHeaderComment(): String = {
+s"""
+|/*
+| * ${reusableHeaderComments.mkString("\n * ")}
+| */
+""".stripMargin
+  }
+
+  /**
 * @return code block of statements that need to be placed in the member 
area of the class
 * (e.g. inner class definition)
 */
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/FunctionCodeGenerator.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/FunctionCodeGenerator.scala
index 44a4c23..24c286f 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/FunctionCodeGenerator.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/FunctionCodeGenerator.scala
@@ -125,6 +125,7 @@ object FunctionCodeGenerator {
 
 val funcCode =
   j"""
+  ${ctx.getClassHeaderComment()}
   public class $funcName
   extends ${samHeader._1.getCanonicalName} {
 
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
index 045976c..61252f4 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
@@ -937,6 +937,12 @@ object ScalarOperatorGens {
   operand: GeneratedExpression,
   targetType: LogicalType)
 : GeneratedExpression = {
+
+ctx.addReusableHeaderComment(
+  s"Using option 
'${ExecutionConfigOptions.TABLE_EXEC_LEGACY_CAST_BEHAVIOUR.key()}':" +
+s"'${isLegacyCastBehaviourEnabled(ctx)}'")
+ctx.addReusableHeaderComment("Timezone: " + 
ctx.tableConfig.getLocalTimeZone)
+
 // Try 

[flink] 03/08: [hotfix][connectors] Every connector now shades the flink-connector-base in its uber jar

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 158c68c9933e584e79b1011abdfeaee7dcc2f0d2
Author: slinkydeveloper 
AuthorDate: Fri Dec 3 10:03:21 2021 +0100

[hotfix][connectors] Every connector now shades flink-connector-base into
its uber jar

Signed-off-by: slinkydeveloper 
---
 flink-connectors/flink-sql-connector-hbase-1.4/pom.xml | 1 +
 flink-connectors/flink-sql-connector-hbase-2.2/pom.xml | 1 +
 flink-connectors/flink-sql-connector-kafka/pom.xml | 3 ++-
 flink-connectors/flink-sql-connector-kinesis/pom.xml   | 1 +
 flink-connectors/flink-sql-connector-rabbitmq/pom.xml  | 3 ++-
 5 files changed, 7 insertions(+), 2 deletions(-)
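[Editorial note] The diff lines below lost their XML tags in extraction. A cleaned-up sketch of the maven-shade-plugin fragment each connector pom gains is shown here; the artifact coordinates follow the kinesis diff, and the surrounding `<plugin>`/`<configuration>` elements are assumed rather than quoted from the patch:

```xml
<artifactSet>
    <includes>
        <include>org.apache.flink:flink-connector-base</include>
        <include>org.apache.flink:flink-connector-kinesis</include>
        <!-- plus the connector-specific and third-party artifacts
             already listed for each uber jar -->
    </includes>
</artifactSet>
```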

diff --git a/flink-connectors/flink-sql-connector-hbase-1.4/pom.xml 
b/flink-connectors/flink-sql-connector-hbase-1.4/pom.xml
index 33f9556..8e08cdf 100644
--- a/flink-connectors/flink-sql-connector-hbase-1.4/pom.xml
+++ b/flink-connectors/flink-sql-connector-hbase-1.4/pom.xml
@@ -68,6 +68,7 @@ under the License.



+   
org.apache.flink:flink-connector-base

org.apache.flink:flink-connector-hbase-base

org.apache.flink:flink-connector-hbase-1.4

org.apache.hbase:hbase-*
diff --git a/flink-connectors/flink-sql-connector-hbase-2.2/pom.xml 
b/flink-connectors/flink-sql-connector-hbase-2.2/pom.xml
index 56ca5e7..385df35 100644
--- a/flink-connectors/flink-sql-connector-hbase-2.2/pom.xml
+++ b/flink-connectors/flink-sql-connector-hbase-2.2/pom.xml
@@ -68,6 +68,7 @@ under the License.



+   
org.apache.flink:flink-connector-base

org.apache.flink:flink-connector-hbase-base

org.apache.flink:flink-connector-hbase-2.2

org.apache.hbase:hbase-*
diff --git a/flink-connectors/flink-sql-connector-kafka/pom.xml 
b/flink-connectors/flink-sql-connector-kafka/pom.xml
index c26142b..ad63e6c 100644
--- a/flink-connectors/flink-sql-connector-kafka/pom.xml
+++ b/flink-connectors/flink-sql-connector-kafka/pom.xml
@@ -58,8 +58,9 @@ under the License.



-   
org.apache.kafka:*
+   
org.apache.flink:flink-connector-base

org.apache.flink:flink-connector-kafka
+   
org.apache.kafka:*



diff --git a/flink-connectors/flink-sql-connector-kinesis/pom.xml 
b/flink-connectors/flink-sql-connector-kinesis/pom.xml
index d75a85f..1eb1694 100644
--- a/flink-connectors/flink-sql-connector-kinesis/pom.xml
+++ b/flink-connectors/flink-sql-connector-kinesis/pom.xml
@@ -58,6 +58,7 @@ under the License.



+   
org.apache.flink:flink-connector-base

org.apache.flink:flink-connector-kinesis

com.fasterxml.jackson.core:jackson-core

com.fasterxml.jackson.core:jackson-databind
diff --git a/flink-connectors/flink-sql-connector-rabbitmq/pom.xml 
b/flink-connectors/flink-sql-connector-rabbitmq/pom.xml
index 8055910..c521faf 100644
--- a/flink-connectors/flink-sql-connector-rabbitmq/pom.xml
+++ b/flink-connectors/flink-sql-connector-rabbitmq/pom.xml
@@ -58,8 +58,9 @@ under the License

[flink] 04/08: [FLINK-24687][table-common] Fix the Table Factory loading mechanism to tolerate NoClassDefFoundError. Added a test and converted FactoryUtil to use assertj.

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 97237d0d86ef4610174ba8a2579341822ed8d21e
Author: slinkydeveloper 
AuthorDate: Wed Nov 24 12:11:21 2021 +0100

[FLINK-24687][table-common] Fix the table factory loading mechanism to
tolerate NoClassDefFoundError. Add a test and convert FactoryUtil tests to use
assertj.

Signed-off-by: slinkydeveloper 
---
 .../apache/flink/testutils/ClassLoaderUtils.java   |  48 +++-
 .../apache/flink/table/factories/FactoryUtil.java  |  33 ++-
 .../flink/table/factories/ServiceLoaderUtil.java   | 100 
 .../flink/table/factories/FactoryUtilTest.java | 275 +
 4 files changed, 340 insertions(+), 116 deletions(-)
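[Editorial note] The tolerance pattern this commit introduces can be sketched without Flink on the classpath. This is an assumed shape, not the actual `ServiceLoaderUtil` code: providers are modelled as suppliers, and a `NoClassDefFoundError` (or runtime failure) from one provider is recorded and skipped so that discovery of the remaining factories continues instead of aborting.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Sketch of NoClassDefFoundError-tolerant factory discovery: each provider is
// instantiated in its own try block; linkage errors are collected and the loop
// moves on, so one broken provider cannot hide all the others.
public class TolerantLoadingSketch {

    static List<Object> loadTolerantly(
            List<Supplier<Object>> providers, List<Throwable> errors) {
        List<Object> loaded = new ArrayList<>();
        for (Supplier<Object> provider : providers) {
            try {
                loaded.add(provider.get());
            } catch (NoClassDefFoundError | RuntimeException t) {
                errors.add(t); // tolerate: remember the failure, keep iterating
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        List<Throwable> errors = new ArrayList<>();
        List<Supplier<Object>> providers = new ArrayList<>();
        providers.add(() -> "goodFactory");
        providers.add(() -> { throw new NoClassDefFoundError("com/example/MissingDep"); });
        providers.add(() -> "anotherFactory");
        List<Object> loaded = loadTolerantly(providers, errors);
        System.out.println(loaded);        // [goodFactory, anotherFactory]
        System.out.println(errors.size()); // 1
    }
}
```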

diff --git 
a/flink-core/src/test/java/org/apache/flink/testutils/ClassLoaderUtils.java 
b/flink-core/src/test/java/org/apache/flink/testutils/ClassLoaderUtils.java
index 8207b2d..25f5ea1 100644
--- a/flink-core/src/test/java/org/apache/flink/testutils/ClassLoaderUtils.java
+++ b/flink-core/src/test/java/org/apache/flink/testutils/ClassLoaderUtils.java
@@ -34,21 +34,25 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.SimpleFileVisitor;
 import java.nio.file.attribute.BasicFileAttributes;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
 import java.util.Map;
 import java.util.UUID;
 
 /** Utilities to create class loaders. */
 public class ClassLoaderUtils {
+
 public static URLClassLoader compileAndLoadJava(File root, String 
filename, String source)
 throws IOException {
 return withRoot(root).addClass(filename.replaceAll("\\.java", ""), 
source).build();
 }
 
-private static URLClassLoader createClassLoader(File root) throws 
MalformedURLException {
-return new URLClassLoader(
-new URL[] {root.toURI().toURL()}, 
Thread.currentThread().getContextClassLoader());
+private static URLClassLoader createClassLoader(File root, ClassLoader 
parent)
+throws MalformedURLException {
+return new URLClassLoader(new URL[] {root.toURI().toURL()}, parent);
 }
 
 private static void writeAndCompile(File root, String filename, String 
source)
@@ -76,7 +80,14 @@ public class ClassLoaderUtils {
 
 private static int compileClass(File sourceFile) {
 JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
-return compiler.run(null, null, null, "-proc:none", 
sourceFile.getPath());
+return compiler.run(
+null,
+null,
+null,
+"-proc:none",
+"-classpath",
+sourceFile.getParent() + ":" + 
System.getProperty("java.class.path"),
+sourceFile.getPath());
 }
 
 public static URL[] getClasspathURLs() {
@@ -96,16 +107,23 @@ public class ClassLoaderUtils {
 }
 }
 
+/**
+ * Builder for a {@link ClassLoader} where you can add resources and 
compile java source code.
+ */
 public static class ClassLoaderBuilder {
 
 private final File root;
 private final Map classes;
 private final Map resources;
+private final Map> services;
+private ClassLoader parent;
 
 private ClassLoaderBuilder(File root) {
 this.root = root;
-this.classes = new HashMap<>();
-this.resources = new HashMap<>();
+this.classes = new LinkedHashMap<>();
+this.resources = new LinkedHashMap<>();
+this.services = new HashMap<>();
+this.parent = Thread.currentThread().getContextClassLoader();
 }
 
 public ClassLoaderBuilder addResource(String targetPath, String 
resource) {
@@ -119,6 +137,11 @@ public class ClassLoaderUtils {
 return this;
 }
 
+public ClassLoaderBuilder addService(String serviceClass, String 
implClass) {
+services.computeIfAbsent(serviceClass, k -> new 
ArrayList<>()).add(implClass);
+return this;
+}
+
 public ClassLoaderBuilder addClass(String className, String source) {
 String oldValue = classes.putIfAbsent(className, source);
 
@@ -130,22 +153,33 @@ public class ClassLoaderUtils {
 return this;
 }
 
+public ClassLoaderBuilder withParentClassLoader(ClassLoader 
classLoader) {
+this.parent = classLoader;
+return this;
+}
+
 public URLClassLoader build() throws IOException {
 for (Map.Entry classInfo : classes.entrySet()) {
 writeAndCompile(root, createFileName(classInfo.getKey()), 
classIn

[flink] 05/08: [FLINK-24687][table-planner] Remove planner dependency on FileSystemConnectorOptions

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit d12fd3d729f772d5c545d58987049bb9ce9f1da8
Author: slinkydeveloper 
AuthorDate: Mon Nov 8 10:46:10 2021 +0100

[FLINK-24687][table-planner] Remove planner dependency on 
FileSystemConnectorOptions

Signed-off-by: slinkydeveloper 
---
 .../rules/physical/batch/BatchPhysicalLegacySinkRule.scala| 11 ++-
 .../plan/rules/physical/batch/BatchPhysicalSinkRule.scala | 11 ++-
 .../rules/physical/stream/StreamPhysicalLegacySinkRule.scala  | 11 ++-
 .../plan/rules/physical/stream/StreamPhysicalSinkRule.scala   |  7 ---
 4 files changed, 22 insertions(+), 18 deletions(-)
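[Editorial note] The change in each sink rule is small but worth spelling out: instead of looking the option up through the `FileSystemConnectorOptions` constant (which dragged flink-connector-files into the planner) and null-checking the result, the key is hardcoded and read with `getOrDefault`, so the boolean parse never sees null. A self-contained sketch (class and method names invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the getOrDefault pattern adopted in the sink rules: the option key
// is hardcoded as a string, and a "false" default removes the separate null
// check that the old Map.get-based lookup required.
public class ShuffleOptionSketch {

    static boolean shuffleByPartitionEnabled(Map<String, String> tableOptions) {
        return Boolean.parseBoolean(
                tableOptions.getOrDefault("sink.shuffle-by-partition.enable", "false"));
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        System.out.println(shuffleByPartitionEnabled(options)); // false (option absent)
        options.put("sink.shuffle-by-partition.enable", "true");
        System.out.println(shuffleByPartitionEnabled(options)); // true
    }
}
```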

diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalLegacySinkRule.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalLegacySinkRule.scala
index edebe6c..72d8afd 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalLegacySinkRule.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalLegacySinkRule.scala
@@ -28,7 +28,6 @@ import org.apache.flink.table.sinks.PartitionableTableSink
 import org.apache.calcite.plan.RelOptRule
 import org.apache.calcite.rel.convert.ConverterRule
 import org.apache.calcite.rel.{RelCollations, RelNode}
-import org.apache.flink.table.filesystem.FileSystemConnectorOptions
 
 import scala.collection.JavaConversions._
 
@@ -53,12 +52,14 @@ class BatchPhysicalLegacySinkRule extends ConverterRule(
 val dynamicPartIndices =
   
dynamicPartFields.map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
 
+// TODO This option is hardcoded to remove the dependency of 
planner from
+//  flink-connector-files. We should move this option out of 
FileSystemConnectorOptions
 val shuffleEnable = sink
-.catalogTable
-.getOptions
-
.get(FileSystemConnectorOptions.SINK_SHUFFLE_BY_PARTITION.key())
+  .catalogTable
+  .getOptions
+  .getOrDefault("sink.shuffle-by-partition.enable", "false")
 
-if (shuffleEnable != null && shuffleEnable.toBoolean) {
+if (shuffleEnable.toBoolean) {
   requiredTraitSet = requiredTraitSet.plus(
 FlinkRelDistribution.hash(dynamicPartIndices
 .map(Integer.valueOf), requireStrict = false))
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalSinkRule.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalSinkRule.scala
index 5a00b51..b9c9f8f 100644
--- 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalSinkRule.scala
+++ 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchPhysicalSinkRule.scala
@@ -30,7 +30,6 @@ import org.apache.flink.table.types.logical.RowType
 import org.apache.calcite.plan.RelOptRule
 import org.apache.calcite.rel.convert.ConverterRule
 import org.apache.calcite.rel.{RelCollationTraitDef, RelCollations, RelNode}
-import org.apache.flink.table.filesystem.FileSystemConnectorOptions
 
 import scala.collection.JavaConversions._
 import scala.collection.mutable
@@ -68,12 +67,14 @@ class BatchPhysicalSinkRule extends ConverterRule(
 val dynamicPartIndices =
   dynamicPartFields.map(fieldNames.indexOf(_))
 
+// TODO This option is hardcoded to remove the dependency of 
planner from
+//  flink-connector-files. We should move this option out of 
FileSystemConnectorOptions
 val shuffleEnable = sink
-.catalogTable
-.getOptions
-
.get(FileSystemConnectorOptions.SINK_SHUFFLE_BY_PARTITION.key())
+  .catalogTable
+  .getOptions
+  .getOrDefault("sink.shuffle-by-partition.enable", "false")
 
-if (shuffleEnable != null && shuffleEnable.toBoolean) {
+if (shuffleEnable.toBoolean) {
   requiredTraitSet = requiredTraitSet.plus(
 FlinkRelDistribution.hash(dynamicPartIndices
 .map(Integer.valueOf), requireStrict = false))
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/StreamPhysicalLegacySinkRule.scala
 
b/flink-table/flink-table-planner/src/main/scala/o

[flink] 01/08: [hotfix][table-runtime] Update copyright for some filesystem classes

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ab090d5341901265a95e12e1a6a965147b4ebe02
Author: slinkydeveloper 
AuthorDate: Mon Nov 29 16:09:51 2021 +0100

[hotfix][table-runtime] Update copyright for some filesystem classes

Signed-off-by: slinkydeveloper 
---
 .../table/filesystem/stream/compact/CompactContext.java | 13 ++---
 .../table/filesystem/stream/compact/CompactOperator.java| 13 ++---
 .../table/filesystem/stream/compact/CompactReader.java  | 13 ++---
 .../table/filesystem/stream/compact/CompactWriter.java  | 13 ++---
 .../org/apache/flink/table/runtime/util/BinPacking.java | 13 ++---
 5 files changed, 30 insertions(+), 35 deletions(-)

diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactContext.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactContext.java
index 0771e7f..4f81544 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactContext.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactContext.java
@@ -7,14 +7,13 @@
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
  */
 
 package org.apache.flink.table.filesystem.stream.compact;
diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactOperator.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactOperator.java
index f8a2823..d31e155 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactOperator.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactOperator.java
@@ -7,14 +7,13 @@
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
  */
 
 package org.apache.flink.table.filesystem.stream.compact;
diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactReader.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactReader.java
index 05cdd90..c396461 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactReader.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/stream/compact/CompactReader.java
@@ -7,14 +7,13 @@
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissi

[flink] 07/08: [FLINK-24687][table-runtime] Refactored test csv format to be independent of planner (except ScanRuntimeProviderContext.INSTANCE::createDataStructureConverter) and to implement Serializ

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6bb090751093e4f9f8a05c80857af11381f88599
Author: slinkydeveloper 
AuthorDate: Wed Nov 24 15:07:12 2021 +0100

[FLINK-24687][table-runtime] Refactor the test csv format to be independent
of the planner (except
ScanRuntimeProviderContext.INSTANCE::createDataStructureConverter) and to
implement SerializationSchema rather than BulkWriterFormatFactory. Move it to
a dedicated package

Signed-off-by: slinkydeveloper 
---
 .../table/filesystem/FileSystemTableSink.java  |  5 +-
 .../testcsv}/TestCsvDeserializationSchema.java | 35 +
 .../testcsv/TestCsvFormatFactory.java} | 89 +-
 .../testcsv/TestCsvSerializationSchema.java| 58 ++
 .../org.apache.flink.table.factories.Factory   |  2 +-
 5 files changed, 104 insertions(+), 85 deletions(-)

diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSink.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSink.java
index 2e9af35..bbd7425 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSink.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSink.java
@@ -343,7 +343,10 @@ public class FileSystemTableSink extends 
AbstractFileSystemTable
 @Override
 public DynamicTableSource.DataStructureConverter 
createDataStructureConverter(
 DataType producedDataType) {
-throw new TableException("Compaction reader not support 
DataStructure converter.");
+// This method cannot be implemented without changing the
+// DynamicTableSink.DataStructureConverter interface
+throw new UnsupportedOperationException(
+"Compaction reader not support DataStructure 
converter.");
 }
 };
 }
diff --git 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/filesystem/TestCsvDeserializationSchema.java
 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/formats/testcsv/TestCsvDeserializationSchema.java
similarity index 83%
rename from 
flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/filesystem/TestCsvDeserializationSchema.java
rename to 
flink-table/flink-table-runtime/src/test/java/org/apache/flink/formats/testcsv/TestCsvDeserializationSchema.java
index dbec987..1569e0d 100644
--- 
a/flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/filesystem/TestCsvDeserializationSchema.java
+++ 
b/flink-table/flink-table-runtime/src/test/java/org/apache/flink/formats/testcsv/TestCsvDeserializationSchema.java
@@ -16,24 +16,22 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.filesystem;
+package org.apache.flink.formats.testcsv;
 
 import org.apache.flink.api.common.serialization.DeserializationSchema;
 import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.connector.source.DynamicTableSource;
 import org.apache.flink.table.data.GenericRowData;
 import org.apache.flink.table.data.RowData;
-import org.apache.flink.table.data.conversion.DataStructureConverter;
-import org.apache.flink.table.data.conversion.DataStructureConverters;
-import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.logical.LogicalTypeRoot;
-import org.apache.flink.table.types.logical.RowType;
 import org.apache.flink.types.parser.FieldParser;
 import org.apache.flink.util.InstantiationUtil;
 
 import java.io.IOException;
 import java.math.BigDecimal;
 import java.util.List;
+import java.util.function.Function;
 
 /**
  * The {@link DeserializationSchema} that output {@link RowData}.
@@ -41,7 +39,7 @@ import java.util.List;
  * NOTE: This is meant only for testing purpose and doesn't provide a 
feature complete stable csv
  * parser! If you need a feature complete CSV parser, check out the flink-csv 
package.
  */
-public class TestCsvDeserializationSchema implements 
DeserializationSchema {
+class TestCsvDeserializationSchema implements DeserializationSchema {
 
 private final List physicalFieldTypes;
 private final int physicalFieldCount;
@@ -49,20 +47,34 @@ public class TestCsvDeserializationSchema implements 
DeserializationSchema typeInfo;
 private final int[] indexMapping;
 
-@SuppressWarnings("rawtypes")
-private transient DataStructureConverter[] csvRowToRowDataConverters;
+private final DynamicTableSource.DataStructureConverter[] 
csvRowToRowDataConverters;
 
 private transient FieldParser[] fieldParsers;
 
-public TestCsv

[flink] branch master updated (61e7877 -> 9bbadb9)

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 61e7877  [hotfix][tests] Remove Mocking from TaskLocalStateStoreImplTest
 new ab090d5  [hotfix][table-runtime] Update copyright for some filesystem classes
 new 44b2756  [hotfix][dist] flink-json and flink-csv are now declared as dependencies in the flink-dist to enforce the reactor order
 new 158c68c  [hotfix][connectors] Every connector now shades the flink-connector-base in its uber jar
 new 97237d0  [FLINK-24687][table-common] Fix the Table Factory loading mechanism to tolerate NoClassDefFoundError. Added a test and converted FactoryUtil to use assertj.
 new d12fd3d  [FLINK-24687][table-planner] Remove planner dependency on FileSystemConnectorOptions
 new 2ae04c2  [FLINK-24687][parquet] Copied DecimalDataUtils#is32BitDecimal and DecimalDataUtils#is64BitDecimal in ParquetSchemaConverter to remove the dependency on DecimalDataUtils (from planner)
 new 6bb0907  [FLINK-24687][table-runtime] Refactored test csv format to be independent of planner (except ScanRuntimeProviderContext.INSTANCE::createDataStructureConverter) and to implement SerializationSchema rather than BulkWriterFormatFactory. Moved to a specific package
 new 9bbadb9  [FLINK-24687][table][connectors] Move FileSystemTableSink, FileSystemTableSource to flink-connector-files and columnar support to flink-table-common

The 8 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/content/docs/connectors/table/filesystem.md   |   1 +
 docs/data/sql_connectors.yml   |  46 ++--
 .../e577412e-8d38-496c-a680-b842112e4b94   |   1 -
 .../flink-connector-elasticsearch-base/pom.xml |   5 +
 flink-connectors/flink-connector-files/pom.xml |  38 ++-
 .../file/table}/AbstractFileSystemTable.java   |   9 +-
 .../flink/connector/file/table}/BinPacking.java|  18 +-
 .../connector/file/table}/ColumnarRowIterator.java |   6 +-
 .../file/table}/ContinuousPartitionFetcher.java|   2 +-
 .../file/table}/DefaultPartTimeExtractor.java  |   4 +-
 .../file/table}/DeserializationSchemaAdapter.java  |   6 +-
 .../file/table}/DynamicPartitionWriter.java|   2 +-
 .../file/table}/EmptyMetaStoreFactory.java |   2 +-
 .../connector/file/table}/EnrichedRowData.java |   2 +-
 .../file/table}/FileInfoExtractorBulkFormat.java   |   6 +-
 .../connector/file/table}/FileSystemCommitter.java |   6 +-
 .../file/table}/FileSystemConnectorOptions.java|   2 +-
 .../connector/file/table}/FileSystemFactory.java   |   2 +-
 .../file/table}/FileSystemOutputFormat.java|   2 +-
 .../file/table}/FileSystemTableFactory.java|  10 +-
 .../connector/file/table}/FileSystemTableSink.java |  38 +--
 .../file/table}/FileSystemTableSource.java |  25 +-
 .../file/table}/GroupedPartitionWriter.java|   2 +-
 .../connector/file/table}/LimitableBulkFormat.java |   4 +-
 .../file/table}/MetastoreCommitPolicy.java |   6 +-
 .../connector/file/table}/OutputFormatFactory.java |   2 +-
 .../file/table}/PartitionCommitPolicy.java |   2 +-
 .../connector/file/table}/PartitionComputer.java   |   2 +-
 .../connector/file/table}/PartitionFetcher.java|   2 +-
 .../file/table}/PartitionFieldExtractor.java   |   4 +-
 .../connector/file/table}/PartitionLoader.java |   5 +-
 .../connector/file/table}/PartitionReader.java |   2 +-
 .../file/table}/PartitionTempFileManager.java  |   2 +-
 .../file/table}/PartitionTimeExtractor.java|   2 +-
 .../connector/file/table}/PartitionWriter.java |   2 +-
 .../file/table}/PartitionWriterFactory.java|   4 +-
 .../file/table}/ProjectingBulkFormat.java  |   2 +-
 .../file/table}/RowDataPartitionComputer.java  |   2 +-
 .../file/table}/RowPartitionComputer.java  |   2 +-
 .../file/table}/SerializationSchemaAdapter.java|   2 +-
 .../file/table}/SingleDirectoryWriter.java |   2 +-
 .../file/table}/SuccessFileCommitPolicy.java   |   4 +-
 .../file/table}/TableMetaStoreFactory.java |   2 +-
 .../table/factories/BulkReaderFormatFactory.java   |   7 +-
 .../table/factories/BulkWriterFormatFactory.java   |   4 +-
 .../table/factories/FileSystemFormatFactory.java   |   3 +-
 .../file/table}/format/BulkDecodingFormat.java |   4 +-
 .../table}/stream/AbstractStreamingWriter.java |   2 +-
 .../file/table}/stream/PartitionCommitInfo.java|   5 +-
 .../table}/stream/PartitionCommitPredicate.java|  11 +-
 .../file/table}/stream/PartitionCommitTrigger.java |  11 +-
 .../file/table}/stream/PartitionCommitter.java |  20 +-
 .../stream/PartitionTime

[flink] 06/08: [FLINK-24687][parquet] Copied DecimalDataUtils#is32BitDecimal and DecimalDataUtils#is64BitDecimal in ParquetSchemaConverter to remove the dependency on DecimalDataUtils (from planner)

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2ae04c24262810646675c490616fd0053d0d0107
Author: slinkydeveloper 
AuthorDate: Wed Nov 24 14:51:34 2021 +0100

[FLINK-24687][parquet] Copied DecimalDataUtils#is32BitDecimal and DecimalDataUtils#is64BitDecimal in ParquetSchemaConverter to remove the dependency on DecimalDataUtils (from planner)

Signed-off-by: slinkydeveloper 
---
 .../apache/flink/formats/parquet/row/ParquetRowDataWriter.java |  6 +++---
 .../flink/formats/parquet/utils/ParquetSchemaConverter.java|  9 +
 .../flink/formats/parquet/vector/ParquetDecimalVector.java |  6 +++---
 .../flink/formats/parquet/vector/ParquetSplitReaderUtil.java   | 10 +-
 .../parquet/vector/reader/FixedLenBytesColumnReader.java   | 10 +-
 5 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
index ee1556f..63ec0b0 100644
--- a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
+++ b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
@@ -18,7 +18,7 @@
 
 package org.apache.flink.formats.parquet.row;
 
-import org.apache.flink.table.data.DecimalDataUtils;
+import org.apache.flink.formats.parquet.utils.ParquetSchemaConverter;
 import org.apache.flink.table.data.RowData;
 import org.apache.flink.table.data.TimestampData;
 import org.apache.flink.table.types.logical.DecimalType;
@@ -315,8 +315,8 @@ public class ParquetRowDataWriter {
 
 // 1 <= precision <= 18, writes as FIXED_LEN_BYTE_ARRAY
 // optimizer for UnscaledBytesWriter
-if (DecimalDataUtils.is32BitDecimal(precision)
-|| DecimalDataUtils.is64BitDecimal(precision)) {
+if (ParquetSchemaConverter.is32BitDecimal(precision)
+|| ParquetSchemaConverter.is64BitDecimal(precision)) {
 return new LongUnscaledBytesWriter();
 }
 
diff --git a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/utils/ParquetSchemaConverter.java b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/utils/ParquetSchemaConverter.java
index b3a0296..6219439 100644
--- a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/utils/ParquetSchemaConverter.java
+++ b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/utils/ParquetSchemaConverter.java
@@ -113,4 +113,13 @@ public class ParquetSchemaConverter {
 }
 return numBytes;
 }
+
+// From DecimalDataUtils
+public static boolean is32BitDecimal(int precision) {
+return precision <= 9;
+}
+
+public static boolean is64BitDecimal(int precision) {
+return precision <= 18 && precision > 9;
+}
 }
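The two helpers copied above encode the decimal storage rule used throughout this patch series: a decimal with precision at most 9 has an unscaled value that always fits a 32-bit int (999,999,999 < 2^31 - 1), and precision 10–18 fits a 64-bit long (10^18 - 1 < 2^63 - 1). The following is a standalone sketch of that rule for illustration, not the Flink class itself:

```java
// Standalone sketch (not the actual Flink ParquetSchemaConverter) of the
// precision-to-storage-width rule the copied helpers encode.
public class DecimalWidthSketch {

    // Unscaled value fits a 32-bit int: max 9-digit value is 999_999_999 < 2^31 - 1.
    public static boolean is32BitDecimal(int precision) {
        return precision <= 9;
    }

    // Unscaled value fits a 64-bit long: max 18-digit value is 10^18 - 1 < 2^63 - 1.
    public static boolean is64BitDecimal(int precision) {
        return precision <= 18 && precision > 9;
    }

    public static void main(String[] args) {
        // DECIMAL(9, 2): unscaled values up to 999_999_999 fit in an int.
        System.out.println(is32BitDecimal(9));   // true
        // DECIMAL(10, 0) no longer fits in 32 bits, but does fit in a long.
        System.out.println(is64BitDecimal(10));  // true
        // DECIMAL(19, 0) needs arbitrary-length byte storage.
        System.out.println(is64BitDecimal(19));  // false
    }
}
```

Precisions above 18 fall back to arbitrary-length byte storage, which is why the ParquetRowDataWriter hunk above only takes the `LongUnscaledBytesWriter` fast path for precisions up to 18.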
diff --git a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetDecimalVector.java b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetDecimalVector.java
index 714e597..6ca1d95 100644
--- a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetDecimalVector.java
+++ b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetDecimalVector.java
@@ -18,8 +18,8 @@
 
 package org.apache.flink.formats.parquet.vector;
 
+import org.apache.flink.formats.parquet.utils.ParquetSchemaConverter;
 import org.apache.flink.table.data.DecimalData;
-import org.apache.flink.table.data.DecimalDataUtils;
 import org.apache.flink.table.data.vector.BytesColumnVector;
 import org.apache.flink.table.data.vector.ColumnVector;
 import org.apache.flink.table.data.vector.DecimalColumnVector;
@@ -40,10 +40,10 @@ public class ParquetDecimalVector implements DecimalColumnVector {
 
 @Override
 public DecimalData getDecimal(int i, int precision, int scale) {
-if (DecimalDataUtils.is32BitDecimal(precision)) {
+if (ParquetSchemaConverter.is32BitDecimal(precision)) {
 return DecimalData.fromUnscaledLong(
 ((IntColumnVector) vector).getInt(i), precision, scale);
-} else if (DecimalDataUtils.is64BitDecimal(precision)) {
+} else if (ParquetSchemaConverter.is64BitDecimal(precision)) {
 return DecimalData.fromUnscaledLong(
 ((LongColumnVector) vector).getLong(i), precision, scale);
 } else {
diff --git a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetSplitReaderUtil.java b/flink-formats/flink-parquet/src/main/java/org/apache/flink

[flink] 02/08: [hotfix][dist] flink-json and flink-csv are now declared as dependencies in the flink-dist to enforce the reactor order

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 44b2756b3d7dd42d7b9691da2463a1f5f50fee01
Author: slinkydeveloper 
AuthorDate: Thu Dec 2 11:01:14 2021 +0100

[hotfix][dist] flink-json and flink-csv are now declared as dependencies in the flink-dist to enforce the reactor order

Signed-off-by: slinkydeveloper 
---
 flink-dist/pom.xml | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/flink-dist/pom.xml b/flink-dist/pom.xml
index 47c6062..ffe067c 100644
--- a/flink-dist/pom.xml
+++ b/flink-dist/pom.xml
@@ -343,6 +343,20 @@ under the License.
 

org.apache.flink
+   flink-json
+   ${project.version}
+   provided
+   
+
+   
+   org.apache.flink
+   flink-csv
+   ${project.version}
+   provided
+   
+
+   
+   org.apache.flink
flink-azure-fs-hadoop
${project.version}
provided


[flink] branch master updated (a675ed8 -> 1151071)

2021-12-03 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a675ed8  [FLINK-25150][API] Fix the violation of api annotation
 add c35c5b5  [hotfix][table-runtime] Improve performance of UpdatableRowData
 add 1151071  [FLINK-24753] Implement CHAR/VARCHAR length validation for sinks

No new revisions were added by this update.

Summary of changes:
 .../generated/execution_config_configuration.html  |   8 +-
 .../7602816f-5c01-4b7a-9e3e-235dfedec245   |   2 +-
 .../src/test/resources/sql/table.q |   4 +-
 .../table/api/config/ExecutionConfigOptions.java   |  73 +-
 .../plan/nodes/exec/common/CommonExecSink.java |  78 --
 .../nodes/exec/common/CommonExecSinkITCase.java| 272 ++-
 .../runtime/batch/table/TableSinkITCase.scala  |  88 +--
 .../runtime/stream/table/TableSinkITCase.scala | 172 +---
 .../apache/flink/table/data/UpdatableRowData.java  |  44 ++--
 .../runtime/operators/sink/ConstraintEnforcer.java | 289 +
 .../operators/sink/SinkNotNullEnforcer.java|  71 -
 11 files changed, 656 insertions(+), 445 deletions(-)
 create mode 100644 flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/sink/ConstraintEnforcer.java
 delete mode 100644 flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/sink/SinkNotNullEnforcer.java
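The summary above replaces the null-only `SinkNotNullEnforcer` with a broader `ConstraintEnforcer` that also validates CHAR/VARCHAR lengths before rows reach a sink. As a purely hypothetical illustration of length enforcement at a sink boundary (trim-or-error; this is not Flink's actual operator or configuration):

```java
// Hypothetical sketch of CHAR/VARCHAR length enforcement at a sink boundary.
// Illustrative only -- not Flink's ConstraintEnforcer implementation.
public class LengthEnforcerSketch {

    /** What to do when a value exceeds the declared length. */
    enum Strategy { ERROR, TRIM }

    static String enforceVarchar(String value, int maxLength, Strategy strategy) {
        if (value == null || value.length() <= maxLength) {
            return value; // within the declared VARCHAR(n) bound
        }
        switch (strategy) {
            case TRIM:
                // Silently truncate to the declared length.
                return value.substring(0, maxLength);
            case ERROR:
            default:
                throw new IllegalStateException(
                        "Value exceeds VARCHAR(" + maxLength + "): " + value);
        }
    }

    public static void main(String[] args) {
        System.out.println(enforceVarchar("hello", 3, Strategy.TRIM)); // hel
        System.out.println(enforceVarchar("ok", 3, Strategy.ERROR));   // ok
    }
}
```

Consolidating null checks and length checks into one operator means the sink pipeline pays for a single enforcement step regardless of how many constraint kinds are declared.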


[flink] branch master updated (ea7a356 -> b17748b)

2021-12-02 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ea7a356  [FLINK-25071][parquet] SerializableConfiguration should not load resources
 add b17748b  [FLINK-24902][table-planner] Add back support for casting decimal to boolean

No new revisions were added by this update.

Summary of changes:
 .../functions/casting/CastRuleProvider.java|  2 +-
 .../casting/IntegerNumericToBooleanCastRule.java   | 47 --
 ...CastRule.java => NumericToBooleanCastRule.java} | 25 ++--
 .../planner/codegen/calls/BuiltInMethods.scala |  4 ++
 .../planner/functions/casting/CastRulesTest.java   | 18 -
 .../apache/flink/table/data/DecimalDataUtils.java  |  4 ++
 .../apache/flink/table/data/DecimalDataTest.java   |  3 ++
 7 files changed, 42 insertions(+), 61 deletions(-)
 delete mode 100644 flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/IntegerNumericToBooleanCastRule.java
 copy flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{StringToBooleanCastRule.java => NumericToBooleanCastRule.java} (67%)


[flink] branch master updated (cf454dd -> 39ffdcc)

2021-12-01 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cf454dd  [hotfix][ci] Properly setup cron builds for 1.13+
 add 0452506  [FLINK-25079][test-utils-junit] Add new assertj style assertions in FlinkAssertions to replace FlinkMatchers
 add 023100d  [FLINK-25079][table-common] Add some initial assertj assertions for table data and types apis
 add 39ffdcc  [FLINK-25079][table-common] Refactored some tests to use the new assertions

No new revisions were added by this update.

Summary of changes:
 .../flink/table/data/utils/JoinedRowDataTest.java  |  28 +-
 .../table/data/utils/ProjectedRowDataTest.java |  44 +-
 .../apache/flink/table/test/DataTypeAssert.java|  73 ++
 .../flink/table/test/DataTypeConditions.java   |  31 +-
 .../apache/flink/table/test/LogicalTypeAssert.java | 155 
 .../flink/table/test/LogicalTypeConditions.java|  18 +-
 .../org/apache/flink/table/test/RowDataAssert.java |  74 ++
 .../apache/flink/table/test/StringDataAssert.java} |  30 +-
 .../apache/flink/table/test/TableAssertions.java}  |  34 +-
 .../table/types/ClassDataTypeConverterTest.java|   9 +-
 .../org/apache/flink/table/types/DataTypeTest.java | 200 ++---
 .../apache/flink/table/types/DataTypesTest.java|  41 +-
 .../flink/table/types/LogicalCommonTypeTest.java   |  15 +-
 .../table/types/LogicalTypeCastAvoidanceTest.java  |  10 +-
 .../flink/table/types/LogicalTypeCastsTest.java|  13 +-
 .../table/types/LogicalTypeDuplicatorTest.java |   7 +-
 .../flink/table/types/LogicalTypeParserTest.java   |  12 +-
 .../apache/flink/table/types/LogicalTypesTest.java | 983 +++--
 .../table/types/TypeInfoDataTypeConverterTest.java |   8 +-
 .../apache/flink/table/types/TypeTestingUtils.java |  66 --
 .../table/types/ValueDataTypeConverterTest.java|  11 +-
 .../types/extraction/DataTypeExtractorTest.java|   5 +-
 .../extraction/TypeInferenceExtractorTest.java |  47 +-
 .../inference/InputTypeStrategiesTestBase.java |  29 +-
 .../types/inference/TypeStrategiesTestBase.java|  23 +-
 .../types/logical/utils/LogicalTypeChecksTest.java |  46 +-
 .../logical/utils/LogicalTypeMergingTest.java  |  90 +-
 .../table/types/utils/DataTypeFactoryMock.java |   6 +-
 .../flink/table/types/utils/DataTypeUtilsTest.java | 132 ++-
 .../flink-test-utils-junit/pom.xml |   6 +
 .../flink/core/testutils/FlinkAssertions.java  | 118 +++
 .../apache/flink/core/testutils/FlinkMatchers.java |  10 +-
 pom.xml|   8 +-
 33 files changed, 1355 insertions(+), 1027 deletions(-)
 create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/DataTypeAssert.java
 copy flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/schema/Column.java => flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/DataTypeConditions.java (57%)
 create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/LogicalTypeAssert.java
 copy flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/connector/elasticsearch/table/LogicalTypeWithIndex.java => flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/LogicalTypeConditions.java (68%)
 create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/test/RowDataAssert.java
 copy flink-table/{flink-table-runtime/src/main/java/org/apache/flink/table/data/conversion/StringStringConverter.java => flink-table-common/src/test/java/org/apache/flink/table/test/StringDataAssert.java} (56%)
 copy flink-table/flink-table-common/src/{main/java/org/apache/flink/table/types/inference/strategies/MissingTypeStrategy.java => test/java/org/apache/flink/table/test/TableAssertions.java} (54%)
 delete mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/TypeTestingUtils.java
 create mode 100644 flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/FlinkAssertions.java
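The FlinkMatchers-to-FlinkAssertions migration above swaps Hamcrest-style matchers for assertj's fluent `assertThat(x).is...()` style. The following toy sketch illustrates the fluent-assertion pattern itself in plain Java, with no assertj dependency; the names here are illustrative and are not Flink's FlinkAssertions API:

```java
// Tiny illustration of the fluent "assertThat(x).is...()" style that
// assertj popularized; a toy, not Flink's FlinkAssertions API.
public class FluentAssertSketch {

    static StringAssert assertThat(String actual) {
        return new StringAssert(actual);
    }

    static final class StringAssert {
        private final String actual;

        StringAssert(String actual) {
            this.actual = actual;
        }

        StringAssert isNotEmpty() {
            if (actual == null || actual.isEmpty()) {
                throw new AssertionError("expected a non-empty string");
            }
            return this; // returning 'this' is what enables chaining
        }

        StringAssert startsWith(String prefix) {
            if (actual == null || !actual.startsWith(prefix)) {
                throw new AssertionError("expected prefix: " + prefix);
            }
            return this;
        }
    }

    public static void main(String[] args) {
        // Checks chain fluently instead of nesting matcher objects.
        assertThat("flink-table").isNotEmpty().startsWith("flink");
        System.out.println("ok");
    }
}
```

The appeal over matcher nesting is discoverability: after `assertThat(...)`, the IDE can list every assertion valid for that type, which is why project-specific assertion classes like the new `DataTypeAssert` and `RowDataAssert` follow this shape.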


[flink] branch master updated (183d320 -> 91efbc2)

2021-12-01 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 183d320  [FLINK-25093][tests] Refactor test to use retry function for cloning Flink docker repo
 add 91efbc2  [FLINK-24902][table-planner] Port integer numeric -> boolean and boolean -> numeric to CastRule

No new revisions were added by this update.

Summary of changes:
 .../casting/BooleanToNumericCastRule.java  | 101 +
 .../functions/casting/CastRuleProvider.java|  75 +++
 ...e.java => IntegerNumericToBooleanCastRule.java} |  21 ++---
 .../planner/codegen/calls/BuiltInMethods.scala |   4 +
 .../planner/codegen/calls/ScalarOperatorGens.scala |  23 -
 .../planner/functions/casting/CastRulesTest.java   |  45 +++--
 .../apache/flink/table/data/DecimalDataUtils.java  |   8 --
 .../apache/flink/table/data/DecimalDataTest.java   |   4 -
 8 files changed, 190 insertions(+), 91 deletions(-)
 create mode 100644 flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/BooleanToNumericCastRule.java
 copy flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{StringToBinaryCastRule.java => IntegerNumericToBooleanCastRule.java} (67%)


[flink] branch master updated: [FLINK-25075][table] Refactored PlannerExpressionParser to avoid instantiation through reflections

2021-12-01 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 8c40849  [FLINK-25075][table] Refactored PlannerExpressionParser to avoid instantiation through reflections
8c40849 is described below

commit 8c40849c5249d96532305d6b88302ee863371541
Author: slinkydeveloper 
AuthorDate: Fri Nov 26 11:44:26 2021 +0100

[FLINK-25075][table] Refactored PlannerExpressionParser to avoid instantiation through reflections

This closes #17931.
---
 .../java/internal/StreamTableEnvironmentImpl.java  |  4 +-
 .../main/java/org/apache/flink/table/api/Over.java |  5 +-
 .../flink/table/api/OverWindowPartitioned.java |  4 +-
 .../table/api/OverWindowPartitionedOrdered.java|  6 +-
 .../api/OverWindowPartitionedOrderedPreceding.java |  6 +-
 .../java/org/apache/flink/table/api/Session.java   |  4 +-
 .../org/apache/flink/table/api/SessionWithGap.java |  4 +-
 .../flink/table/api/SessionWithGapOnTime.java  |  4 +-
 .../java/org/apache/flink/table/api/Slide.java |  4 +-
 .../org/apache/flink/table/api/SlideWithSize.java  |  4 +-
 .../flink/table/api/SlideWithSizeAndSlide.java |  4 +-
 .../table/api/SlideWithSizeAndSlideOnTime.java |  4 +-
 .../java/org/apache/flink/table/api/Tumble.java|  4 +-
 .../org/apache/flink/table/api/TumbleWithSize.java |  4 +-
 .../flink/table/api/TumbleWithSizeOnTime.java  |  4 +-
 .../apache/flink/table/api/internal/TableImpl.java | 93 +-
 .../ExpressionParser.java  | 24 +++---
 .../ExpressionParserFactory.java}  | 29 ---
 .../table/delegation/PlannerExpressionParser.java  | 73 -
 .../table/api/WindowCreationValidationTest.java| 57 -
 .../delegation/DefaultExpressionParserFactory.java | 53 
 .../org.apache.flink.table.factories.Factory   |  1 +
 ...ParserImpl.scala => ExpressionParserImpl.scala} | 19 +++--
 .../planner/expressions/KeywordParseTest.scala | 13 ++-
 .../planner/expressions/ScalarFunctionsTest.scala  | 10 +--
 .../expressions/utils/ExpressionTestBase.scala |  7 +-
 .../planner/plan/utils/RexNodeExtractorTest.scala  | 48 +--
 .../planner/runtime/utils/BatchTableEnvUtil.scala  |  6 +-
 .../runtime/utils/CollectionBatchExecTable.scala   |  5 +-
 .../planner/runtime/utils/StreamTableEnvUtil.scala |  5 +-
 30 files changed, 228 insertions(+), 280 deletions(-)

diff --git a/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java b/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java
index 72a92d0..b0957f6 100644
--- a/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java
+++ b/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/api/bridge/java/internal/StreamTableEnvironmentImpl.java
@@ -49,10 +49,10 @@ import org.apache.flink.table.catalog.UnresolvedIdentifier;
 import org.apache.flink.table.connector.ChangelogMode;
 import org.apache.flink.table.delegation.Executor;
 import org.apache.flink.table.delegation.ExecutorFactory;
+import org.apache.flink.table.delegation.ExpressionParser;
 import org.apache.flink.table.delegation.Planner;
 import org.apache.flink.table.expressions.ApiExpressionUtils;
 import org.apache.flink.table.expressions.Expression;
-import org.apache.flink.table.expressions.ExpressionParser;
 import org.apache.flink.table.factories.FactoryUtil;
 import org.apache.flink.table.factories.PlannerFactoryUtil;
 import org.apache.flink.table.functions.AggregateFunction;
@@ -458,7 +458,7 @@ public final class StreamTableEnvironmentImpl extends TableEnvironmentImpl
 
 @Override
 public  Table fromDataStream(DataStream dataStream, String fields) {
-List expressions = ExpressionParser.parseExpressionList(fields);
+List expressions = ExpressionParser.INSTANCE.parseExpressionList(fields);
 return fromDataStream(dataStream, expressions.toArray(new Expression[0]));
 }
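The hunk above replaces the static call `ExpressionParser.parseExpressionList(...)` with `ExpressionParser.INSTANCE.parseExpressionList(...)`: callers now reach the parser through an eagerly initialized singleton behind a stable interface instead of having an implementation instantiated reflectively. A toy sketch of that pattern, with illustrative names that are not Flink's actual API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of swapping reflection-based instantiation for an
// eagerly initialized singleton, as in the ExpressionParser refactor above.
public class SingletonParserSketch {

    interface Parser {
        List<String> parseExpressionList(String fields);
    }

    // Before: callers would do Class.forName("...ParserImpl").newInstance().
    // After: one shared instance behind a stable API entry point; the enum
    // idiom gives thread-safe, lazy-free initialization with no reflection.
    enum ExpressionParser implements Parser {
        INSTANCE;

        @Override
        public List<String> parseExpressionList(String fields) {
            // Toy implementation: split a comma-separated field list.
            return Arrays.asList(fields.split("\\s*,\\s*"));
        }
    }

    public static void main(String[] args) {
        System.out.println(
                ExpressionParser.INSTANCE.parseExpressionList("a, b, c"));
    }
}
```

Removing the reflective lookup also removes a whole class of startup failures (missing class, wrong constructor) and lets the compiler check the call site.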
 
diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Over.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Over.java
index 7862816..97a1eec 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Over.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Over.java
@@ -19,8 +19,8 @@
 package org.apache.flink.table.api;
 
 import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.delegation.ExpressionParser;
 import org.apache.flink.table.expressions.Expression;
-imp

[flink] branch master updated (3761bbf -> 30644a0)

2021-12-01 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 3761bbf  [FLINK-24980] Introduce numBytesProduced counter into ResultPartition to record the size of result partition.
 add 30644a0  [FLINK-25060][table-common] Replace projection methods of FLINK-24399 with the new Projection util

No new revisions were added by this update.

Summary of changes:
 .../table/ElasticsearchDynamicSinkFactoryBase.java |  3 +-
 .../source/AbstractHBaseDynamicTableSource.java|  3 +-
 .../jdbc/table/JdbcDynamicTableSource.java |  3 +-
 .../connectors/kafka/table/KafkaDynamicSink.java   |  4 +-
 .../connectors/kafka/table/KafkaDynamicSource.java |  4 +-
 .../confluent/RegistryAvroFormatFactory.java   |  3 +-
 .../debezium/DebeziumAvroFormatFactory.java|  3 +-
 .../flink/formats/avro/AvroFormatFactory.java  |  3 +-
 .../flink/formats/json/JsonFormatFactory.java  |  3 +-
 .../json/canal/CanalJsonDecodingFormat.java|  3 +-
 .../json/debezium/DebeziumJsonDecodingFormat.java  |  3 +-
 .../json/maxwell/MaxwellJsonDecodingFormat.java|  3 +-
 .../formats/parquet/ParquetFileFormatFactory.java  |  4 +-
 .../flink/table/catalog/SchemaTranslator.java  |  4 +-
 .../apache/flink/table/connector/Projection.java   | 64 --
 .../table/connector/format/DecodingFormat.java |  4 +-
 .../format/ProjectableDecodingFormat.java  |  2 +-
 .../flink/table/factories/DynamicTableFactory.java | 12 ++--
 .../org/apache/flink/table/types/DataType.java | 51 -
 .../flink/table/types/utils/DataTypeUtils.java | 63 -
 .../org/apache/flink/table/types/DataTypeTest.java | 17 --
 .../flink/table/types/utils/DataTypeUtilsTest.java | 31 ---
 .../PushProjectIntoTableSourceScanRule.java| 15 +
 .../filesystem/TestCsvFileSystemFormatFactory.java |  3 +-
 24 files changed, 112 insertions(+), 196 deletions(-)


[flink] branch master updated (f031829 -> a7192af)

2021-11-30 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from f031829  [hotfix] Use Hamcrest assertThat in FileUtilsTest
 add a7192af  [FLINK-24507][table] Cleanup DateTimeUtils

No new revisions were added by this update.

Summary of changes:
 .../flink/connector/hbase1/util/HBaseTestBase.java |   12 +-
 .../flink/connector/hbase2/util/HBaseTestBase.java |   12 +-
 .../orc/nohive/vector/AbstractOrcNoHiveVector.java |4 +-
 .../flink/orc/vector/AbstractOrcColumnVector.java  |4 +-
 .../flink/orc/OrcColumnarRowSplitReaderTest.java   |4 +-
 .../parquet/vector/ParquetSplitReaderUtil.java |6 +-
 .../parquet/ParquetColumnarRowInputFormatTest.java |2 +-
 .../vector/ParquetColumnarRowSplitReaderTest.java  |3 +-
 .../table/runtime/typeutils/PythonTypeUtils.java   |   35 -
 .../serializers/python/DateSerializer.java |6 +-
 .../apache/flink/table/utils/DateTimeUtils.java| 1149 +++-
 .../flink/table/utils/print/TableauStyleTest.java  |2 +-
 .../expressions/converter/ExpressionConverter.java |2 +-
 .../casting/AbstractCodeGeneratorCastRule.java |   39 +-
 .../AbstractExpressionCodeGeneratorCastRule.java   |6 +
 .../planner/functions/casting/CastRuleUtils.java   |   34 +
 .../functions/utils/HiveTableSqlFunction.java  |2 +-
 .../utils}/TimestampStringUtils.java   |   29 +-
 .../planner/codegen/CodeGeneratorContext.scala |8 +-
 .../table/planner/codegen/ExpressionReducer.scala  |2 +-
 .../table/planner/codegen/GenerateUtils.scala  |2 +-
 .../planner/codegen/calls/BuiltInMethods.scala |  154 +--
 .../planner/codegen/calls/FunctionGenerator.scala  |   14 -
 .../planner/codegen/calls/ScalarOperatorGens.scala |   10 +-
 .../planner/codegen/calls/StringCallGen.scala  |6 +-
 .../table/planner/plan/utils/PartitionPruner.scala |8 +-
 .../planner/plan/utils/RexNodeExtractor.scala  |2 +-
 .../planner/codegen/calls/BuiltInMethodsTest.java  |   54 +
 .../planner/functions/casting/CastRulesTest.java   |   19 +-
 .../stream/jsonplan/TableSourceJsonPlanITCase.java |   12 +-
 .../jsonplan/WatermarkAssignerJsonPlanITCase.java  |8 +-
 .../validation/ScalarOperatorsValidationTest.scala |   12 +-
 .../planner/runtime/batch/sql/CalcITCase.scala |4 +-
 .../runtime/batch/sql/TableScanITCase.scala|   18 +-
 .../planner/runtime/batch/sql/UnnestITCase.scala   |4 +-
 .../sql/agg/AggregateReduceGroupingITCase.scala|4 +-
 .../table/planner/runtime/utils/TestData.scala |   44 +-
 .../table/planner/utils/DateTimeTestUtil.scala |6 +-
 .../table/data/binary/BinaryStringDataUtil.java|   15 +-
 .../table/data/conversion/DateDateConverter.java   |4 +-
 .../data/conversion/DateLocalDateConverter.java|4 +-
 .../data/conversion/TimeLocalTimeConverter.java|4 +-
 .../table/data/conversion/TimeTimeConverter.java   |4 +-
 .../table/data/util/DataFormatConverters.java  |   16 +-
 .../flink/table/data/DataFormatConvertersTest.java |6 +-
 45 files changed, 725 insertions(+), 1070 deletions(-)
 rename flink-table/flink-table-planner/src/main/java/org/apache/flink/table/{util => planner/utils}/TimestampStringUtils.java (66%)
 create mode 100644 flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/codegen/calls/BuiltInMethodsTest.java


[flink] branch master updated (6f05e6f -> a09cc47)

2021-11-29 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6f05e6f  [FLINK-25080][tests] Move tests to flink-core
 add a09cc47  [FLINK-25047][table] Resolve architectural violations

No new revisions were added by this update.

Summary of changes:
 .../5b9eed8a-5fb6-4373-98ac-3be2a71941b8   | 275 +
 .../7602816f-5c01-4b7a-9e3e-235dfedec245   |  29 +--
 .../e5126cae-f3fe-48aa-b6fb-60ae6cc3fcd5   |  12 +-
 .../flink/table/catalog/hive/HiveCatalog.java  |   7 +-
 .../org/apache/flink/util/CloseableIterator.java   |   3 +
 .../org/apache/flink/table/api/ApiExpression.java  |   2 +
 .../flink/table/api/EnvironmentSettings.java   |   1 +
 .../apache/flink/table/api/FormatDescriptor.java   |   1 +
 .../apache/flink/table/api/TableDescriptor.java|   1 +
 .../table/api/config/ExecutionConfigOptions.java   |   2 +
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 .../flink/table/operations/QueryOperation.java |   4 +-
 .../table/api/AmbiguousTableFactoryException.java  |  12 +-
 .../java/org/apache/flink/table/api/DataTypes.java |   4 +
 .../flink/table/api/ExpressionParserException.java |  13 +-
 .../table/api/NoMatchingTableFactoryException.java |  12 +-
 .../java/org/apache/flink/table/api/Schema.java|   8 +
 .../org/apache/flink/table/api/TableColumn.java|   3 +
 .../org/apache/flink/table/api/TableSchema.java|   1 +
 .../flink/table/api/constraints/Constraint.java|   1 +
 .../flink/table/catalog/CatalogBaseTable.java  |   1 +
 .../flink/table/catalog/CatalogDatabase.java   |   3 +
 .../flink/table/catalog/CatalogFunction.java   |   3 +
 .../flink/table/catalog/CatalogPartition.java  |   3 +
 .../flink/table/catalog/CatalogPartitionSpec.java  |   3 +
 .../org/apache/flink/table/catalog/Column.java |   3 +
 .../org/apache/flink/table/catalog/Constraint.java |   1 +
 .../flink/table/catalog/ObjectIdentifier.java  |   2 +
 .../org/apache/flink/table/catalog/ObjectPath.java |   2 +
 .../flink/table/catalog/UnresolvedIdentifier.java  |   4 +-
 .../exceptions/DatabaseAlreadyExistException.java  |   3 +
 .../exceptions/DatabaseNotEmptyException.java  |   3 +
 .../exceptions/DatabaseNotExistException.java  |   3 +
 .../exceptions/FunctionAlreadyExistException.java  |   2 +
 .../exceptions/FunctionNotExistException.java  |   2 +
 .../PartitionAlreadyExistsException.java   |   2 +
 .../exceptions/PartitionNotExistException.java |   2 +
 .../exceptions/PartitionSpecInvalidException.java  |   2 +
 .../exceptions/TableAlreadyExistException.java |   2 +
 .../catalog/exceptions/TableNotExistException.java |   2 +
 .../exceptions/TableNotPartitionedException.java   |   2 +
 .../exceptions/TablePartitionedException.java  |   2 +
 .../catalog/stats/CatalogColumnStatistics.java |   3 +
 .../stats/CatalogColumnStatisticsDataBase.java |   3 +
 .../catalog/stats/CatalogTableStatistics.java  |   3 +
 .../flink/table/connector/ChangelogMode.java   |   1 +
 .../flink/table/connector/RuntimeConverter.java|   1 +
 .../table/connector/sink/DynamicTableSink.java |   3 +
 .../table/connector/source/DynamicTableSource.java |   2 +
 .../table/connector/source/LookupTableSource.java  |   2 +
 .../table/connector/source/ScanTableSource.java|   2 +
 .../source/abilities/SupportsFilterPushDown.java   |   1 +
 .../org/apache/flink/table/data/ArrayData.java |   1 +
 .../java/org/apache/flink/table/data/RowData.java  |   1 +
 .../ExpressionParserException.java |  14 +-
 .../factories/AmbiguousTableFactoryException.java  |  52 
 .../flink/table/factories/CatalogFactory.java  |   1 +
 .../flink/table/factories/DynamicTableFactory.java |   1 +
 .../apache/flink/table/factories/FactoryUtil.java  |   4 +-
 .../table/factories/FunctionDefinitionFactory.java |   2 +
 .../flink/table/factories/ModuleFactory.java   |   1 +
 .../factories/NoMatchingTableFactoryException.java |  64 +
 .../flink/table/factories/TableFactoryService.java |   2 -
 .../flink/table/factories/TableSinkFactory.java|   1 +
 .../flink/table/factories/TableSourceFactory.java  |   1 +
 .../functions/BuiltInFunctionDefinitions.java  |   2 +
 .../flink/table/functions/SpecializedFunction.java |   1 +
 .../flink/table/types/inference/Signature.java |   1 +
 .../flink/table/types/inference/TypeInference.java |   1 +
 .../table/types/logical/DayTimeIntervalType.java   |   1 +
 .../flink/table/types/logical/DistinctType.java|   1 +
 .../apache/flink/table/types/logical/RowType.java  |   1 +
 .../flink/table/types/logical/StructuredType.java  |   3 +
 .../table/types/logical/YearMonthIntervalType.java |   1 +
 .../factories/TableSinkFactoryServiceTest.java |   2 -
 .../planner/expressions}/UnresolvedException.java  |   5 +-
 .../nodes/exec

[flink] 03/03: [FLINK-24781][table-planner] Refactor cast of literals to use CastExecutor

2021-11-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9f7eef293f723800945a9759c50adbf8786a2bd4
Author: slinkydeveloper 
AuthorDate: Tue Nov 16 10:48:08 2021 +0100

[FLINK-24781][table-planner] Refactor cast of literals to use CastExecutor

Signed-off-by: slinkydeveloper 

This closes #17800.
---
 .../CodeGeneratedExpressionCastExecutor.java   |  3 +-
 .../flink/table/planner/codegen/CodeGenUtils.scala | 26 ++-
 .../table/planner/codegen/GenerateUtils.scala  | 16 
 .../planner/codegen/calls/BuiltInMethods.scala |  1 -
 .../table/planner/codegen/calls/IfCallGen.scala|  7 +-
 .../planner/codegen/calls/ScalarOperatorGens.scala | 89 --
 .../validation/ScalarOperatorsValidationTest.scala | 12 +--
 7 files changed, 85 insertions(+), 69 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
index 7c361ac..6e57593 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
@@ -57,7 +57,8 @@ class CodeGeneratedExpressionCastExecutor implements CastExecutor
import java.lang.{Boolean => JBoolean, Byte => JByte, Double => JDouble, Float => JFloat, Integer => JInt, Long => JLong, Object => JObject, Short => JShort}
 import java.util.concurrent.atomic.AtomicLong
-
 import org.apache.flink.api.common.ExecutionConfig
 import org.apache.flink.api.common.functions.RuntimeContext
 import org.apache.flink.core.memory.MemorySegment
@@ -33,10 +32,10 @@ import 
org.apache.flink.table.data.util.DataFormatConverters.IdentityConverter
 import org.apache.flink.table.data.utils.JoinedRowData
 import org.apache.flink.table.functions.UserDefinedFunction
 import 
org.apache.flink.table.planner.codegen.GenerateUtils.{generateInputFieldUnboxing,
 generateNonNullField}
+import 
org.apache.flink.table.planner.codegen.calls.BuiltInMethods.BINARY_STRING_DATA_FROM_STRING
 import org.apache.flink.table.runtime.dataview.StateDataViewStore
 import org.apache.flink.table.runtime.generated.{AggsHandleFunction, 
HashFunction, NamespaceAggsHandleFunction, TableAggsHandleFunction}
 import 
org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType
-import org.apache.flink.table.runtime.types.PlannerTypeUtils.isInteroperable
 import org.apache.flink.table.runtime.typeutils.TypeCheckUtils
 import org.apache.flink.table.runtime.util.{MurmurHashUtil, TimeWindowUtil}
 import org.apache.flink.table.types.DataType
@@ -46,6 +45,7 @@ import 
org.apache.flink.table.types.logical.utils.LogicalTypeChecks
 import 
org.apache.flink.table.types.logical.utils.LogicalTypeChecks.{getFieldCount, 
getPrecision, getScale}
 import 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass
 import org.apache.flink.table.types.utils.DataTypeUtils.isInternal
+import org.apache.flink.table.utils.EncodingUtils
 import org.apache.flink.types.{Row, RowKind}
 
 import scala.annotation.tailrec
@@ -195,6 +195,28 @@ object CodeGenUtils {
 case _ => boxedTypeTermForType(t)
   }
 
+  /**
+   * Converts values to stringified representation to include in the codegen.
+   *
+   * This method doesn't support complex types.
+   */
+  def primitiveLiteralForType(value: Any): String = value match {
+// ordered by type root definition
+case _: JBoolean => value.toString
+case _: JByte => s"((byte)$value)"
+case _: JShort => s"((short)$value)"
+case _: JInt => value.toString
+case _: JLong => value.toString + "L"
+case _: JFloat => value.toString + "f"
+case _: JDouble => value.toString + "d"
+case sd: StringData =>
+  qualifyMethod(BINARY_STRING_DATA_FROM_STRING) + "(\"" +
+EncodingUtils.escapeJava(sd.toString) + "\")"
+case td: TimestampData =>
+  s"$TIMESTAMP_DATA.fromEpochMillis(${td.getMillisecond}L, ${td.getNanoOfMillisecond})"
+case _ => throw new IllegalArgumentException("Illegal literal type: " + value.getClass)
+  }
+
   @tailrec
   def boxedTypeTermForType(t: LogicalType): String = t.getTypeRoot match {
 // ordered by type root definition
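For readers skimming the diff, the per-type cases of `primitiveLiteralForType` above can be sketched as a tiny standalone Java helper. This is a hypothetical illustration of the dispatch logic only, not the planner's actual API (the real method also handles `StringData` and `TimestampData`):

```java
public class LiteralRenderer {
    // Render a boxed literal as a Java source-code expression,
    // mirroring the per-type cases in primitiveLiteralForType above.
    static String render(Object value) {
        if (value instanceof Boolean || value instanceof Integer) {
            return value.toString();
        } else if (value instanceof Byte) {
            return "((byte)" + value + ")";   // byte needs an explicit cast in codegen
        } else if (value instanceof Short) {
            return "((short)" + value + ")";
        } else if (value instanceof Long) {
            return value + "L";               // suffix so the literal stays a long
        } else if (value instanceof Float) {
            return value + "f";
        } else if (value instanceof Double) {
            return value + "d";
        }
        throw new IllegalArgumentException("Illegal literal type: " + value.getClass());
    }

    public static void main(String[] args) {
        System.out.println(render((byte) 7)); // ((byte)7)
        System.out.println(render(42L));      // 42L
        System.out.println(render(1.5f));     // 1.5f
    }
}
```

The suffixes and casts matter because the rendered string is pasted into generated Java source, where an unsuffixed `42` would otherwise be typed as `int`.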
diff --git 
a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 
b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
index d

[flink] 02/03: [FLINK-24781][table-planner] Add string parsing methods to BinaryStringDataUtil and add from string cast rules

2021-11-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 0426d8c7af0191f75e6aaa4696b3358de059dc67
Author: slinkydeveloper 
AuthorDate: Mon Nov 15 13:40:34 2021 +0100

[FLINK-24781][table-planner] Add string parsing methods to 
BinaryStringDataUtil and add from string cast rules

Signed-off-by: slinkydeveloper 
---
 .../apache/flink/table/utils/DateTimeUtils.java|   5 +-
 .../AbstractExpressionCodeGeneratorCastRule.java   |   9 +-
 .../functions/casting/CastRuleProvider.java|   8 +
 .../CodeGeneratedExpressionCastExecutor.java   |   1 -
 .../functions/casting/StringToBinaryCastRule.java  |  50 +
 .../functions/casting/StringToBooleanCastRule.java |  55 ++
 .../functions/casting/StringToDateCastRule.java|  55 ++
 .../functions/casting/StringToDecimalCastRule.java |  63 ++
 .../casting/StringToNumericPrimitiveCastRule.java  |  78 
 .../functions/casting/StringToTimeCastRule.java|  58 ++
 .../casting/StringToTimestampCastRule.java |  63 ++
 .../planner/codegen/calls/BuiltInMethods.scala |  68 +--
 .../planner/codegen/calls/ScalarOperatorGens.scala | 105 --
 .../planner/functions/casting/CastRulesTest.java   | 211 -
 .../table/data/binary/BinaryStringDataUtil.java| 169 ++---
 .../flink/table/data/BinaryStringDataTest.java |  41 ++--
 16 files changed, 826 insertions(+), 213 deletions(-)

diff --git 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java
 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java
index fdd715b..06fc883 100644
--- 
a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java
+++ 
b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java
@@ -317,6 +317,7 @@ public class DateTimeUtils {
 // 

 // String --> Timestamp conversion
 // 

+
 public static TimestampData toTimestampData(String dateStr) {
 int length = dateStr.length();
 String format;
@@ -411,7 +412,7 @@ public class DateTimeUtils {
  * @param format date time string format
  * @param tz the time zone
  */
-public static Long toTimestamp(String dateStr, String format, TimeZone tz) {
+private static Long toTimestamp(String dateStr, String format, TimeZone tz) {
 SimpleDateFormat formatter = FORMATTER_CACHE.get(format);
 formatter.setTimeZone(tz);
 try {
@@ -1717,7 +1718,7 @@ public class DateTimeUtils {
 return timeStringToUnixDate(v, 0);
 }
 
-public static Integer timeStringToUnixDate(String v, int start) {
+private static Integer timeStringToUnixDate(String v, int start) {
 final int colon1 = v.indexOf(':', start);
 // timezone hh:mm:ss[.ss][[+|-]hh:mm:ss]
 // refer https://www.w3.org/TR/NOTE-datetime
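The length-based format selection visible at the top of `toTimestampData` above can be illustrated with a self-contained sketch. The patterns and UTC zone here are assumptions for the example; the real method supports more formats, a formatter cache, and fractional seconds:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class TimestampParseSketch {
    // Choose a pattern from the string length, as toTimestampData above
    // selects a format before delegating to the cached formatter.
    static Long parseMillis(String dateStr) {
        String format = dateStr.length() == 10 ? "yyyy-MM-dd" : "yyyy-MM-dd HH:mm:ss";
        SimpleDateFormat formatter = new SimpleDateFormat(format);
        formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
        try {
            return formatter.parse(dateStr).getTime();
        } catch (ParseException e) {
            return null; // mirrors toTimestamp returning null on bad input
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("1970-01-01"));          // 0
        System.out.println(parseMillis("1970-01-01 00:00:01")); // 1000
        System.out.println(parseMillis("not-a-date"));          // null
    }
}
```

Returning `null` rather than throwing is what lets the string-to-timestamp cast rules surface a SQL NULL instead of failing the job.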
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
index aa0a50b..6700a7e 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
@@ -73,7 +73,14 @@ abstract class AbstractExpressionCodeGeneratorCastRule
 box(
 generateExpression(
 
createCodeGeneratorCastRuleContext(context),
-unbox(inputArgumentName, inputLogicalType),
+unbox(
+// We need the casting because the rules uses the
+// concrete classes (e.g. StringData and
+// BinaryStringData)
+cast(
+
boxedTypeTermForType(inputLogicalType),
+inputArgumentName),
+inputLogicalType),
 inputLogicalType,
 targetLogicalType),
 targetLogicalType));
diff --git 
a/fl

[flink] branch master updated (cd988b6 -> 9f7eef2)

2021-11-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cd988b6  [FLINK-24918][state] Support to specify the data dir for 
state benchmark
 new 92c02fc  [FLINK-24781][table-planner] Added CastRule#canFail and make 
sure ScalarOperatorGens wraps the cast invocation in a try-catch
 new 0426d8c  [FLINK-24781][table-planner] Add string parsing methods to 
BinaryStringDataUtil and add from string cast rules
 new 9f7eef2  [FLINK-24781][table-planner] Refactor cast of literals to use 
CastExecutor

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/flink/table/utils/DateTimeUtils.java|   5 +-
 .../functions/casting/AbstractCastRule.java|   5 +
 .../AbstractExpressionCodeGeneratorCastRule.java   |  11 +-
 .../table/planner/functions/casting/CastRule.java  |   2 +
 .../functions/casting/CastRuleProvider.java|   8 +
 .../CodeGeneratedExpressionCastExecutor.java   |   9 +-
 ...ngCastRule.java => StringToBinaryCastRule.java} |  22 +-
 ...gCastRule.java => StringToBooleanCastRule.java} |  24 +-
 ...ringCastRule.java => StringToDateCastRule.java} |  24 +-
 ...lCastRule.java => StringToDecimalCastRule.java} |  23 +-
 .../casting/StringToNumericPrimitiveCastRule.java  |  78 +++
 ...ringCastRule.java => StringToTimeCastRule.java} |  27 ++-
 ...astRule.java => StringToTimestampCastRule.java} |  39 ++--
 .../flink/table/planner/codegen/CodeGenUtils.scala |  26 ++-
 .../table/planner/codegen/GenerateUtils.scala  |  16 --
 .../planner/codegen/calls/BuiltInMethods.scala |  67 +-
 .../table/planner/codegen/calls/IfCallGen.scala|   7 +-
 .../planner/codegen/calls/ScalarOperatorGens.scala | 254 -
 .../planner/functions/casting/CastRulesTest.java   | 211 -
 .../validation/ScalarOperatorsValidationTest.scala |  12 +-
 .../table/data/binary/BinaryStringDataUtil.java| 169 --
 .../flink/table/data/BinaryStringDataTest.java |  41 ++--
 22 files changed, 726 insertions(+), 354 deletions(-)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{NumericToStringCastRule.java
 => StringToBinaryCastRule.java} (67%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{DateToStringCastRule.java
 => StringToBooleanCastRule.java} (68%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{DateToStringCastRule.java
 => StringToDateCastRule.java} (68%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{DecimalToDecimalCastRule.java
 => StringToDecimalCastRule.java} (71%)
 create mode 100644 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/StringToNumericPrimitiveCastRule.java
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{DateToStringCastRule.java
 => StringToTimeCastRule.java} (67%)
 copy 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/{IntervalToStringCastRule.java
 => StringToTimestampCastRule.java} (59%)


[flink] 01/03: [FLINK-24781][table-planner] Added CastRule#canFail and make sure ScalarOperatorGens wraps the cast invocation in a try-catch

2021-11-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 92c02fc747f7794f2c20ac161ad5d7b9c0f2c0f8
Author: slinkydeveloper 
AuthorDate: Mon Nov 15 13:39:51 2021 +0100

[FLINK-24781][table-planner] Added CastRule#canFail and make sure 
ScalarOperatorGens wraps the cast invocation in a try-catch

Signed-off-by: slinkydeveloper 
---
 .../functions/casting/AbstractCastRule.java|  5 ++
 .../AbstractExpressionCodeGeneratorCastRule.java   |  2 +
 .../table/planner/functions/casting/CastRule.java  |  2 +
 .../CodeGeneratedExpressionCastExecutor.java   |  7 ++-
 .../planner/codegen/calls/ScalarOperatorGens.scala | 60 +-
 5 files changed, 63 insertions(+), 13 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractCastRule.java
index c193139..840c8df 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractCastRule.java
@@ -31,4 +31,9 @@ abstract class AbstractCastRule implements CastRule {
 public CastRulePredicate getPredicateDefinition() {
 return predicate;
 }
+
+@Override
+public boolean canFail() {
+return false;
+}
 }
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
index 0b14ddc..aa0a50b 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/AbstractExpressionCodeGeneratorCastRule.java
@@ -25,7 +25,9 @@ import 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils;
 
 import java.util.Collections;
 
+import static 
org.apache.flink.table.planner.codegen.CodeGenUtils.boxedTypeTermForType;
 import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.box;
+import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.cast;
 import static 
org.apache.flink.table.planner.functions.casting.CastRuleUtils.unbox;
 
 /**
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRule.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRule.java
index e93effb..58217e4 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRule.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRule.java
@@ -45,6 +45,8 @@ public interface CastRule {
 CastExecutor create(
Context context, LogicalType inputLogicalType, LogicalType targetLogicalType);
 
+boolean canFail();
+
 /** Casting context. */
 interface Context {
 ZoneId getSessionZoneId();
diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
index c94db8d..f39089a 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CodeGeneratedExpressionCastExecutor.java
@@ -53,7 +53,12 @@ class CodeGeneratedExpressionCastExecutor implements CastExecutor
https://issues.apache.org/jira/browse/FLINK-24385 for more details
+  val castCode =
+s"""
+   | // --- Cast section generated by ${className(codeGeneratorCastRule.getClass)}
+   | try {
+   |   ${castCodeBlock.getCode}
+   |   $resultTerm = ${castCodeBlock.getReturnTerm};
+   |   $nullTerm = ${castCodeBlock.getIsNullTerm};
+   | } catch (${className[Throwable]} e) {
+   |   $resultTerm = ${primitiveDefaultValue(targetType)};
+   |   $nullTerm = true;
+   | }
+   | // --- End cast section
+   """.stripMargin
+
+  return GeneratedExpression(
+resultTerm,
+nullTerm,
+
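The generated try-catch wrapping shown above — fall back to a default value and set the null flag when a cast throws — can be mimicked at the JVM level with a small hypothetical wrapper. This is an illustration of the failure semantics only, not Flink's `CastExecutor` API:

```java
import java.util.function.Function;

public class FailSafeCast {
    // Sketch of the generated cast section above: any exception raised
    // by the inner cast is swallowed and the result becomes null
    // (the codegen sets the null term and a default result term instead).
    static <I, O> Function<I, O> wrap(Function<I, O> cast) {
        return input -> {
            try {
                return cast.apply(input);
            } catch (Throwable t) {
                return null; // corresponds to $nullTerm = true in the codegen
            }
        };
    }

    public static void main(String[] args) {
        Function<String, Integer> toInt = wrap(Integer::parseInt);
        System.out.println(toInt.apply("123")); // 123
        System.out.println(toInt.apply("abc")); // null
    }
}
```

Pairing `CastRule#canFail` with this wrapping lets the planner skip the try-catch entirely for rules that are declared infallible, keeping the generated code tight.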

[flink] branch release-1.14 updated (d3df986 -> 3719c04)

2021-11-21 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git.


from d3df986  [FLINK-24835][table-planner] Fix bug in 
`RelTimeIndicatorConverter` when materialize time attribute fields of regular 
join's inputs
 add 3719c04  [FLINK-24608][table-planner][table-runtime] Insert rowtime 
into StreamRecord for SinkProviders

No new revisions were added by this update.

Summary of changes:
 .../plan/nodes/exec/common/CommonExecSink.java |  22 +-
 .../plan/nodes/exec/stream/StreamExecMatch.java|  13 +-
 .../nodes/exec/common/CommonExecSinkITCase.java| 281 +
 .../operators/match/RowtimeProcessFunction.java|  60 -
 .../StreamRecordTimestampInserter.java}|  44 ++--
 5 files changed, 325 insertions(+), 95 deletions(-)
 create mode 100644 
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/common/CommonExecSinkITCase.java
 delete mode 100644 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/match/RowtimeProcessFunction.java
 copy 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/{sort/LimitOperator.java
 => sink/StreamRecordTimestampInserter.java} (55%)

