[GitHub] [flink-table-store] zjureel commented on pull request #309: [FLINK-29345] Create reusing reader/writer config in orc format

2022-10-07 Thread GitBox


zjureel commented on PR #309:
URL: 
https://github.com/apache/flink-table-store/pull/309#issuecomment-1272232503

   Hi @JingsongLi,
   I created this new PR for 
[FLINK-29345](https://issues.apache.org/jira/browse/FLINK-29345?jql=project%20%3D%20FLINK%20AND%20component%20%3D%20%22Table%20Store%22%20AND%20text%20~%20%22Too%20many%22).
 I found that the `ThreadLocalClassLoaderConfiguration` in `OrcBulkWriterFactory` 
cannot be removed, because it is used to avoid classloader leaks; the details are 
in the docs of `ThreadLocalClassLoaderConfiguration`.
   I have created `writerConf` and `readerConf` instances of 
`org.apache.hadoop.conf.Configuration` and reuse them in `createReaderFactory` 
and `createWriterFactory`.
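   For illustration, a minimal sketch of the reuse idea, assuming the format 
holds both configurations as fields (method names here are illustrative, not the 
exact PR code):

```java
import org.apache.hadoop.conf.Configuration;

public class OrcFileFormat {
    // Created once and reused; every `new Configuration()` tries to load
    // core-site.xml from the classpath, which is wasteful per factory call.
    private final Configuration readerConf = new Configuration();
    private final Configuration writerConf = new Configuration();

    public Configuration readerConfig() {
        // Reused by createReaderFactory instead of building a fresh Configuration.
        return readerConf;
    }

    public Configuration writerConfig() {
        // createWriterFactory still wraps this in ThreadLocalClassLoaderConfiguration
        // to avoid the classloader leak mentioned above.
        return writerConf;
    }
}
```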





[GitHub] [flink-table-store] zjureel opened a new pull request, #309: [FLINK-29345] Create reusing reader/writer config in orc format

2022-10-07 Thread GitBox


zjureel opened a new pull request, #309:
URL: https://github.com/apache/flink-table-store/pull/309

   Currently `OrcFileFormat` creates a new `org.apache.hadoop.conf.Configuration` 
instance in its `createReaderFactory` and `createWriterFactory` methods. Each 
initialization of `org.apache.hadoop.conf.Configuration` tries to load the local 
file core-site.xml, which is wasteful when repeated per call.
   
   This PR initializes the `writerConf` and `readerConf` instances of 
`org.apache.hadoop.conf.Configuration` once and reuses them in 
`createReaderFactory` and `createWriterFactory`.





[GitHub] [flink] lincoln-lil commented on pull request #20983: [FLINK-29458][docs] When two tables have the same field, do not speci…

2022-10-07 Thread GitBox


lincoln-lil commented on PR #20983:
URL: https://github.com/apache/flink/pull/20983#issuecomment-1272230868

   @ZmmBigdata Thanks for fixing this! It would be better to also fix the Chinese 
version, `docs/content.zh/docs/dev/table/sql/queries/joins.md`, even though that 
page hasn't been translated yet.





[jira] [Created] (FLINK-29544) Update Flink doc

2022-10-07 Thread ConradJam (Jira)
ConradJam created FLINK-29544:
-

 Summary: Update Flink doc
 Key: FLINK-29544
 URL: https://issues.apache.org/jira/browse/FLINK-29544
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.17.0
Reporter: ConradJam
 Fix For: 1.17.0


Update the Flink doc to add a description of the configuration field:

[https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jars-jarid-run]

 





[jira] [Created] (FLINK-29543) Jar Run Rest Handler Support Flink Configuration

2022-10-07 Thread ConradJam (Jira)
ConradJam created FLINK-29543:
-

 Summary: Jar Run Rest Handler Support Flink Configuration
 Key: FLINK-29543
 URL: https://issues.apache.org/jira/browse/FLINK-29543
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / REST
Affects Versions: 1.17.0
Reporter: ConradJam
 Fix For: 1.17.0


Make the Flink JobManager REST API's jar run handler support a Flink configuration field.





[jira] [Commented] (FLINK-25540) [JUnit5 Migration] Module: flink-runtime

2022-10-07 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614358#comment-17614358
 ] 

RocMarshal commented on FLINK-25540:


Of course, anytime, [~mapohl]. Thank you~

> [JUnit5 Migration] Module: flink-runtime
> 
>
> Key: FLINK-25540
> URL: https://issues.apache.org/jira/browse/FLINK-25540
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Priority: Minor
>






[jira] [Updated] (FLINK-29292) Change MergeFunction to produce not only KeyValues

2022-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29292:
---
Labels: pull-request-available  (was: )

> Change MergeFunction to produce not only KeyValues
> --
>
> Key: FLINK-29292
> URL: https://issues.apache.org/jira/browse/FLINK-29292
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> {{MergeFunction}} of full compaction needs to produce changelogs instead of a 
> single {{KeyValue}}. We need to modify {{MergeFunction}} into a generic class.





[GitHub] [flink-table-store] tsreaper opened a new pull request, #308: [FLINK-29292] Make result of MergeFunction generic

2022-10-07 Thread GitBox


tsreaper opened a new pull request, #308:
URL: https://github.com/apache/flink-table-store/pull/308

   `MergeFunction` of full compaction needs to produce changelogs instead of a 
single `KeyValue`. We need to modify `MergeFunction` into a generic class.
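   For reference, a rough sketch of the genericization, with illustrative method 
and type names (the interface in this PR may differ):

```java
import javax.annotation.Nullable;

import org.apache.flink.table.store.file.KeyValue;

/**
 * A merge function whose result type is generic instead of fixed to KeyValue,
 * so full compaction can also produce changelog results.
 */
public interface MergeFunction<T> {

    /** Reset the state before merging a new key. */
    void reset();

    /** Feed one KeyValue of the current key, in sequence order. */
    void add(KeyValue keyValue);

    /** The merged result for the current key, or null if nothing should be emitted. */
    @Nullable
    T getResult();
}
```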





[jira] [Assigned] (FLINK-29292) Change MergeFunction to produce not only KeyValues

2022-10-07 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng reassigned FLINK-29292:
---

Assignee: Caizhi Weng

> Change MergeFunction to produce not only KeyValues
> --
>
> Key: FLINK-29292
> URL: https://issues.apache.org/jira/browse/FLINK-29292
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: table-store-0.3.0
>
>
> {{MergeFunction}} of full compaction needs to produce changelogs instead of a 
> single {{KeyValue}}. We need to modify {{MergeFunction}} into a generic class.





[jira] [Closed] (FLINK-29291) Change DataFileWriter into a factory to create writers

2022-10-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-29291.

Resolution: Fixed

master: a881f41368ef48148d5cef795822ac51625811b6

> Change DataFileWriter into a factory to create writers
> --
>
> Key: FLINK-29291
> URL: https://issues.apache.org/jira/browse/FLINK-29291
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Currently {{DataFileWriter}} exposes a {{write}} method for data files and 
> extra files.
> However, as the number of patterns for writing files is increasing (for example, 
> we'd like to write some records into a data file, then write some other 
> records into an extra file when producing changelogs from full compaction), 
> we'll have to keep adding methods to {{DataFileWriter}} if we keep the 
> current implementation.
> We'd like to refactor {{DataFileWriter}} into a factory that creates writers, so 
> that the users of the writers can write however they like.
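A hedged sketch of the factory shape described above; the type and method names 
are illustrative, not the merged API:

{code:java}
import java.io.IOException;

import org.apache.flink.table.store.file.KeyValue;

/** A factory that hands out writers instead of exposing one write(...) per pattern. */
public interface DataFileWriterFactory {

    /** Writer for ordinary data files at the given LSM level. */
    FileWriter createDataFileWriter(int level);

    /** Writer for extra files, e.g. changelog files from full compaction. */
    FileWriter createExtraFileWriter(int level);

    /** Minimal writer contract; callers drive the write loop however they like. */
    interface FileWriter {
        void write(KeyValue keyValue) throws IOException;

        void close() throws IOException;
    }
}
{code}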





[GitHub] [flink-table-store] JingsongLi merged pull request #298: [FLINK-29291] Change DataFileWriter into a factory to create writers

2022-10-07 Thread GitBox


JingsongLi merged PR #298:
URL: https://github.com/apache/flink-table-store/pull/298





[jira] [Closed] (FLINK-29149) Migrate E2E tests to catalog-based and enable E2E tests for Flink1.14

2022-10-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-29149.

Resolution: Fixed

master: 7e2f5850c4ae2315a729b3b1ba007162414ccd89

> Migrate E2E tests to catalog-based and enable E2E tests for Flink1.14
> -
>
> Key: FLINK-29149
> URL: https://issues.apache.org/jira/browse/FLINK-29149
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> For LogStoreE2eTest, we should add a step to manually create the Kafka topic





[GitHub] [flink-table-store] JingsongLi merged pull request #307: [FLINK-29149] Migrate E2E tests to catalog-based and enable E2E tests for Flink 1.14

2022-10-07 Thread GitBox


JingsongLi merged PR #307:
URL: https://github.com/apache/flink-table-store/pull/307





[jira] [Created] (FLINK-29542) Unload.md wrongly writes UNLOAD operation as LOAD operation

2022-10-07 Thread qingbo jiao (Jira)
qingbo jiao created FLINK-29542:
---

 Summary: Unload.md wrongly writes UNLOAD operation as LOAD 
operation
 Key: FLINK-29542
 URL: https://issues.apache.org/jira/browse/FLINK-29542
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.17.0
Reporter: qingbo jiao


The UNLOAD documentation currently reads: "UNLOAD statements can be executed with 
the {{executeSql()}} method of the {{TableEnvironment}}. The {{executeSql()}} 
method returns 'OK' for a successful LOAD operation; otherwise it will throw an 
exception."

Here "LOAD" should be "UNLOAD".
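For reference, a minimal sketch of executing an UNLOAD statement, assuming the 
built-in {{core}} module is loaded:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnloadExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Returns "OK" for a successful UNLOAD operation; otherwise it throws.
        tEnv.executeSql("UNLOAD MODULE core");
    }
}
{code}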





[GitHub] [flink] flinkbot commented on pull request #20983: [FLINK-29458][docs] When two tables have the same field, do not speci…

2022-10-07 Thread GitBox


flinkbot commented on PR #20983:
URL: https://github.com/apache/flink/pull/20983#issuecomment-1272212106

   
   ## CI report:
   
   * 9d25c8e5d1ac05cf3465b2f509fd90c8a6bc35b0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-29458) When two tables have the same field and the table name is not specified, an exception will be thrown: SqlValidatorException: Column 'currency' is ambiguous

2022-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29458:
---
Labels: pull-request-available  (was: )

> When two tables have the same field and the table name is not specified, an 
> exception will be thrown: SqlValidatorException: Column 'currency' is ambiguous
> -
>
> Key: FLINK-29458
> URL: https://issues.apache.org/jira/browse/FLINK-29458
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.14.4
>Reporter: ZuoYan
>Assignee: ZuoYan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-09-28-21-00-01-302.png, 
> image-2022-09-28-21-00-09-054.png, image-2022-09-28-21-00-22-733.png
>
>
> When two tables are joined and both have the same field, an exception will be 
> thrown if the table name is not specified in the SELECT query.
> Exception content:
> Column 'currency' is ambiguous.
> !image-2022-09-28-21-00-22-733.png!
>  
> !image-2022-09-28-21-00-01-302.png!
> !image-2022-09-28-21-00-09-054.png!





[GitHub] [flink] ZmmBigdata opened a new pull request, #20983: [FLINK-29458][docs] When two tables have the same field, do not speci…

2022-10-07 Thread GitBox


ZmmBigdata opened a new pull request, #20983:
URL: https://github.com/apache/flink/pull/20983

   ## What is the purpose of the change
   
   When two tables have the same field and the table name is not specified, an 
exception will be thrown: SqlValidatorException: Column 'currency' is ambiguous.
   
   The page url is 
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/joins/
   
   
   ## Brief change log
   
   *The table name is added before the queried field: `orders.currency` (see the sketch below).*
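   A sketch of the corrected query, assuming the `orders` and `currency_rates` 
tables from the joins page are registered:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class QualifiedJoinExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Both tables have a `currency` column; an unqualified `currency` fails
        // with SqlValidatorException: Column 'currency' is ambiguous.
        tEnv.executeSql(
                "SELECT order_id, price, orders.currency, conversion_rate, order_time "
                        + "FROM orders "
                        + "LEFT JOIN currency_rates FOR SYSTEM_TIME AS OF orders.order_time "
                        + "ON orders.currency = currency_rates.currency");
    }
}
```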
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented?  no
   





[GitHub] [flink] LadyForest commented on a diff in pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-10-07 Thread GitBox


LadyForest commented on code in PR #20652:
URL: https://github.com/apache/flink/pull/20652#discussion_r990580442


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlToOperationConverterTest.java:
##
@@ -1466,6 +1477,404 @@ public void 
testAlterTableCompactOnManagedPartitionedTable() throws Exception {
 parse("alter table tb1 compact", SqlDialect.DEFAULT), 
staticPartitions);
 }
 
+@Test
+public void testAlterTableAddColumn() throws Exception {
+prepareNonManagedTable(false);
+ObjectIdentifier tableIdentifier = ObjectIdentifier.of("cat1", "db1", 
"tb1");
+Schema originalSchema =
+
catalogManager.getTable(tableIdentifier).get().getTable().getUnresolvedSchema();
+
+// test duplicated column name
+assertThatThrownBy(() -> parse("alter table tb1 add a bigint", 
SqlDialect.DEFAULT))
+.isInstanceOf(ValidationException.class)
+.hasMessageContaining("Try to add a column 'a' which already 
exists in the table.");
+
+// test reference nonexistent column name
+assertThatThrownBy(() -> parse("alter table tb1 add x bigint after y", 
SqlDialect.DEFAULT))
+.isInstanceOf(ValidationException.class)
+.hasMessageContaining(
+"Referenced column 'y' by 'AFTER' does not exist in 
the table.");
+
+// test add a single column
+Operation operation =
+parse(
+"alter table tb1 add d double not null comment 'd is 
double not null'",
+SqlDialect.DEFAULT);
+assertAlterTableSchema(
+operation,
+tableIdentifier,
+Schema.newBuilder()
+.fromSchema(originalSchema)
+.column("d", DataTypes.DOUBLE().notNull())
+.withComment("d is double not null")
+.build());
+
+// test add multiple columns with pk
+operation =
+parse(
+"alter table tb1 add (\n"
++ " e as upper(a) first,\n"
++ " f as b*2 after e,\n"
++ " g int metadata from 'mk1' virtual comment 
'comment_metadata' first,\n"
++ " h string primary key not enforced after 
a)",
+SqlDialect.DEFAULT);
+
+List<Schema.UnresolvedColumn> unresolvedColumns =
+new ArrayList<>(originalSchema.getColumns());
+unresolvedColumns.add(
+0,
+new Schema.UnresolvedMetadataColumn(
+"g", DataTypes.INT(), "mk1", true, 
"comment_metadata"));
+unresolvedColumns.add(
+1, new Schema.UnresolvedComputedColumn("e", new 
SqlCallExpression("UPPER(`a`)")));
+unresolvedColumns.add(
+2, new Schema.UnresolvedComputedColumn("f", new 
SqlCallExpression("`b` * 2")));
+unresolvedColumns.add(
+4, new Schema.UnresolvedPhysicalColumn("h", 
DataTypes.STRING().notNull()));
+assertAlterTableSchema(
+operation,
+tableIdentifier,
+
Schema.newBuilder().fromColumns(unresolvedColumns).primaryKey("h").build());
+
+// test add nested type
+operation =
+parse(
+"alter table tb1 add (\n"
++ " r row not null> not null comment 'add composite type',\n"
++ " m map,\n"
++ " g as r.r1 * 2 after r,\n"
++ " ts as to_timestamp(r.r2) comment 'rowtime' 
after g,\n"
++ " na as r.r3 after ts)",
+SqlDialect.DEFAULT);
+assertAlterTableSchema(
+operation,
+tableIdentifier,
+Schema.newBuilder()
+.fromSchema(originalSchema)
+.column(
+"r",
+DataTypes.ROW(
+DataTypes.FIELD("r1", 
DataTypes.BIGINT()),
+DataTypes.FIELD("r2", 
DataTypes.STRING()),
+DataTypes.FIELD(
+"r3",
+
DataTypes.ARRAY(DataTypes.DOUBLE())
+.notNull()))
+.notNull())
+.withComment("add composite type")
+.columnByExpression("g", "`r`.`r1` * 2")
+.columnByExpression("ts", "TO_TIMESTAMP(`r`.`r2`)")
+ 

[jira] [Commented] (FLINK-29526) Java doc mistake in SequenceNumberRange#contains()

2022-10-07 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614333#comment-17614333
 ] 

Yanfei Lei commented on FLINK-29526:


Thanks for reporting this, you're right. From the implementation in 
{{GenericSequenceNumberRange#contains}}, the range should be left-bounded.

> Java doc mistake in SequenceNumberRange#contains()
> --
>
> Key: FLINK-29526
> URL: https://issues.apache.org/jira/browse/FLINK-29526
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Priority: Not a Priority
>  Labels: pull-request-available
> Attachments: image-2022-10-06-10-50-16-927.png
>
>
> !image-2022-10-06-10-50-16-927.png|width=554,height=106!
> Hi [~masteryhx], it seems to be a typo; I have submitted a PR for it.





[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #305: [FLINK-28256] Move the write and prepareCommit logic of AbstractTableWrite to FileStoreWrite

2022-10-07 Thread GitBox


JingsongLi commented on code in PR #305:
URL: https://github.com/apache/flink-table-store/pull/305#discussion_r990578854


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/operation/FileStoreWrite.java:
##
@@ -52,4 +56,35 @@ RecordWriter createEmptyWriter(
  */
 Callable createCompactWriter(
BinaryRowData partition, int bucket, @Nullable List<DataFileMeta> compactFiles);
+
+/**
+ * If overwrite is true, the writer will overwrite the store, otherwise it 
won't.
+ *
+ * @param overwrite the overwrite flag
+ */
+void withOverwrite(boolean overwrite);
+
+/**
+ * Write the record to the store.
+ *
+ * @param record the given record
+ * @throws Exception the thrown exception when writing the record
+ */
+void write(SinkRecord record) throws Exception;

Review Comment:
   `SinkRecord` is a class in `table`, not in the file store.
   Can we just introduce a `write(BinaryRowData partition, int bucket, T t)` 
method here instead?
   `WriteFunction` can be held in `TableWriteImpl` instead of 
`AbstractFileStoreWrite`.
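   A sketch of the suggested shape (the signature follows the comment above; 
this is not the final API):

```java
import org.apache.flink.table.data.binary.BinaryRowData;

/** File-store-level write API that stays independent of the table layer's SinkRecord. */
public interface FileStoreWrite<T> {

    /** Write one payload of type T to the given partition and bucket. */
    void write(BinaryRowData partition, int bucket, T data) throws Exception;
}
```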






[GitHub] [flink] fredia commented on pull request #20921: [FLINK-29307][runtime/checkpoint] Fix timeInQueue in CheckpointRequestDecider

2022-10-07 Thread GitBox


fredia commented on PR #20921:
URL: https://github.com/apache/flink/pull/20921#issuecomment-1272207658

   @flinkbot run azure





[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #298: [FLINK-29291] Change DataFileWriter into a factory to create writers

2022-10-07 Thread GitBox


JingsongLi commented on code in PR #298:
URL: https://github.com/apache/flink-table-store/pull/298#discussion_r990578391


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/mergetree/MergeTreeWriter.java:
##
@@ -163,7 +163,11 @@ public void flushMemory() throws Exception {
 // adding one record then remove one record, but after merging 
this record will not
 // appear in lsm file. This is OK because we can also skip 
this changelog.
 DataFileMeta fileMeta = writer.result();
-if (fileMeta != null) {
+if (fileMeta == null) {
+for (String extraFile : extraFiles) {

Review Comment:
   Can we add a test for this? Maybe in 
`ChangelogWithKeyFileStoreTableTest.testStreamingChangelog`






[jira] [Updated] (FLINK-29149) Migrate E2E tests to catalog-based and enable E2E tests for Flink1.14

2022-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29149:
---
Labels: pull-request-available  (was: )

> Migrate E2E tests to catalog-based and enable E2E tests for Flink1.14
> -
>
> Key: FLINK-29149
> URL: https://issues.apache.org/jira/browse/FLINK-29149
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> For LogStoreE2eTest, we should add a step to manually create the Kafka topic





[GitHub] [flink-table-store] LadyForest commented on a diff in pull request #307: [FLINK-29149] Migrate E2E tests to catalog-based and enable E2E tests for Flink 1.14

2022-10-07 Thread GitBox


LadyForest commented on code in PR #307:
URL: https://github.com/apache/flink-table-store/pull/307#discussion_r990578269


##
flink-table-store-format/src/main/java/org/apache/flink/table/store/format/parquet/ParquetInputFormatFactory.java:
##
@@ -38,34 +40,35 @@ public static BulkFormat create(
 RowType producedRowType,
 TypeInformation<RowData> producedTypeInfo,
 boolean isUtcTimestamp) {
-Class formatClass = null;
+Class formatClass;

Review Comment:
   > Need to rebase the master once 
[FLINK-29506](https://issues.apache.org/jira/browse/FLINK-29506) is merged.
   
   done






[jira] [Commented] (FLINK-23580) Cannot handle such jdbc url

2022-10-07 Thread ArchieWan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614331#comment-17614331
 ] 

ArchieWan commented on FLINK-23580:
---

I ran into the same problem. Has anyone solved it yet?

> Cannot handle such jdbc url
> ---
>
> Key: FLINK-23580
> URL: https://issues.apache.org/jira/browse/FLINK-23580
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.12.0
>Reporter: chenpeng
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Attachments: image-2021-08-02-16-02-21-897.png
>
>
>  
> Caused by: java.lang.IllegalStateException: Cannot handle such jdbc url: 
> jdbc:clickhouse://xx:8123/dict
> {code:java}
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".SLF4J: Failed 
> to load class "org.slf4j.impl.StaticLoggerBinder".SLF4J: Defaulting to 
> no-operation (NOP) logger implementationSLF4J: See 
> http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
> org.apache.flink.table.api.ValidationException: Unable to create a source for 
> reading table 'default_catalog.default_database.sink_table'.
> Table options are:
> 'connector'='jdbc''driver'='ru.yandex.clickhouse.ClickHouseDriver''password'='''table-name'='tbl3_dict''url'='jdbc:clickhouse://xxx:8123/dict''username'='default'
>  at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:125)
>  at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:265)
>  at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:100)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
>  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
>  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:165)
>  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:157)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:823)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:795)
>  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:639)
>  at FlinkStreamSql.test7(FlinkStreamSql.java:212) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> 

[jira] [Updated] (FLINK-29541) [JUnit5 Migration] Module: flink-table-planner

2022-10-07 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-29541:
---
Component/s: Tests

> [JUnit5 Migration] Module: flink-table-planner
> --
>
> Key: FLINK-29541
> URL: https://issues.apache.org/jira/browse/FLINK-29541
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner, Tests
>Reporter: Lijie Wang
>Priority: Major
>






[jira] [Updated] (FLINK-27940) [JUnit5 Migration] Module: flink-connector-jdbc

2022-10-07 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-27940:
---
Component/s: (was: Tests)

> [JUnit5 Migration] Module: flink-connector-jdbc
> ---
>
> Key: FLINK-27940
> URL: https://issues.apache.org/jira/browse/FLINK-27940
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Lijie Wang
>Priority: Minor
>  Labels: starter
>






[jira] [Created] (FLINK-29541) [JUnit5 Migration] Module: flink-table-planner

2022-10-07 Thread Lijie Wang (Jira)
Lijie Wang created FLINK-29541:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner
 Key: FLINK-29541
 URL: https://issues.apache.org/jira/browse/FLINK-29541
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Lijie Wang








[jira] [Updated] (FLINK-27940) [JUnit5 Migration] Module: flink-connector-jdbc

2022-10-07 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang updated FLINK-27940:
---
Component/s: Tests

> [JUnit5 Migration] Module: flink-connector-jdbc
> ---
>
> Key: FLINK-27940
> URL: https://issues.apache.org/jira/browse/FLINK-27940
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Lijie Wang
>Priority: Minor
>  Labels: starter
>






[jira] [Commented] (FLINK-27940) [JUnit5 Migration] Module: flink-connector-jdbc

2022-10-07 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614327#comment-17614327
 ] 

Lijie Wang commented on FLINK-27940:


[~mapohl] It was blocked by the migration of {{JdbcTablePlanTest}}, because it 
extends {{TableTestBase}} from the {{flink-table-planner}} module. So 
{{TableTestBase}} needs to be migrated first; then I'll continue this work.

> [JUnit5 Migration] Module: flink-connector-jdbc
> ---
>
> Key: FLINK-27940
> URL: https://issues.apache.org/jira/browse/FLINK-27940
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Lijie Wang
>Priority: Minor
>  Labels: starter
>






[jira] [Closed] (FLINK-29406) Expose Finish Method For TableFunction

2022-10-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-29406.

Resolution: Fixed

master: 1f2001cfd28dbf6fdecdb2645052cdd61c84

> Expose Finish Method For TableFunction
> --
>
> Key: FLINK-29406
> URL: https://issues.apache.org/jira/browse/FLINK-29406
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.5, 1.16.0, 1.15.2
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> FLIP-260: Expose Finish Method For TableFunction
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-260%3A+Expose+Finish+Method+For+TableFunction





[GitHub] [flink] JingsongLi merged pull request #20899: [FLINK-29406][table] Expose finish method for TableFunction

2022-10-07 Thread GitBox


JingsongLi merged PR #20899:
URL: https://github.com/apache/flink/pull/20899





[jira] [Updated] (FLINK-29506) ParquetInputFormatFactory fails to create format on Flink 1.14

2022-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29506:
---
Labels: pull-request-available  (was: )

> ParquetInputFormatFactory fails to create format on Flink 1.14
> --
>
> Key: FLINK-29506
> URL: https://issues.apache.org/jira/browse/FLINK-29506
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
> Attachments: image-2022-10-05-15-19-25-641.png, 
> image-2022-10-05-15-20-19-422.png
>
>
> The current way of instantiating the format has issues. See
> [https://github.com/apache/flink-table-store/blob/master/flink-table-store-format/src/main/java/org/apache/flink/table/store/format/parquet/ParquetInputFormatFactory.java#L36]
> ParquetColumnarRowInputFormat#createPartitionedFormat differs only in its 
> arguments between Flink 1.14 and Flink 1.15, so the current code directly 
> throws an IllegalArgumentException when using Flink 1.14.
> !image-2022-10-05-15-19-25-641.png|width=617,height=375!
>  
> !image-2022-10-05-15-20-19-422.png|width=617,height=390!
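One version-tolerant direction, sketched here as an illustration rather than the 
merged fix, is to resolve the overload by name via reflection instead of 
hard-coding a single parameter list:

{code:java}
import java.lang.reflect.Method;

public final class PartitionedFormatMethodLookup {

    /** Find createPartitionedFormat whatever its parameter list is on this classpath. */
    public static Method find(Class<?> formatClass) {
        for (Method method : formatClass.getMethods()) {
            if (method.getName().equals("createPartitionedFormat")) {
                return method; // invoke with the argument list this Flink version expects
            }
        }
        throw new IllegalStateException(
                "createPartitionedFormat not found on " + formatClass.getName());
    }
}
{code}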





[jira] [Closed] (FLINK-29506) ParquetInputFormatFactory fails to create format on Flink 1.14

2022-10-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-29506.

  Assignee: Jane Chan
Resolution: Fixed

master: 09827774e2f435de3133ced33c61dcf1e6ceae0a

> ParquetInputFormatFactory fails to create format on Flink 1.14
> --
>
> Key: FLINK-29506
> URL: https://issues.apache.org/jira/browse/FLINK-29506
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
> Attachments: image-2022-10-05-15-19-25-641.png, 
> image-2022-10-05-15-20-19-422.png
>
>
> The current way of instantiating the format has issues. See
> [https://github.com/apache/flink-table-store/blob/master/flink-table-store-format/src/main/java/org/apache/flink/table/store/format/parquet/ParquetInputFormatFactory.java#L36]
> ParquetColumnarRowInputFormat#createPartitionedFormat differs only in its 
> arguments between Flink 1.14 and Flink 1.15, so the current code directly 
> throws an IllegalArgumentException when using Flink 1.14.
> !image-2022-10-05-15-19-25-641.png|width=617,height=375!
>  
> !image-2022-10-05-15-20-19-422.png|width=617,height=390!





[GitHub] [flink-table-store] JingsongLi merged pull request #306: [FLINK-29506] ParquetInputFormatFactory fails to create format on Flink 1.14

2022-10-07 Thread GitBox


JingsongLi merged PR #306:
URL: https://github.com/apache/flink-table-store/pull/306





[jira] [Commented] (FLINK-27344) FLIP-222: Support full job lifecycle statements in SQL client

2022-10-07 Thread Paul Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614320#comment-17614320
 ] 

Paul Lin commented on FLINK-27344:
--

[~ekoblov] Yes, we've roughly considered the stored-procedures approach, but 
given that Flink currently doesn't support stored procedures and it's not very 
straightforward from the users' point of view, we lean toward the new syntax 
approach (many databases also provide similar SQL statements to control 
jobs/tasks, e.g. Hive/CockroachDB/KSQL). WDYT?

> FLIP-222: Support full job lifecycle statements in SQL client
> -
>
> Key: FLINK-27344
> URL: https://issues.apache.org/jira/browse/FLINK-27344
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Reporter: Paul Lin
>Assignee: Paul Lin
>Priority: Major
>
> With the efforts in FLIP-24 and FLIP-91, the Flink SQL client supports submitting 
> SQL jobs but lacks further support for their lifecycles afterward, which is 
> crucial for streaming use cases. That means Flink SQL client users have to 
> turn to other clients (e.g. CLI) or APIs (e.g. REST API) to manage the jobs, 
> like triggering savepoints or canceling queries, which makes the user 
> experience of the SQL client incomplete. 
> Therefore, this proposal aims to complete the capability of SQL client by 
> adding job lifecycle statements. With these statements, users could manage 
> jobs and savepoints through pure SQL in SQL client.





[GitHub] [flink] snuyanzin commented on pull request #19780: [FLINK-27727][tests] Migrate TypeSerializerUpgradeTestBase to JUnit5

2022-10-07 Thread GitBox


snuyanzin commented on PR #19780:
URL: https://github.com/apache/flink/pull/19780#issuecomment-1272181209

   @flinkbot run azure





[jira] [Updated] (FLINK-29237) RexSimplify can not be removed after update to calcite 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29237:

Description: 

It seems some work needs to be done to make this happen.
Currently, removing RexSimplify from the Flink repo leads to failures in several 
tests, such as
{{IntervalJoinTest#testFallbackToRegularJoin}}
{{CalcITCase.testOrWithIsNullInIf}}
{{CalcITCase.testOrWithIsNullPredicate}}
example of failure
{noformat}
Sep 07 11:25:08 java.lang.AssertionError: 
Sep 07 11:25:08 
Sep 07 11:25:08 Results do not match for query:
Sep 07 11:25:08   
Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
Sep 07 11:25:08 
Sep 07 11:25:08 
Sep 07 11:25:08 Results
Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 
Sep 07 11:25:08 Plan:
Sep 07 11:25:08   == Abstract Syntax Tree ==
Sep 07 11:25:08 LogicalProject(inputs=[0..2])
Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
NULL($0))])
Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
default_database, NullTable3]])
Sep 07 11:25:08 
Sep 07 11:25:08 == Optimized Logical Plan ==
Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
TRUE])])
Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, default_database, 
NullTable3]], fields=[a, b, c])
Sep 07 11:25:08 
Sep 07 11:25:08
Sep 07 11:25:08 at org.junit.Assert.fail(Assert.java:89)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
Sep 07 11:25:08 at scala.Option.foreach(Option.scala:257)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Sep 07 11:25:08 

{noformat}

  was:
{noformat}
Sep 07 11:25:08 java.lang.AssertionError: 
Sep 07 11:25:08 
Sep 07 11:25:08 Results do not match for query:
Sep 07 11:25:08   
Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
Sep 07 11:25:08 
Sep 07 11:25:08 
Sep 07 11:25:08 Results
Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 
Sep 07 11:25:08 Plan:
Sep 07 11:25:08   == Abstract Syntax Tree ==
Sep 07 11:25:08 LogicalProject(inputs=[0..2])
Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
NULL($0))])
Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
default_database, NullTable3]])
Sep 07 11:25:08 
Sep 07 11:25:08 == Optimized Logical Plan ==
Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
TRUE])])
Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, default_database, 
NullTable3]], fields=[a, b, c])
Sep 07 11:25:08 
Sep 07 11:25:08
Sep 07 11:25:08 at org.junit.Assert.fail(Assert.java:89)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
Sep 07 11:25:08 at scala.Option.foreach(Option.scala:257)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Sep 07 11:25:08 

{noformat}


> RexSimplify can not be removed after update to calcite 1.27
> ---
>
> Key: FLINK-29237
> URL: https://issues.apache.org/jira/browse/FLINK-29237
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>

[jira] [Updated] (FLINK-29237) RexSimplify can not be removed after update to calcite 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29237:

Description: 
It seems some work needs to be done to make this happen.
Currently, removing RexSimplify from the Flink repo leads to failures in several 
tests, such as
{{IntervalJoinTest#testFallbackToRegularJoin}}
{{CalcITCase#testOrWithIsNullInIf}}
{{CalcITCase#testOrWithIsNullPredicate}}
example of failure
{noformat}
Sep 07 11:25:08 java.lang.AssertionError: 
Sep 07 11:25:08 
Sep 07 11:25:08 Results do not match for query:
Sep 07 11:25:08   
Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
Sep 07 11:25:08 
Sep 07 11:25:08 
Sep 07 11:25:08 Results
Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 
Sep 07 11:25:08 Plan:
Sep 07 11:25:08   == Abstract Syntax Tree ==
Sep 07 11:25:08 LogicalProject(inputs=[0..2])
Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
NULL($0))])
Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
default_database, NullTable3]])
Sep 07 11:25:08 
Sep 07 11:25:08 == Optimized Logical Plan ==
Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
TRUE])])
Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, default_database, 
NullTable3]], fields=[a, b, c])
Sep 07 11:25:08 
Sep 07 11:25:08
Sep 07 11:25:08 at org.junit.Assert.fail(Assert.java:89)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
Sep 07 11:25:08 at scala.Option.foreach(Option.scala:257)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Sep 07 11:25:08 

{noformat}

  was:

It seems some work needs to be done to make this happen.
Currently, removing RexSimplify from the Flink repo leads to failures in several 
tests, such as
{{IntervalJoinTest#testFallbackToRegularJoin}}
{{CalcITCase.testOrWithIsNullInIf}}
{{CalcITCase.testOrWithIsNullPredicate}}
example of failure
{noformat}
Sep 07 11:25:08 java.lang.AssertionError: 
Sep 07 11:25:08 
Sep 07 11:25:08 Results do not match for query:
Sep 07 11:25:08   
Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
Sep 07 11:25:08 
Sep 07 11:25:08 
Sep 07 11:25:08 Results
Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 
Sep 07 11:25:08 Plan:
Sep 07 11:25:08   == Abstract Syntax Tree ==
Sep 07 11:25:08 LogicalProject(inputs=[0..2])
Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
NULL($0))])
Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
default_database, NullTable3]])
Sep 07 11:25:08 
Sep 07 11:25:08 == Optimized Logical Plan ==
Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
TRUE])])
Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, default_database, 
NullTable3]], fields=[a, b, c])
Sep 07 11:25:08 
Sep 07 11:25:08
Sep 07 11:25:08 at org.junit.Assert.fail(Assert.java:89)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
Sep 07 11:25:08 at scala.Option.foreach(Option.scala:257)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Sep 07 11:25:08 

{noformat}


> RexSimplify can not be removed after update to calcite 1.27
> 

[jira] [Updated] (FLINK-29237) RexSimplify can not be removed after update to calcite 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29237:

Summary: RexSimplify can not be removed after update to calcite 1.27  (was: 
CalcITCase.testOrWithIsNullPredicate fails after update to calcite 1.27)

> RexSimplify can not be removed after update to calcite 1.27
> ---
>
> Key: FLINK-29237
> URL: https://issues.apache.org/jira/browse/FLINK-29237
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> {noformat}
> Sep 07 11:25:08 java.lang.AssertionError: 
> Sep 07 11:25:08 
> Sep 07 11:25:08 Results do not match for query:
> Sep 07 11:25:08   
> Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
> Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
> Sep 07 11:25:08 
> Sep 07 11:25:08 
> Sep 07 11:25:08 Results
> Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
> Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
> Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
> Sep 07 11:25:08 !+I[null, 999, NullTuple]   
> Sep 07 11:25:08 !+I[null, 999, NullTuple]   
> Sep 07 11:25:08 
> Sep 07 11:25:08 Plan:
> Sep 07 11:25:08   == Abstract Syntax Tree ==
> Sep 07 11:25:08 LogicalProject(inputs=[0..2])
> Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
> NULL($0))])
> Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
> default_database, NullTable3]])
> Sep 07 11:25:08 
> Sep 07 11:25:08 == Optimized Logical Plan ==
> Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
> TRUE])])
> Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, 
> default_database, NullTable3]], fields=[a, b, c])
> Sep 07 11:25:08 
> Sep 07 11:25:08
> Sep 07 11:25:08   at org.junit.Assert.fail(Assert.java:89)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
> Sep 07 11:25:08   at scala.Option.foreach(Option.scala:257)
> Sep 07 11:25:08   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Sep 07 11:25:08   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Sep 07 11:25:08 
> {noformat}





[GitHub] [flink] flinkbot commented on pull request #20982: [FLINK-29539] [Deployment/Kubernetes] dnsPolicy in FlinkPod is not overridable

2022-10-07 Thread GitBox


flinkbot commented on PR #20982:
URL: https://github.com/apache/flink/pull/20982#issuecomment-1272145985

   
   ## CI report:
   
   * 87c713e393e7bc5f62f20326182e394d0543dfb9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-29539) dnsPolicy in FlinkPod is not overridable

2022-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29539:
---
Labels: pull-request-available  (was: )

> dnsPolicy in FlinkPod is not overridable 
> -
>
> Key: FLINK-29539
> URL: https://issues.apache.org/jira/browse/FLINK-29539
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Carlos Castro
>Priority: Major
>  Labels: pull-request-available
>
> With PR [https://github.com/apache/flink/pull/18119] it stopped being possible 
> to override the dnsPolicy in the FlinkPod spec.
> To fix it, the code should first check whether the dnsPolicy is null before 
> applying the default.
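A minimal sketch of that null check, using the fabric8 pod model; the constant 
values follow Kubernetes' dnsPolicy names and the constant names mirror the PR's 
change log, but this is an illustration, not the actual diff:

{code:java}
import io.fabric8.kubernetes.api.model.Pod;

public final class DnsPolicyDefaults {

    private static final String DNS_POLICY_DEFAULT = "ClusterFirst";
    private static final String DNS_POLICY_HOSTNETWORK = "ClusterFirstWithHostNet";

    /** Fall back to a default only when the pod template did not set a dnsPolicy. */
    public static String resolve(Pod podTemplate, boolean hostNetworkEnabled) {
        String dnsPolicy = podTemplate.getSpec().getDnsPolicy();
        if (dnsPolicy != null) {
            return dnsPolicy; // keep the user-provided value
        }
        return hostNetworkEnabled ? DNS_POLICY_HOSTNETWORK : DNS_POLICY_DEFAULT;
    }
}
{code}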





[jira] [Created] (FLINK-29540) SubQueryAntiJoinTest started to fail after Calcite 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-29540:
---

 Summary: SubQueryAntiJoinTest started to fail after Calcite 1.27
 Key: FLINK-29540
 URL: https://issues.apache.org/jira/browse/FLINK-29540
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Sergey Nuyanzin


Probably the reason is https://issues.apache.org/jira/browse/CALCITE-4560

Some tests are failing with:
{noformat}
java.lang.NullPointerException
at 
org.apache.calcite.sql2rel.RelDecorrelator.createValueGenerator(RelDecorrelator.java:858)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateInputWithValueGenerator(RelDecorrelator.java:1070)
at 
org.apache.calcite.sql2rel.RelDecorrelator.maybeAddValueGenerator(RelDecorrelator.java:987)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1199)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:771)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:760)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1236)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:771)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:760)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1186)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:1165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:771)
at 
org.apache.calcite.sql2rel.RelDecorrelator.decorrelateRel(RelDecorrelator.java:760)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:531)
at 
org.apache.calcite.sql2rel.RelDecorrelator.getInvoke(RelDecorrelator.java:729)
at 
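
For orientation, here is a hedged sketch of an anti-join (NOT EXISTS) query of the shape such tests cover; decorrelation of the subquery goes through RelDecorrelator, the class failing in the trace above. This is not the actual SubQueryAntiJoinTest case, and all table and column names are assumptions.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hedged sketch only: table/column names are made up for illustration.
public class AntiJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        tEnv.executeSql(
                "CREATE TEMPORARY VIEW l AS SELECT * FROM (VALUES (1), (2)) AS t(a)");
        tEnv.executeSql(
                "CREATE TEMPORARY VIEW r AS SELECT * FROM (VALUES (2), (3)) AS t(c)");
        // The correlated NOT EXISTS below is planned as an anti join and
        // must be decorrelated by Calcite's RelDecorrelator.
        tEnv.executeSql(
                        "SELECT a FROM l WHERE NOT EXISTS "
                                + "(SELECT * FROM r WHERE l.a = r.c)")
                .print();
    }
}
{code}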

[GitHub] [flink] carloscastrojumo opened a new pull request, #20982: [FLINK-29539] [Deployment/Kubernetes] dnsPolicy in FlinkPod is not overridable

2022-10-07 Thread GitBox


carloscastrojumo opened a new pull request, #20982:
URL: https://github.com/apache/flink/pull/20982

   ## What is the purpose of the change
   
   With this PR https://github.com/apache/flink/pull/18119, the `dnsPolicy` in the 
FlinkPod could no longer be overridden with values other than the default 
ones. This change adds back the possibility to override it. 
   
   ## Brief change log
   
   - Check if a `dnsPolicy` value is being passed in the PodTemplate.
   - Fix typos in `DNS_POLICY_DEFAULT` and `DNS_POLICY_HOSTNETWORK`.
   
   ## Verifying this change
   
 - Added test that validates dnsPolicy before and after override.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
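   
   For illustration only, a minimal sketch of the null check this PR describes; the helper and the constant values are assumptions based on the description above, not the actual patch:
   
   ```java
   import io.fabric8.kubernetes.api.model.Pod;
   import io.fabric8.kubernetes.api.model.PodBuilder;
   
   /** Hedged sketch: keep a user-provided dnsPolicy, defaulting only when absent. */
   final class DnsPolicyDefaultingSketch {
   
       // Standard Kubernetes dnsPolicy values; constant names mirror the PR text.
       static final String DNS_POLICY_DEFAULT = "ClusterFirst";
       static final String DNS_POLICY_HOSTNETWORK = "ClusterFirstWithHostNet";
   
       static Pod applyDnsPolicy(Pod podTemplate, boolean hostNetworkEnabled) {
           String dnsPolicy =
                   podTemplate.getSpec() == null
                           ? null
                           : podTemplate.getSpec().getDnsPolicy();
           if (dnsPolicy == null || dnsPolicy.isEmpty()) {
               // Only fall back to a default when the template did not set one.
               dnsPolicy = hostNetworkEnabled ? DNS_POLICY_HOSTNETWORK : DNS_POLICY_DEFAULT;
           }
           return new PodBuilder(podTemplate)
                   .editOrNewSpec()
                   .withDnsPolicy(dnsPolicy)
                   .endSpec()
                   .build();
       }
   }
   ```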
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29539) dnsPolicy in FlinkPod is not overridable

2022-10-07 Thread Carlos Castro (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlos Castro updated FLINK-29539:
--
Component/s: Deployment / Kubernetes

> dnsPolicy in FlinkPod is not overridable 
> -
>
> Key: FLINK-29539
> URL: https://issues.apache.org/jira/browse/FLINK-29539
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Reporter: Carlos Castro
>Priority: Major
>
> With this PR [https://github.com/apache/flink/pull/18119] it stopped being 
> possible to override the dnsPolicy in the FlinkPod spec.
> To fix it, the code should first check whether the dnsPolicy is null before 
> applying the default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29539) dnsPolicy in FlinkPod is not overridable

2022-10-07 Thread Carlos Castro (Jira)
Carlos Castro created FLINK-29539:
-

 Summary: dnsPolicy in FlinkPod is not overridable 
 Key: FLINK-29539
 URL: https://issues.apache.org/jira/browse/FLINK-29539
 Project: Flink
  Issue Type: Bug
Reporter: Carlos Castro


With this PR [https://github.com/apache/flink/pull/18119] it stopped being 
possible to override the dnsPolicy in the FlinkPod spec.

To fix it, the code should first check whether the dnsPolicy is null before 
applying the default.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29203) Support optimization of Union(all, Values, Values) to Values

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Description: 
This optimization was introduced at 
https://issues.apache.org/jira/browse/CALCITE-4383 
There are several issues with that:
1. UNION ALL now tries to cast to the least restrictive type [1]; as a result, 
SetOperatorsITCase#testUnionAllWithCommonType fails as shown below.
2. JoinITCase#testUncorrelatedScalar fails as described in 
https://issues.apache.org/jira/browse/FLINK-29204.
3. org.apache.calcite.plan.hep.HepPlanner#findBestExp could be empty for 
LogicalValues after such an optimization.
{noformat}
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. Expression[GeneratedExpression(((int) 
12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
[DECIMAL(13, 3) NOT NULL]

    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
    at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
    at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
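
For context, a minimal sketch that reproduces this class of failure; the literals and aliases are assumptions, not the exact testUnionAllWithCommonType data:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnionAllLeastRestrictiveTypeRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // After CALCITE-4383 both branches are widened to the least
        // restrictive common type (here DECIMAL(13, 3)), while code
        // generation can still see the INT literal, which matches the
        // CodeGenException in the stack trace above.
        tEnv.executeSql(
                        "SELECT a FROM (VALUES (12)) AS t1(a) "
                                + "UNION ALL "
                                + "SELECT a FROM (VALUES (CAST(1.234 AS DECIMAL(13, 3)))) AS t2(a)")
                .print();
    }
}
{code}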
    

[jira] [Closed] (FLINK-29204) JoinITCase#testUncorrelatedScalar fails with Cannot generate a valid execution plan for the given query after calcite update 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-29204.
---
Release Note: Seems to be a duplicate of 
https://issues.apache.org/jira/browse/FLINK-29203
  Resolution: Duplicate

> JoinITCase#testUncorrelatedScalar fails with Cannot generate a valid 
> execution plan for the given query after calcite update 1.27
> -
>
> Key: FLINK-29204
> URL: https://issues.apache.org/jira/browse/FLINK-29204
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> {noformat}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> FlinkLogicalSink(table=[*anonymous_collect$69*], fields=[b])
> +- FlinkLogicalCalc(select=[EXPR$0 AS b])
>    +- FlinkLogicalJoin(condition=[true], joinType=[left])
>       :- FlinkLogicalValues(type=[RecordType(INTEGER ZERO)], tuples=[[\{ 0 
> }]])
>       +- FlinkLogicalValues(type=[RecordType(INTEGER EXPR$0)], tuples=[[\{ 1 
> }]])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
>     at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
>     at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:55)
>     at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:93)
>     at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
>     at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:45)
>     at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:45)
>     at scala.collection.immutable.List.foreach(List.scala:388)
>     at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
>     at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:315)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:195)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
>     at 
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
>     at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
>     at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:144)
>     at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:108)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.join.JoinITCase.testUncorrelatedScalar(JoinITCase.scala:1061)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> 
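
For reference, a hedged sketch of an uncorrelated scalar subquery producing the Values-left-join-Values plan quoted above; this shape is assumed to match the failing JoinITCase query, not copied from it:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UncorrelatedScalarSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // An uncorrelated scalar subquery is planned as a left join between
        // a single-row Values (the outer query) and the subquery's Values.
        tEnv.executeSql("SELECT (SELECT 1) AS b").print();
    }
}
{code}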

[jira] (FLINK-29203) Support optimization of Union(all, Values, Values) to Values

2022-10-07 Thread Sergey Nuyanzin (Jira)


[ https://issues.apache.org/jira/browse/FLINK-29203 ]


Sergey Nuyanzin deleted comment on FLINK-29203:
-

was (Author: sergey nuyanzin):
Probably was solved by 
https://github.com/apache/flink/commit/91a6a8215fe3b0c68a47d8e362f8737ec37d1709

> Support optimization of Union(all, Values, Values) to Values 
> -
>
> Key: FLINK-29203
> URL: https://issues.apache.org/jira/browse/FLINK-29203
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> This optimization was introduced at 
> https://issues.apache.org/jira/browse/CALCITE-4383 
> There are several issues with that:
> 1. UNION ALL now tries to cast to the least restrictive type [1]; as a 
> result, SetOperatorsITCase#testUnionAllWithCommonType fails as shown below.
> 2. JoinITCase#testUncorrelatedScalar fails as described in 
> https://issues.apache.org/jira/browse/FLINK-29204.
> 3. org.apache.calcite.plan.hep.HepPlanner#findBestExp could be empty for 
> LogicalValues after such an optimization.
> {noformat}
> org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types 
> of expression and result type. Expression[GeneratedExpression(((int) 
> 12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
> [DECIMAL(13, 3) NOT NULL]
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
>     at 
> org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> 

[jira] [Updated] (FLINK-29203) Support optimization of Union(all, Values, Values) to Values

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Description: 
This optimization was introduced at 
https://issues.apache.org/jira/browse/CALCITE-4383 
now UNION ALL tries to cast to the least restrictive type [1];
as a result, SetOperatorsITCase#testUnionAllWithCommonType fails as follows
{noformat}
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. Expression[GeneratedExpression(((int) 
12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
[DECIMAL(13, 3) NOT NULL]

    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
    at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
    at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
    at 
org.apache.flink.table.planner.runtime.batch.table.SetOperatorsITCase.testUnionAllWithCommonType(SetOperatorsITCase.scala:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 

[jira] [Updated] (FLINK-29203) Support optimization of Union(all, Values, Values) to Values

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Summary: Support optimization of Union(all, Values, Values) to Values   
(was: SetOperatorsITCase#testUnionAllWithCommonType is failing after Calcite 
update to 1.27)

> Support optimization of Union(all, Values, Values) to Values 
> -
>
> Key: FLINK-29203
> URL: https://issues.apache.org/jira/browse/FLINK-29203
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> As part of https://issues.apache.org/jira/browse/CALCITE-4383 
> now UNION ALL tries to cast to the least restrictive type [1];
> as a result, SetOperatorsITCase#testUnionAllWithCommonType fails as follows
> {noformat}
> org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types 
> of expression and result type. Expression[GeneratedExpression(((int) 
> 12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
> [DECIMAL(13, 3) NOT NULL]
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
>     at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
>     at 
> org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
>     at 
> 

[jira] [Updated] (FLINK-29203) SetOperatorsITCase#testUnionAllWithCommonType is failing after Calcite update to 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Description: 
As part of https://issues.apache.org/jira/browse/CALCITE-4383 
now UNION ALL tries to cast to the least restrictive type [1];
as a result, SetOperatorsITCase#testUnionAllWithCommonType fails as follows
{noformat}
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. Expression[GeneratedExpression(((int) 
12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
[DECIMAL(13, 3) NOT NULL]

    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
    at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
    at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
    at 
org.apache.flink.table.planner.runtime.batch.table.SetOperatorsITCase.testUnionAllWithCommonType(SetOperatorsITCase.scala:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 

[jira] [Updated] (FLINK-29203) SetOperatorsITCase#testUnionAllWithCommonType is failing after Calcite update to 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Description: 
As part of https://issues.apache.org/jira/browse/CALCITE-4383 
now UNION ALL tries to cast to the least restrictive type;
as a result, SetOperatorsITCase#testUnionAllWithCommonType fails as follows
{noformat}
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. Expression[GeneratedExpression(((int) 
12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
[DECIMAL(13, 3) NOT NULL]

    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
    at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
    at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
    at 
org.apache.flink.table.planner.runtime.batch.table.SetOperatorsITCase.testUnionAllWithCommonType(SetOperatorsITCase.scala:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 

[GitHub] [flink] xinbinhuang commented on pull request #20289: [FLINK-23633][FLIP-150][connector/common] HybridSource: Support dynamic stop position in FileSource

2022-10-07 Thread GitBox


xinbinhuang commented on PR #20289:
URL: https://github.com/apache/flink/pull/20289#issuecomment-1272048838

   @tweise I was distracted by other work. Let me get back to this.
   > I think we need to zoom in on why or why not the enumerator knows the actual 
stop position without involvement of the reader.
   
   Our use case is to expose the end offset or timestamp based on the content of 
the file. We're archiving out-of-retention messages into S3 using a 
long-running job. Normally there are multiple messages inside the files, and 
the timestamp of the last message may not align with the file metadata. So 
we'll need to actually parse the file content to find either the last 
timestamp or the offset. That's why I think sending back the split would make 
sense, since it's already processed there. A sketch of one possible shape 
follows below.
   
   Do you have any recommendations around this? Or do you think this is too 
complex to implement?
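   
   To make the idea concrete, a hedged sketch of what "sending back the split" information could look like; only the `SourceEvent` interface is an existing Flink API here, everything else (class and field names) is made up for illustration:
   
   ```java
   import org.apache.flink.api.connector.source.SourceEvent;
   
   /**
    * Hypothetical sketch: an event the file reader could send to the enumerator
    * after parsing the last record of a split, so the enumerator can derive the
    * dynamic stop/start position for the next source in the hybrid chain.
    */
   public class SplitEndPositionEvent implements SourceEvent {
       private static final long serialVersionUID = 1L;
   
       private final String splitId;
       private final long lastRecordTimestamp; // parsed from the file content
   
       public SplitEndPositionEvent(String splitId, long lastRecordTimestamp) {
           this.splitId = splitId;
           this.lastRecordTimestamp = lastRecordTimestamp;
       }
   
       public String getSplitId() {
           return splitId;
       }
   
       public long getLastRecordTimestamp() {
           return lastRecordTimestamp;
       }
   }
   ```
   
   A reader could emit such an event via `SourceReaderContext#sendSourceEventToCoordinator`, and the enumerator would receive it in `SplitEnumerator#handleSourceEvent`.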


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29203) SetOperatorsITCase#testUnionAllWithCommonType is failing after Calcite update to 1.27

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-29203:

Description: 
SetOperatorsITCase#testUnionAllWithCommonType fails as follows
{noformat}
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. Expression[GeneratedExpression(((int) 
12),false,,INT NOT NULL,Some(12))] type is [INT NOT NULL], result type is 
[DECIMAL(13, 3) NOT NULL]

    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2(ExprCodeGenerator.scala:309)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$generateResultExpression$2$adapted(ExprCodeGenerator.scala:293)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:293)
    at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:247)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.$anonfun$generatorInputFormat$1(ValuesCodeGenerator.scala:45)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
    at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
    at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:66)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecValues.translateToPlanInternal(BatchExecValues.java:57)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:257)
    at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:65)
    at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:158)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:93)
    at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike.map(TraversableLike.scala:233)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:92)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1723)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:860)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1368)
    at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:475)
    at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
    at 
org.apache.flink.table.planner.runtime.batch.table.SetOperatorsITCase.testUnionAllWithCommonType(SetOperatorsITCase.scala:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    

[jira] [Resolved] (FLINK-29202) CliClient fails with NPE during start (after calcite update to 1.27)

2022-10-07 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-29202.
-
Release Note: Solved by including org.checkerframework:checker-qual in the 
shade plugin artifact set
  Resolution: Fixed

> CliClient fails with NPE during start (after calcite update to 1.27)
> 
>
> Key: FLINK-29202
> URL: https://issues.apache.org/jira/browse/FLINK-29202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client, Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Priority: Major
>
> After update to calcite 1.27 sqlclient fails with
> {noformat}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.ExceptionInInitializerError
>   at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:241)
>   at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:217)
>   at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:201)
>   at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:140)
>   at 
> org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:124)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:121)
>   at 
> org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:65)
>   at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:65)
>   at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:58)
>   at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.createStreamTableEnvironment(ExecutionContext.java:130)
>   at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.createTableEnvironment(ExecutionContext.java:104)
>   at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.(ExecutionContext.java:66)
>   at 
> org.apache.flink.table.client.gateway.context.SessionContext.create(SessionContext.java:229)
>   at 
> org.apache.flink.table.client.gateway.local.LocalContextUtils.buildSessionContext(LocalContextUtils.java:87)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:87)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:88)
>   at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>   ... 1 more
> Caused by: java.lang.NullPointerException
>   at 
> sun.reflect.annotation.TypeAnnotationParser.mapTypeAnnotations(TypeAnnotationParser.java:356)
>   at 
> sun.reflect.annotation.AnnotatedTypeFactory$AnnotatedTypeBaseImpl.(AnnotatedTypeFactory.java:139)
>   at 
> sun.reflect.annotation.AnnotatedTypeFactory.buildAnnotatedType(AnnotatedTypeFactory.java:65)
>   at 
> sun.reflect.annotation.TypeAnnotationParser.buildAnnotatedType(TypeAnnotationParser.java:79)
>   at 
> java.lang.reflect.Executable.getAnnotatedReturnType0(Executable.java:640)
>   at java.lang.reflect.Method.getAnnotatedReturnType(Method.java:648)
>   at 
> org.apache.calcite.util.ImmutableBeans.makeDef(ImmutableBeans.java:146)
>   at 
> org.apache.calcite.util.ImmutableBeans.access$000(ImmutableBeans.java:55)
>   at org.apache.calcite.util.ImmutableBeans$1.load(ImmutableBeans.java:68)
>   at org.apache.calcite.util.ImmutableBeans$1.load(ImmutableBeans.java:65)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3951)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
>   at 
> org.apache.flink.calcite.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
>   at 
> org.apache.calcite.util.ImmutableBeans.create_(ImmutableBeans.java:95)
>   at org.apache.calcite.util.ImmutableBeans.create(ImmutableBeans.java:76)
>   at 
> 

[jira] [Commented] (FLINK-29050) [JUnit5 Migration] Module: flink-hadoop-compatibility

2022-10-07 Thread Nagaraj Tantri (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614223#comment-17614223
 ] 

Nagaraj Tantri commented on FLINK-29050:


Hi, can I work on this?

> [JUnit5 Migration] Module: flink-hadoop-compatibility
> -
>
> Key: FLINK-29050
> URL: https://issues.apache.org/jira/browse/FLINK-29050
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hadoop Compatibility, Tests
>Reporter: RocMarshal
>Priority: Major
>  Labels: starter
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29495) PulsarSinkE2ECase hang

2022-10-07 Thread Yufan Sheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614192#comment-17614192
 ] 

Yufan Sheng commented on FLINK-29495:
-

I'll check it later today.

> PulsarSinkE2ECase hang
> --
>
> Key: FLINK-29495
> URL: https://issues.apache.org/jira/browse/FLINK-29495
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: Xingbo Huang
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-10-02T05:53:56.0611489Z "main" #1 prio=5 os_prio=0 cpu=5171.60ms 
> elapsed=9072.82s tid=0x7f9508028000 nid=0x54ef1 waiting on condition  
> [0x7f950f994000]
> 2022-10-02T05:53:56.0612041Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2022-10-02T05:53:56.0612475Z  at 
> jdk.internal.misc.Unsafe.park(java.base@11.0.16.1/Native Method)
> 2022-10-02T05:53:56.0613302Z  - parking to wait for  <0x87d261f8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2022-10-02T05:53:56.0613959Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.16.1/LockSupport.java:234)
> 2022-10-02T05:53:56.0614661Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.16.1/AbstractQueuedSynchronizer.java:2123)
> 2022-10-02T05:53:56.0615428Z  at 
> org.apache.pulsar.common.util.collections.GrowableArrayBlockingQueue.poll(GrowableArrayBlockingQueue.java:203)
> 2022-10-02T05:53:56.0616165Z  at 
> org.apache.pulsar.client.impl.MultiTopicsConsumerImpl.internalReceive(MultiTopicsConsumerImpl.java:370)
> 2022-10-02T05:53:56.0616807Z  at 
> org.apache.pulsar.client.impl.ConsumerBase.receive(ConsumerBase.java:198)
> 2022-10-02T05:53:56.0617486Z  at 
> org.apache.flink.connector.pulsar.testutils.sink.PulsarPartitionDataReader.poll(PulsarPartitionDataReader.java:72)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41526=logs=6e8542d7-de38-5a33-4aca-458d6c87066d=5846934b-7a4f-545b-e5b0-eb4d8bda32e1
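
For context, a hedged sketch of a bounded poll that would avoid the indefinite wait shown in the thread dump; the helper is an assumption for illustration, not the actual PulsarPartitionDataReader code:

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClientException;

final class BoundedPollSketch {
    /** Polls with a timeout so a test reader cannot block forever. */
    static Message<byte[]> poll(Consumer<byte[]> consumer, Duration timeout)
            throws PulsarClientException {
        // receive(timeout, unit) returns null when nothing arrives in time,
        // instead of parking the thread indefinitely like receive().
        return consumer.receive((int) timeout.toMillis(), TimeUnit.MILLISECONDS);
    }
}
{code}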



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] syhily commented on pull request #20980: [FLINK-29532][Connector/Pulsar] Update Pulsar dependency to 2.10.1

2022-10-07 Thread GitBox


syhily commented on PR #20980:
URL: https://github.com/apache/flink/pull/20980#issuecomment-1271887695

   I think some test dependencies may need an extra check. I'll review this PR 
later today (China time).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] morhidi commented on pull request #377: [FLINK-28979] Add owner reference to flink deployment object

2022-10-07 Thread GitBox


morhidi commented on PR #377:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/377#issuecomment-1271857159

   @gyfora can you start the workflow on this, please?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-29509) Set correct subtaskId during recovery of committables

2022-10-07 Thread Krzysztof Chmielewski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613888#comment-17613888
 ] 

Krzysztof Chmielewski edited comment on FLINK-29509 at 10/7/22 4:29 PM:


PR ready for review :)
[https://github.com/apache/flink/pull/20979]


was (Author: kristoffsc):
PR:
https://github.com/apache/flink/pull/20979

> Set correct subtaskId during recovery of committables
> -
>
> Key: FLINK-29509
> URL: https://issues.apache.org/jira/browse/FLINK-29509
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.15.2, 1.16.1
>Reporter: Fabian Paul
>Assignee: Krzysztof Chmielewski
>Priority: Critical
>
> When we recover the `CheckpointCommittableManager` we ignore the subtaskId it 
> is recovered on. 
> [https://github.com/apache/flink/blob/d191bda7e63a2c12416cba56090e5cd75426079b/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CheckpointCommittableManagerImpl.java#L58]
> This becomes a problem when a sink uses a post-commit topology because 
> multiple committer operators might forward committable summaries coming from 
> the same subtaskId.
>  
> It should be possible to use the subtaskId already present in the 
> `CommittableCollector` when creating the `CheckpointCommittableManager`s.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-web] XComp commented on a diff in pull request #574: Announcement blogpost for the 1.16 release

2022-10-07 Thread GitBox


XComp commented on code in PR #574:
URL: https://github.com/apache/flink-web/pull/574#discussion_r990240285


##
_posts/2022-10-10-1.16-announcement.md:
##
@@ -0,0 +1,401 @@
+---
+layout: post
+title:  "Announcing the Release of Apache Flink 1.16"
+subtitle: ""
+date: 2022-10-10T08:00:00.000Z
+categories: news
+authors:
+- godfreyhe:
+  name: "Godfrey He"
+  twitter: "godfreyhe"
+
+---
+
+Apache Flink continues to grow at a rapid pace and is one of the most active 
+communities in Apache. Flink 1.16 had over 230 contributors enthusiastically 
participating, 
+with 19 FLIPs and 900+ issues completed, bringing a lot of exciting features 
to the community.
+
+Flink has become the leading role and factual standard of stream processing, 
+and the concept of the unification of stream (aka unbounded) and batch (aka 
bounded) data 
+processing is gradually gaining recognition and is being successfully 
implemented in more 
+and more companies. Previously, the integrated stream and batch concept placed 
more emphasis 
+on a unified API and a unified computing framework. This year, based on this, 
Flink proposed 
+the next development direction of [Flink-Streaming 
Warehouse](https://www.alibabacloud.com/blog/more-than-computing-a-new-era-led-by-the-warehouse-architecture-of-apache-flink_598821)
 (Streamhouse), 
+which further upgraded the scope of stream-batch integration: it truly 
completes not only 
+the unified computation but also unified storage, thus realizing unified 
real-time analysis.
+

Review Comment:
   Not sure if I understand your concern correctly. :thinking: A table of contents 
would just be a collection of links to the different sections in the blog post 
... something like an overview of all the topics, each with a link, as in the 
sketch below.
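   A minimal sketch (the section titles are hypothetical, just to show the shape):
   ```markdown
   ## Table of Contents
   - [SQL Gateway](#sql-gateway)
   - [Hive Compatibility](#hive-compatibility)
   - [Checkpoint Improvements](#checkpoint-improvements)
   ```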



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] MartijnVisser commented on a diff in pull request #20170: [FLINK-28405][Connector/Kafka] Update Confluent Platform images used for testing to v7.2.2

2022-10-07 Thread GitBox


MartijnVisser commented on code in PR #20170:
URL: https://github.com/apache/flink/pull/20170#discussion_r990199370


##
tools/azure-pipelines/cache_docker_images.sh:
##
@@ -28,7 +28,7 @@ then
 fi
 
 # This is the pattern that determines which containers we save.
-DOCKER_IMAGE_CACHE_PATTERN="testcontainers|kafka|postgres|mysql|pulsar|cassandra"
+DOCKER_IMAGE_CACHE_PATTERN="testcontainers|kafka|postgres|mysql|pulsar|cassandra|schema_registry"

Review Comment:
   FFS. Fixed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29501) Allow overriding JobVertex parallelisms during job submission

2022-10-07 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614120#comment-17614120
 ] 

Chesnay Schepler commented on FLINK-29501:
--

??the Rescale API is both disabled and broken at the moment??

In practice the entire mechanism doesn't exist. There are still some legacy API 
fragments around that we haven't removed, IIRC exclusively so that users don't 
hit a 404 when attempting to do a rescale.

??If you think it is already robust enough to support rescaling requests, we 
can re-enable the rescale rest API and also add job vertex overrides to it??

It's definitely robust enough to be used in production I think.
IIRC Till had a prototype for adding a REST endpoint that adjusts the target 
parallelism somewhere.
I don't think it worked on a per-vertex basis though; just a global parallelism 
increase for all vertices (since in reactive mode everything scales uniformly 
anyway). But that shouldn't be difficult to change.
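(For reference, the legacy fragment is the rescaling endpoint, roughly 
{{PATCH /jobs/:jobid/rescaling?parallelism=N}}; calling it nowadays is simply 
rejected.)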

> Allow overriding JobVertex parallelisms during job submission
> -
>
> Key: FLINK-29501
> URL: https://issues.apache.org/jira/browse/FLINK-29501
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration, Runtime / REST
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
>  Labels: pull-request-available
>
> It is a common scenario that users want to make changes to the parallelisms 
> in the JobGraph. For example, because they discover that the job needs more 
> or less resources. There is the option to do this globally via the job 
> parallelism. However, for fine-tuned jobs with potentially many 
> branches, tuning on the job vertex level is required.
> This is to propose a way such that users can apply a mapping \{{jobVertexId 
> => parallelism}} before the job is submitted without having to modify the 
> JobGraph manually.
> One way to achieve this would be to add an optional map field to the Rest 
> API jobs endpoint. However, in deployment modes like the application mode, 
> this might not make sense because users do not have control over the rest 
> endpoint.
> Similarly to how other job parameters are passed in the application mode, we 
> propose to add the overrides as a configuration parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] LadyForest commented on a diff in pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-10-07 Thread GitBox


LadyForest commented on code in PR #20652:
URL: https://github.com/apache/flink/pull/20652#discussion_r990178647


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlToOperationConverterTest.java:
##
@@ -1466,6 +1477,404 @@ public void 
testAlterTableCompactOnManagedPartitionedTable() throws Exception {
 parse("alter table tb1 compact", SqlDialect.DEFAULT), 
staticPartitions);
 }
 
+@Test
+public void testAlterTableAddColumn() throws Exception {
+prepareNonManagedTable(false);
+ObjectIdentifier tableIdentifier = ObjectIdentifier.of("cat1", "db1", 
"tb1");
+Schema originalSchema =
+
catalogManager.getTable(tableIdentifier).get().getTable().getUnresolvedSchema();
+
+// test duplicated column name
+assertThatThrownBy(() -> parse("alter table tb1 add a bigint", 
SqlDialect.DEFAULT))
+.isInstanceOf(ValidationException.class)
+.hasMessageContaining("Try to add a column 'a' which already 
exists in the table.");
+
+// test reference nonexistent column name
+assertThatThrownBy(() -> parse("alter table tb1 add x bigint after y", 
SqlDialect.DEFAULT))
+.isInstanceOf(ValidationException.class)
+.hasMessageContaining(
+"Referenced column 'y' by 'AFTER' does not exist in 
the table.");
+
+// test add a single column
+Operation operation =
+parse(
+"alter table tb1 add d double not null comment 'd is 
double not null'",
+SqlDialect.DEFAULT);
+assertAlterTableSchema(
+operation,
+tableIdentifier,
+Schema.newBuilder()
+.fromSchema(originalSchema)
+.column("d", DataTypes.DOUBLE().notNull())
+.withComment("d is double not null")
+.build());
+
+// test add multiple columns with pk
+operation =

Review Comment:
   MySQL will throw an exception, and we've aligned the behavior. I can add a 
case to verify; a sketch of the aligned Flink-side behavior follows the MySQL 
output below.
   ```sql
   mysql> show create table foo;
   +-------+-------------------------------------------------------------------+
   | Table | Create Table                                                      |
   +-------+-------------------------------------------------------------------+
   | foo   | CREATE TABLE `foo` (
     `a` int DEFAULT NULL,
     `b` varchar(20) DEFAULT NULL,
     `c` double DEFAULT NULL
   ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci |
   +-------+-------------------------------------------------------------------+
   1 row in set (0.01 sec)

   mysql> alter table foo add e int after f, add f double;
   ERROR 1054 (42S22): Unknown column 'f' in 'foo'
   ```
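   On the Flink side, a sketch of the aligned behavior (the session output below 
is illustrative; the error wording is taken from the test expectations above):
   ```sql
   Flink SQL> ALTER TABLE tb1 ADD x BIGINT AFTER y;
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.flink.table.api.ValidationException: Referenced column 'y' by 'AFTER' does not exist in the table.
   ```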



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] mbalassi merged pull request #575: Kubernetes Operator 1.2.0 release blogpost

2022-10-07 Thread GitBox


mbalassi merged PR #575:
URL: https://github.com/apache/flink-web/pull/575


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol commented on a diff in pull request #20170: [FLINK-28405][Connector/Kafka] Update Confluent Platform images used for testing to v7.2.2

2022-10-07 Thread GitBox


zentol commented on code in PR #20170:
URL: https://github.com/apache/flink/pull/20170#discussion_r990173189


##
tools/azure-pipelines/cache_docker_images.sh:
##
@@ -28,7 +28,7 @@ then
 fi
 
 # This is the pattern that determines which containers we save.
-DOCKER_IMAGE_CACHE_PATTERN="testcontainers|kafka|postgres|mysql|pulsar|cassandra"
+DOCKER_IMAGE_CACHE_PATTERN="testcontainers|kafka|postgres|mysql|pulsar|cassandra|schema_registry"

Review Comment:
   you sure this doesn't need a `-` instead of `_`?
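   A quick way to check, assuming the pattern is matched with `grep -E` against 
image names as elsewhere in the script (the image name below is an example):
   ```sh
   # The Confluent image name uses a dash, so an underscore in the pattern
   # would not match it:
   echo "confluentinc/cp-schema-registry:7.2.2" | grep -E "schema_registry"  # no match
   echo "confluentinc/cp-schema-registry:7.2.2" | grep -E "schema-registry"  # matches
   ```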



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora merged pull request #397: [doc] Add CRD Documentation about the Flink Deployment Modes

2022-10-07 Thread GitBox


gyfora merged PR #397:
URL: https://github.com/apache/flink-kubernetes-operator/pull/397


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] usamj opened a new pull request, #397: [doc] Add CRD Documentation about the Flink Deployment Modes

2022-10-07 Thread GitBox


usamj opened a new pull request, #397:
URL: https://github.com/apache/flink-kubernetes-operator/pull/397

   ## What is the purpose of the change
   
   * Adding documentation about Native and Standalone deployment modes


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-29538) Setup CI based on the new Elasticsearch CI setup

2022-10-07 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-29538:
--

 Summary: Setup CI based on the new Elasticsearch CI setup
 Key: FLINK-29538
 URL: https://issues.apache.org/jira/browse/FLINK-29538
 Project: Flink
  Issue Type: Sub-task
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #395: Add CRD Documentation about the Flink Deployment Modes

2022-10-07 Thread GitBox


gyfora merged PR #395:
URL: https://github.com/apache/flink-kubernetes-operator/pull/395


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29535) Flink Operator Certificate renew issue

2022-10-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29535.
--
Resolution: Duplicate

please reopen it in case the other fix is not working

> Flink Operator Certificate renew issue
> --
>
> Key: FLINK-29535
> URL: https://issues.apache.org/jira/browse/FLINK-29535
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Sebastian Struß
>Priority: Major
>
> It seems that there is an issue with the Kubernetes Operator (at least in 
> version 1.1.0) when it comes to certificates for the webhook.
> We've seen this error message pop up in the logs:
> "An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception."
> and
> javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.fatal(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert$AlertConsumer.consume(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.dispatch(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLTransport.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.readRecord(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> javax.net.ssl.SSLEngine.unwrap(Unknown Source) ~[?:?] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1342)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0]
> It happens when our fluxcd is trying to update the FlinkDeployment resource.
> This seems to trigger a webhook to an endpoint (in the operator) which is 
> serving a (then) invalid certificate.
> We've noticed this after 18 days of it running, so maybe something short-lived 
> was not renewed correctly?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-web] usamj commented on pull request #575: Kubernetes Operator 1.2.0 release blogpost

2022-10-07 Thread GitBox


usamj commented on PR #575:
URL: https://github.com/apache/flink-web/pull/575#issuecomment-1271672934

   Looks good 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29535) Flink Operator Certificate renew issue

2022-10-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-29535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614108#comment-17614108
 ] 

Sebastian Struß commented on FLINK-29535:
-

Great news! I will check it out soon. Thanks! :)

> Flink Operator Certificate renew issue
> --
>
> Key: FLINK-29535
> URL: https://issues.apache.org/jira/browse/FLINK-29535
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Sebastian Struß
>Priority: Major
>
> It seems that there is an issue with the Kubernetes Operator (at least in 
> version 1.1.0) when it comes to certificates for the webhook.
> We've seen this error message pop up in the logs:
> "An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception."
> and
> javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.fatal(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert$AlertConsumer.consume(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.dispatch(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLTransport.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.readRecord(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> javax.net.ssl.SSLEngine.unwrap(Unknown Source) ~[?:?] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1342)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0]
> It happens when our fluxcd is trying to update the FlinkDeployment resource.
> This seems to trigger a webhook to an endpoint (in the operator) which is 
> serving a (then) invalid certificate.
> We've noticed this after 18 days of it running, so maybe something short-lived 
> was not renewed correctly?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] LadyForest commented on a diff in pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-10-07 Thread GitBox


LadyForest commented on code in PR #20652:
URL: https://github.com/apache/flink/pull/20652#discussion_r990151599


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/Schema.java:
##
@@ -697,15 +697,17 @@ public static final class UnresolvedPhysicalColumn 
extends UnresolvedColumn {
 
 private final AbstractDataType dataType;
 
-UnresolvedPhysicalColumn(String columnName, AbstractDataType 
dataType) {
+public UnresolvedPhysicalColumn(String columnName, AbstractDataType 
dataType) {

Review Comment:
   After rethinking, I agree with you.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29535) Flink Operator Certificate renew issue

2022-10-07 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614104#comment-17614104
 ] 

Gyula Fora commented on FLINK-29535:


I think this might be already fixed in 1.2.0: 
https://issues.apache.org/jira/browse/FLINK-28272

 

> Flink Operator Certificate renew issue
> --
>
> Key: FLINK-29535
> URL: https://issues.apache.org/jira/browse/FLINK-29535
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Sebastian Struß
>Priority: Major
>
> It seems that there is an issue with the Kubernetes Operator (at least in 
> version 1.1.0) when it comes to certificates for the webhook.
> We've seen this error message pop up in the logs:
> "An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception."
> and
> javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.fatal(Unknown Source) ~[?:?] at 
> sun.security.ssl.Alert$AlertConsumer.consume(Unknown Source) ~[?:?] at 
> sun.security.ssl.TransportContext.dispatch(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLTransport.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.decode(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.readRecord(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
> javax.net.ssl.SSLEngine.unwrap(Unknown Source) ~[?:?] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1342)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>  ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0]
> It happens when our fluxcd is trying to update the FlinkDeployment resource.
> This seems to trigger a webhook to an endpoint (in the operator) which is 
> serving a (then) invalid certificate.
> We've noticed this after 18 days of it running, so maybe something short-lived 
> was not renewed correctly?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] LadyForest commented on a diff in pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-10-07 Thread GitBox


LadyForest commented on code in PR #20652:
URL: https://github.com/apache/flink/pull/20652#discussion_r990148877


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/AlterTableSchemaUtil.java:
##
@@ -0,0 +1,457 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations;
+
+import org.apache.flink.sql.parser.ddl.SqlAlterTableAdd;
+import org.apache.flink.sql.parser.ddl.SqlAlterTableSchema;
+import org.apache.flink.sql.parser.ddl.SqlTableColumn;
+import org.apache.flink.sql.parser.ddl.SqlWatermark;
+import org.apache.flink.sql.parser.ddl.constraint.SqlTableConstraint;
+import org.apache.flink.sql.parser.ddl.position.SqlTableColumnPosition;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.ResolvedCatalogTable;
+import org.apache.flink.table.expressions.SqlCallExpression;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlDataTypeSpec;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.validate.SqlValidator;
+import org.apache.calcite.util.NlsString;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/** Util to convert {@link SqlAlterTableSchema} with source table to generate 
new {@link Schema}. */
+public class AlterTableSchemaUtil {
+
+private final SqlValidator sqlValidator;
+private final Consumer validateTableConstraint;
+private final Function escapeExpression;
+
+AlterTableSchemaUtil(
+SqlValidator sqlValidator,
+Function escapeExpression,
+Consumer validateTableConstraint) {
+this.sqlValidator = sqlValidator;
+this.validateTableConstraint = validateTableConstraint;
+this.escapeExpression = escapeExpression;
+}
+
+public Schema convertSchema(
+SqlAlterTableSchema alterTableSchema, ResolvedCatalogTable 
originalTable) {
+UnresolvedSchemaBuilder builder =
+new UnresolvedSchemaBuilder(
+originalTable,
+(FlinkTypeFactory) sqlValidator.getTypeFactory(),
+sqlValidator,
+validateTableConstraint,
+escapeExpression);
+AlterSchemaStrategy strategy =
+alterTableSchema instanceof SqlAlterTableAdd
+? AlterSchemaStrategy.ADD
+: AlterSchemaStrategy.MODIFY;
+builder.addOrModifyColumns(strategy, 
alterTableSchema.getColumns().getList());
+List fullConstraint = 
alterTableSchema.getFullConstraint();
+if (!fullConstraint.isEmpty()) {
+builder.addOrModifyPrimaryKey(strategy, fullConstraint.get(0));
+}
+alterTableSchema
+.getWatermark()
+.ifPresent(sqlWatermark -> 
builder.addOrModifyWatermarks(strategy, sqlWatermark));
+return builder.build();
+}
+
+private static class UnresolvedSchemaBuilder {
+
+List newColumnNames = new ArrayList<>();
+Set alterColumnNames = new HashSet<>();
+Map columns = new HashMap<>();
+Map watermarkSpec = new 
HashMap<>();

Review Comment:
   Correct me if I'm wrong, but I don't think we can support multiple 
watermarks.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [flink] LadyForest commented on a diff in pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-10-07 Thread GitBox


LadyForest commented on code in PR #20652:
URL: https://github.com/apache/flink/pull/20652#discussion_r990147098


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/AlterTableSchemaUtil.java:
##
@@ -0,0 +1,457 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations;
+
+import org.apache.flink.sql.parser.ddl.SqlAlterTableAdd;
+import org.apache.flink.sql.parser.ddl.SqlAlterTableSchema;
+import org.apache.flink.sql.parser.ddl.SqlTableColumn;
+import org.apache.flink.sql.parser.ddl.SqlWatermark;
+import org.apache.flink.sql.parser.ddl.constraint.SqlTableConstraint;
+import org.apache.flink.sql.parser.ddl.position.SqlTableColumnPosition;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.ResolvedCatalogTable;
+import org.apache.flink.table.expressions.SqlCallExpression;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlDataTypeSpec;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.validate.SqlValidator;
+import org.apache.calcite.util.NlsString;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/** Util to convert {@link SqlAlterTableSchema} with source table to generate 
new {@link Schema}. */
+public class AlterTableSchemaUtil {
+
+private final SqlValidator sqlValidator;
+private final Consumer validateTableConstraint;
+private final Function escapeExpression;
+
+AlterTableSchemaUtil(
+SqlValidator sqlValidator,
+Function escapeExpression,
+Consumer validateTableConstraint) {
+this.sqlValidator = sqlValidator;
+this.validateTableConstraint = validateTableConstraint;
+this.escapeExpression = escapeExpression;
+}
+
+public Schema convertSchema(
+SqlAlterTableSchema alterTableSchema, ResolvedCatalogTable 
originalTable) {
+UnresolvedSchemaBuilder builder =
+new UnresolvedSchemaBuilder(
+originalTable,
+(FlinkTypeFactory) sqlValidator.getTypeFactory(),
+sqlValidator,
+validateTableConstraint,
+escapeExpression);
+AlterSchemaStrategy strategy =
+alterTableSchema instanceof SqlAlterTableAdd
+? AlterSchemaStrategy.ADD
+: AlterSchemaStrategy.MODIFY;
+builder.addOrModifyColumns(strategy, 
alterTableSchema.getColumns().getList());
+List fullConstraint = 
alterTableSchema.getFullConstraint();
+if (!fullConstraint.isEmpty()) {
+builder.addOrModifyPrimaryKey(strategy, fullConstraint.get(0));
+}
+alterTableSchema
+.getWatermark()
+.ifPresent(sqlWatermark -> 
builder.addOrModifyWatermarks(strategy, sqlWatermark));
+return builder.build();
+}
+
+private static class UnresolvedSchemaBuilder {
+
+List newColumnNames = new ArrayList<>();
+Set alterColumnNames = new HashSet<>();
+Map columns = new HashMap<>();
+Map watermarkSpec = new 
HashMap<>();
+@Nullable Schema.UnresolvedPrimaryKey primaryKey = null;
+
+// Intermediate state
+Map physicalFieldNamesToTypes = new HashMap<>();
+Map metadataFieldNamesToTypes = new HashMap<>();
+Map computedFieldNamesToTypes = new HashMap<>();
+
+Function escapeExpressions;
+FlinkTypeFactory typeFactory;

[jira] [Updated] (FLINK-15571) Create a Redis Streams Connector for Flink

2022-10-07 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-15571:
---
Component/s: Connectors / Redis Streams
 (was: Connectors / Common)

> Create a Redis Streams Connector for Flink
> --
>
> Key: FLINK-15571
> URL: https://issues.apache.org/jira/browse/FLINK-15571
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Redis Streams
>Reporter: Tugdual Grall
>Assignee: ZhuoYu Chen
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> Redis has a "log data structure" called Redis Streams, it would be nice to 
> integrate Redis Streams and Apache Flink as:
>  * Source
>  * Sink
> See Redis Streams introduction: [https://redis.io/topics/streams-intro]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29537) Rename flink-connector-redis repository to flink-connector-redis-streams

2022-10-07 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-29537:
--

 Summary: Rename flink-connector-redis repository to 
flink-connector-redis-streams
 Key: FLINK-29537
 URL: https://issues.apache.org/jira/browse/FLINK-29537
 Project: Flink
  Issue Type: Sub-task
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29536) Add WATCH_NAMESPACES env var to kubernetes operator

2022-10-07 Thread Tony Garrard (Jira)
Tony Garrard created FLINK-29536:


 Summary: Add WATCH_NAMESPACES env var to kubernetes operator
 Key: FLINK-29536
 URL: https://issues.apache.org/jira/browse/FLINK-29536
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.2.0
Reporter: Tony Garrard
 Fix For: kubernetes-operator-1.2.0


Provide the ability to set the namespaces watched by the operator using an env 
var. Whilst the additional config can still be used, the presence of the env 
var will take priority.

 

Reasons for this issue:
 # The operator will pick up the setting immediately, since the pod will roll 
(rather than waiting for the config to be refreshed).
 # If the operator is to be OLM-bundled, we will be able to set the target 
namespace using the following:

env:
  - name: WATCHED_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.annotations['olm.targetNamespaces']



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-web] godfreyhe commented on a diff in pull request #574: Announcement blogpost for the 1.16 release

2022-10-07 Thread GitBox


godfreyhe commented on code in PR #574:
URL: https://github.com/apache/flink-web/pull/574#discussion_r990127036


##
_posts/2022-10-10-1.16-announcement.md:
##
@@ -0,0 +1,401 @@
+---
+layout: post
+title:  "Announcing the Release of Apache Flink 1.16"
+subtitle: ""
+date: 2022-10-10T08:00:00.000Z
+categories: news
+authors:
+- godfreyhe:
+  name: "Godfrey He"
+  twitter: "godfreyhe"
+
+---
+
+Apache Flink continues to grow at a rapid pace and is one of the most active 
+communities in Apache. Flink 1.16 had over 230 contributors enthusiastically 
participating, 
+with 19 FLIPs and 900+ issues completed, bringing a lot of exciting features 
to the community.
+
+Flink has become the leading role and factual standard of stream processing, 
+and the concept of the unification of stream (aka unbounded) and batch (aka 
bounded) data 

Review Comment:
   Makes sense.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] godfreyhe commented on a diff in pull request #574: Announcement blogpost for the 1.16 release

2022-10-07 Thread GitBox


godfreyhe commented on code in PR #574:
URL: https://github.com/apache/flink-web/pull/574#discussion_r990124684


##
_posts/2022-10-10-1.16-announcement.md:
##
@@ -0,0 +1,401 @@
+---
+layout: post
+title:  "Announcing the Release of Apache Flink 1.16"
+subtitle: ""
+date: 2022-10-10T08:00:00.000Z
+categories: news
+authors:
+- godfreyhe:
+  name: "Godfrey He"
+  twitter: "godfreyhe"
+
+---
+
+Apache Flink continues to grow at a rapid pace and is one of the most active 
+communities in Apache. Flink 1.16 had over 230 contributors enthusiastically 
participating, 
+with 19 FLIPs and 900+ issues completed, bringing a lot of exciting features 
to the community.
+
+Flink has become the leading role and factual standard of stream processing, 
+and the concept of the unification of stream (aka unbounded) and batch (aka 
bounded) data 
+processing is gradually gaining recognition and is being successfully 
implemented in more 
+and more companies. Previously, the integrated stream and batch concept placed 
more emphasis 
+on a unified API and a unified computing framework. This year, based on this, 
Flink proposed 
+the next development direction of [Flink-Streaming 
Warehouse](https://www.alibabacloud.com/blog/more-than-computing-a-new-era-led-by-the-warehouse-architecture-of-apache-flink_598821)
 (Streamhouse), 
+which further upgraded the scope of stream-batch integration: it truly 
completes not only 
+the unified computation but also unified storage, thus realizing unified 
real-time analysis.
+

Review Comment:
   I think it's hard to unify them into a table because they describe the 
content in different dimensions.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-29535) Flink Operator Certificate renew issue

2022-10-07 Thread Jira
Sebastian Struß created FLINK-29535:
---

 Summary: Flink Operator Certificate renew issue
 Key: FLINK-29535
 URL: https://issues.apache.org/jira/browse/FLINK-29535
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Sebastian Struß


It seems that there is an issue with the Kubernetes Operator (at least in 
version 1.1.0) when it comes to certificates for the webhook.

We've seen this error message pop up in the logs:
"An exceptionCaught() event was fired, and it reached at the tail of the 
pipeline. It usually means the last handler in the pipeline did not handle the 
exception."

and

javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate at 
sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
sun.security.ssl.Alert.createSSLException(Unknown Source) ~[?:?] at 
sun.security.ssl.TransportContext.fatal(Unknown Source) ~[?:?] at 
sun.security.ssl.Alert$AlertConsumer.consume(Unknown Source) ~[?:?] at 
sun.security.ssl.TransportContext.dispatch(Unknown Source) ~[?:?] at 
sun.security.ssl.SSLTransport.decode(Unknown Source) ~[?:?] at 
sun.security.ssl.SSLEngineImpl.decode(Unknown Source) ~[?:?] at 
sun.security.ssl.SSLEngineImpl.readRecord(Unknown Source) ~[?:?] at 
sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
sun.security.ssl.SSLEngineImpl.unwrap(Unknown Source) ~[?:?] at 
javax.net.ssl.SSLEngine.unwrap(Unknown Source) ~[?:?] at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1342)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0] at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
 ~[flink-kubernetes-operator-1.1.0-shaded.jar:1.1.0]

It happens when our fluxcd is trying to update the FlinkDeployment resource.

This seems to trigger a webhook to an endpoint (in the operator) which is 
serving a (then) invalid certificate.

We've noticed this after 18 days of it running, so maybe something short-lived 
was not renewed correctly?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29501) Allow overriding JobVertex parallelisms during job submission

2022-10-07 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614081#comment-17614081
 ] 

Maximilian Michels commented on FLINK-29501:


[~chesnay] I think you have a better insight into the state of the adaptive 
scheduler. If you think it is already robust enough to support rescaling 
requests, we can re-enable the rescale rest API and also add job vertex 
overrides to it.

> Allow overriding JobVertex parallelisms during job submission
> -
>
> Key: FLINK-29501
> URL: https://issues.apache.org/jira/browse/FLINK-29501
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration, Runtime / REST
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
>  Labels: pull-request-available
>
> It is a common scenario that users want to make changes to the parallelisms 
> in the JobGraph. For example, because they discover that the job needs more 
> or less resources. There is the option to do this globally via the job 
> parallelism. However, for fine-tuned jobs with potentially many 
> branches, tuning on the job vertex level is required.
> This is to propose a way such that users can apply a mapping \{{jobVertexId 
> => parallelism}} before the job is submitted without having to modify the 
> JobGraph manually.
> One way to achieve this would be to add an optional map field to the Rest 
> API jobs endpoint. However, in deployment modes like the application mode, 
> this might not make sense because users do not have control over the rest 
> endpoint.
> Similarly to how other job parameters are passed in the application mode, we 
> propose to add the overrides as a configuration parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29501) Allow overriding JobVertex parallelisms during job submission

2022-10-07 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614078#comment-17614078
 ] 

Maximilian Michels commented on FLINK-29501:


Let me try to summarize:

It is a valid use case for external systems to observe the job via task level 
metrics. Based on these observations, we may want to automatically alter the 
job vertex parallelisms. There is no programmatic way to do this with Flink now 
if the k8s application mode is used where we don't have direct control over the 
JobGraph. Also, the observed job can have **any** topology and we cannot assume 
the entry point respects parallelism overrides as arguments in the main method.

There are two possible solutions here:
 # Provide the overrides as part of the configuration
 # Issue a call to the rescale API

The problem with (2) is that the Rescale API is both disabled and broken at the 
moment, see https://issues.apache.org/jira/browse/FLINK-12312 It has 
fundamental issues like not being able to use the HA mode with it. I'm afraid, 
this makes option (1) the only viable option to me because, frankly, I'm not 
sure the issues of the rescale mode can easily be fixed.

> Allow overriding JobVertex parallelisms during job submission
> -
>
> Key: FLINK-29501
> URL: https://issues.apache.org/jira/browse/FLINK-29501
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration, Runtime / REST
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
>  Labels: pull-request-available
>
> It is a common scenario that users want to make changes to the parallelisms 
> in the JobGraph. For example, because they discover that the job needs more 
> or less resources. There is the option to do this globally via the job 
> parallelism. However, for fine-tuned jobs with potentially many 
> branches, tuning on the job vertex level is required.
> This is to propose a way such that users can apply a mapping \{{jobVertexId 
> => parallelism}} before the job is submitted without having to modify the 
> JobGraph manually.
> One way to achieve this would be to add an optional map field to the Rest 
> API jobs endpoint. However, in deployment modes like the application mode, 
> this might not make sense because users do not have control over the rest 
> endpoint.
> Similarly to how other job parameters are passed in the application mode, we 
> propose to add the overrides as a configuration parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] snuyanzin commented on a diff in pull request #19780: [FLINK-27727][tests] Migrate TypeSerializerUpgradeTestBase to JUnit5

2022-10-07 Thread GitBox


snuyanzin commented on code in PR #19780:
URL: https://github.com/apache/flink/pull/19780#discussion_r988662523


##
flink-core/src/test/java/org/apache/flink/api/java/typeutils/runtime/RowSerializerUpgradeTest.java:
##
@@ -40,15 +38,9 @@
 import static org.hamcrest.Matchers.is;
 
 /** A {@link TypeSerializerUpgradeTestBase} for {@link RowSerializer}. */
-@RunWith(Parameterized.class)
 public class RowSerializerUpgradeTest extends 
TypeSerializerUpgradeTestBase {

Review Comment:
   JUnit 5 has issues with access if the class is not public
   ```
   java.lang.IllegalAccessError: tried to access class 
org.apache.flink.api.java.typeutils.runtime.RowSerializerUpgradeTest from class 
org.apache.flink.api.java.typeutils.runtime.RowSerializerUpgradeTest$RowSerializerVerifier$generated2$
   
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializerUpgradeTest$RowSerializerVerifier$generated2$.createUpgradedSerializer(RowSerializerUpgradeTest.java:108)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase$ClassLoaderSafeUpgradeVerifier.createUpgradedSerializer(TypeSerializerUpgradeTestBase.java:174)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase.upgradedSerializerIsValidAfterMigration(TypeSerializerUpgradeTestBase.java:345)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
...
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #19780: [FLINK-27727][tests] Migrate TypeSerializerUpgradeTestBase to JUnit5

2022-10-07 Thread GitBox


snuyanzin commented on code in PR #19780:
URL: https://github.com/apache/flink/pull/19780#discussion_r988900830


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkStateSerializerUpgradeTest.java:
##
@@ -39,22 +37,12 @@
 /**
  * A {@link TypeSerializerUpgradeTestBase} for {@link 
TwoPhaseCommitSinkFunction.StateSerializer}.
  */
-@RunWith(Parameterized.class)
 public class TwoPhaseCommitSinkStateSerializerUpgradeTest

Review Comment:
   JUnit 5 has access issues if the class is not public
   ```
   java.lang.IllegalAccessError: tried to access class 
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkStateSerializerUpgradeTest
 from class 
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkStateSerializerUpgradeTest$TwoPhaseCommitSinkStateSerializerVerifier$generated2$
   
at 
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkStateSerializerUpgradeTest$TwoPhaseCommitSinkStateSerializerVerifier$generated2$.createUpgradedSerializer(TwoPhaseCommitSinkStateSerializerUpgradeTest.java:103)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase$ClassLoaderSafeUpgradeVerifier.createUpgradedSerializer(TypeSerializerUpgradeTestBase.java:171)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase.upgradedSerializerIsValidAfterMigration(TypeSerializerUpgradeTestBase.java:342)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
   ...
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #19780: [FLINK-27727][tests] Migrate TypeSerializerUpgradeTestBase to JUnit5

2022-10-07 Thread GitBox


snuyanzin commented on code in PR #19780:
URL: https://github.com/apache/flink/pull/19780#discussion_r988892634


##
flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/typeutils/LinkedListSerializerUpgradeTest.java:
##
@@ -37,17 +35,10 @@
 import static org.hamcrest.Matchers.is;
 
 /** A {@link TypeSerializerUpgradeTestBase} for {@link LinkedListSerializer}. 
*/
-@RunWith(Parameterized.class)
 public class LinkedListSerializerUpgradeTest

Review Comment:
   JUnit 5 has access issues if the class is not public
   ```
   java.lang.IllegalAccessError: tried to access class 
org.apache.flink.table.runtime.typeutils.LinkedListSerializerUpgradeTest from 
class 
org.apache.flink.table.runtime.typeutils.LinkedListSerializerUpgradeTest$LinkedListSerializerVerifier$generated2$
   
at 
org.apache.flink.table.runtime.typeutils.LinkedListSerializerUpgradeTest$LinkedListSerializerVerifier$generated2$.createUpgradedSerializer(LinkedListSerializerUpgradeTest.java:99)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase$ClassLoaderSafeUpgradeVerifier.createUpgradedSerializer(TypeSerializerUpgradeTestBase.java:171)
at 
org.apache.flink.api.common.typeutils.TypeSerializerUpgradeTestBase.upgradedSerializerIsValidAfterMigration(TypeSerializerUpgradeTestBase.java:342)
   ...
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] mxm commented on pull request #20953: [FLINK-29501] Add option to override job vertex parallelisms during job submission

2022-10-07 Thread GitBox


mxm commented on PR #20953:
URL: https://github.com/apache/flink/pull/20953#issuecomment-1271569023

   Closing for now as the discussion is still ongoing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] mxm closed pull request #20953: [FLINK-29501] Add option to override job vertex parallelisms during job submission

2022-10-07 Thread GitBox


mxm closed pull request #20953: [FLINK-29501] Add option to override job vertex 
parallelisms during job submission
URL: https://github.com/apache/flink/pull/20953


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] mxm commented on a diff in pull request #20953: [FLINK-29501] Add option to override job vertex parallelisms during job submission

2022-10-07 Thread GitBox


mxm commented on code in PR #20953:
URL: https://github.com/apache/flink/pull/20953#discussion_r990068651


##
flink-core/src/main/java/org/apache/flink/configuration/ConfigOptions.java:
##
@@ -270,7 +283,33 @@ public final ConfigOption> defaultValues(E... 
values) {
  * @return The config option without a default value.
  */
 public ConfigOption> noDefaultValue() {
-return new ConfigOption<>(key, clazz, 
ConfigOption.EMPTY_DESCRIPTION, null, true);
+return new ConfigOption<>(key, clazz, 
ConfigOption.EMPTY_DESCRIPTION, null, LIST);
+}
+}
+
+/** Builder for map type {@link ConfigOption} with a value type V. */
+public static class MapConfigOptionBuilder {
+private final String key;
+private final Class clazz;
+
+MapConfigOptionBuilder(String key, Class clazz) {
+this.key = key;
+this.clazz = clazz;
+}
+
+/** Defines that the option's type should be a list of previously 
defined atomic type. */
+@SuppressWarnings("rawtypes")

Review Comment:
   The compiler doesn't let me.
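
   For illustration, a sketch of how the proposed builder would be used (the API shape is taken from this diff and is not part of a released Flink version; the key and names are hypothetical):
   ```java
   import java.util.Collections;
   import java.util.Map;

   import org.apache.flink.configuration.ConfigOption;

   import static org.apache.flink.configuration.ConfigOptions.key;

   // Hedged sketch: defining a map-typed option through the
   // MapConfigOptionBuilder proposed above; mapType(Class) is an assumption
   // based on this diff, not released API.
   public class MapOptionExample {
       public static final ConfigOption<Map<String, Integer>> RETRIES_PER_HOST =
               key("example.retries-per-host") // hypothetical key
                       .mapType(Integer.class)
                       .defaultValue(Collections.emptyMap())
                       .withDescription("Hypothetical map option (host -> retry count).");
   }
   ```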



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] mxm commented on a diff in pull request #20953: [FLINK-29501] Add option to override job vertex parallelisms during job submission

2022-10-07 Thread GitBox


mxm commented on code in PR #20953:
URL: https://github.com/apache/flink/pull/20953#discussion_r990068246


##
flink-core/src/main/java/org/apache/flink/configuration/PipelineOptions.java:
##
@@ -165,6 +166,14 @@ public class PipelineOptions {
                     "Register a custom, serializable user configuration object. The configuration can be "
                             + " accessed in operators");
 
+    public static final ConfigOption<Map<String, Integer>> PARALLELISM_OVERRIDES =
+            key("pipeline.jobvertex-parallelism-overrides")
+                    .mapType(Integer.class)
+                    .defaultValue(Collections.emptyMap())
+                    .withDescription(
+                            "A parallelism override map (jobVertexId -> parallelism) which will be used to update"
+                                    + " the parallelism of the corresponding job vertices of submitted JobGraphs.");

Review Comment:
   We could, but since the overrides are derived from the JobGraph, and that's an implementation detail per se, I don't think we can disguise that if we add this option.
   
   A job vertex hosts one or more operators, so calling it "operator" would be incorrect.
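
   As a usage illustration (the option key comes from the diff above; the job vertex IDs below are made-up placeholders):
   ```java
   import org.apache.flink.configuration.Configuration;

   // Hedged sketch: setting the proposed override option programmatically.
   // Map-typed options use the "key:value,key:value" string form, here
   // "jobVertexId:parallelism" pairs with fabricated IDs.
   public class OverrideExample {
       public static void main(String[] args) {
           Configuration conf = new Configuration();
           conf.setString(
                   "pipeline.jobvertex-parallelism-overrides",
                   "bc764cd8ddf7a0cff126f51c16239658:4,ea632d67b7d595e5b851708ae9ad79d6:2");
       }
   }
   ```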



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] mxm commented on a diff in pull request #20953: [FLINK-29501] Add option to override job vertex parallelisms during job submission

2022-10-07 Thread GitBox


mxm commented on code in PR #20953:
URL: https://github.com/apache/flink/pull/20953#discussion_r990065742


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/JobSubmitHandler.java:
##
@@ -148,11 +150,30 @@ private CompletableFuture<JobGraph> loadJobGraph(
                                         HttpResponseStatus.BAD_REQUEST,
                                         e));
                 }
+                try {
+                    applyParallelismOverrides(jobGraph);
+                } catch (Exception e) {
+                    throw new CompletionException(
+                            new RestHandlerException(
+                                    "Failed to apply parallelism overrides",
+                                    HttpResponseStatus.INTERNAL_SERVER_ERROR,
+                                    e));
+                }
                 return jobGraph;
             },
             executor);
 }
 
+    private void applyParallelismOverrides(JobGraph jobGraph) {
+        Map<String, Integer> overrides = configuration.get(PipelineOptions.PARALLELISM_OVERRIDES);

Review Comment:
   The original design had an extra field in the payload to specify the 
override map. This was then changed to go through the configuration instead. I 
was under the assumption that the application mode would also use the REST API 
but it goes directly through the dispatcher. So you're right, this won't work. 
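
   For readers following along, a minimal sketch of the override step under discussion (option and method names follow the diffs above; this illustrates the proposal, not the merged implementation):
   ```java
   import java.util.Map;

   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.configuration.PipelineOptions;
   import org.apache.flink.runtime.jobgraph.JobGraph;
   import org.apache.flink.runtime.jobgraph.JobVertex;

   // Hedged sketch: applies pipeline.jobvertex-parallelism-overrides to a
   // JobGraph by matching each vertex ID against the configured override map.
   class ParallelismOverrideSketch {

       private final Configuration configuration;

       ParallelismOverrideSketch(Configuration configuration) {
           this.configuration = configuration;
       }

       void applyParallelismOverrides(JobGraph jobGraph) {
           // Assumes the map-typed option proposed in this PR's diff.
           Map<String, Integer> overrides =
                   configuration.get(PipelineOptions.PARALLELISM_OVERRIDES);
           if (overrides == null || overrides.isEmpty()) {
               return;
           }
           for (JobVertex vertex : jobGraph.getVertices()) {
               Integer parallelism = overrides.get(vertex.getID().toHexString());
               if (parallelism != null) {
                   vertex.setParallelism(parallelism);
               }
           }
       }
   }
   ```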



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora merged pull request #396: [hotfix][docs] Remove all "simply/easily" usages

2022-10-07 Thread GitBox


gyfora merged PR #396:
URL: https://github.com/apache/flink-kubernetes-operator/pull/396


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on a diff in pull request #20925: [FLINK-29468][connectors][filesystems][formats] Update Jackson-BOM to 2.13.4

2022-10-07 Thread GitBox


XComp commented on code in PR #20925:
URL: https://github.com/apache/flink/pull/20925#discussion_r98861


##
pom.xml:
##
@@ -142,6 +142,7 @@ under the License.
 		<zookeeper.version>3.5.9</zookeeper.version>
 		<curator.version>5.2.0</curator.version>
 		<avro.version>1.11.1</avro.version>
+		<jackson-bom.version>2.13.4</jackson-bom.version>

Review Comment:
   I actually thought of putting it right below the shaded Jackson version property further up in [line 126](https://github.com/apache/flink/blob/0025fab2c7bd4ddfd0f4788143204d5d24aa2686/pom.xml#L126). But I noticed that it would screw up the lexicographical order. So, leaving it like that is good enough, I guess. Could we add a comment here explaining why we have a separate Jackson dependency alongside the shaded version? Something along the lines of
   > Version for transitive Jackson dependencies that are not used within Flink itself.



##
pom.xml:
##
@@ -142,6 +142,7 @@ under the License.
 		<zookeeper.version>3.5.9</zookeeper.version>
 		<curator.version>5.2.0</curator.version>
 		<avro.version>1.11.1</avro.version>
+		<jackson-bom.version>2.13.4</jackson-bom.version>

Review Comment:
   Introducing the property could be done in a separate hotfix commit.
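
   To make that concrete, a hedged sketch of the commented property (property name and placement are assumptions based on this diff, not the merged commit):
   ```xml
   <!-- Version for transitive Jackson dependencies that are not used within
        Flink itself; Flink's own code uses the shaded Jackson instead. -->
   <jackson-bom.version>2.13.4</jackson-bom.version>
   ```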



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol commented on a diff in pull request #20170: [FLINK-28405][Connector/Kafka] Update Confluent Platform images used for testing to v7.2.2

2022-10-07 Thread GitBox


zentol commented on code in PR #20170:
URL: https://github.com/apache/flink/pull/20170#discussion_r989997402


##
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/util/DockerImageVersions.java:
##
@@ -28,7 +28,9 @@
  */
 public class DockerImageVersions {
 
-    public static final String KAFKA = "confluentinc/cp-kafka:6.2.2";
+    public static final String KAFKA = "confluentinc/cp-kafka:7.2.2";
+
+    public static final String SCHEMA_REGISTRY = "confluentinc/cp-schema-registry:7.2.2";

Review Comment:
   I don't remember. I think because in some cases we pull LATEST, so we'd 
invalidate the cache quite frequently.
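
   For context, a minimal sketch of how such a pinned tag is typically consumed in a Testcontainers-based test (illustrative only; not code from this PR):
   ```java
   import org.testcontainers.containers.KafkaContainer;
   import org.testcontainers.utility.DockerImageName;

   // Hedged sketch: starting a Kafka test container from the pinned image tag,
   // so test runs stay reproducible and the image cache stays valid.
   public class KafkaContainerExample {
       public static void main(String[] args) {
           try (KafkaContainer kafka =
                   new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.2.2"))) {
               kafka.start();
               System.out.println(kafka.getBootstrapServers());
           }
       }
   }
   ```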



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29534) @TypeInfo on field requires field type to be valid Pojo

2022-10-07 Thread Exidex (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Exidex closed FLINK-29534.
--
Resolution: Invalid

> @TypeInfo on field requires field type to be valid Pojo 
> 
>
> Key: FLINK-29534
> URL: https://issues.apache.org/jira/browse/FLINK-29534
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Exidex
>Priority: Major
>
> The ability to place @TypeInfo on a field was added in 
> [https://github.com/apache/flink/pull/8344]. But it seems the fact that it 
> requires the field type to be a valid POJO was overlooked. In my case I was 
> trying to add a custom serializer for Jackson's ObjectNode (wrapped in a 
> List, but not sure if this is relevant: 
> https://issues.apache.org/jira/browse/FLINK-26470), which is not a valid 
> POJO, and this requirement seems to defeat the whole purpose of such a 
> feature. It also doesn't look like there's a way to register 
> {{org.apache.flink.api.common.typeutils.TypeSerializer}} globally for 
> 3rd-party types.
> Code snippet from TypeExtractor:
> {code:java}
> Type fieldType = field.getGenericType();
> if (!isValidPojoField(field, clazz, typeHierarchy) && clazz != Row.class) {
>     LOG.info(
>             "Class "
>                     + clazz
>                     + " cannot be used as a POJO type because not all fields are valid POJO fields, "
>                     + "and must be processed as GenericType. {}",
>             GENERIC_TYPE_DOC_HINT);
>     return null;
> }
> try {
>     final TypeInformation<?> typeInfo;
>     List<Type> fieldTypeHierarchy = new ArrayList<>(typeHierarchy);
>     TypeInfoFactory<?> factory = getTypeInfoFactory(field);
>     if (factory != null) {{code}
>  
>  
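
A hedged sketch of the feature under discussion, annotating a single field with @TypeInfo to route it through a custom TypeInfoFactory (the class names and factory body are illustrative, not taken from the report):

{code:java}
import java.lang.reflect.Type;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Hypothetical POJO: the annotated field is handled by the factory below
// instead of the default POJO field analysis.
public class Event {
    public long id;

    @TypeInfo(PayloadTypeInfoFactory.class)
    public String payload; // stand-in for a non-POJO type such as ObjectNode
}

// Minimal factory; a real one for ObjectNode would return a TypeInformation
// backed by a custom TypeSerializer.
class PayloadTypeInfoFactory extends TypeInfoFactory<String> {
    @Override
    public TypeInformation<String> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        return Types.STRING;
    }
}
{code}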



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29534) @TypeInfo on field requires field type to be valid Pojo

2022-10-07 Thread Exidex (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614036#comment-17614036
 ] 

Exidex commented on FLINK-29534:


You are right, the problem I had was in a different place. Here it worked 
properly, but there was still a log message in the logs which misled me.

> @TypeInfo on field requires field type to be valid Pojo 
> 
>
> Key: FLINK-29534
> URL: https://issues.apache.org/jira/browse/FLINK-29534
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Exidex
>Priority: Major
>
> The ability to place @TypeInfo on a field was added in 
> [https://github.com/apache/flink/pull/8344]. But it seems the fact that it 
> requires the field type to be a valid POJO was overlooked. In my case I was 
> trying to add a custom serializer for Jackson's ObjectNode (wrapped in a 
> List, but not sure if this is relevant: 
> https://issues.apache.org/jira/browse/FLINK-26470), which is not a valid 
> POJO, and this requirement seems to defeat the whole purpose of such a 
> feature. It also doesn't look like there's a way to register 
> {{org.apache.flink.api.common.typeutils.TypeSerializer}} globally for 
> 3rd-party types.
> Code snippet from TypeExtractor:
> {code:java}
> Type fieldType = field.getGenericType();
> if (!isValidPojoField(field, clazz, typeHierarchy) && clazz != Row.class) {
>     LOG.info(
>             "Class "
>                     + clazz
>                     + " cannot be used as a POJO type because not all fields are valid POJO fields, "
>                     + "and must be processed as GenericType. {}",
>             GENERIC_TYPE_DOC_HINT);
>     return null;
> }
> try {
>     final TypeInformation<?> typeInfo;
>     List<Type> fieldTypeHierarchy = new ArrayList<>(typeHierarchy);
>     TypeInfoFactory<?> factory = getTypeInfoFactory(field);
>     if (factory != null) {{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28347) Update testcontainers dependency to v1.17.5

2022-10-07 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-28347:
---
Description: 
Changelog: 
https://github.com/testcontainers/testcontainers-java/releases/tag/1.17.5

Main benefits for Flink: Elasticsearch and Pulsar improvements

  was:
Changelog: 
https://github.com/testcontainers/testcontainers-java/releases/tag/1.17.3

Main benefits for Flink: Elasticsearch and Pulsar improvements


> Update testcontainers dependency to v1.17.5
> ---
>
> Key: FLINK-28347
> URL: https://issues.apache.org/jira/browse/FLINK-28347
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Test Infrastructure
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Changelog: 
> https://github.com/testcontainers/testcontainers-java/releases/tag/1.17.5
> Main benefits for Flink: Elasticsearch and Pulsar improvements



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

