[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-09-29 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770637#comment-17770637
 ] 

jirawech.s commented on FLINK-31689:


[~martijnvisser] Could you help with this PR? It seems no one has reviewed it yet.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}
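
For reference, a minimal sketch of a job matching the steps above, assuming a hypothetical Kafka topic and sink path (this is not the attached HelloFlinkHadoopSink.java; the filesystem sink options are taken from the report, everything else is a placeholder):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FileSystemSinkRescaleSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1); // step 3: start the job with parallelism 1
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // step 1: Kafka source (topic and broker address are placeholders)
        tEnv.executeSql(
                "CREATE TABLE orders (user_id STRING, amount INT, dt STRING) WITH ("
                        + " 'connector'='kafka', 'topic'='orders',"
                        + " 'properties.bootstrap.servers'='localhost:9092',"
                        + " 'format'='json', 'scan.startup.mode'='latest-offset')");

        // step 4: filesystem sink; 'auto-compaction'='true' is what adds the compact operator
        tEnv.executeSql(
                "CREATE TABLE fs_sink (user_id STRING, amount INT, dt STRING)"
                        + " PARTITIONED BY (dt) WITH ("
                        + " 'connector'='filesystem', 'path'='file:///tmp/fs_sink',"
                        + " 'format'='parquet', 'auto-compaction'='true',"
                        + " 'sink.partition-commit.policy.kind'='success-file')");

        tEnv.executeSql("INSERT INTO fs_sink SELECT user_id, amount, dt FROM orders");
    }
}
{code}
Step 2 (state.savepoints.dir) is cluster configuration; after triggering the savepoint in step 5, resuming it with a different parallelism (for example via bin/flink run -s <savepointPath> -p 2 <jobJar>) exercises the restore path shown in the stack trace above.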



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-09-29 Thread jirawech.s (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jirawech.s updated FLINK-31689:
---
Labels: pull-request-available  (was: pull-request-available stale-assigned)

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-08-12 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17753648#comment-17753648
 ] 

jirawech.s commented on FLINK-31689:


Waiting for PR review: https://github.com/apache/flink/pull/22400

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-08-12 Thread jirawech.s (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jirawech.s updated FLINK-31689:
---
Labels: pull-request-available  (was: pull-request-available stale-assigned)

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-05-29 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727071#comment-17727071
 ] 

jirawech.s commented on FLINK-31689:


Could anyone help review the PR, please?

https://github.com/apache/flink/pull/22400

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31811) Unsupported complex data type for Flink SQL

2023-04-14 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712595#comment-17712595
 ] 

jirawech.s commented on FLINK-31811:


[~KristoffSC] I see. It may be a duplicate issue, but I see a different error 
here. Could you share reproducible code with me? I can assure you that it works with 
Flink version 1.15.1 in my case.

> Unsupported complex data type for Flink SQL
> ---
>
> Key: FLINK-31811
> URL: https://issues.apache.org/jira/browse/FLINK-31811
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Fix For: 1.16.2
>
>
> I found this issue when I tried to write data to the local filesystem using Flink 
> SQL.
> {code:java}
> 19:51:32,966 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph     
>   [] - compact-operator (1/4) 
> (4f2a09b638c786f74262c675d248afd9_80fe6c4f32f605d447b391cdb16cc1ff_0_4) 
> switched from RUNNING to FAILED on 69ed2306-371b-4bfc-a98e-bf75fb41748f @ 
> localhost (dataPort=-1).
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>     at java.util.ArrayList.rangeCheck(ArrayList.java:659) ~[?:1.8.0_301]
>     at java.util.ArrayList.get(ArrayList.java:435) ~[?:1.8.0_301]
>     at org.apache.parquet.schema.GroupType.getType(GroupType.java:216) 
> ~[parquet-column-1.12.2.jar:1.12.2]
>     at 
> org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:523)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:503)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createWritableVectors(ParquetVectorizedInputFormat.java:281)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createReaderBatch(ParquetVectorizedInputFormat.java:270)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createPoolOfBatches(ParquetVectorizedInputFormat.java:260)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>      {code}
> What I tried to do is write a complex data type to a Parquet file.
> Here is the schema of the sink table. The problematic data type is 
> ARRAY>
> {code:java}
> CREATE TEMPORARY TABLE local_table (
>  `user_id` STRING, `order_id` STRING, `amount` INT, `restaurant_id` STRING, 
> `experiment` ARRAY>, `dt` STRING
> ) PARTITIONED BY (`dt`) WITH (
>   'connector'='filesystem',
>   'path'='file:///tmp/test_hadoop_write',
>   'format'='parquet',
>   'auto-compaction'='true',
>   'sink.partition-commit.policy.kind'='success-file'
> ) {code}
> P.S. It used to work in Flink version 1.15.1.
>  
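
For reference, a rough sketch of the write path described above, assuming a placeholder nested type for the `experiment` column (the inner fields `name` and `variant` are hypothetical, since the nested part of the type was stripped from the report; only the shape, an array of rows, matters here):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ComplexTypeParquetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Sink DDL from the report, with a placeholder nested type for `experiment`.
        tEnv.executeSql(
                "CREATE TEMPORARY TABLE local_table ("
                        + " `user_id` STRING, `order_id` STRING, `amount` INT, `restaurant_id` STRING,"
                        + " `experiment` ARRAY<ROW<`name` STRING, `variant` STRING>>, `dt` STRING"
                        + ") PARTITIONED BY (`dt`) WITH ("
                        + " 'connector'='filesystem', 'path'='file:///tmp/test_hadoop_write',"
                        + " 'format'='parquet', 'auto-compaction'='true',"
                        + " 'sink.partition-commit.policy.kind'='success-file')");

        // Write one row containing the nested array-of-rows value; with auto-compaction
        // enabled, the compact operator later re-reads the written Parquet files through
        // the vectorized reader shown in the stack trace above.
        tEnv.executeSql(
                "INSERT INTO local_table VALUES"
                        + " ('u1', 'o1', 10, 'r1', ARRAY[ROW('exp-a', 'treatment')], '2023-04-14')");
    }
}
{code}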



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31811) Unsupported complex data type for Flink SQL

2023-04-14 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712400#comment-17712400
 ] 

jirawech.s commented on FLINK-31811:


[~KristoffSC] I think what you mentioned is the Parquet reader. I was able to write 
the complex type to the filesystem fine on Flink 1.15.1.
P.S. Writing an array of STRING works fine in 1.16.0, though.

> Unsupported complex data type for Flink SQL
> ---
>
> Key: FLINK-31811
> URL: https://issues.apache.org/jira/browse/FLINK-31811
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Fix For: 1.16.2
>
>
> I found this issue when I tried to write data to the local filesystem using Flink 
> SQL.
> {code:java}
> 19:51:32,966 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph     
>   [] - compact-operator (1/4) 
> (4f2a09b638c786f74262c675d248afd9_80fe6c4f32f605d447b391cdb16cc1ff_0_4) 
> switched from RUNNING to FAILED on 69ed2306-371b-4bfc-a98e-bf75fb41748f @ 
> localhost (dataPort=-1).
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>     at java.util.ArrayList.rangeCheck(ArrayList.java:659) ~[?:1.8.0_301]
>     at java.util.ArrayList.get(ArrayList.java:435) ~[?:1.8.0_301]
>     at org.apache.parquet.schema.GroupType.getType(GroupType.java:216) 
> ~[parquet-column-1.12.2.jar:1.12.2]
>     at 
> org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:523)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:503)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createWritableVectors(ParquetVectorizedInputFormat.java:281)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createReaderBatch(ParquetVectorizedInputFormat.java:270)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>     at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createPoolOfBatches(ParquetVectorizedInputFormat.java:260)
>  ~[flink-parquet-1.16.1.jar:1.16.1]
>      {code}
> What I tried to do is write a complex data type to a Parquet file.
> Here is the schema of the sink table. The problematic data type is 
> ARRAY>
> {code:java}
> CREATE TEMPORARY TABLE local_table (
>  `user_id` STRING, `order_id` STRING, `amount` INT, `restaurant_id` STRING, 
> `experiment` ARRAY>, `dt` STRING
> ) PARTITIONED BY (`dt`) WITH (
>   'connector'='filesystem',
>   'path'='file:///tmp/test_hadoop_write',
>   'format'='parquet',
>   'auto-compaction'='true',
>   'sink.partition-commit.policy.kind'='success-file'
> ) {code}
> P.S. It used to work in Flink version 1.15.1.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31811) Unsupported complex data type for Flink SQL

2023-04-14 Thread jirawech.s (Jira)
jirawech.s created FLINK-31811:
--

 Summary: Unsupported complex data type for Flink SQL
 Key: FLINK-31811
 URL: https://issues.apache.org/jira/browse/FLINK-31811
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.16.1
Reporter: jirawech.s
 Fix For: 1.16.2


I found this issue when I tried to write data to the local filesystem using Flink 
SQL.
{code:java}
19:51:32,966 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       
[] - compact-operator (1/4) 
(4f2a09b638c786f74262c675d248afd9_80fe6c4f32f605d447b391cdb16cc1ff_0_4) 
switched from RUNNING to FAILED on 69ed2306-371b-4bfc-a98e-bf75fb41748f @ 
localhost (dataPort=-1).
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
    at java.util.ArrayList.rangeCheck(ArrayList.java:659) ~[?:1.8.0_301]
    at java.util.ArrayList.get(ArrayList.java:435) ~[?:1.8.0_301]
    at org.apache.parquet.schema.GroupType.getType(GroupType.java:216) 
~[parquet-column-1.12.2.jar:1.12.2]
    at 
org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:523)
 ~[flink-parquet-1.16.1.jar:1.16.1]
    at 
org.apache.flink.formats.parquet.vector.ParquetSplitReaderUtil.createWritableColumnVector(ParquetSplitReaderUtil.java:503)
 ~[flink-parquet-1.16.1.jar:1.16.1]
    at 
org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createWritableVectors(ParquetVectorizedInputFormat.java:281)
 ~[flink-parquet-1.16.1.jar:1.16.1]
    at 
org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createReaderBatch(ParquetVectorizedInputFormat.java:270)
 ~[flink-parquet-1.16.1.jar:1.16.1]
    at 
org.apache.flink.formats.parquet.ParquetVectorizedInputFormat.createPoolOfBatches(ParquetVectorizedInputFormat.java:260)
 ~[flink-parquet-1.16.1.jar:1.16.1]
     {code}
What I tried to do is write a complex data type to a Parquet file.
Here is the schema of the sink table. The problematic data type is 
ARRAY>
{code:java}
CREATE TEMPORARY TABLE local_table (
 `user_id` STRING, `order_id` STRING, `amount` INT, `restaurant_id` STRING, 
`experiment` ARRAY>, `dt` STRING
) PARTITIONED BY (`dt`) WITH (
  'connector'='filesystem',
  'path'='file:///tmp/test_hadoop_write',
  'format'='parquet',
  'auto-compaction'='true',
  'sink.partition-commit.policy.kind'='success-file'
) {code}
P.S. It used to work in Flink version 1.15.1.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-14 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712290#comment-17712290
 ] 

jirawech.s commented on FLINK-31689:


[~luoyuxia] I opened a [PR|https://github.com/apache/flink/pull/22400]. Could you 
help me review it?

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: jirawech.s
>Priority: Major
>  Labels: pull-request-available
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-05 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708875#comment-17708875
 ] 

jirawech.s commented on FLINK-31689:


[~luoyuxia] Is it possible for you to assign this task to me? Or what should I do 
next?

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-04 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708271#comment-17708271
 ] 

jirawech.s commented on FLINK-31689:


[~luoyuxia] Thank you so much for the detailed explanation.
How about I try it locally, build and test it, and then open a PR? I see that I need 
the following first in order to contribute back:
*Only start working on the implementation if there is consensus on the approach 
(e.g. you are assigned to the ticket)*

[link|https://flink.apache.org/how-to-contribute/overview/]

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-03 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708030#comment-17708030
 ] 

jirawech.s commented on FLINK-31689:


[~luoyuxia] I see. So we could say that this is the expected behavior of the 
CompactOperator in the file sink for now. If we were to improve it, we would do so by 
making the CompactOperator's state compatible with rescaling, right? Could you point 
me to the code/class I should look at? I am not so familiar with Flink development.
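
For illustration only, a generic operator-state restore that tolerates rescaling. This is not Flink's CompactOperator (the class and state names below are hypothetical); it only shows why an unguarded iterator.next() over restored operator state can throw NoSuchElementException when a new subtask receives no state after a parallelism change:

{code:java}
import java.util.Iterator;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class RescaleTolerantFunction extends RichMapFunction<String, String>
        implements CheckpointedFunction {

    private transient ListState<String> unfinishedFiles;

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        unfinishedFiles = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("unfinished-files", String.class));
        // After rescaling, a new subtask may receive an empty share of the redistributed
        // operator state, so the iterator must be guarded before calling next().
        Iterator<String> restored = unfinishedFiles.get().iterator();
        if (context.isRestored() && restored.hasNext()) {
            String pendingFile = restored.next();
            // ... resume work for the restored entry
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // persist in-flight file names here before a checkpoint/savepoint completes
    }

    @Override
    public String map(String value) {
        return value;
    }
}
{code}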

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-03 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17707891#comment-17707891
 ] 

jirawech.s commented on FLINK-31689:


Is this normal behavior? If I use the Kafka sink, I am able to increase/decrease the 
parallelism without any issue.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
> have not tested with the DataStream API. You may refer to the error below.
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow the pseudo 
> code in the attachment and the reproduction steps below.
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-02 Thread jirawech.s (Jira)
jirawech.s created FLINK-31689:
--

 Summary: Filesystem sink fails when parallelism of compactor 
operator changed
 Key: FLINK-31689
 URL: https://issues.apache.org/jira/browse/FLINK-31689
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.16.1
Reporter: jirawech.s
 Attachments: HelloFlinkHadoopSink.java

I encountered this error when I tried to use the Filesystem sink with Table SQL. I 
have not tested with the DataStream API. You may refer to the error below.
{code:java}
// code placeholder
java.util.NoSuchElementException
at java.util.ArrayList$Itr.next(ArrayList.java:864)
at 
org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
at 
org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:750) {code}
I cannot attach the full reproducible code here, but you may follow the pseudo 
code in the attachment and the reproduction steps below.
1. Create Kafka source

2. Set state.savepoints.dir

3. Set Job parallelism to 1

4. Create FileSystem Sink

5. Run the job and trigger savepoint with API
{noformat}
curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
false}'{noformat}
{color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-22 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17703828#comment-17703828
 ] 

jirawech.s commented on FLINK-31424:


[~qingyue] Thank you so much (y)

> NullPointer when using StatementSet for multiple sinks
> --
>
> Key: FLINK-31424
> URL: https://issues.apache.org/jira/browse/FLINK-31424
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.1
>
> Attachments: HelloFlinkWindowJoin.java
>
>
> I got the following error when I tried to execute multiple sinks using 
> StatementSet. I am not sure what it is or where to find a possible solution.
> Error
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.NullPointerException
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
>     at 
> GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
> Source)
>     at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
> Source)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
> You can test the code, please see the attachment
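
For reference, a minimal runnable sketch of executing multiple sinks through a StatementSet, using placeholder datagen/blackhole tables instead of the window join in the attached HelloFlinkWindowJoin.java:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class MultiSinkStatementSetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder tables so the sketch is self-contained.
        tEnv.executeSql("CREATE TABLE src (user_id STRING, amount INT)"
                + " WITH ('connector'='datagen', 'rows-per-second'='5')");
        tEnv.executeSql("CREATE TABLE sink_a (user_id STRING, amount INT)"
                + " WITH ('connector'='blackhole')");
        tEnv.executeSql("CREATE TABLE sink_b (user_id STRING, cnt BIGINT)"
                + " WITH ('connector'='blackhole')");

        StatementSet statementSet = tEnv.createStatementSet();
        statementSet.addInsertSql("INSERT INTO sink_a SELECT user_id, amount FROM src");
        statementSet.addInsertSql(
                "INSERT INTO sink_b SELECT user_id, COUNT(*) FROM src GROUP BY user_id");

        // All added INSERTs are optimized together and submitted as one job; the
        // NullPointerException above is thrown during this combined optimization
        // when the inputs come from a window join.
        statementSet.execute();
    }
}
{code}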



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-14 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700249#comment-17700249
 ] 

jirawech.s commented on FLINK-31424:


[~zhengyiweng] Sure thing. Maybe you can take a look first and then I can help 
review it.

> NullPointer when using StatementSet for multiple sinks
> --
>
> Key: FLINK-31424
> URL: https://issues.apache.org/jira/browse/FLINK-31424
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkWindowJoin.java
>
>
> I got the following error when I tried to execute multiple sinks using 
> StatementSet. I am not sure what it is or where to find a possible solution.
> Error
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.NullPointerException
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
>     at 
> GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
> Source)
>     at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
> Source)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
> You can test the code, please see the attachment



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-14 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700039#comment-17700039
 ] 

jirawech.s commented on FLINK-31424:


[~zhengyiweng] I see. Thank you so much. Would you mind walking me through how to 
get started if I want to fix this kind of issue? I am not very familiar with Java, so 
I do not know where to begin. For example, should I clone the repo and start by 
running the unit tests for the Flink Table API? If so, which unit tests should I check?

> NullPointer when using StatementSet for multiple sinks
> --
>
> Key: FLINK-31424
> URL: https://issues.apache.org/jira/browse/FLINK-31424
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkWindowJoin.java
>
>
> I got the following error when I tried to execute multiple sinks using 
> StatementSet. I am not sure what it is or where to find a possible solution.
> Error
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.NullPointerException
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
>     at 
> GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
> Source)
>     at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
> Source)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
> You can test the code, please see the attachment



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-13 Thread jirawech.s (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jirawech.s updated FLINK-31424:
---
Description: 
I got the following error when I tried to execute multiple sinks using 
StatementSet. I am not sure what it is or where to find a possible solution.

Error
{code:java}
// code placeholder
Exception in thread "main" java.lang.NullPointerException
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
Source)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
Source)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
You can test the code, please see the attachment

  was:
I got the following error when I tried to execute multiple sinks using 
StatementSet. I am not sure what it is or where to find a possible solution. 
This does not happen on Flink version 1.15.1.

Error
{code:java}
// code placeholder
Exception in thread "main" java.lang.NullPointerException
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
Source)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
Source)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
You can test the code, please see the attachment


> NullPointer when using StatementSet for multiple sinks
> --
>
> Key: FLINK-31424
> URL: https://issues.apache.org/jira/browse/FLINK-31424
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkWindowJoin.java
>
>
> I got the following error when I tried to execute multiple sinks using 
> StatementSet. I am not sure what it is or where to find a possible solution.
> Error
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.NullPointerException
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
>     at 
> GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
> Source)
>     at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
> Source)
>     at 
> 

[jira] [Updated] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-13 Thread jirawech.s (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jirawech.s updated FLINK-31424:
---
Description: 
I got the following error when I tried to execute multiple sinks using 
StatementSet. I am not sure what it is or where to find a possible solution. 
This does not happen on Flink version 1.15.1.

Error
{code:java}
// code placeholder
Exception in thread "main" java.lang.NullPointerException
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
Source)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
Source)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47){code}
You can test the code, please see the attachment

  was:
I got the following error when I tried to execute multiple sinks using StatementSet.

Error
{code:java}
// code placeholder
Exception in thread "main" java.lang.NullPointerException
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
Source)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
Source)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) {code}


> NullPointer when using StatementSet for multiple sinks
> --
>
> Key: FLINK-31424
> URL: https://issues.apache.org/jira/browse/FLINK-31424
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkWindowJoin.java
>
>
> I got the following error when I tried to execute multiple sinks using 
> StatementSet. I am not sure what it is or where to find a possible solution. 
> This does not happen on Flink version 1.15.1.
> Error
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.NullPointerException
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
>     at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
>     at 
> GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
> Source)
>     at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
> Source)
>     at 
> 

[jira] [Created] (FLINK-31424) NullPointer when using StatementSet for multiple sinks

2023-03-13 Thread jirawech.s (Jira)
jirawech.s created FLINK-31424:
--

 Summary: NullPointer when using StatementSet for multiple sinks
 Key: FLINK-31424
 URL: https://issues.apache.org/jira/browse/FLINK-31424
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.16.1
Reporter: jirawech.s
 Attachments: HelloFlinkWindowJoin.java

I got the following error when I tried to execute multiple sinks using StatementSet.

Error
{code:java}
// code placeholder
Exception in thread "main" java.lang.NullPointerException
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getJoinWindowProperties(FlinkRelMdWindowProperties.scala:373)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMdWindowProperties.getWindowProperties(FlinkRelMdWindowProperties.scala:349)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties_$(Unknown 
Source)
    at GeneratedMetadataHandler_WindowProperties.getWindowProperties(Unknown 
Source)
    at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.getRelWindowProperties(FlinkRelMetadataQuery.java:261)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.createIntermediateRelTable(StreamCommonSubGraphBasedOptimizer.scala:287)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeBlock(StreamCommonSubGraphBasedOptimizer.scala:138)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1(StreamCommonSubGraphBasedOptimizer.scala:111)
    at 
org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeBlock$1$adapted(StreamCommonSubGraphBasedOptimizer.scala:109)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)