[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2022-12-04 Thread Samrat Deb (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17643135#comment-17643135 ]

Samrat Deb commented on FLINK-28513:


 
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:139)
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:83)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:256)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:247)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:240)
    at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:738)
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:715)
    at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:477)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
    at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
    at akka.actor.Actor.aroundReceive(Actor.scala:537)
    at akka.actor.Actor.aroundReceive$(Actor.scala:535)
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
    at akka.actor.ActorCell.invoke(ActorCell.scala:548)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
    at akka.dispatch.Mailbox.run(Mailbox.scala:231)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException: Cannot sync state to system like S3. Use persist() to create a persistent recoverable intermediate point.
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.completeProcessing(SourceStreamTask.java:363)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:335)
Caused by: java.lang.UnsupportedOperationException: Cannot sync state to system like S3. Use persist() to create a persistent recoverable intermediate point.
    at org.apache.flink.core.fs.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
    at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
    at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:111)
    at org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTab
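Reading the root cause: the checkpoint-triggered finish() of the CSV writer calls sync() on a stream whose S3 implementation cannot support it. A minimal sketch of that failing path follows; the class and method names match the stack trace, but the simplified bodies are assumptions, not the actual 1.15.1 source.

{code:java}
// Hedged sketch of the failing path in flink-csv 1.15.x.
import java.io.IOException;
import org.apache.flink.core.fs.FSDataOutputStream;

class CsvBulkWriterSketch {
    // When the sink writes to s3a://, this is an S3RecoverableFsDataOutputStream
    // wrapping a RefCountedBufferingFileStream at runtime.
    private final FSDataOutputStream stream;

    CsvBulkWriterSketch(FSDataOutputStream stream) {
        this.stream = stream;
    }

    void finish() throws IOException {
        stream.flush();
        // This call fails on S3: RefCountedBufferingFileStream.sync() throws
        // UnsupportedOperationException ("Cannot sync state to system like S3.
        // Use persist() to create a persistent recoverable intermediate point."),
        // which fails the checkpoint and, with NoRestartBackoffTimeStrategy,
        // brings the whole job down.
        stream.sync();
    }
}
{code}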

[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2022-12-05 Thread Danny Cranmer (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17643167#comment-17643167 ]

Danny Cranmer commented on FLINK-28513:
---

Assigning to [~samrat007] as discussed offline

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Priority: Critical
>
> Table API S3 streaming sink (CSV format) throws the following exception:
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to create a persistent recoverable intermediate point.
> at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111) ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129) ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) ~[flink-csv-1.15.1.jar:1.15.1]
> at org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642) ~[flink-connector-files-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64) ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87) ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129) ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I read from Kafka and write to S3 (s3a) using the Table API, with checkpoints stored via s3p (Presto). I also tried a simple datagen example instead of Kafka, with the local filesystem for checkpointing (`file:///` instead of `s3p://`), and hit the same issue. It fails exactly when the code triggers a checkpoint.
> Related Slack and Stack Overflow threads: [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming] and [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception].
> Since there is no workaround for the S3 Table API streaming sink, I am marking this as Critical. If this severity is not appropriate, feel free to reduce the priority.
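For illustration, the setup described above reduces to a short Table API program. This is a hedged reproduction sketch following the reporter's description (datagen source, CSV filesystem sink, checkpointing enabled); the schema and bucket path are placeholders, not the reporter's actual job.

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class CsvS3SinkRepro {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000L); // failure reportedly occurs when a checkpoint fires
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        tEnv.executeSql("CREATE TABLE src (id BIGINT) WITH ('connector' = 'datagen')");
        tEnv.executeSql(
                "CREATE TABLE snk (id BIGINT) WITH ("
                        + " 'connector' = 'filesystem',"
                        + " 'path' = 's3a://my-bucket/output'," // placeholder bucket
                        + " 'format' = 'csv')");
        tEnv.executeSql("INSERT INTO snk SELECT id FROM src").await();
    }
}
{code}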





[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761785#comment-17761785 ]

Hong Liang Teoh commented on FLINK-28513:
-

Merged commit [{{e921489}}|https://github.com/apache/flink/commit/e921489279ca70b179521ec4619514725b061491] into apache:master.
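For context on the change: the stack traces above show CsvBulkWriter.finish() forcing a sync() on the S3 recoverable stream, which that stream cannot support. A hedged sketch of the general shape of such a fix follows; the linked commit is the authoritative diff, and the body below is an assumption, not the verbatim change.

{code:java}
// Assumed shape of the fix (simplified; see commit e921489 for the real diff).
// finish() still flushes writer-buffered CSV records, but no longer calls
// stream.sync(), which S3RecoverableFsDataOutputStream does not support.
public void finish() throws IOException {
    stream.flush();
    // No stream.sync() here: durability is provided by the recoverable writer
    // persisting the in-progress part file at checkpoints, not by sync().
}
{code}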



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761787#comment-17761787 ]

Hong Liang Teoh commented on FLINK-28513:
-

[~samrat007] Could we backport this bugfix to Flink 1.17 and 1.18 as well?



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Samrat Deb (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761790#comment-17761790 ]

Samrat Deb commented on FLINK-28513:


Hi [~liangtl],

Backport for 1.17: [https://github.com/apache/flink/pull/23351]
Backport for 1.18: [https://github.com/apache/flink/pull/23352]

Please review whenever you have time.



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Samrat Deb (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761794#comment-17761794 ]

Samrat Deb commented on FLINK-28513:


I don't have access to update the fix version. Please help update the `Fix Version/s` field for this issue.



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761855#comment-17761855 ]

Hong Liang Teoh commented on FLINK-28513:
-

Merged commit [{{d06a297}}|https://github.com/apache/flink/commit/d06a297422fd4884aa21655fdf1f1bce94cc3a8a] into apache:release-1.17.



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761856#comment-17761856 ]

Hong Liang Teoh commented on FLINK-28513:
-

> I don't have access to update the fix version. Please help update the `Fix Version/s` field for this issue.

Updated.



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-05 Thread Hong Liang Teoh (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762039#comment-17762039 ]

Hong Liang Teoh commented on FLINK-28513:
-

Merged commit [{{2437cf5}}|https://github.com/apache/flink/commit/2437cf568785a05ece70fde9f917637731740e46] into apache:release-1.18.

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0, 1.17.2, 1.19.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)