[ https://issues.apache.org/jira/browse/FLINK-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303537#comment-17303537 ]
Galen Warren commented on FLINK-11838:
--------------------------------------

Not that I'm aware of. Writing checkpoints/savepoints and HA data to GCS buckets works fine as-is, but to use StreamingFileSink, a RecoverableWriter implementation is required. One exists for S3 but not yet for GCS. I have a local GCS RecoverableWriter implementation that seems to be working, and I'm hoping to push the first part of the code for review by [~xintongsong] this weekend.

> Create RecoverableWriter for GCS
> --------------------------------
>
>                 Key: FLINK-11838
>                 URL: https://issues.apache.org/jira/browse/FLINK-11838
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / FileSystem
>    Affects Versions: 1.8.0
>            Reporter: Fokko Driesprong
>            Assignee: Galen Warren
>            Priority: Major
>              Labels: pull-request-available, usability
>             Fix For: 1.13.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> GCS supports resumable uploads, which we can use to create a RecoverableWriter similar to the S3 implementation:
> https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
> After using the Hadoop-compatible interface:
> https://github.com/apache/flink/pull/7519
> we've noticed that the current implementation relies heavily on renaming files on commit:
> https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L233-L259
> This is suboptimal on an object store such as GCS. Therefore we would like to implement a more GCS-native RecoverableWriter.

This message was sent by Atlassian Jira (v8.3.4#803005)
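For context, a minimal sketch of the GCS mechanism the issue refers to: the google-cloud-storage Java client exposes resumable uploads through a `WriteChannel` whose in-flight state can be captured and later restored, which is the building block a GCS-native RecoverableWriter could persist in a checkpoint instead of relying on rename-on-commit. This is an illustration under assumed bucket/object names, not the implementation under review in this issue. It requires GCS credentials to actually run.

```java
import com.google.cloud.RestorableState;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class GcsResumableUploadSketch {

    public static void main(String[] args) throws IOException {
        // Hypothetical bucket and object names, for illustration only.
        Storage storage = StorageOptions.getDefaultInstance().getService();
        BlobInfo blob = BlobInfo.newBuilder("my-bucket", "part-0-0.inprogress").build();

        // storage.writer(...) starts a GCS resumable upload session.
        WriteChannel writer = storage.writer(blob);
        writer.write(ByteBuffer.wrap("first chunk".getBytes(StandardCharsets.UTF_8)));

        // Capture the upload state. This serializable state is what a Flink
        // RecoverableWriter could store in a checkpoint as its "resume recoverable".
        RestorableState<WriteChannel> recoverable = writer.capture();

        // ... after a failure, restore the channel and continue the same upload ...
        WriteChannel restored = recoverable.restore();
        restored.write(ByteBuffer.wrap("second chunk".getBytes(StandardCharsets.UTF_8)));

        // Closing finalizes the object in place; no rename-on-commit is needed.
        restored.close();
    }
}
```

The key property is that `capture()` returns state that survives process restarts, so the upload can be resumed from a checkpoint rather than restarted, avoiding the rename step the Hadoop-based implementation depends on.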