[ https://issues.apache.org/jira/browse/FLINK-11378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767056#comment-16767056 ]

Robert Metzger commented on FLINK-11378:
----------------------------------------

[~martijnvdgrift]: I gave you contributor permissions in our Jira (so that I 
can assign you) and assigned you, because you've opened a PR for this.

> Allow HadoopRecoverableWriter to write to Hadoop compatible Filesystems
> -----------------------------------------------------------------------
>
>                 Key: FLINK-11378
>                 URL: https://issues.apache.org/jira/browse/FLINK-11378
>             Project: Flink
>          Issue Type: Improvement
>          Components: FileSystem
>            Reporter: Martijn van de Grift
>            Assignee: Martijn van de Grift
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> At a client, we're using Flink jobs to read data from Kafka and write it to 
> GCS. In earlier versions we used `BucketingFileSink` for this, but we want 
> to switch to the newer `StreamingFileSink`.
> Since we're running Flink on Google's DataProc, we're using the Hadoop 
> compatible GCS 
> [connector|https://github.com/GoogleCloudPlatform/bigdata-interop] made by 
> Google. This currently doesn't work on Flink, because Flink checks for an 
> HDFS scheme in `HadoopRecoverableWriter`.
> We've successfully run our jobs by creating a custom Flink distro with the 
> HDFS scheme check removed.
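The scheme gate described above can be sketched as follows. This is a hedged illustration, not Flink's exact source: the method name `isSupportedScheme` and the standalone class are assumptions made for the example, but it captures the reported behavior, where any filesystem whose URI scheme is not "hdfs" (such as the GCS connector's "gs") is rejected up front:

```java
// Sketch (assumed, not Flink's exact code) of the scheme check that
// HadoopRecoverableWriter is reported to perform. A Hadoop-compatible
// filesystem such as the GCS connector registers under the "gs" scheme,
// so this check turns it away even though the filesystem may support
// the semantics the writer needs.
public class SchemeCheckSketch {

    // Returns true when the recoverable writer would accept the filesystem.
    static boolean isSupportedScheme(String scheme) {
        return "hdfs".equalsIgnoreCase(scheme);
    }

    public static void main(String[] args) {
        if (!isSupportedScheme("gs")) {
            // This is the failure path the reporter hits on DataProc with GCS.
            throw new UnsupportedOperationException(
                    "Recoverable writers on Hadoop are only supported for HDFS");
        }
    }
}
```

Removing (or relaxing) this check, as the reporter's custom distro does, lets Hadoop-compatible filesystems like GCS pass through.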



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
