[ https://issues.apache.org/jira/browse/FLINK-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669135#comment-16669135 ]

ASF GitHub Bot commented on FLINK-10203:
----------------------------------------

kl0u commented on issue #6608: [FLINK-10203]Support truncate method for old 
Hadoop versions in HadoopRecoverableFsDataOutputStream
URL: https://github.com/apache/flink/pull/6608#issuecomment-434409845
 
 
   Hi Artsem, you are correct that it is not used, but I already have a branch
   for it, and there is an open Jira for it that I have assigned to myself.
   
   On Tue, Oct 30, 2018, 17:01 Artsem Semianenka <notificati...@github.com>
   wrote:
   
   > @StephanEwen <https://github.com/StephanEwen> I really like your idea
   > regarding recoverable writer with "Recover for resume" property. I found
   > the method which you are talking about:
   > boolean supportsResume()
   > in the RecoverableWriter
   > <https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/RecoverableWriter.java>
   > interface, but as far as I can see this method is not used in the Flink
   > project.
   > Searching the whole project, I found only implementations of this
   > method, but no invocations of it:
   > https://github.com/apache/flink/search?q=supportsResume&unscoped_q=supportsResume
   >
   > —
   > You are receiving this because you are subscribed to this thread.
   > Reply to this email directly, view it on GitHub
   > <https://github.com/apache/flink/pull/6608#issuecomment-434359231>, or mute
   > the thread
   > 
<https://github.com/notifications/unsubscribe-auth/ACS1qHy06OH988Pm4lbEqpfbTgYIpNIhks5uqHflgaJpZM4WJndk>
   > .
   >
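
   For context, here is a rough sketch (not code from this PR) of how a caller
   could branch on supportsResume() when restoring, assuming the
   RecoverableWriter / RecoverableFsDataOutputStream signatures from flink-core;
   the helper class and method below are made up for illustration:

       import org.apache.flink.core.fs.Path;
       import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
       import org.apache.flink.core.fs.RecoverableWriter;

       class ResumeSketch {
           RecoverableFsDataOutputStream restore(
                   RecoverableWriter writer,
                   RecoverableWriter.ResumeRecoverable resumable,
                   Path inProgressFile) throws java.io.IOException {
               if (writer.supportsResume()) {
                   // Continue appending to the partially written file.
                   return writer.recover(resumable);
               }
               // No resume support: start a fresh in-progress file instead.
               return writer.open(inProgressFile);
           }
       }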
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support truncate method for old Hadoop versions in 
> HadoopRecoverableFsDataOutputStream
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-10203
>                 URL: https://issues.apache.org/jira/browse/FLINK-10203
>             Project: Flink
>          Issue Type: Bug
>          Components: DataStream API, filesystem-connector
>    Affects Versions: 1.6.0, 1.6.1, 1.7.0
>            Reporter: Artsem Semianenka
>            Assignee: Artsem Semianenka
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: legacy truncate logic.pdf
>
>
> The new StreamingFileSink (introduced in Flink 1.6) uses the 
> HadoopRecoverableFsDataOutputStream wrapper to write data to HDFS.
> HadoopRecoverableFsDataOutputStream wraps FSDataOutputStream to provide the 
> ability to restore from a certain point in a file after a failure and 
> continue writing data. To achieve this recovery functionality, 
> HadoopRecoverableFsDataOutputStream uses the "truncate" method, which was 
> introduced only in Hadoop 2.7.
> Unfortunately, there are a few official Hadoop distributions whose latest 
> versions still use Hadoop 2.6 (for example, Cloudera and Pivotal HD). As a 
> result, Flink's Hadoop connector can't work with these distributions.
> Flink declares that it supports Hadoop from version 2.4.0 upwards 
> ([https://ci.apache.org/projects/flink/flink-docs-release-1.6/start/building.html#hadoop-versions])
> I guess we should emulate the functionality of the "truncate" method for older 
> Hadoop versions.
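
For illustration only (this is not the logic from the attached "legacy truncate
logic.pdf" or the PR): a rough sketch of one way to emulate truncate on
Hadoop < 2.7, by detecting truncate via reflection and otherwise rewriting the
file up to the valid length. The class and method names are made up for this
sketch:

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class LegacyTruncateSketch {

        static boolean truncateAvailable(FileSystem fs) {
            try {
                // Hadoop 2.7+ exposes truncate(Path, long) on FileSystem.
                fs.getClass().getMethod("truncate", Path.class, long.class);
                return true;
            } catch (NoSuchMethodException e) {
                return false;
            }
        }

        static void truncateByCopy(FileSystem fs, Path file, long length) throws IOException {
            Path tmp = new Path(file.getParent(), file.getName() + ".truncated.tmp");
            try (FSDataInputStream in = fs.open(file);
                 FSDataOutputStream out = fs.create(tmp, true)) {
                // Copy only the valid prefix of the original file.
                byte[] buffer = new byte[4096];
                long remaining = length;
                while (remaining > 0) {
                    int read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
                    if (read < 0) {
                        break;
                    }
                    out.write(buffer, 0, read);
                    remaining -= read;
                }
            }
            // Replace the original file with the shortened copy.
            fs.delete(file, false);
            fs.rename(tmp, file);
        }
    }

A copy-and-rename fallback like this is slower than a native truncate and not
atomic, which is presumably why the actual fix needs extra care around recovery.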



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
