[ https://issues.apache.org/jira/browse/FLINK-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610310#comment-16610310 ]

ASF GitHub Bot commented on FLINK-10203:
----------------------------------------

StephanEwen commented on issue #6608: [FLINK-10203] Support truncate method for 
old Hadoop versions in HadoopRecoverableFsDataOutputStream
URL: https://github.com/apache/flink/pull/6608#issuecomment-420199851
 
 
   We were off at the Flink Forward conference last week, hence the slow 
progress on PRs...
   
   In principle, the design looks fine. To double check:
     - Does HDFS permit renaming to an already existing file name (replacing 
that existing file)?
     - If yes, this sounds reasonable. If not, this could be an issue: we would 
need to delete the original file before the rename, and a failure at that 
point would mean the file does not exist on recovery (see the sketch below).
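   
   For illustration, a minimal sketch of the two rename behaviors in question 
(class and method names are hypothetical; only the Hadoop FileSystem and 
FileContext APIs are assumed, both of which predate 2.4):
   
      import java.io.IOException;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileContext;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Options;
      import org.apache.hadoop.fs.Path;
      
      final class RenameSketch {
      
          // FileContext.rename with OVERWRITE replaces an existing destination
          // atomically, so there is no delete-before-rename window.
          static void renameOverwrite(Configuration conf, Path src, Path dst)
                  throws IOException {
              FileContext fc = FileContext.getFileContext(conf);
              fc.rename(src, dst, Options.Rename.OVERWRITE);
          }
      
          // FileSystem.rename(src, dst) on HDFS returns false when dst already
          // exists, so the fallback is delete-then-rename - with exactly the
          // failure window described above: a crash between the delete and the
          // rename means the target file does not exist on recovery.
          static void deleteThenRename(FileSystem fs, Path src, Path dst)
                  throws IOException {
              if (fs.exists(dst) && !fs.delete(dst, false)) {
                  throw new IOException("could not delete " + dst);
              }
              if (!fs.rename(src, dst)) {
                  throw new IOException("could not rename " + src + " to " + dst);
              }
          }
      }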
   
   There is another option to approach this:
     - The recoverable writer can declare whether it supports "recover for 
resume" or only "recover for commit". "Recover for resume" leads to appending 
to the started file, while only supporting "recover for commit" means that a 
new part file is started after recovery.
     - This version could declare itself as only supporting "recover for 
commit", in which case we would never have to go back to the original file 
name, but only copy from the "part in progress" file to the published file 
name, avoiding the above problem.
     - That would mean the "truncater" has to handle both the "truncate 
existing file back" logic and the "truncating rename". The legacy Hadoop 
handler would only implement the second - the truncating rename (sketched 
below).
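   
   To make the "truncating rename" concrete, here is a minimal sketch of the 
copy-based variant for the legacy handler (class and method names are 
hypothetical): it copies exactly the first `length` bytes of the in-progress 
file to the published name instead of truncating in place.
   
      import java.io.IOException;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      
      final class TruncatingRenameSketch {
      
          // Copy the first 'length' bytes of the in-progress part file to the
          // published file name, overwriting any earlier attempt, so a repeat
          // of this step after a crash is harmless.
          static void truncatingRename(
                  FileSystem fs, Path inProgress, Path published, long length)
                  throws IOException {
              byte[] buffer = new byte[4096];
              try (FSDataInputStream in = fs.open(inProgress);
                      FSDataOutputStream out = fs.create(published, true)) {
                  long remaining = length;
                  while (remaining > 0) {
                      int read = in.read(
                              buffer, 0, (int) Math.min(buffer.length, remaining));
                      if (read < 0) {
                          throw new IOException(
                                  "file shorter than expected length " + length);
                      }
                      out.write(buffer, 0, read);
                      remaining -= read;
                  }
              }
              // Best-effort cleanup; the published file is already complete.
              fs.delete(inProgress, false);
          }
      }
   
   Whether a writer offers "recover for resume" at all would then presumably 
hang off a flag like RecoverableWriter#supportsResume() (assuming that is the 
switch the interface exposes for this distinction).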

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support truncate method for old Hadoop versions in 
> HadoopRecoverableFsDataOutputStream
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-10203
>                 URL: https://issues.apache.org/jira/browse/FLINK-10203
>             Project: Flink
>          Issue Type: Bug
>          Components: DataStream API, filesystem-connector
>    Affects Versions: 1.6.0, 1.6.1, 1.7.0
>            Reporter: Artsem Semianenka
>            Assignee: Artsem Semianenka
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: legacy truncate logic.pdf
>
>
> The new StreamingFileSink (introduced in Flink 1.6) uses the 
> HadoopRecoverableFsDataOutputStream wrapper to write data to HDFS.
> HadoopRecoverableFsDataOutputStream wraps FSDataOutputStream so that, after 
> a failure, writing can be restored from a certain point in the file and 
> continued. To achieve this recovery functionality, 
> HadoopRecoverableFsDataOutputStream uses the "truncate" method, which was 
> introduced only in Hadoop 2.7.
> Unfortunately, there are a few official Hadoop distributions whose latest 
> versions still ship Hadoop 2.6 (among them Cloudera and Pivotal HD). As a 
> result, Flink's Hadoop connector can't work with these distributions, even 
> though Flink declares support for Hadoop from version 2.4.0 upwards 
> ([https://ci.apache.org/projects/flink/flink-docs-release-1.6/start/building.html#hadoop-versions]).
> I guess we should emulate the functionality of the "truncate" method for 
> older Hadoop versions.
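
For illustration, a minimal sketch of how the missing method could be detected 
at runtime, so the copy-based emulation is only used on pre-2.7 clusters 
(reflection-based probe; the class and method names are hypothetical):

    import java.lang.reflect.Method;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class TruncateProbe {

        // Returns FileSystem#truncate(Path, long) if the runtime Hadoop
        // version provides it (2.7+), or null if truncate must be emulated.
        static Method lookupTruncate(FileSystem fs) {
            try {
                return fs.getClass().getMethod("truncate", Path.class, long.class);
            } catch (NoSuchMethodException e) {
                return null;
            }
        }
    }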


