[GitHub] StephanEwen commented on issue #6608: [FLINK-10203]Support truncate method for old Hadoop versions in HadoopRecoverableFsDataOutputStream
StephanEwen commented on issue #6608:
URL: https://github.com/apache/flink/pull/6608#issuecomment-420199851

We were off at the Flink Forward conference last week, so slow progress on PRs...

In principle, the design looks fine. To double-check:

- Does HDFS permit renaming to an already existing file name (replacing that existing file)?
- If yes, this sounds reasonable. If not, this could be an issue: we would need to delete the original file before the rename, and a failure at that point would mean the file does not exist on recovery.

There is another option to approach this:

- The recoverable writer has the option to say whether it can also "recover for resume" or only "recover for commit". "Recover for resume" leads to appending to the started file, while supporting only "recover for commit" means that a new part file would be started after recovery.
- This version could declare itself as only able to "recover for commit", in which case we would never have to go back to the original file name, but only copy from the "part in progress" file to the published file name, avoiding the above problem.
- That would mean we need the "truncater" to handle both the "truncate existing file back" logic and the "truncating rename". The legacy Hadoop handler would only implement the second, the truncating rename (sketched below).
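To make the "recover for commit"-only variant concrete, here is a minimal sketch of what a legacy truncating rename could look like against the plain Hadoop `FileSystem` API. The names `LegacyTruncater` and `truncatingRename` are illustrative, not the API of this PR, and the sketch assumes the checkpointed byte length of the in-progress file is known at recovery time:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch only: publishes the first `length` bytes of the
// in-progress file under the target path, emulating "truncate + rename"
// on Hadoop versions without FileSystem#truncate (pre-2.7).
class LegacyTruncater {

    static void truncatingRename(FileSystem fs, Path inProgress, Path target, long length)
            throws IOException {
        // Creating the target with overwrite=true keeps the operation
        // idempotent: a repeated recovery call simply rewrites the target.
        try (FSDataInputStream in = fs.open(inProgress);
                FSDataOutputStream out = fs.create(target, true)) {
            byte[] buffer = new byte[64 * 1024];
            long remaining = length;
            while (remaining > 0) {
                int read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
                if (read < 0) {
                    throw new IOException(
                            "In-progress file is shorter than the recovered length " + length);
                }
                out.write(buffer, 0, read);
                remaining -= read;
            }
        }
        // Delete the in-progress file only after the copy succeeded; a crash
        // before this point leaves it in place for the next recovery attempt.
        fs.delete(inProgress, false);
    }
}
```

Since the copy writes the published file name directly with overwrite semantics, this path sidesteps the question of whether a plain HDFS rename can replace an existing file.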
[GitHub] StephanEwen commented on issue #6608: [FLINK-10203]Support truncate method for old Hadoop versions in HadoopRecoverableFsDataOutputStream
StephanEwen commented on issue #6608:
URL: https://github.com/apache/flink/pull/6608#issuecomment-415507439

I see, that is a fair reason. There are parallel efforts to add Parquet support to the old BucketingSink, but I see the point.

Before going into a deep review, can you update the description with how exactly the legacy truncater should work: what copy and rename steps it performs, and how it behaves under failure / repeated calls?

Also, I would suggest naming it `Truncater` rather than `TruncateManager`. Too many managers all around already ;-)
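For illustration, the suggested `Truncater` could take roughly the following shape. This is a hypothetical sketch, not taken from the PR, with the method set mirroring the two operations discussed in the newer comment above:

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;

// Hypothetical interface shape under the suggested `Truncater` name.
// A Hadoop 2.7+ implementation could support both operations natively,
// while a legacy implementation would only need truncatingRename
// (e.g. via a bounded copy, as sketched earlier).
interface Truncater {

    /** Truncates an existing file back to the given length. */
    void truncate(Path file, long length) throws IOException;

    /** Publishes the first {@code length} bytes of {@code inProgress} under {@code target}. */
    void truncatingRename(Path inProgress, Path target, long length) throws IOException;
}
```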
[GitHub] StephanEwen commented on issue #6608: [FLINK-10203]Support truncate method for old Hadoop versions in HadoopRecoverableFsDataOutputStream
StephanEwen commented on issue #6608:
URL: https://github.com/apache/flink/pull/6608#issuecomment-415451715

@art4ul Initially, we wanted to keep the new StreamingFileSink code simple and clean, which is why we decided to support only Hadoop 2.7+ for HDFS and retained the old BucketingSink to support prior Hadoop versions. Is it a critical issue on your side that you cannot use the BucketingSink?