Enzo90910 opened a new issue, #22150:
URL: https://github.com/apache/beam/issues/22150

   ### What happened?
   
   I've been trying to use hadoopfilesystem.HadoopFileSystem to write TFX 
artifacts to HDFS.  When the file I try to write is smaller than 20 MB, 
everything is fine. But when it is larger than 20 MB (the buffer size Beam 
configures for Python's BufferedWriter), the written file is corrupted without 
any error being raised.
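   Roughly what I'm doing, as a minimal sketch (the host, port, user, and 
path below are placeholders, not my real cluster settings):

```python
# Rough repro sketch; host, port, user, and path are placeholders, not real
# cluster values.
from apache_beam.io.hadoopfilesystem import HadoopFileSystem
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    hdfs_host='namenode.example.com', hdfs_port=9870, hdfs_user='someuser')
fs = HadoopFileSystem(options)

payload = b'x' * (25 * 1024 * 1024)  # anything comfortably larger than 20 MB

with fs.create('hdfs://tmp/large_artifact.bin') as handle:
    handle.write(payload)

# No exception is raised, but reading the file back shows it does not match
# what was written.
```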
   I believe it is because HdfsUploader uses HdfsCLI (and through it, the 
WebHDFS API) as if one could open a file and then stream writes to it, as is 
usually the case for filesystems, but that doesn't seem to be the case for 
WebHDFS/HdfsCLI: the file must be written all at once, with exactly one PUT 
request.
   - When the whole file is smaller than the BufferedWriter's buffer, it gets 
written all at once when the BufferedWriter is closed, and everything works.
   - When the file is larger than the buffer, several write calls are made to 
the hdfs.InsecureClient for the *same* file, and the result is corrupted (see 
the sketch below).
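   A simplified stand-in for the interaction I think is happening (this is 
not the actual Beam code from hadoopfilesystem.py/filesystemio.py, just an 
illustration of the per-chunk-request shape):

```python
import io

class UploaderStreamSketch(io.RawIOBase):
    """Simplified stand-in for filesystemio.UploaderStream + HdfsUploader."""

    def __init__(self, client, path):
        self._client = client  # hdfs.InsecureClient
        self._path = path

    def writable(self):
        return True

    def write(self, data):
        # Every flush of the wrapping BufferedWriter lands here as a separate
        # call, i.e. a separate WebHDFS request targeting the same file.
        # (Simplified; the real uploader's call may differ, but each chunk
        # still becomes its own request.)
        self._client.write(self._path, data=bytes(data), overwrite=True)
        return len(data)

# writer = io.BufferedWriter(UploaderStreamSketch(client, '/tmp/out.bin'),
#                            buffer_size=20 * 1024 * 1024)
# A file below 20 MB is flushed exactly once on close() and comes out fine;
# a larger file triggers several write() calls against the same path, which
# WebHDFS does not treat as a continuation of the previous request.
```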
   
   If I am right, this will be rather difficult to fix, since Beam's writing 
API requires being able to stream chunks to a file (roughly the uploader 
interface sketched below), and WebHDFS doesn't seem to allow that.
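   For context, this is roughly the interface a Beam filesystem uploader has 
to implement, paraphrased from memory of apache_beam.io.filesystemio (please 
correct me if I have the details wrong): put() can be called repeatedly with 
successive chunks, so the backend has to support incremental writes.

```python
# Paraphrased sketch of the uploader interface, not a verbatim copy.
class Uploader(object):
    """Upload interface for a single file."""

    def put(self, data):
        """Uploads the next chunk of data; may be called many times."""
        raise NotImplementedError

    def finish(self):
        """Flushes any remaining data and closes the file."""
        raise NotImplementedError
```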
   
   ### Issue Priority
   
   Priority: 1
   
   ### Issue Component
   
   Component: io-py-hadoop

