[ https://issues.apache.org/jira/browse/BEAM-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377125#comment-16377125 ]
Ismaël Mejía commented on BEAM-3649:
------------------------------------

Thanks for confirming. If you want, you can also try the new native S3 support via the Beam FileSystem, which was merged in 2.3.0.

> HadoopSeekableByteChannel breaks when backing InputStream doesn't support ByteBuffers
> --------------------------------------------------------------------------------------
>
>                 Key: BEAM-3649
>                 URL: https://issues.apache.org/jira/browse/BEAM-3649
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-hadoop
>    Affects Versions: 2.0.0, 2.1.0, 2.2.0
>            Reporter: Guillaume Balaine
>            Priority: Minor
>
> This happened last summer, when I wanted to use S3A as the backing HDFS access implementation.
> The problem is that this method gets called:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java#L145]
> but this class does not implement ByteBuffer reads:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> I fixed it by manually incrementing the read position and copying the backing array instead of buffering.
> [https://github.com/Igosuki/beam/commit/3838f0db43b6422833a045d1f097f6d7643219f1]
> I know the S3 direct implementation is the preferred path, but this failure mode is possible and likely affects a lot of developers.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
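The workaround described in the report (reading through a plain byte array and copying into the caller's ByteBuffer, rather than relying on the stream's ByteBuffer read path) can be sketched as below. This is a minimal illustration with hypothetical names, not the actual Beam patch; it uses only `java.io`/`java.nio` so it runs without Hadoop on the classpath.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class ByteBufferFallbackRead {
    // Hypothetical fallback: when the underlying stream does not support
    // reading directly into a ByteBuffer (as with S3AInputStream), read
    // into an intermediate byte[] and copy it into the destination buffer,
    // which advances the buffer's position by the number of bytes read.
    static int readIntoBuffer(InputStream in, ByteBuffer dst) throws IOException {
        byte[] tmp = new byte[dst.remaining()];
        int n = in.read(tmp, 0, tmp.length);
        if (n > 0) {
            dst.put(tmp, 0, n);
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello beam".getBytes());
        ByteBuffer buf = ByteBuffer.allocate(5);
        int n = readIntoBuffer(in, buf);
        buf.flip();
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        System.out.println(n + ":" + new String(out));
    }
}
```

The cost relative to a true `ByteBufferReadable` path is one extra copy per read, which is why Beam's native S3 filesystem (or a stream that implements the ByteBuffer interface) is preferable when available.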