ahmarsuhail commented on code in PR #7763:
URL: https://github.com/apache/hadoop/pull/7763#discussion_r2177571796
########## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java: ##########

@@ -194,4 +215,127 @@ public void testInvalidConfigurationThrows() throws Exception {
         () -> S3SeekableInputStreamConfiguration.fromConfiguration(connectorConfiguration));
   }
+  @Test
+  public void testLargeFileMultipleGets() throws Throwable {
+    describe("Large file should trigger multiple GET requests");
+
+    Path dest = writeThenReadFile("large-test-file.txt", 10 * 1024 * 1024); // 10MB
+
+
+    try (FSDataInputStream inputStream = getFileSystem().open(dest)) {
+      IOStatistics ioStats = inputStream.getIOStatistics();
+      inputStream.readFully(new byte[(int) getFileSystem().getFileStatus(dest).getLen()]);

Review Comment:
   You've just created a 10MB file in the previous step, so you don't need to use getFileStatus to get the size again. Create a 10MB buffer before line 227, `byte[] buffer = new byte[S_1M * 10];`, and then use that to read the whole file.
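A minimal sketch of how the suggested change could look, as a method body inside the existing test class. It assumes `S_1M` (1024 * 1024) is visible to the test as the comment implies, and that the fixture helpers shown in the diff (`describe`, `writeThenReadFile`, `getFileSystem`) behave as above; it is an illustration of the suggestion, not a drop-in patch:

```java
@Test
public void testLargeFileMultipleGets() throws Throwable {
  describe("Large file should trigger multiple GET requests");

  // The file size is fixed up front at 10MB, so the same constant can size the
  // read buffer later; no second lookup of the file length is needed.
  Path dest = writeThenReadFile("large-test-file.txt", 10 * 1024 * 1024); // 10MB

  try (FSDataInputStream inputStream = getFileSystem().open(dest)) {
    IOStatistics ioStats = inputStream.getIOStatistics();

    // Buffer sized from the known file length, per the review suggestion;
    // reads the whole object without calling getFileStatus() again.
    byte[] buffer = new byte[S_1M * 10];
    inputStream.readFully(buffer);

    // Assertions on ioStats (e.g. the GET request count) would follow here,
    // as in the rest of the new test.
  }
}
```

Sizing the buffer from the length used to create the file avoids the extra `getFileStatus()` call (typically another request against the store), keeping the statistics the test asserts on focused on the stream's own GET requests.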