ahmarsuhail commented on code in PR #7763:
URL: https://github.com/apache/hadoop/pull/7763#discussion_r2177579811


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AAnalyticsAcceleratorStreamReading.java:
##########
@@ -194,4 +215,127 @@ public void testInvalidConfigurationThrows() throws Exception {
         () -> S3SeekableInputStreamConfiguration.fromConfiguration(connectorConfiguration));
   }
 
+  @Test
+  public void testLargeFileMultipleGets() throws Throwable {
+    describe("Large file should trigger multiple GET requests");
+
+    Path dest = writeThenReadFile("large-test-file.txt", 10 * 1024 * 1024); // 10MB
+
+
+    try (FSDataInputStream inputStream = getFileSystem().open(dest)) {
+      IOStatistics ioStats = inputStream.getIOStatistics();
+      inputStream.readFully(new byte[(int) getFileSystem().getFileStatus(dest).getLen()]);
+
+      verifyStatisticCounterValue(ioStats, STREAM_READ_ANALYTICS_GET_REQUESTS, 2);
+    }
+  }
+
+  @Test
+  public void testSmallFileSingleGet() throws Throwable {
+    describe("Small file should trigger only one GET request");
+
+    Path dest = writeThenReadFile("small-test-file.txt", 1 * 1024 * 1024); // 1MB

Review Comment:
   Make the file bigger, say 6MB.
   
   Then do:
   
   in.read(2MB)
   in.seek(4MB)
   in.read(2MB)
   
   You're verifying here that AAL has downloaded the whole file, so even if you read a little bit, then seek to another position and read some more, it doesn't trigger a new GET.
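   
   A minimal sketch of that suggested test, for illustration only: the test name, the 6MB file and 2MB read sizes, and the expected counter value of 1 are assumptions (a 6MB object is assumed to be satisfied by a single GET under the default AAL part size), while describe, writeThenReadFile, verifyStatisticCounterValue and STREAM_READ_ANALYTICS_GET_REQUESTS come from the existing test class.
   
   @Test
   public void testSeekWithinDownloadedFileSingleGet() throws Throwable {
     describe("Seek within an already-downloaded file should not trigger a new GET");
   
     // hypothetical 6MB file, as suggested above
     Path dest = writeThenReadFile("seek-test-file.txt", 6 * 1024 * 1024);
   
     byte[] buffer = new byte[2 * 1024 * 1024];
     try (FSDataInputStream inputStream = getFileSystem().open(dest)) {
       IOStatistics ioStats = inputStream.getIOStatistics();
   
       // read the first 2MB, seek to the 4MB offset, then read another 2MB
       inputStream.readFully(buffer);
       inputStream.seek(4 * 1024 * 1024);
       inputStream.readFully(buffer);
   
       // assumption: the whole 6MB object was fetched by a single GET, so the
       // seek and second read are served from already-downloaded data
       verifyStatisticCounterValue(ioStats, STREAM_READ_ANALYTICS_GET_REQUESTS, 1);
     }
   }
   
   Using readFully keeps the read lengths exact, so the GET-counter assertion isn't affected by short reads.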
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
