steveloughran commented on code in PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#discussion_r1175384535


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java:
##########
@@ -79,6 +80,42 @@ public void testRegularUpload() throws IOException {
     verifyUpload("regular", 1024);
   }
 
+  /**
+   * Test that the DiskBlock's local file does not result in an error when the
+   * S3 key exceeds the maximum filename length of the local filesystem.
+   * Currently {@link java.io.File#createTempFile(String, String, File)} is
+   * relied on to handle the truncation.
+   * @throws IOException on any failure
+   */
+  @Test
+  public void testDiskBlockCreate() throws IOException {
+    S3ADataBlocks.BlockFactory diskBlockFactory =

Review Comment:
   use try-with-resources here, even though I doubt this is at much risk of leaking anything
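
   A minimal sketch of that suggestion, assuming `S3ADataBlocks.BlockFactory` and `S3ADataBlocks.DataBlock` are both `Closeable` (worth confirming against the current class signatures before adopting), with `s3Key` built as in the test body below:

   ```java
   // sketch only: try-with-resources closes both the factory and the block,
   // in reverse declaration order, even if the body throws
   try (S3ADataBlocks.BlockFactory diskBlockFactory =
            new S3ADataBlocks.DiskBlockFactory(getFileSystem());
        S3ADataBlocks.DataBlock dataBlock =
            diskBlockFactory.create("spanId", s3Key, 1,
                getFileSystem().getDefaultBlockSize(), null)) {
     LOG.info(dataBlock.toString());
   }
   ```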



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java:
##########
@@ -79,6 +80,42 @@ public void testRegularUpload() throws IOException {
     verifyUpload("regular", 1024);
   }
 
+  /**
+   * Test that the DiskBlock's local file does not result in an error when the
+   * S3 key exceeds the maximum filename length of the local filesystem.
+   * Currently {@link java.io.File#createTempFile(String, String, File)} is
+   * relied on to handle the truncation.
+   * @throws IOException on any failure
+   */
+  @Test
+  public void testDiskBlockCreate() throws IOException {
+    S3ADataBlocks.BlockFactory diskBlockFactory =
+      new S3ADataBlocks.DiskBlockFactory(getFileSystem());
+    String s3Key = // 1024 char
+      "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key__very_long_s3_key__very_long_s3_key__very_long_s3_key__" +
+        "very_long_s3_key";
+    S3ADataBlocks.DataBlock dataBlock = diskBlockFactory.create("spanId", s3Key, 1,
+      getFileSystem().getDefaultBlockSize(), null);
+    LOG.info(dataBlock.toString()); // block file name and location can be viewed in failsafe-report
+
+    // delete the block file
+    dataBlock.innerClose();

Review Comment:
   are there any more asserts to add here, e.g. that the file exists before the close and no longer exists afterwards?
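
   One possible shape for those assertions, as an AssertJ sketch (`org.assertj.core.api.Assertions`, `java.io.File`); the `getBlockFile()` accessor used here is hypothetical, so `DiskBlock` may need a package-private getter to expose its backing file:

   ```java
   // hypothetical: getBlockFile() does not exist on DataBlock today
   File blockFile = dataBlock.getBlockFile();
   Assertions.assertThat(blockFile)
       .describedAs("local block file for oversized key")
       .exists();

   // delete the block file
   dataBlock.innerClose();

   Assertions.assertThat(blockFile)
       .describedAs("local block file after close")
       .doesNotExist();
   ```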



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
