[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17639591#comment-17639591 ]
ASF GitHub Bot commented on HADOOP-18399:
-----------------------------------------

virajjasani commented on code in PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#discussion_r1032874644


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########

@@ -294,4 +294,34 @@ public void testStatusProbesAfterClosingStream() throws Throwable {

   }

+  @Test
+  public void testCacheFileExistence() throws Throwable {

Review Comment:
   Sorry, I had to remove this change in the latest revision. The problem could arise during file cleanup: we would not know which exact cache file was created by this test. If another test running in parallel (with the -Dprefetch option) also creates a new cache file, this test could delete that file during cleanup and break the parallel test run.


> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> --------------------------------------------------------------------
>
>                 Key: HADOOP-18399
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18399
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> The prefetching stream's SingleFilePerBlockCache uses Files.createTempFile() to allocate a temp file.
>
> It should use LocalDirAllocator to allocate space from a list of directories, taking a config key to use. For S3A we will use the Constants.BUFFER_DIR option, which on YARN deployments is fixed under the env.LOCAL_DIR path, so it is automatically cleaned up on container exit.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
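For context, the allocation pattern the issue asks for (pick a directory from a configured list, such as Constants.BUFFER_DIR for S3A, that has enough free space, and create the cache file there rather than in the default temp dir) can be sketched roughly as follows. This is a minimal, self-contained illustration only; the class name `BufferDirAllocatorSketch` and the round-robin strategy are assumptions for the sketch, not the real Hadoop `LocalDirAllocator` implementation or its API.

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

/**
 * Illustrative sketch (NOT Hadoop's LocalDirAllocator): allocate a temp
 * cache file in one of a configured list of buffer directories that has
 * enough usable space, instead of using the JVM default temp directory.
 */
public class BufferDirAllocatorSketch {

  private final List<File> bufferDirs;
  private int nextDir = 0; // simple round-robin over candidate dirs

  public BufferDirAllocatorSketch(List<File> bufferDirs) {
    this.bufferDirs = bufferDirs;
  }

  /** Create a temp file of at least {@code size} bytes capacity in a suitable dir. */
  public File createTmpFileForWrite(String prefix, long size) throws IOException {
    for (int i = 0; i < bufferDirs.size(); i++) {
      File dir = bufferDirs.get((nextDir + i) % bufferDirs.size());
      if (dir.isDirectory() && dir.getUsableSpace() >= size) {
        nextDir = (nextDir + i + 1) % bufferDirs.size();
        // create the block cache file inside the chosen buffer dir
        return File.createTempFile(prefix, ".bin", dir);
      }
    }
    throw new IOException("No buffer directory with " + size + " bytes free");
  }

  public static void main(String[] args) throws IOException {
    // demo: one buffer dir under java.io.tmpdir
    File dir = new File(System.getProperty("java.io.tmpdir"), "buffer-dir-demo");
    dir.mkdirs();
    BufferDirAllocatorSketch alloc = new BufferDirAllocatorSketch(List.of(dir));
    File cacheFile = alloc.createTmpFileForWrite("fs-cache-", 1024);
    // the file was created inside the configured buffer dir, not the default temp dir
    System.out.println(cacheFile.getParentFile().equals(dir));
    cacheFile.delete();
  }
}
```

On YARN, pointing such a list at directories under the container's local dirs (as BUFFER_DIR is) means the node manager removes any leftover cache files on container exit, which is the cleanup benefit the issue description mentions.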