dlmarion commented on PR #3955: URL: https://github.com/apache/accumulo/pull/3955#issuecomment-1817093641
> With something that is not a "directory per table" - do we need to be concerned with the absolute number of files that may end up there? Basically running out of inodes or the equivalent? And / or are there performance considerations with large numbers of files in a single directory?

In the case where the value of the tmp dir is `file:///`, I don't think so. The Compactor writes one file for the current compaction and cleans it up before starting the next compaction, so the directory holds at most one file at a time. I don't know if there is an inode limitation for HDFS or S3.
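To illustrate, here is a minimal sketch (not Accumulo code; class and file names are hypothetical) of the lifecycle described above: one temp file exists during each compaction and is removed before the next one begins, so the tmp directory never accumulates files.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class TmpFileLifecycleSketch {
    public static void main(String[] args) throws IOException {
        Path tmpDir = Files.createTempDirectory("compactor-tmp");
        for (int i = 0; i < 3; i++) {
            // One temp file per compaction (the name is illustrative).
            Path tmp = tmpDir.resolve("compaction-" + i + ".tmp");
            try {
                Files.write(tmp, ("compaction " + i).getBytes());
                // ... the actual compaction work would happen here ...
            } finally {
                // Clean up before the next compaction starts.
                Files.deleteIfExists(tmp);
            }
        }
        // The directory is empty once each compaction has cleaned up.
        try (Stream<Path> files = Files.list(tmpDir)) {
            System.out.println(files.count()); // prints 0
        }
    }
}
```

Under this pattern the file count stays bounded regardless of how many compactions run, which is why inode exhaustion isn't a concern for a local `file:///` tmp dir.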
