[ https://issues.apache.org/jira/browse/HADOOP-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13019688#comment-13019688 ]
John Carrino commented on HADOOP-7225:
--------------------------------------

I think this may need to be filed under HDFS instead of Hadoop Common.

> DataNode.setNewStorageID pulls entropy from /dev/random
> -------------------------------------------------------
>
>                 Key: HADOOP-7225
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7225
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.17.0
>         Environment: linux
>            Reporter: John Carrino
>            Priority: Minor
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> DataNode.setNewStorageID uses SecureRandom.getInstance("SHA1PRNG"), which
> always pulls fresh entropy.
> It wouldn't be so bad if this were only the 120 bits needed by SHA-1, but the
> default implementation of SecureRandom actually wraps a BufferedInputStream
> around /dev/random and pulls 1024 bits of entropy for this one call.
> If you are on a system without much entropy coming in, this call can block,
> and block others.
> Can we just change this to use "new
> SecureRandom().nextInt(Integer.MAX_VALUE)" instead?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
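As a minimal sketch of the proposed change: using the no-argument SecureRandom constructor lets the JVM pick its platform default (typically a non-blocking NativePRNG backed by /dev/urandom on Linux), rather than forcing SHA1PRNG to self-seed from /dev/random. The helper name below is hypothetical, standing in for the ID-generation step inside DataNode.setNewStorageID; exact PRNG selection depends on the JRE's java.security configuration.

```java
import java.security.SecureRandom;

public class StorageIdSketch {
    // Hypothetical stand-in for the random component of a storage ID.
    // new SecureRandom() defers to the platform default provider, which on
    // most Linux JREs seeds non-blockingly from /dev/urandom, avoiding the
    // blocking /dev/random read that explicit SHA1PRNG self-seeding can cause.
    static int newStorageIdComponent() {
        return new SecureRandom().nextInt(Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        int id = newStorageIdComponent();
        // nextInt(bound) returns a value in [0, bound), so this is always true.
        System.out.println(id >= 0 && id < Integer.MAX_VALUE);
    }
}
```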