[ https://issues.apache.org/jira/browse/HDFS-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13021968#comment-13021968 ]
John Carrino commented on HDFS-1835:
------------------------------------

I have a test that can go into the org/apache/hadoop/hdfs/server/datanode package:

public void testRate() throws Exception {
  DatanodeRegistration reg = new DatanodeRegistration();
  reg.setName("name");
  long startTime = System.currentTimeMillis();
  long count = 0;
  // Hammer setNewStorageID for 20ms. If each call blocks on /dev/random,
  // far fewer than 10 iterations complete and the assertion below fails.
  while (System.currentTimeMillis() - startTime < 20) {
    DataNode.setNewStorageID(reg);
    count++;
  }
  assertTrue("count was: " + count, count > 10);
}

> DataNode.setNewStorageID pulls entropy from /dev/random
> -------------------------------------------------------
>
>                 Key: HDFS-1835
>                 URL: https://issues.apache.org/jira/browse/HDFS-1835
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.2
>            Reporter: John Carrino
>             Fix For: 0.22.0
>
>         Attachments: DataNode.patch
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> DataNode.setNewStorageID uses SecureRandom.getInstance("SHA1PRNG"), which always pulls fresh entropy to seed itself.
> It wouldn't be so bad if this were only the 160 bits needed by SHA-1, but the default seeding implementation of SecureRandom actually uses a BufferedInputStream around /dev/random and pulls 1024 bits of entropy for this one call.
> If you are on a system without much entropy coming in, this call can block, and block other threads as well.
> Can we just change this to use "new SecureRandom().nextInt(Integer.MAX_VALUE)" instead?
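[Editor's note] For illustration only (not part of the original report): a minimal, self-contained sketch contrasting the two constructions discussed above. The class and method names here are hypothetical. On a typical Linux JDK of this era, the explicit SHA1PRNG instance self-seeds from /dev/random and can block, while the platform-default SecureRandom (usually NativePRNG) draws its output bytes from the non-blocking /dev/urandom.

import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class StorageIdEntropySketch {

  // Current approach per the report: an explicit SHA1PRNG instance.
  // Its first nextInt() triggers self-seeding, which on Linux reads
  // /dev/random through a BufferedInputStream and can block until the
  // kernel has gathered enough entropy.
  static int randBlocking() throws NoSuchAlgorithmException {
    return SecureRandom.getInstance("SHA1PRNG").nextInt(Integer.MAX_VALUE);
  }

  // Proposed approach: the platform-default SecureRandom. On Linux this
  // is typically NativePRNG, whose nextBytes() reads the non-blocking
  // /dev/urandom, so repeated calls do not stall waiting for entropy.
  static int randNonBlocking() {
    return new SecureRandom().nextInt(Integer.MAX_VALUE);
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    System.out.println("default SecureRandom: " + randNonBlocking());
    System.out.println("SHA1PRNG: " + randBlocking()); // may block on a low-entropy system
  }
}

Both calls return a uniformly distributed int in [0, Integer.MAX_VALUE); the difference is only where the seed bytes come from, which is why the proposed one-line change is a plausible fix for a storage ID that needs uniqueness rather than cryptographic strength.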