[ https://issues.apache.org/jira/browse/HDFS-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13106993#comment-13106993 ]

John Carrino commented on HDFS-1835:
------------------------------------

Even just running 'cat /dev/urandom > /dev/null' will eat entropy.  urandom 
prefers fresh entropy but doesn't require it, so it never blocks; it still 
draws down the same kernel pool, though, so heavy use of /dev/urandom can push 
the entropy estimate low enough that a later read from /dev/random blocks.
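
To see the effect, here is a quick sketch you can run on a Linux box (plain 
Java, nothing HDFS-specific; the /proc path and the pool-draining behaviour 
are assumptions about typical kernels of this era, so treat the numbers as 
illustrative):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FileReader;
    import java.io.IOException;

    public class EntropyDrain {

        // Kernel's current entropy estimate, in bits (Linux-specific proc file).
        private static int entropyAvail() throws IOException {
            BufferedReader r = new BufferedReader(
                new FileReader("/proc/sys/kernel/random/entropy_avail"));
            try {
                return Integer.parseInt(r.readLine().trim());
            } finally {
                r.close();
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println("entropy before: " + entropyAvail() + " bits");
            byte[] buf = new byte[64 * 1024];
            FileInputStream urandom = new FileInputStream("/dev/urandom");
            try {
                // A bounded stand-in for 'cat /dev/urandom > /dev/null'.
                for (int i = 0; i < 100; i++) {
                    urandom.read(buf);
                }
            } finally {
                urandom.close();
            }
            System.out.println("entropy after:  " + entropyAvail() + " bits");
        }
    }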

> DataNode.setNewStorageID pulls entropy from /dev/random
> -------------------------------------------------------
>
>                 Key: HDFS-1835
>                 URL: https://issues.apache.org/jira/browse/HDFS-1835
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.2
>            Reporter: John Carrino
>            Assignee: John Carrino
>             Fix For: 0.23.0
>
>         Attachments: DataNode.patch, hdfs-1835.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> DataNode.setNewStorageID uses SecureRandom.getInstance("SHA1PRNG") which 
> always pulls fresh entropy.
> It wouldn't be so bad if this were only the 160 bits needed by SHA-1, but the 
> default implementation of SecureRandom actually wraps /dev/random in a 
> BufferedInputStream and pulls 1024 bits of entropy for this one call.
> On a system without much entropy coming in, this call can block, and block 
> other readers of /dev/random as well.
> Can we just change this to use "new 
> SecureRandom().nextInt(Integer.MAX_VALUE)" instead?
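
For clarity, a minimal sketch of what that swap might look like; the class 
name, storage-ID format and surrounding details below are invented for 
illustration and are not the actual DataNode code:

    import java.security.SecureRandom;

    public class StorageIdSketch {

        // Before (per the description above): SecureRandom.getInstance("SHA1PRNG")
        // seeds itself with 1024 bits read from /dev/random and can block.
        //
        //   int rand = SecureRandom.getInstance("SHA1PRNG").nextInt(Integer.MAX_VALUE);

        // After (the suggestion above): let the platform default SecureRandom
        // handle seeding instead of explicitly requesting a SHA1PRNG instance.
        static String newStorageID(String ip, int port) {
            int rand = new SecureRandom().nextInt(Integer.MAX_VALUE);
            return "DS-" + rand + "-" + ip + "-" + port + "-"
                + System.currentTimeMillis();
        }

        public static void main(String[] args) {
            System.out.println(newStorageID("127.0.0.1", 50010));
        }
    }

The point, as the description puts it, is simply that the default SecureRandom 
does not force the 1024-bit /dev/random read that explicitly requesting 
SHA1PRNG does.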
