[ https://issues.apache.org/jira/browse/HDFS-1835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13021681#comment-13021681 ]

John Carrino commented on HDFS-1835:
------------------------------------

The difference between my patch and the original behavior exists only on the 
Linux version of the JRE; on Windows the two behave the same.

The original behavior was to seed a SHA1PRNG with randomness from /dev/random; 
the new behavior is simply to read from /dev/urandom. Writing a test that can 
tell these two cases apart would be quite difficult.
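
For concreteness, here is a minimal sketch of the two code paths as I
understand them (illustrative only, not the actual patch):

    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;

    public class RngPathsSketch {
        public static void main(String[] args) throws NoSuchAlgorithmException {
            // Original path: SHA1PRNG self-seeds on first use, and on the
            // Linux JRE the seed bytes come from /dev/random, which blocks
            // whenever the kernel entropy pool runs low.
            SecureRandom original = SecureRandom.getInstance("SHA1PRNG");
            int a = original.nextInt(Integer.MAX_VALUE);

            // Patched path: the platform default on Linux is NativePRNG,
            // whose nextInt()/nextBytes() read /dev/urandom and do not block.
            SecureRandom patched = new SecureRandom();
            int b = patched.nextInt(Integer.MAX_VALUE);

            System.out.println(a + " vs " + b);
        }
    }

From the outside both calls just return a random int, which is why a
black-box test can't easily tell them apart; the difference only shows up
in whether the call can block.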

I think the best approach would be to turn off all entropy generation on 
Linux, consume whatever entropy remains in the pool, and then run 
DataNode.setNewStorageID to make sure it doesn't block, as sketched below.
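
Something along these lines (a rough sketch, assuming entropy regeneration
has already been disabled on the test machine; the DataNode call itself is
elided since its exact signature depends on the version):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.FileReader;
    import java.io.IOException;

    public class EntropyDrainSketch {
        public static void main(String[] args) throws Exception {
            // Drain the kernel pool by reading /dev/random. Checking
            // entropy_avail between reads keeps the loop from blocking
            // indefinitely once the pool is nearly empty.
            FileInputStream random = new FileInputStream("/dev/random");
            try {
                byte[] sink = new byte[16];
                while (entropyAvail() > 128) {
                    random.read(sink);
                }
            } finally {
                random.close();
            }

            long start = System.nanoTime();
            // DataNode.setNewStorageID(...);  // the call under test
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            if (elapsedMs > 1000) {
                throw new AssertionError("blocked for " + elapsedMs + "ms");
            }
        }

        // The kernel reports its current entropy estimate, in bits, here.
        private static int entropyAvail() throws IOException {
            BufferedReader r = new BufferedReader(
                new FileReader("/proc/sys/kernel/random/entropy_avail"));
            try {
                return Integer.parseInt(r.readLine().trim());
            } finally {
                r.close();
            }
        }
    }

With the old code the timed call would hang inside SHA1PRNG seeding; with
the patch it should return immediately.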

> DataNode.setNewStorageID pulls entropy from /dev/random
> -------------------------------------------------------
>
>                 Key: HDFS-1835
>                 URL: https://issues.apache.org/jira/browse/HDFS-1835
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.2
>            Reporter: John Carrino
>             Fix For: 0.22.0
>
>         Attachments: DataNode.patch
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> DataNode.setNewStorageID uses SecureRandom.getInstance("SHA1PRNG"), which 
> always pulls fresh entropy.
> It wouldn't be so bad if this were only the 160 bits needed by SHA-1, but the 
> default implementation of SecureRandom actually wraps a BufferedInputStream 
> around /dev/random and pulls 1024 bits of entropy for this one call.
> If you are on a system without much entropy coming in, this call can block, 
> and block other consumers of /dev/random as well.
> Can we just change this to use "new 
> SecureRandom().nextInt(Integer.MAX_VALUE)" instead?
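
For reference, a hedged sketch of what the proposed change would look like
in context (the surrounding storage-ID formatting is an assumption made for
illustration, not the actual DataNode code):

    import java.security.SecureRandom;

    public class StorageIdSketch {
        static String newStorageID(String ip, int port) {
            // new SecureRandom() picks the platform default provider; on
            // Linux that is NativePRNG, which serves nextInt() from
            // /dev/urandom and so cannot block the way SHA1PRNG's
            // /dev/random seeding can.
            int rand = new SecureRandom().nextInt(Integer.MAX_VALUE);
            // Hypothetical ID shape, assumed only for this sketch.
            return "DS-" + rand + "-" + ip + "-" + port + "-"
                + System.currentTimeMillis();
        }
    }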
