[ https://issues.apache.org/jira/browse/HDFS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13532901#comment-13532901 ]

Andy Isaacson commented on HDFS-4253:
-------------------------------------

bq. I don't see any reason why shuffle(a) could not be equal to shuffle(b), for 
two completely unrelated DatanodeIDs a and b.

That's true, equality is possible, but it's very unlikely: we're choosing N 
items (where N is the replication count of a block, so nearly always 3, 
sometimes 10, possibly as absurdly high as 50) from the range of 
{{Random#nextInt}}, which is about 2**32.  The algorithm does something 
reasonable when the shuffle has a collision (it puts the items in some order, 
stable or not, and either result is fine for the rest of the algorithm).  It 
would be possible to eliminate collisions entirely, but I don't know how to do 
that quickly with minimal code.  So the current implementation seemed to strike 
a nice balance between the desired behavior, efficient and easily 
understandable code, and low algorithmic complexity.
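To make the scheme concrete, here is a minimal sketch of the random-key shuffle 
described above (the class and method names are hypothetical, not the actual 
HDFS patch): each item is tagged with an independent {{Random#nextInt}} key and 
the items are sorted by that key; a key collision merely leaves two items in an 
arbitrary relative order, which is harmless here.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class RandomKeyShuffle {
    // Shuffle items by assigning each a random int key and sorting by it.
    // If two keys collide, those items end up in an arbitrary relative
    // order -- acceptable, since any resulting order is fine downstream.
    static <T> void shuffleByRandomKey(T[] items, Random rnd) {
        int n = items.length;
        int[] keys = new int[n];
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) {
            keys[i] = rnd.nextInt();   // key drawn from ~2**32 values
            order[i] = i;
        }
        Arrays.sort(order, Comparator.comparingInt(i -> keys[i]));
        T[] snapshot = items.clone();
        for (int i = 0; i < n; i++) {
            items[i] = snapshot[order[i]];
        }
    }

    public static void main(String[] args) {
        String[] replicas = {"dn1", "dn2", "dn3"};
        shuffleByRandomKey(replicas, new Random());
        // Some permutation of the three replicas.
        System.out.println(Arrays.toString(replicas));
    }
}
```

By the birthday bound, the chance of any key collision among N items is at most 
N*(N-1)/2 divided by 2**32, so for N=3 it is on the order of 10^-9.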

bq. It also seems better to just use hashCode, rather than creating your own 
random set of random ints associated with objects.

It's important that we get a different answer each time 
{{pseudoSortByDistance}} is invoked; that randomization is what spreads the 
read load out across the replicas.  Using a stable value like hashCode would 
defeat that goal of this change.  (It might be true that hashCode ordering 
differs between {{DFSClient}} instances on different nodes, but I see no 
guarantee of that, and even if it holds, depending on such a subtle 
implementation detail would be dangerous.  It also doesn't resolve the issue 
that a single DFSClient should pick different replicas from a given equivalence 
class of equally distant replicas, across various reads of a given block.)
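For contrast, a small sketch (hypothetical names, not HDFS code) of why a 
stable key defeats the goal: sorting by hashCode is deterministic, so every 
invocation produces the identical order and every read from this client hits 
the same first replica.

```java
import java.util.Arrays;
import java.util.Comparator;

public class StableKeyProblem {
    public static void main(String[] args) {
        String[] replicas = {"dn-a", "dn-b", "dn-c"};

        // Two independent "sorts" using the stable hashCode key.
        String[] firstRead = replicas.clone();
        Arrays.sort(firstRead, Comparator.comparingInt(Object::hashCode));
        String[] secondRead = replicas.clone();
        Arrays.sort(secondRead, Comparator.comparingInt(Object::hashCode));

        // The order never varies, so the same replica is always chosen first.
        System.out.println(Arrays.equals(firstRead, secondRead)); // true
    }
}
```

A per-invocation random key, by contrast, gives each replica an equal chance 
of sorting first, which is exactly the load-spreading this change is after.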
                
> block replica reads get hot-spots due to NetworkTopology#pseudoSortByDistance
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-4253
>                 URL: https://issues.apache.org/jira/browse/HDFS-4253
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.0.2-alpha
>            Reporter: Andy Isaacson
>            Assignee: Andy Isaacson
>         Attachments: hdfs4253-1.txt, hdfs4253-2.txt, hdfs4253.txt
>
>
> When many nodes (10) read from the same block simultaneously, we get 
> asymmetric distribution of read load.  This can result in slow block reads 
> when one replica is serving most of the readers and the other replicas are 
> idle.  The busy DN bottlenecks on its network link.
> This is especially visible with large block sizes and high replica counts (I 
> reproduced the problem with {{-Ddfs.block.size=4294967296}} and replication 
> 5), but the same behavior happens on a small scale with normal-sized blocks 
> and replication=3.
> The root of the problem is in {{NetworkTopology#pseudoSortByDistance}} which 
> explicitly does not try to spread traffic among replicas in a given rack -- 
> it only randomizes usage for off-rack replicas.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
