[ https://issues.apache.org/jira/browse/HDFS-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12866762#action_12866762 ]

Jakob Homan commented on HDFS-1150:
-----------------------------------

The issue occurs when a client reads blocks from or writes blocks to a 
datanode (and along the pipeline during writes).  While the datanode can use 
the block access token to verify that the client has permission to do so, the 
client has no way of verifying that it's talking to a genuine datanode, rather 
than an impostor process that has come up on the datanode's port (say, after 
the datanode has crashed). 
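To make the threat concrete: the datanode's default data transfer port (50010 
at the time) is unprivileged, so once the real datanode is down any local user 
can bind it and masquerade as the datanode.  A minimal sketch, purely 
illustrative:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative only: any unprivileged user can bind the datanode's
// default data transfer port once the real datanode has crashed.
public class ImpostorDataNode {
  public static void main(String[] args) throws IOException {
    // No root required: 50010 is above the privileged range (< 1024).
    try (ServerSocket server = new ServerSocket(50010)) {
      System.out.println("Listening on 50010, posing as a datanode");
      while (true) {
        Socket client = server.accept();
        // A real attacker would now speak just enough
        // DataTransferProtocol to serve bogus block data.
        client.close();
      }
    }
  }
}
{code}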

To correct this, rather than change the DataTransferProtocol, and potentially 
introduce bugs into the write pipeline, we're looking at using the Apache 
Commons Daemon (jsvc) library to start a secure datanode as root, grab the 
necessary resources (e.g., the privileged ports) and then drop privileges.  
This gives clients reasonable certainty that the datanode they're talking to 
was started securely and is who they expect it to be.
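A sketch of how the jsvc side could look: jsvc calls {{init()}} while still 
running as root, drops privileges to the user given with {{-user}}, then calls 
{{start()}}.  The class name, port, and hand-off method below are illustrative, 
not the actual patch:

{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;

// Sketch only: bind privileged resources as root in init(),
// serve as an unprivileged user in start().
public class SecureDataNodeStarter implements Daemon {
  private ServerSocket streamingSocket;

  @Override
  public void init(DaemonContext context) throws Exception {
    // Called by jsvc while still running as root: bind the
    // privileged (< 1024) data transfer port here.
    streamingSocket = new ServerSocket();
    streamingSocket.bind(new InetSocketAddress(1004));
  }

  @Override
  public void start() throws Exception {
    // Called after jsvc has dropped privileges (setuid to the
    // user passed with -user): hand the already-bound socket
    // to the datanode and start serving.
    // DataNode.runWithSockets(streamingSocket); // hypothetical hand-off
  }

  @Override
  public void stop() throws Exception {
    streamingSocket.close();
  }

  @Override
  public void destroy() {
    // nothing further to release
  }
}
{code}

Launched with something like {{jsvc -user hdfs -cp <classpath> 
SecureDataNodeStarter}}, so by the time {{start()}} runs the process no longer 
has root, but the socket it holds could only have been bound by root.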

> Verify datanodes' identities to clients in secure clusters
> ----------------------------------------------------------
>
>                 Key: HDFS-1150
>                 URL: https://issues.apache.org/jira/browse/HDFS-1150
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>    Affects Versions: 0.22.0
>            Reporter: Jakob Homan
>            Assignee: Jakob Homan
>
> Currently we use block access tokens to allow datanodes to verify clients' 
> identities, however we don't have a way for clients to verify the 
> authenticity of the datanodes themselves.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
