Problem with RPC encryption over wire

2013-11-13 Thread rab ra
Hello,

I am facing a problem using the Hadoop RPC encryption-over-the-wire feature
in Hadoop 2.2.0. I have a 3-node cluster.


Services running on node 1 (master)
Resource manager
Namenode
DataNode
SecondaryNamenode

Service running on the slaves (node 2 & 3)
NodeManager



I am trying to make data transfer between the master and the slaves secure. For
that, I want to use the data encryption over the wire (RPC encryption) feature of
Hadoop 2.2.0.

When I run the job, I get the exception below:

Caused by: java.net.SocketTimeoutException: 6 millis timeout while
waiting for channel to be ready for read.


In another run, I saw the following error in the log:

No common protection layer between server and client

I am not sure whether my configuration is in line with what I want to achieve.

Can someone give me a hint on where I am going wrong?

By the way, I have the configuration settings below on all of these nodes:

core-site.xml

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
  </property>
<!--
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
-->
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

</configuration>

hdfs-site.xml

<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/dfs-2.2.0/name</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/dfs-2.2.0/data</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.encrypt.data.transfer.algorithm</name>
    <value>rc4</value>
  </property>

  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>

</configuration>

mapred-site.xml

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
<!--
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>master:8032</value>
  </property>
-->
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>

  <property>
    <name>mapreduce.map.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

</configuration>


yarn-site.xml

<configuration>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

</configuration>



With thanks and regards
Rab


Re: Problem with RPC encryption over wire

2013-11-13 Thread Daryn Sharp
"No common protection layer between server and client" likely means the host
used for job submission does not have hadoop.rpc.protection=privacy. In order for
QOP to work, all client hosts (DNs & anything else used to access the cluster) must
have an identical setting.
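
In other words, every node and every client host needs something like the following,
uncommented, in core-site.xml, with the same value everywhere:

  <!-- quality of protection for Hadoop RPC: authentication, integrity, or privacy -->
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>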

A few quick questions: I'm assuming you mis-posted your configs and the 
protection setting isn't really commented out?  Your configs don't show 
security being enabled, but you do have it enabled, correct?  Otherwise QOP 
shouldn't apply.  Perhaps a bit obvious, but did you restart your NN after 
changing the QOP?  Since your defaultFS is just master, are you using HA?
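
To illustrate what I mean by security being enabled: a secured cluster would normally
have Kerberos turned on in core-site.xml, roughly like the sketch below (plus the usual
per-service principal and keytab settings), whereas your posted configs show none of that.

  <!-- illustrative sketch only: Kerberos authentication plus service-level authorization -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>

  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>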

It's a bit concerning that you aren't consistently receiving the mismatch error. Is the
client looping on retries and then hitting timeouts after 5 attempts? If yes, we've got a
major bug: 5 is the default number of RPC reader threads that handle SASL auth, which
means the protection mismatch is killing off the reader threads and rendering the NN
unusable. This shouldn't be possible, but what does your NN log show?

Daryn
