I'm trying to configure an HDFS cluster with HA, Kerberos and wire encryption. For HA
I am using QJM with automatic failover.
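For reference, the HA part of my hdfs-site.xml looks roughly like this (the
hostnames and ports below are placeholders, not my real values; the nameservice
ID is the one the NameNode reports, hdfscluster):

  <property>
    <!-- logical name of the HA nameservice -->
    <name>dfs.nameservices</name>
    <value>hdfscluster</value>
  </property>
  <property>
    <!-- the two NameNodes that make up the HA pair -->
    <name>dfs.ha.namenodes.hdfscluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <!-- shared edits directory served by the JournalNodes (QJM) -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/hdfscluster</value>
  </property>
  <property>
    <!-- enable automatic failover via ZKFC -->
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>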
Until now I have had HA and Kerberos running properly, but I'm having problems
when I try to add encryption. Specifically, the problem appears when I set the
property hadoop.rpc.protection in core-site.xml to something other than
authentication.
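In core-site.xml that looks roughly like this (I'm showing privacy as the
example value; integrity is the other non-default option):

  <property>
    <!-- SASL protection level for Hadoop RPC: authentication, integrity or privacy -->
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>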
After starting the journalnodes, if I try to execute "hdfs namenode -format"
I get this message:

13/12/18 18:15:04 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/12/18 18:15:04 INFO util.GSet: Computing capacity for map BlocksMap
13/12/18 18:15:04 INFO util.GSet: VM type       = 64-bit
13/12/18 18:15:04 INFO util.GSet: 2.0% max memory = 889 MB
13/12/18 18:15:04 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/12/18 18:15:04 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
13/12/18 18:15:04 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
13/12/18 18:15:04 INFO blockmanagement.BlockManager: defaultReplication         = 3
13/12/18 18:15:04 INFO blockmanagement.BlockManager: maxReplication             = 512
13/12/18 18:15:04 INFO blockmanagement.BlockManager: minReplication             = 1
13/12/18 18:15:04 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/12/18 18:15:04 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/12/18 18:15:04 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/12/18 18:15:04 INFO blockmanagement.BlockManager: encryptDataTransfer        = true
13/12/18 18:15:04 INFO namenode.FSNamesystem: fsOwner             = hdfsadmin/jcr1.jcfernandez.cediant...@jcfernandez.cediant.es (auth:KERBEROS)
13/12/18 18:15:04 INFO namenode.FSNamesystem: supergroup          = hadoopadm
13/12/18 18:15:04 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/12/18 18:15:04 INFO namenode.FSNamesystem: Determined nameservice ID: hdfscluster
13/12/18 18:15:04 INFO namenode.FSNamesystem: HA Enabled: true
13/12/18 18:15:04 INFO namenode.FSNamesystem: Append Enabled: true
13/12/18 18:15:06 INFO util.GSet: Computing capacity for map INodeMap
13/12/18 18:15:06 INFO util.GSet: VM type       = 64-bit
13/12/18 18:15:06 INFO util.GSet: 1.0% max memory = 889 MB
13/12/18 18:15:06 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/12/18 18:15:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/12/18 18:15:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/12/18 18:15:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/12/18 18:15:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/12/18 18:15:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/12/18 18:15:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/12/18 18:15:06 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/12/18 18:15:06 INFO util.GSet: VM type       = 64-bit
13/12/18 18:15:06 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
13/12/18 18:15:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/hdfsadmin/HDFS.DATA/meta ? (Y or N) Y
13/12/18 18:15:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfsadmin/jcr1.jcfernandez.cediant...@jcfernandez.cediant.es (auth:KERBEROS) cause:javax.security.sasl.SaslException: No common protection layer between client and server

I also noticed this warning in the journalnode's log:
2013-12-18 18:15:43,994 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret

But I have configured the property
hadoop.http.authentication.signature.secret.file; the secret file is readable
by hdfsadmin (the user running the daemons), and I have also double-checked
that the full path is correct.
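For reference, it is set in core-site.xml roughly like this (the path below is
a placeholder, not my real one):

  <property>
    <!-- file holding the secret used to sign HTTP authentication tokens -->
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>/etc/hadoop/conf/http-auth-signature-secret</value>
  </property>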

Could anyone help me?
