Hi all,

I have created a hadoop/hbase/zookeeper cluster that is secured and verified.
As a simple test, I am now connecting an hbase client (e.g., the shell) to see
how it behaves.

Well, I get the following message on the hbase master: AccessControlException: 
authentication is required.

Looking at the code, it appears that the client passed the "simple"
authentication byte in the RPC header, and I don't know why.
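
In case it helps, a quick way to see what the client-side Hadoop security layer
thinks it is doing would be something like this (just a sketch against the
Hadoop 1.x UserGroupInformation API; the class name is mine):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class AuthCheck {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml (and core-site.xml, if present) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        UserGroupInformation.setConfiguration(conf);

        // If security is not enabled here, I assume the RPC header ends up
        // carrying the "simple" authentication byte.
        System.out.println("security enabled: " + UserGroupInformation.isSecurityEnabled());
        System.out.println("current user:     " + UserGroupInformation.getCurrentUser());
        System.out.println("auth method:      "
                + UserGroupInformation.getCurrentUser().getAuthenticationMethod());
    }
}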

My client configuration is as follows:

hbase-site.xml:
   <property>
      <name>hbase.security.authentication</name>
      <value>kerberos</value>
   </property>

   <property>
      <name>hbase.rpc.engine</name>
      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
   </property>

hbase-env.sh:
export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"

hbase.jaas:
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=false
   useTicketCache=true;
};

I run kinit for the principal I want to use and then invoke the hbase shell.  As
soon as I issue "list", I see the error on the server.
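
In case it helps narrow things down, this is roughly what I would expect the
standalone Java equivalent of the shell "list" to look like with the same
configuration (a sketch only; the class name is mine, and setting
hadoop.security.authentication on the client is a guess on my part since it is
not in my hbase-site.xml above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureListTables {
    public static void main(String[] args) throws Exception {
        // Same hbase-site.xml as the shell (kerberos + SecureRpcEngine).
        Configuration conf = HBaseConfiguration.create();

        // Guess: the Hadoop layer may also need to be switched to kerberos,
        // otherwise UserGroupInformation stays in "simple" mode.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Should use the ticket obtained via kinit (useTicketCache=true in the jaas file).
        System.out.println("running as " + UserGroupInformation.getCurrentUser());

        // Equivalent of "list" in the shell.
        HBaseAdmin admin = new HBaseAdmin(conf);
        for (HTableDescriptor table : admin.listTables()) {
            System.out.println(table.getNameAsString());
        }
    }
}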

Any ideas what I am doing wrong?

Thanks so much!


_____________________________________________
From: Tony Dean
Sent: Tuesday, June 05, 2012 5:41 PM
To: common-user@hadoop.apache.org
Subject: hadoop file permission 1.0.3 (security)


Can someone detail the options that are available for setting file permissions at
the Hadoop and OS levels?  Here's what I have discovered thus far:

dfs.permissions = true|false (works as advertised)
dfs.supergroup = supergroup (works as advertised)
dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it
appears to set the permissions for files created in hadoop fs, minus the execute
bit (small test sketched below).  Why was dfs.umask deprecated, and what is the
difference between the two?
dfs.datanode.data.dir.perm = perm (not sure this is working at all?) - I thought
it was supposed to set permissions on block files at the OS level.
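
To make the dfs.umaskmode observation concrete, the kind of test I have in mind
looks like the following (just a sketch; the class name and paths are throwaway,
and the expected results assume a umask of 022):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side umask (the dfs.umask replacement in 1.x).
        conf.set("dfs.umaskmode", "022");

        FileSystem fs = FileSystem.get(conf);

        // A plain file created with the defaults: this is where I have been seeing
        // the "minus execute" behavior (files come out as rw-r--r-- for me).
        Path file = new Path("/tmp/umask-check-file");
        FSDataOutputStream out = fs.create(file);
        out.close();
        System.out.println(file + " -> " + fs.getFileStatus(file).getPermission());

        // A directory with an explicit request of 777: the umask is applied on
        // the client side, so I would expect rwxr-xr-x here.
        Path dir = new Path("/tmp/umask-check-dir");
        fs.mkdirs(dir, new FsPermission((short) 0777));
        System.out.println(dir + " -> " + fs.getFileStatus(dir).getPermission());
    }
}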

Are there any other file permission configuration properties?

What I would really like to do is set data block file permissions at the OS level
so that the blocks are locked down from all users except super and supergroup,
but can still be accessed through the Hadoop API as governed by HDFS permissions.
Is this possible?

Thanks.


Tony Dean
SAS Institute Inc.
Senior Software Developer
919-531-6704



