Try creating a directory that is accessible to everyone (mode 777) and
pointing the output path there (e.g. --output-path
/temp/MYTABLE_GLOBAL_INDEX_HFILE).
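For example, something like the following (untested sketch; the /temp path
is just the one suggested above, adjust to your cluster):

    hadoop fs -mkdir -p /temp/MYTABLE_GLOBAL_INDEX_HFILE
    hadoop fs -chmod 777 /temp/MYTABLE_GLOBAL_INDEX_HFILE
    hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table MYTABLE \
        --index-table MYTABLE_GLOBAL_INDEX \
        --output-path /temp/MYTABLE_GLOBAL_INDEX_HFILE
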
Could you also provide a bit more information on whether you are using
Kerberos, and which versions of HDFS/HBase/Phoenix you are running?

Thanks,
Sergey

On Tue, May 23, 2017 at 10:51 AM, anil gupta <anilgupt...@gmail.com> wrote:

> I think you need to run the tool as the "hbase" user.
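>
> For example (assuming your environment allows sudo to the hbase service
> user), something like:
>
>     sudo -u hbase hbase org.apache.phoenix.mapreduce.index.IndexTool \
>         --data-table MYTABLE --index-table MYTABLE_GLOBAL_INDEX \
>         --output-path MYTABLE_GLOBAL_INDEX_HFILE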
>
> On Tue, May 23, 2017 at 5:43 AM, cmbendre <chaitanya.ben...@zeotap.com>
> wrote:
>
>> I created an ASYNC index and ran the IndexTool MapReduce job to populate
>> it. Here is the command I used:
>>
>> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table MYTABLE
>> --index-table MYTABLE_GLOBAL_INDEX --output-path
>> MYTABLE_GLOBAL_INDEX_HFILE
>>
>> I can see that the index HFiles are created successfully on HDFS, but then
>> the job fails due to permission errors. The files are created as the
>> "hadoop" user, and they do not grant any permissions to the "hbase" user.
>> Here is the error I get:
>>
>> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=hbase, access=EXECUTE,
>> inode="/user/hadoop/MYTABLE_GLOBAL_INDEX_HFILE/MYTABLE_GLOBAL_INDEX/0/a4c9888f8e284158bfb79b30b2cdee82":hadoop:hadoop:drwxrwx---
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1830)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:588)
>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:422)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>>
>> The hack I am using right now is to set the permissions on these files
>> manually while the IndexTool job is running. Is there a better way?
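>>
>> For illustration, the manual workaround is roughly the following (using
>> the output directory from the error above, re-run while the job writes
>> new files):
>>
>>     hadoop fs -chmod -R 777 /user/hadoop/MYTABLE_GLOBAL_INDEX_HFILE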
>>
>>
>>
>> --
>> View this message in context: http://apache-phoenix-user-list.1124778.n5.nabble.com/Async-Index-Creation-fails-due-to-permission-issue-tp3573.html
>> Sent from the Apache Phoenix User List mailing list archive at Nabble.com.
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
