[ 
https://issues.apache.org/jira/browse/KNOX-180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790314#comment-13790314
 ] 

Kevin Minder commented on KNOX-180:
-----------------------------------

OK.  This is likely a Hadoop/WebHCat configuration issue, or at least a 
mismatch between the Knox demo configuration and your Hadoop install.  It is 
very likely that a direct call to WebHCat (i.e. not via Knox) would have 
returned the same error; that is worth verifying, by the way.  In any case, 
the telling message is:

ERROR | 09 Oct 2013 04:43:44,748 | 
org.apache.hadoop.security.UserGroupInformation | PriviledgedActionException 
as:mapred via hcat cause:org.apache.hadoop.ipc.RemoteException: User: hcat is 
not allowed to impersonate mapred

This tells me that the user 'hcat' may not be set up as a proxyuser, or 
'mapred' may not be in the required group (probably 'users').  Take a look at 
the properties hadoop.proxyuser.hcat.groups and hadoop.proxyuser.hcat.hosts in 
your core-site.xml.  Details can be found here:
http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.2/bk_installing_manually_book/content/rpm-chap14-2-3-hadoop.html
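For reference, a minimal core-site.xml fragment of the kind those docs describe 
might look like the following.  The group and host values here are placeholders 
based on the Knox demo defaults, not your actual settings:

```xml
<!-- Hypothetical proxyuser settings for the 'hcat' user; adjust for your cluster. -->
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <!-- Groups whose members 'hcat' is allowed to impersonate. -->
  <value>users</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <!-- Hosts from which 'hcat' may impersonate; '*' allows any host. -->
  <value>*</value>
</property>
```

Changes to these properties typically require restarting (or refreshing) the 
NameNode and JobTracker before they take effect.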

Note that the following exception can be ignored; it is documented in HIVE-5127:

ERROR | 09 Oct 2013 04:43:44,691 | org.apache.hadoop.conf.Configuration | 
Failed to set setXIncludeAware(true) for parser 
org.apache.xerces.jaxp.DocumentBuilderFactoryImpl@64efd17b:java.lang.UnsupportedOperationException:
 This parser does not support specification "null" version "null"

One of two things could be going on:
1) You don't have the hadoop.proxyuser.hcat.* properties set up in your config.
2) The user 'mapred' isn't in group 'users' (or whatever group is specified in 
hadoop.proxyuser.hcat.groups).
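A quick way to check #2 is to list the job user's groups on the cluster node. 
A sketch (run as-is it inspects the current user; substitute 'mapred', and note 
the core-site.xml path is an assumption):

```shell
# Show the groups a user belongs to; replace "$(id -un)" with the job user
# (e.g. 'mapred') and compare the output with hadoop.proxyuser.hcat.groups.
id -Gn "$(id -un)"

# The property itself lives in core-site.xml; the path below is an assumption,
# adjust it for your install:
# grep -A 1 'hadoop.proxyuser.hcat' /etc/hadoop/conf/core-site.xml
```

If the group named in hadoop.proxyuser.hcat.groups does not appear in that 
output, you have found the mismatch.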

My guess is that the issue is #2.  If this is the case there are two ways to 
resolve this.

1) Submit the job as a user that is in the group 'users'.
2) Add a group that mapred is a member of to hadoop.proxyuser.hcat.groups.

You should do #1 and probably not even consider #2.  Keep in mind that the "out 
of the box" configuration for Knox is intended to make Knox very easy to use 
with Sandbox as a demonstration; in your case you are using HDP.  We have tried 
to clear this up in Knox 0.3.0, by the way.

Assuming that you want to try to submit the job as someone other than 'mapred', 
and that the property 'hadoop.proxyuser.hcat.groups' is set to 'users', you 
will need to do two things:
1) Edit {GATEWAY_HOME}/conf/users.ldif and add a user that is in group 'users'. 
 Then restart the test LDAP server.
2) Edit {GATEWAY_HOME}/samples/ExampleSubmitJob.groovy and change the variables 
username and password to match the new user you added.
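As a sketch, the new users.ldif entry for step 1 might look like the following. 
The DN, attribute values, and user name here are assumptions; match them to the 
entries already in your users.ldif:

```ldif
# Hypothetical new user; the base DN must match the existing entries in users.ldif.
dn: uid=myuser,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: myuser
sn: myuser
uid: myuser
userPassword: myuser-password
```

If the demo LDIF defines the 'users' group as its own entry (e.g. a groupOfNames), 
also add the new user's DN to that group's member attribute.  Then, for step 2, 
set username = "myuser" and password = "myuser-password" in 
ExampleSubmitJob.groovy to match.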

Hopefully this will resolve your issue.  If it does, you should probably use 
the new user you added to users.ldif for everything, which will mean modifying 
any sample Groovy scripts you use.


> Server error while submitting mapreduce job with knox-0.2.0
> -----------------------------------------------------------
>
>                 Key: KNOX-180
>                 URL: https://issues.apache.org/jira/browse/KNOX-180
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>         Environment: knox-0.2.0
>            Reporter: Amit Kamble
>
> I am using HDP 1.3.2 and knox-0.2.0.
> I am trying to run the sample mapreduce job (wordcount) provided with the 
> knox-0.2.0 distribution (i.e. hadoop-examples.jar) using Groovy.  But while 
> submitting, it gives an error:
> DEBUG hadoop.gateway: Dispatching request: DELETE 
> http://localhost:50070/webhdfs/v1/tmp/test?user.name=mapred&recursive=true&op=DELETE
> Delete /tmp/test 200
> DEBUG hadoop.gateway: Dispatching request: PUT 
> http://localhost:50070/webhdfs/v1/tmp/test?user.name=mapred&op=MKDIRS
> Create /tmp/test 200
> DEBUG hadoop.gateway: Dispatching request: PUT 
> http://localhost:50070/webhdfs/v1/tmp/test/input/FILE?user.name=mapred&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT 
> http://localhost:50070/webhdfs/v1/tmp/test/hadoop-examples.jar?user.name=mapred&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT 
> http://localhost:50075/webhdfs/v1/tmp/test/hadoop-examples.jar?user.name=mapred&overwrite=false&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT 
> http://localhost:50075/webhdfs/v1/tmp/test/input/FILE?user.name=mapred&overwrite=false&op=CREATE
> Put /tmp/test/hadoop-examples.jar 201
> Put /tmp/test/input/FILE 201
> DEBUG hadoop.gateway: Dispatching request: POST 
> http://localhost:50111/templeton/v1/mapreduce/jar
> Caught: org.apache.hadoop.gateway.shell.HadoopException: 
> org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
> org.apache.hadoop.gateway.shell.HadoopException: 
> org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
>         at 
> org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
>         at org.apache.hadoop.gateway.shell.AbstractRequest$now.call(Unknown 
> Source)
>         at ExampleSubmitJob.run(ExampleSubmitJob.groovy:42)
>         at org.apache.hadoop.gateway.shell.Shell.main(Shell.java:40)
>         at 
> org.apache.hadoop.gateway.launcher.Invoker.invokeMainMethod(Invoker.java:64)
>         at org.apache.hadoop.gateway.launcher.Invoker.invoke(Invoker.java:37)
>         at org.apache.hadoop.gateway.launcher.Command.run(Command.java:100)
>         at org.apache.hadoop.gateway.launcher.Launcher.run(Launcher.java:70)
>         at org.apache.hadoop.gateway.launcher.Launcher.main(Launcher.java:49)
> Caused by: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server 
> Error
>         at org.apache.hadoop.gateway.shell.Hadoop.executeNow(Hadoop.java:107)
>         at 
> org.apache.hadoop.gateway.shell.AbstractRequest.execute(AbstractRequest.java:47)
>         at 
> org.apache.hadoop.gateway.shell.job.Java$Request.access$100(Java.java:38)
>         at 
> org.apache.hadoop.gateway.shell.job.Java$Request$1.call(Java.java:82)
>         at 
> org.apache.hadoop.gateway.shell.job.Java$Request$1.call(Java.java:70)
>         at 
> org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:70)
>         ... 8 more
> In the case of an Oozie workflow, it works properly and submits the workflow 
> successfully.  In my case, it works fine with WebHDFS and Oozie but not with 
> WebHCat/Templeton.
> Could you please suggest whether I missed any configuration setting?



--
This message was sent by Atlassian JIRA
(v6.1#6144)
