[
https://issues.apache.org/jira/browse/KNOX-180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788103#comment-13788103
]
Kevin Minder commented on KNOX-180:
-----------------------------------
The first thing to sort out is that the host or VM running WebHCat appears to be
out of disk space. Given your reported success with WebHDFS and Oozie, I'd be
surprised if this were the only issue. The other thing to consider, if you are
not using a Sandbox VM for this testing, is changing the account being used to
match a real user account in your Hadoop cluster. That means updating the sample
Groovy script (i.e. username = "mapred") and adding that account's information
to the default conf/users.ldif.
> Server error while submitting mapreduce job with knox-0.2.0
> -----------------------------------------------------------
>
> Key: KNOX-180
> URL: https://issues.apache.org/jira/browse/KNOX-180
> Project: Apache Knox
> Issue Type: Bug
> Components: Server
> Environment: knox-0.2.0
> Reporter: Amit Kamble
>
> I am using HDP 1.3.2 and knox-0.2.0
> I am trying to run the sample MapReduce job (wordcount) provided with the
> knox-0.2.0 distribution (i.e. hadoop-examples.jar) using Groovy, but
> submitting it gives an error:
> DEBUG hadoop.gateway: Dispatching request: DELETE http://localhost:50070/webhdfs/v1/tmp/test?user.name=mapred&recursive=true&op=DELETE
> Delete /tmp/test 200
> DEBUG hadoop.gateway: Dispatching request: PUT http://localhost:50070/webhdfs/v1/tmp/test?user.name=mapred&op=MKDIRS
> Create /tmp/test 200
> DEBUG hadoop.gateway: Dispatching request: PUT http://localhost:50070/webhdfs/v1/tmp/test/input/FILE?user.name=mapred&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT http://localhost:50070/webhdfs/v1/tmp/test/hadoop-examples.jar?user.name=mapred&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT http://localhost:50075/webhdfs/v1/tmp/test/hadoop-examples.jar?user.name=mapred&overwrite=false&op=CREATE
> DEBUG hadoop.gateway: Dispatching request: PUT http://localhost:50075/webhdfs/v1/tmp/test/input/FILE?user.name=mapred&overwrite=false&op=CREATE
> Put /tmp/test/hadoop-examples.jar 201
> Put /tmp/test/input/FILE 201
> DEBUG hadoop.gateway: Dispatching request: POST http://localhost:50111/templeton/v1/mapreduce/jar
> Caught: org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
> org.apache.hadoop.gateway.shell.HadoopException: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
> at org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
> at org.apache.hadoop.gateway.shell.AbstractRequest$now.call(Unknown Source)
> at ExampleSubmitJob.run(ExampleSubmitJob.groovy:42)
> at org.apache.hadoop.gateway.shell.Shell.main(Shell.java:40)
> at org.apache.hadoop.gateway.launcher.Invoker.invokeMainMethod(Invoker.java:64)
> at org.apache.hadoop.gateway.launcher.Invoker.invoke(Invoker.java:37)
> at org.apache.hadoop.gateway.launcher.Command.run(Command.java:100)
> at org.apache.hadoop.gateway.launcher.Launcher.run(Launcher.java:70)
> at org.apache.hadoop.gateway.launcher.Launcher.main(Launcher.java:49)
> Caused by: org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
> at org.apache.hadoop.gateway.shell.Hadoop.executeNow(Hadoop.java:107)
> at org.apache.hadoop.gateway.shell.AbstractRequest.execute(AbstractRequest.java:47)
> at org.apache.hadoop.gateway.shell.job.Java$Request.access$100(Java.java:38)
> at org.apache.hadoop.gateway.shell.job.Java$Request$1.call(Java.java:82)
> at org.apache.hadoop.gateway.shell.job.Java$Request$1.call(Java.java:70)
> at org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:70)
> ... 8 more
> In the case of an Oozie workflow, it works properly and submits the workflow
> successfully. In short, everything works fine with WebHDFS and Oozie but not
> with WebHCat/Templeton.
> Could you please suggest whether I missed any configuration setting?
--
This message was sent by Atlassian JIRA
(v6.1#6144)