Hello Vikalp,

you are showing the output of the client. The errors I showed were coming from the ResourceManager, the GiraphApplicationMaster (gam-stderr.log) and the GiraphYarnTask (TaskRunner). Check those log files on your systems.
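If log aggregation is enabled on your cluster, you can usually pull all container logs (including gam-stderr.log) for a finished application with the yarn CLI; the application id below is just the one from your output, so adjust it:

# Fetch the aggregated container logs for the failed run (only works if
# yarn.log-aggregation-enable is set to true in yarn-site.xml).
yarn logs -applicationId application_1407836750214_0001

Without aggregation, the files stay on the NodeManager hosts, typically somewhere under the local userlogs directory of each node.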

In case you get the error message "class GiraphApplicationMaster not found": I solved it by adding a for-loop to hadoop-env.sh:

for f in `find $HADOOP_HOME/share/myLib/ -name \*.jar`; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

Into share/myLib/giraph I have put giraph-1.1.0-hadoop-2.4.0.jar and giraph-examples-1.1.0-hadoop-2.4.0.jar (without the dependencies). Additionally, I copied every jar from /usr/local/giraph/lib into this folder as well.
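The copy step amounts to something like the following sketch; the exact location of the two Giraph jars depends on how you built or unpacked Giraph, so treat the source paths as illustrative:

# Illustrative only -- adjust the source paths to your own Giraph build.
mkdir -p $HADOOP_HOME/share/myLib/giraph
cp /usr/local/giraph/giraph-1.1.0-hadoop-2.4.0.jar $HADOOP_HOME/share/myLib/giraph/
cp /usr/local/giraph/giraph-examples-1.1.0-hadoop-2.4.0.jar $HADOOP_HOME/share/myLib/giraph/
cp /usr/local/giraph/lib/*.jar $HADOOP_HOME/share/myLib/giraph/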

Then you can restart the cluster, and every node should pick up the Giraph classes.
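On a plain setup that just means bouncing YARN with the stock sbin scripts (your cluster may of course be managed differently):

# Restart the ResourceManager and NodeManagers so they re-read hadoop-env.sh
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh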

After this it should work, provided you have enough memory. In my case I then get a message that the GiraphApplicationMaster does not have enough memory and gets killed. But before that, it has already started the TaskRunner, which keeps the whole job alive endlessly.
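If you hit the same memory problem, the things I would look at (please verify against your versions, this is only a sketch) are the YARN container limits in yarn-site.xml (yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb) and the heap that Giraph requests for its containers, which as far as I know is controlled by the giraph.yarn.task.heap.mb property, e.g.:

# Sketch only: giraph.yarn.task.heap.mb should raise the per-container heap
# Giraph asks YARN for; the trailing arguments are placeholders for your job.
hadoop jar giraph-examples-1.1.0-hadoop-2.4.0.jar org.apache.giraph.GiraphRunner \
  -Dgiraph.yarn.task.heap.mb=2048 \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  ... your usual -vif/-vip/-vof/-op/-w arguments ...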

Sincerely,



On 12.08.2014 11:55, Vikalp Handa wrote:
@Alexander Sirotin: Thanks for your reply. I am really sorry, but I haven't faced this problem after executing it. Rather, I am now having a different issue with containers:
Result:
14/08/12 15:17:51 INFO yarn.GiraphYarnClient: ApplicationSumbissionContext for GiraphApplicationMaster launch container is populated.
14/08/12 15:17:51 INFO yarn.GiraphYarnClient: Submitting application to ASM
14/08/12 15:17:52 INFO impl.YarnClientImpl: Submitted application application_1407836750214_0001
14/08/12 15:17:52 INFO yarn.GiraphYarnClient: Got new appId after submission :application_1407836750214_0001
14/08/12 15:17:52 INFO yarn.GiraphYarnClient: GiraphApplicationMaster container request was submitted to ResourceManager for job: Giraph: org.apache.giraph.examples.SimpleShortestPathsComputation
14/08/12 15:17:52 INFO yarn.GiraphYarnClient: Giraph: org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 0.99 secs
14/08/12 15:17:52 INFO yarn.GiraphYarnClient: appattempt_1407836750214_0001_000001, State: ACCEPTED, Containers used: 1
14/08/12 15:17:56 INFO yarn.GiraphYarnClient: Giraph: org.apache.giraph.examples.SimpleShortestPathsComputation, Elapsed: 5.01 secs
14/08/12 15:17:56 INFO yarn.GiraphYarnClient: appattempt_1407836750214_0001_000002, State: ACCEPTED, Containers used: 0
14/08/12 15:18:00 ERROR yarn.GiraphYarnClient: Giraph: org.apache.giraph.examples.SimpleShortestPathsComputation reports FAILED state, diagnostics show: Application application_1407836750214_0001 failed 2 times due to AM Container for appattempt_1407836750214_0001_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/08/12 15:18:00 INFO yarn.GiraphYarnClient: Cleaning up HDFS distributed cache directory for Giraph job.
14/08/12 15:18:00 INFO yarn.GiraphYarnClient: Completed Giraph: org.apache.giraph.examples.SimpleShortestPathsComputation: FAILED, total running time: 0 minutes, 7 seconds.

I have also checked my yarn-site.xml file and updated it with the following property/value pairs inside the configuration element:

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.application.classpath</name>
  <value>
    %HADOOP_HOME%\etc\hadoop,
    %HADOOP_HOME%\share\hadoop\common\*,
    %HADOOP_HOME%\share\hadoop\common\lib\*,
    %HADOOP_HOME%\share\hadoop\hdfs\*,
    %HADOOP_HOME%\share\hadoop\hdfs\lib\*,
    %HADOOP_HOME%\share\hadoop\mapreduce\*,
    %HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
    %HADOOP_HOME%\share\hadoop\yarn\*,
    %HADOOP_HOME%\share\hadoop\yarn\lib\*
  </value>
</property>


Regards,
Vikalp Handa

