Hi Vikalp,

In $HADOOP_HOME/share you will find jars, but Hadoop itself also uses this place. The subdirectory hadoop/mapreduce is important too: the jars there are used for the TaskRunner and the ApplicationMasters. If you put your Giraph jar there, the ApplicationMaster should find it, but there are two drawbacks: it is not your directory ;-) and, second, the jar-with-dependencies includes classes that Hadoop already ships, which caused errors on my systems. Therefore I created myLib on my own and put only the additional jars into it.
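A minimal sketch of creating such a directory (the paths are stand-ins; on a real node you would use your actual Hadoop installation, and the mktemp fallback only exists so the snippet is safe to run anywhere):

```shell
# Sketch: create a separate myLib directory under $HADOOP_HOME/share so
# custom jars stay out of Hadoop's own directories. HADOOP_HOME falls
# back to a temporary directory here so the demo runs anywhere.
HADOOP_HOME=${HADOOP_HOME:-$(mktemp -d)}
mkdir -p "$HADOOP_HOME/share/myLib/giraph"
ls -d "$HADOOP_HOME/share/myLib/giraph"
```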

In hadoop-env.sh you can search for a loop command, 'for f in ...'. I simply copied this loop and modified it for my own usage. I am sorry, I am not an expert, but I did not want to waste too much time on it, so I made this workaround :-P
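A runnable sketch of that adapted loop pattern (the myLib directory and the jar names here are made up for the demo; in hadoop-env.sh you would point it at $HADOOP_HOME/share/myLib instead):

```shell
# Demo of the hadoop-env.sh 'for f in' pattern, pointed at a custom jar
# directory. MYLIB and the jar names are stand-ins for this sketch.
MYLIB=$(mktemp -d)
touch "$MYLIB/extra-a.jar" "$MYLIB/extra-b.jar"
HADOOP_CLASSPATH=""
for f in "$MYLIB"/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f   # append with a colon separator
  else
    HADOOP_CLASSPATH=$f                     # first jar starts the list
  fi
done
export HADOOP_CLASSPATH
echo "$HADOOP_CLASSPATH"
```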

Once I get my system running successfully (I am also using Hadoop 2.4 with Giraph 1.1.0), I will let you know how.

Sincerely

On 12.08.2014 15:52, Vikalp Handa wrote:
Hi Alexander,

I looked into my gam-stderr.log file and found "Error: Could not find or load main class org.apache.giraph.yarn.GiraphApplicationMaster", as you have already mentioned in your reply about that for loop in hadoop-env.sh. So can you please tell me what myLib and share/myLib/giraph are? I only have doc and hadoop directories inside $HADOOP_HOME/share/.

Also, how do I get giraph-1.1.0-hadoop-2.4.0.jar and giraph-examples-1.1.0-hadoop-2.4.0.jar (without the dependencies)? I only have giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.4.0-jar-with-dependencies.jar inside giraph-examples/target/munged/.




Regards,
Vikalp Handa


On Tue, Aug 12, 2014 at 4:25 PM, Alexander Sirotin <sirot...@web.de> wrote:

    Hello Vikalp,

    you are showing the output of the client. The errors I showed were
    coming from the ResourceManager, the GiraphApplicationMaster
    (gam-stderr.log) and the GiraphYarnTask (TaskRunner). Check out
    these logfiles on your systems.

    In case you get the error message "class GiraphApplicationMaster
    not found": I solved it by adding a for loop in hadoop-env.sh:

    # Append every jar under share/myLib to HADOOP_CLASSPATH
    for f in `find $HADOOP_HOME/share/myLib/ -name \*.jar`; do
      if [ "$HADOOP_CLASSPATH" ]; then
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      else
        export HADOOP_CLASSPATH=$f
      fi
    done

    In share/myLib/giraph I have put giraph-1.1.0-hadoop-2.4.0.jar and
    giraph-examples-1.1.0-hadoop-2.4.0.jar (without the dependencies).
    Additionally, I copied every jar from /usr/local/giraph/lib into
    this folder.
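A sketch of that copy step (SRC stands in for /usr/local/giraph/lib and DEST for $HADOOP_HOME/share/myLib/giraph; the jar files here are empty placeholders so the demo runs anywhere):

```shell
# Sketch of populating share/myLib/giraph with dependency-free Giraph
# jars plus the jars from giraph/lib. All paths are stand-ins.
SRC=$(mktemp -d)
DEST=$(mktemp -d)/share/myLib/giraph
touch "$SRC/giraph-1.1.0-hadoop-2.4.0.jar" "$SRC/some-dependency.jar"
mkdir -p "$DEST"
cp "$SRC"/*.jar "$DEST"/
ls "$DEST"
```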

    Then you can restart the cluster, and every instance should know
    Giraph.

    After this it should work, if you have enough memory. I then get a
    message that the GiraphApplicationMaster does not have enough
    memory and gets killed. But before that, it has already started the
    TaskRunner, which keeps the whole job alive endlessly.
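For reference, the memory YARN can hand out to containers is governed in yarn-site.xml; a sketch with purely illustrative values (tune them to your machines' RAM):

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- illustrative: total RAM YARN may allocate on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value> <!-- illustrative: smallest container YARN will grant -->
</property>
```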

    Sincerely,




    On 12.08.2014 11:55, Vikalp Handa wrote:
    @Alexander Sirotin: Thanks for your reply. Sorry, I did not run
    into that problem when I executed it. Rather, I am now having a
    different issue with containers:
    Result:
    14/08/12 15:17:51 INFO yarn.GiraphYarnClient:
    ApplicationSumbissionContext for GiraphApplicationMaster launch
    container is populated.
    14/08/12 15:17:51 INFO yarn.GiraphYarnClient: Submitting
    application to ASM
    14/08/12 15:17:52 INFO impl.YarnClientImpl: Submitted application
    application_1407836750214_0001
    14/08/12 15:17:52 INFO yarn.GiraphYarnClient: Got new appId after
    submission :application_1407836750214_0001
    14/08/12 15:17:52 INFO yarn.GiraphYarnClient:
    GiraphApplicationMaster container request was submitted to
    ResourceManager for job: Giraph:
    org.apache.giraph.examples.SimpleShortestPathsComputation
    14/08/12 15:17:52 INFO yarn.GiraphYarnClient: Giraph:
    org.apache.giraph.examples.SimpleShortestPathsComputation,
    Elapsed: 0.99 secs
    14/08/12 15:17:52 INFO yarn.GiraphYarnClient:
    appattempt_1407836750214_0001_000001, State: ACCEPTED, Containers
    used: 1
    14/08/12 15:17:56 INFO yarn.GiraphYarnClient: Giraph:
    org.apache.giraph.examples.SimpleShortestPathsComputation,
    Elapsed: 5.01 secs
    14/08/12 15:17:56 INFO yarn.GiraphYarnClient:
    appattempt_1407836750214_0001_000002, State: ACCEPTED, Containers
    used: 0
    14/08/12 15:18:00 ERROR yarn.GiraphYarnClient: Giraph:
    org.apache.giraph.examples.SimpleShortestPathsComputation reports
    FAILED state, diagnostics show: Application
    application_1407836750214_0001 failed 2 times due to AM Container
    for appattempt_1407836750214_0001_000002 exited with exitCode: 1
    due to: Exception from container-launch:
    org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1.
    Failing this attempt. Failing the application.
    14/08/12 15:18:00 INFO yarn.GiraphYarnClient: Cleaning up HDFS
    distributed cache directory for Giraph job.
    14/08/12 15:18:00 INFO yarn.GiraphYarnClient: Completed Giraph:
    org.apache.giraph.examples.SimpleShortestPathsComputation:
    FAILED, total running time: 0 minutes, 7 seconds.

    I have also checked my yarn-site.xml file and updated it with the
    following property-value pairs inside <configuration>:

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
      <name>yarn.application.classpath</name>
      <value>
        %HADOOP_HOME%\etc\hadoop,
        %HADOOP_HOME%\share\hadoop\common\*,
        %HADOOP_HOME%\share\hadoop\common\lib\*,
        %HADOOP_HOME%\share\hadoop\hdfs\*,
        %HADOOP_HOME%\share\hadoop\hdfs\lib\*,
        %HADOOP_HOME%\share\hadoop\mapreduce\*,
        %HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
        %HADOOP_HOME%\share\hadoop\yarn\*,
        %HADOOP_HOME%\share\hadoop\yarn\lib\*
      </value>
    </property>
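Note: the %VAR%\path syntax in the classpath above is Windows-style expansion; on a Linux cluster it would not expand. A sketch of the equivalent Unix-style entries, assuming HADOOP_HOME is exported on every node (stock Hadoop instead uses variables like $HADOOP_COMMON_HOME here):

```xml
<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_HOME/etc/hadoop,
    $HADOOP_HOME/share/hadoop/common/*,
    $HADOOP_HOME/share/hadoop/common/lib/*,
    $HADOOP_HOME/share/hadoop/hdfs/*,
    $HADOOP_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_HOME/share/hadoop/mapreduce/*,
    $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
    $HADOOP_HOME/share/hadoop/yarn/*,
    $HADOOP_HOME/share/hadoop/yarn/lib/*
  </value>
</property>
```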


    Regards,
    Vikalp Handa



