Re: [VOTE] Apache Giraph 1.1.0 RC1

2014-11-04 Thread Claudio Martella
I am indeed having some problems. mvn install will fail because the test is
opening too many files:

Caused by: java.io.FileNotFoundException:
/private/var/folders/5b/8yx5dbyn40nbt_70syjs86chgp/T/giraph-hive-1415098102276/metastore_db/seg0/c90.dat
(Too many open files in system)
    at java.io.RandomAccessFile.open(Native Method)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
    at org.apache.derby.impl.io.DirRandomAccessFile.<init>(Unknown Source)
    at org.apache.derby.impl.io.DirRandomAccessFile4.<init>(Unknown Source)
    at org.apache.derby.impl.io.DirFile4.getRandomAccessFile(Unknown Source)
    at org.apache.derby.impl.store.raw.data.RAFContainer.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.derby.impl.store.raw.data.RAFContainer.createContainer(Unknown Source)
    at org.apache.derby.impl.store.raw.data.RAFContainer4.createContainer(Unknown Source)
    at org.apache.derby.impl.store.raw.data.FileContainer.createIdent(Unknown Source)
    at org.apache.derby.impl.store.raw.data.RAFContainer.createIdentity(Unknown Source)
    at org.apache.derby.impl.services.cache.ConcurrentCache.create(Unknown Source)
    at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.addContainer(Unknown Source)
    at org.apache.derby.impl.store.raw.xact.Xact.addContainer(Unknown Source)
    at org.apache.derby.impl.store.access.heap.Heap.create(Unknown Source)
    at org.apache.derby.impl.store.access.heap.HeapConglomerateFactory.createConglomerate(Unknown Source)
    at org.apache.derby.impl.store.access.RAMTransaction.createConglomerate(Unknown Source)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.createConglomerate(Unknown Source)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.createDictionaryTables(Unknown Source)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.boot(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
    at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
    at org.apache.derby.impl.db.BasicDatabase.boot(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
    at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
    at org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown Source)
    at org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown Source)
    ... 96 more


I have to investigate why this happens. I'm not using a ulimit different
from the Mac OS X default. Where are you building yours?
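[Editor's note: "Too many open files in system" is usually the per-process file-descriptor limit rather than anything Giraph-specific. A hedged sketch of checking and raising it before re-running the build; 4096 is an example value, not a recommendation from this thread:]

```shell
# Show the current soft limit on open file descriptors
ulimit -n
# Try to raise it for this shell session before re-running `mvn install`;
# the hard limit (`ulimit -Hn`) caps what can be requested
ulimit -n 4096 || echo "raise failed; check the hard limit with: ulimit -Hn"
# Confirm the value in effect
ulimit -n
```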



On Sat, Nov 1, 2014 at 11:49 PM, Roman Shaposhnik ro...@shaposhnik.org
wrote:

 Ping! Any progress on testing the current RC?

 Thanks,
 Roman.

 On Fri, Oct 31, 2014 at 9:00 AM, Claudio Martella
 claudio.marte...@gmail.com wrote:
  Oh, thanks for the info!
 
  On Fri, Oct 31, 2014 at 3:06 PM, Roman Shaposhnik ro...@shaposhnik.org
  wrote:
 
  On Fri, Oct 31, 2014 at 3:26 AM, Claudio Martella
  claudio.marte...@gmail.com wrote:
   Hi Roman,
  
   thanks again for this. I have had a look at the staging site so far
 (our
   cluster has been down whole week... universities...), and I was
   wondering if
   you have an insight why some of the docs are missing, e.g. gora and
   rexster
   documentation.
 
  None of them are missing. The links moved under "User Docs -> Modules",
  though:
 http://people.apache.org/~rvs/giraph-1.1.0-RC1/site/gora.html
 http://people.apache.org/~rvs/giraph-1.1.0-RC1/site/rexster.html
  and so forth.
 
  Thanks,
  Roman.
 
 
 
 
  --
 Claudio Martella
 




-- 
   Claudio Martella


Re: Graph partitioning and data locality

2014-11-04 Thread Claudio Martella
Hi,

answers are inline.

On Tue, Nov 4, 2014 at 8:36 AM, Martin Junghanns martin.jungha...@gmx.net
wrote:

 Hi group,

 I got a question concerning the graph partitioning step. If I understood
 the code correctly, the graph is distributed to n partitions using
 vertexID.hashCode() % n. I got two questions concerning that step.

 1) Is the whole graph loaded and partitioned only by the master? This
 would mean the whole dataset has to be moved to the master's map task and
 then on to the physical node that the worker for each partition runs on.
 As this sounds like a huge overhead, I further inspected the code:
 I saw that there is also a WorkerGraphPartitioner, and I assume it calls
 the partitioning method on its local data (let's say its local HDFS
 blocks), and if the resulting partition for a vertex is not local, the
 data gets moved to the responsible worker, which reduces the overhead. Is
 this assumption correct?


That is correct: workers forward vertex data to the worker responsible for
each vertex, determined via hash partitioning (by default), meaning that
the master is not involved.
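[Editor's note: the default assignment discussed here can be illustrated standalone. This is a hedged sketch of hashCode-modulo partitioning, not Giraph's actual partitioner class; the sign-bit masking is a common guard against negative hashCode values:]

```java
// Minimal sketch of default-style hash partitioning: each worker can
// compute, for any vertex id, which partition (and hence which worker)
// owns it, so no master round-trip is needed.
public final class HashPartitionDemo {
    // Math.abs(hashCode()) % n misbehaves for Integer.MIN_VALUE,
    // so mask off the sign bit instead.
    static int partitionFor(Object vertexId, int numPartitions) {
        return (vertexId.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        int n = 4; // example partition count
        for (long id = 0; id < 8; id++) {
            System.out.println("vertex " + id + " -> partition " + partitionFor(id, n));
        }
    }
}
```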



 2) Let's say the graph is already partitioned in the file system, e.g.
 blocks on physical nodes contain logically connected graph nodes. Is it
 possible to just read the data as it is and skip the partitioning step? In
 that case, I currently assume that the vertexID should contain the
 partitionID, and the custom partitioning would be an identity function
 (instead of hashing or range).


In principle you can. You would need to organize the input splits so that
each contains all the data for one particular worker, and then assign the
relevant splits to the corresponding worker.
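[Editor's note: one way to make the custom partitioner an identity function, as suggested above, is to pack the partition id into the vertex id itself. The names and bit layout below are illustrative assumptions, not Giraph API:]

```java
// Hypothetical scheme: reserve the upper 16 bits of a 64-bit vertex id
// for the partition id, so "partitioning" is just a bit shift and the
// pre-computed on-disk partitioning is preserved.
public final class PackedIdDemo {
    static final int PARTITION_BITS = 16;
    static final int SHIFT = 64 - PARTITION_BITS;

    // Build a vertex id that carries its partition.
    static long makeVertexId(int partition, long localId) {
        return ((long) partition << SHIFT) | localId;
    }

    // Identity "partitioner": read the partition straight out of the id
    // instead of hashing.
    static int partitionOf(long vertexId) {
        return (int) (vertexId >>> SHIFT);
    }

    public static void main(String[] args) {
        long id = makeVertexId(7, 42L);
        System.out.println("id=" + id + " partition=" + partitionOf(id));
    }
}
```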



 Thanks for your time and help!

 Cheers,
 Martin




-- 
   Claudio Martella


RE: Graph partitioning and data locality

2014-11-04 Thread Pavan Kumar A
You can also look at https://issues.apache.org/jira/browse/GIRAPH-908, which 
solves the case where you have a partition map and would like the graph to be 
partitioned that way after loading the input. It does not, however, solve the 
"do not shuffle data" part.

From: claudio.marte...@gmail.com
Date: Tue, 4 Nov 2014 16:20:21 +0100
Subject: Re: Graph partitioning and data locality
To: user@giraph.apache.org

[snip]

Re: Problem running Giraph in local mode (Stuck at MASTER_ZOOKEEPER_ONLY checkWorkers)

2014-11-04 Thread Tripti Singh
Which profile did you build?
I see log statements from the GiraphJob class, while YARN actually invokes the 
GiraphYarnJob class AFAIK. I have not tried local job runner mode, so I might 
be wrong. (FYI, I built it using the hadoop_yarn profile.)

Thanks,
Tripti.
Sent from my iPhone

On 04-Nov-2014, at 9:57 pm, Garimella Kiran 
kiran.garime...@aalto.fi wrote:

Hi all,

I am having some issues running my code locally on a single machine. I am using 
Giraph 1.1.0 and Hadoop 2.5.1 ..

The execution gets stuck and the following lines just keep getting printed:

14/11/04 17:08:09 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY 
checkWorkers: Only found 0 responses of 1 needed to start superstep -1  map

14/11/04 17:07:43 INFO master.BspServiceMaster: checkWorkers: Only found 0 
responses of 1 needed to start superstep -1.  Reporting every 3 msecs, 
569973 more msecs left before giving up.

14/11/04 17:07:43 INFO master.BspServiceMaster: logMissingWorkersOnSuperstep: 
No response from partition 1 (could be master)


The complete log is here: http://pastebin.com/cyBmHutq

(I was able to run the same code on a different machine, which had Giraph 1.0.0 
and Hadoop 0.20, without any problems.)
I'm a novice Giraph user, so excuse me if I'm missing something obvious.
Please let me know if I forgot to add something in the email.

Regards,
Kiran


Compiling Giraph 1.1

2014-11-04 Thread Ryan
I'm attempting to build, compile and install Giraph 1.1 on a server running
CDH5.1.2. A few weeks ago I successfully compiled it by changing the
hadoop_2 profile version to be 2.3.0-cdh5.1.2. I recently did a fresh
install and was unable to build, compile and install (perhaps due to the
latest code updates).

The error seems to be related to the SaslNettyClient and SaslNettyServer.
Any idea on fixes?

Here's part of the error log:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.0:compile
(default-compile) on project giraph-core: Compilation failure: Compilation
failure:
[ERROR]
/[myPath]/giraph/giraph-core/src/main/java/org/apache/giraph/comm/netty/SaslNettyClient.java:[28,34]
cannot find symbol
[ERROR] symbol:   class SaslPropertiesResolver
[ERROR] location: package org.apache.hadoop.security
...
[ERROR]
/[myPath]/giraph/giraph-core/src/main/java/org/apache/giraph/comm/netty/SaslNettyServer.java:[108,11]
cannot find symbol
[ERROR] symbol:   variable SaslPropertiesResolver
[ERROR] location: class org.apache.giraph.comm.netty.SaslNettyServer


Re: Compiling Giraph 1.1

2014-11-04 Thread Roman Shaposhnik
What's the exact compilation incantation you use?

Thanks,
Roman.

On Tue, Nov 4, 2014 at 9:56 AM, Ryan freelanceflashga...@gmail.com wrote:
 I'm attempting to build, compile and install Giraph 1.1 on a server running
 CDH5.1.2. A few weeks ago I successfully compiled it by changing the
 hadoop_2 profile version to be 2.3.0-cdh5.1.2. I recently did a fresh
 install and was unable to build, compile and install (perhaps due to the
 latest code updates).

 The error seems to be related to the SaslNettyClient and SaslNettyServer.
 Any idea on fixes?

 Here's part of the error log:

 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile)
 on project giraph-core: Compilation failure: Compilation failure:
 [ERROR]
 /[myPath]/giraph/giraph-core/src/main/java/org/apache/giraph/comm/netty/SaslNettyClient.java:[28,34]
 cannot find symbol
 [ERROR] symbol:   class SaslPropertiesResolver
 [ERROR] location: package org.apache.hadoop.security
 ...
 [ERROR]
 /[myPath]/giraph/giraph-core/src/main/java/org/apache/giraph/comm/netty/SaslNettyServer.java:[108,11]
 cannot find symbol
 [ERROR] symbol:   variable SaslPropertiesResolver
 [ERROR] location: class org.apache.giraph.comm.netty.SaslNettyServer



Re: [VOTE] Apache Giraph 1.1.0 RC1

2014-11-04 Thread Roman Shaposhnik
On Mon, Nov 3, 2014 at 4:51 PM, Maja Kabiljo majakabi...@fb.com wrote:
 We've been running code that is the same as the release candidate plus the
 fix from GIRAPH-961 in production for 5 days now, with no problems. This is
 the hadoop_facebook profile, using only hive-io out of all the io modules.

Great! This tells me that once I cut RC2 with GIRAPH-961 you guys will be
ready to vote!

Thanks,
Roman.


Re: [VOTE] Apache Giraph 1.1.0 RC1

2014-11-04 Thread Roman Shaposhnik
On Tue, Nov 4, 2014 at 5:47 AM, Claudio Martella
claudio.marte...@gmail.com wrote:
 I am indeed having some problems. mvn install will fail because the test is
 opening too many files:

[snip]

 I have to investigate why this happens. I'm not using a different ulimit
 than what I have on my Mac OS X by default. Where are you building yours?

This is really weird. I have no issues whatsoever on Mac OS X with
the following setup:
   $ uname -a
   Darwin usxxshaporm1.corp.emc.com 12.4.1 Darwin Kernel Version
12.4.1: Tue May 21 17:04:50 PDT 2013;
root:xnu-2050.40.51~1/RELEASE_X86_64 x86_64
   $ ulimit -a
   core file size  (blocks, -c) 0
   data seg size   (kbytes, -d) unlimited
   file size   (blocks, -f) unlimited
   max locked memory   (kbytes, -l) unlimited
   max memory size (kbytes, -m) unlimited
   open files  (-n) 2560
   pipe size(512 bytes, -p) 1
   stack size  (kbytes, -s) 8192
   cpu time   (seconds, -t) unlimited
   max user processes  (-u) 709
   virtual memory  (kbytes, -v) unlimited
   $ mvn --version
   Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4;
2014-08-11T13:58:10-07:00)
   Maven home: /Users/shapor/dist/apache-maven-3.2.3
   Java version: 1.7.0_51, vendor: Oracle Corporation
   Java home: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre
   Default locale: en_US, platform encoding: UTF-8
   OS name: mac os x, version: 10.8.4, arch: x86_64, family: mac


Thanks,
Roman.


Re: Problem running Giraph in local mode (Stuck at MASTER_ZOOKEEPER_ONLY checkWorkers)

2014-11-04 Thread Garimella Kiran
Hi,

I built it using the hadoop_2 profile.

More specifically, the command I used is: mvn -Phadoop_2 -Dhadoop=2.5.1 compile 
-DskipTests

Regards,
Kiran

From: Tripti Singh tri...@yahoo-inc.com
Reply-To: user@giraph.apache.org
Date: Tuesday, November 4, 2014 at 7:27 PM
To: user@giraph.apache.org
Subject: Re: Problem running Giraph in local mode (Stuck at 
MASTER_ZOOKEEPER_ONLY checkWorkers)

[snip]


Giraph 1.1.0- SNAPSHOT with Hadoop 2.2.0: Classnotfound Exception

2014-11-04 Thread Charith Wickramarachchi
Hi,

I am having trouble running the Giraph trunk on Hadoop 2.2.0.

The job terminated with the following exception:

ERROR yarn.GiraphYarnClient: Giraph:
org.apache.giraph.examples.SimpleShortestPathsComputation
reports FAILED state, diagnostics show: Application
application_1415138324209_0002 failed 2 times due to AM Container for
appattempt_1415138324209_0002_02 exited with  exitCode: 1 due to:
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)

The stderr logs contain the following error:

Error: Could not find or load main class
org.apache.giraph.yarn.GiraphApplicationMaster


I built the project using the following command:

mvn -Phadoop_yarn -fae -DskipTests -Dhadoop.version=2.2.0 clean install

I could see the class org.apache.giraph.yarn.GiraphApplicationMaster in the
jar.


Please let me know how to resolve this issue. Also, is there a way to run
Giraph as a MapReduce application instead of YARN with Hadoop 2.2.0?

Thanks,
Charith



-- 
Charith Dhanushka Wickramaarachchi

Tel  +1 213 447 4253
Web  http://apache.org/~charith http://www-scf.usc.edu/~cwickram/
http://charith.wickramaarachchi.org/
Blog  http://charith.wickramaarachchi.org/
http://charithwiki.blogspot.com/
Twitter  @charithwiki https://twitter.com/charithwiki

This communication may contain privileged or other confidential information
and is intended exclusively for the addressee/s. If you are not the
intended recipient/s, or believe that you may have
received this communication in error, please reply to the sender indicating
that fact and delete the copy you received and in addition, you should not
print, copy, retransmit, disseminate, or otherwise use the information
contained in this communication. Internet communications cannot be
guaranteed to be timely, secure, error or virus-free. The sender does not
accept liability for any errors or omissions


Re: Compiling Giraph 1.1

2014-11-04 Thread Ryan
It's 'mvn -Phadoop_2 -fae -DskipTests clean install'

Thanks,
Ryan

On Tue, Nov 4, 2014 at 2:02 PM, Roman Shaposhnik ro...@shaposhnik.org
wrote:

 What's the exact compilation incantation you use?

 Thanks,
 Roman.

 On Tue, Nov 4, 2014 at 9:56 AM, Ryan freelanceflashga...@gmail.com
 wrote:

 [snip]


Re: Giraph 1.1.0- SNAPSHOT with Hadoop 2.2.0: Classnotfound Exception

2014-11-04 Thread Charith Wickramarachchi
Hi,

Adding some information to the thread:

I tried to run the application in MapReduce mode with the following steps
(Java version 1.7.0_71):

1) Build giraph: $mvn -Phadoop_2 -fae -DskipTests clean install

2) Run the MapReduce job:

$HADOOP_HOME/bin/hadoop jar \
  ~/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.5.1-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/charith/input/tiny_graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/charith/output \
  -ca mapred.job.tracker=10.0.0.1:54311 -w 2

I am getting the following exception:

INFO mapreduce.Job: Job job_1415138324209_0007 failed with state FAILED due
to: Application application_1415138324209_0007 failed 2 times due to AM
Container for appattempt_1415138324209_0007_02 exited with  exitCode: 1
due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)



And the logs show the following exception:

Exception in thread main java.lang.NoClassDefFoundError:
org/apache/hadoop/service/CompositeService
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:480)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.service.CompositeService
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 13 more
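[Editor's note: a NoClassDefFoundError for org.apache.hadoop.service.CompositeService, which belongs to Hadoop 2's service framework in hadoop-common, usually means the runtime classpath carries a different Hadoop than the one the jar was built against. A small, generic probe (an illustration, not Giraph-specific) run with the failing container's classpath can confirm which classes are visible:]

```java
// Check whether a class can be found on the current classpath without
// initializing it. Run this with the same classpath the failing
// container uses; the default class name is taken from the stack trace
// in this thread.
public final class ClasspathProbe {
    static boolean present(String className) {
        try {
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String cls = args.length > 0 ? args[0]
                : "org.apache.hadoop.service.CompositeService";
        System.out.println(cls + " present: " + present(cls));
    }
}
```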




It would be great if someone could assist me with this.

Thanks,
Charith









On Tue, Nov 4, 2014 at 2:19 PM, Charith Wickramarachchi 
charith.dhanus...@gmail.com wrote:

 [snip]




-- 
Charith Dhanushka Wickramaarachchi

Tel  +1 213 447 4253
Web