Extending Giraph with a graph-centric programming API

2014-06-09 Thread Praveen kumar s.k
Hi All, Will this functionality be available with 1.1.0? Regards, Praveen

Errors while running large graph

2014-05-27 Thread Praveen kumar s.k
Hi All, I am getting several errors consistently while processing a large graph. The code works when the size of the graph is on the order of GBs. We have implemented compression and removal of dead-end nodes in the de Bruijn graph. My cluster settings are: Cores | Workers | RAM/Core | Graph size

Re: Errors while running large graph

2014-05-27 Thread Praveen kumar s.k
-XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxGCPauseMillis=100. Since the master doesn't use much memory, letting ZK have more is reasonable. On 5/27/14, 9:25 AM, Praveen kumar s.k wrote: Hi All, I am getting several errors consistently while processing large
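For readers landing on this thread: flags like these are typically handed to the map-task JVMs that host the Giraph workers. A minimal sketch, assuming an MR1-style mapred-site.xml; the property name and the -Xmx value are assumptions, not confirmed by this thread:

```xml
<!-- Sketch only: verify the property name for your Hadoop version -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx6500m -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC
         -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxGCPauseMillis=100</value>
</property>
```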

Re: Giraph job hangs indefinitely and is eventually killed by JobTracker

2014-04-03 Thread Praveen kumar s.k
You have given -w 30; make sure that at least that many map tasks are configured in your cluster. On Thu, Apr 3, 2014 at 6:24 PM, Avery Ching ach...@apache.org wrote: My guess is that you don't get your resources. It would be very helpful to print the master log. You can find it when the job
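As context for the -w discussion: a job launched with -w 30 actually needs 31 concurrent map tasks (30 workers plus 1 master), so the cluster's map-slot capacity must cover that. A hedged example launch; the jar name, input/output paths, and choice of example computation below are placeholders, not from the thread:

```shell
# Sketch: -w 30 means 30 worker map tasks + 1 master task = 31 concurrent map slots
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/praveen/input/graph.txt \
  -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/praveen/output \
  -w 30
```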

Getting custom counter values from Giraph job

2014-02-27 Thread Praveen kumar s.k
Hi all, I have a Hadoop job which calls a Giraph job. I have used persistent aggregators inside the Giraph job. After haltComputation is done, I want the aggregated values back in the Hadoop job. As of now, before halting the Giraph job, I write the aggregated count values to a file and my Hadoop job
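One pattern for avoiding the intermediate file (a sketch, not a verified answer from this thread): mirror the aggregated value into a Hadoop counter from the master compute, then read the counter in the driver after the job finishes. The aggregator name "count" and the counter group/name below are hypothetical, and this assumes Giraph's MasterCompute exposes the Mapper context:

```java
// Sketch only: assumes a registered LongSumAggregator named "count"
import org.apache.giraph.master.DefaultMasterCompute;
import org.apache.hadoop.io.LongWritable;

public class CountPublishingMaster extends DefaultMasterCompute {
    @Override
    public void compute() {
        if (getSuperstep() > 0) {
            // Read the persistent aggregator's current value
            LongWritable total = getAggregatedValue("count");  // hypothetical name
            // Publish it as a job counter so the calling Hadoop job can see it
            getContext().getCounter("Giraph", "AggregatedCount")
                        .setValue(total.get());
        }
    }
}
```

In the driver, after the job completes, the value should be retrievable via job.getCounters().findCounter("Giraph", "AggregatedCount").getValue() on the underlying Hadoop Job.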

Re: java.lang.OutOfMemoryError: Java heap space

2013-10-21 Thread Praveen kumar s.k
You should try to allocate the maximum of your RAM to map tasks. You can do that by changing the parameters in the mapred-site.xml file. Add the statements below to set 6.5 GB of RAM for map tasks: <property> <name>mapred.map.child.java.opts</name> <value>-Xmx6500m</value> <description>heap size for map