Hi All,
Will this functionality be available in 1.1.0?
Regards,
Praveen
Hi All,
I am getting several errors consistently while processing a large graph.
The code works when the size of the graph is on the order of GBs.
We have implemented compression and removal of dead-end nodes in the de
Bruijn graph.
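In case it helps to make that concrete, here is a minimal sketch of one
superstep of dead-end removal in Giraph (assumptions: LongWritable vertex
IDs, NullWritable values/edges/messages, and the simplest tip definition,
out-degree zero; the class name is illustrative):

import java.io.IOException;
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;

public class DeadEndRemoval extends BasicComputation<
    LongWritable, NullWritable, NullWritable, NullWritable> {
  @Override
  public void compute(
      Vertex<LongWritable, NullWritable, NullWritable> vertex,
      Iterable<NullWritable> messages) throws IOException {
    // A vertex with no outgoing edges is a dead end; ask for its removal.
    // The removal takes effect at the start of the next superstep.
    if (vertex.getNumEdges() == 0) {
      removeVertexRequest(vertex.getId());
    }
    vertex.voteToHalt();
  }
}

Edges that still point at a removed vertex would need separate cleanup
(for example via a custom VertexResolver), which is omitted here.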
My cluster settings are:
Cores | Workers | RAM/Core | Graph size
JVM GC options: -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=70 -XX:MaxGCPauseMillis=100
Since the master doesn't use much memory, letting ZooKeeper have more is reasonable.
On 5/27/14, 9:25 AM, Praveen kumar s.k wrote:
> Hi All,
> I am getting several errors consistently while processing a large
You have given -w 30; make sure that your cluster is configured to run at
least that many map tasks.
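For reference, the -w flag sets the number of Giraph workers, and each
worker runs as a Hadoop map task (plus one more task for the master), so
the cluster must have at least that many map slots available. As a minimal
sketch, the same thing can be set programmatically (the class name and
values here are illustrative):

import org.apache.giraph.conf.GiraphConfiguration;

public class WorkerConfigExample {
  public static void main(String[] args) {
    // Programmatic equivalent of passing -w 30 on the command line.
    // The job will not start until the cluster grants 30 map tasks for
    // the workers, plus one for the master.
    GiraphConfiguration conf = new GiraphConfiguration();
    conf.setWorkerConfiguration(30, 30, 100.0f); // min, max, % needed to start
  }
}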
On Thu, Apr 3, 2014 at 6:24 PM, Avery Ching ach...@apache.org wrote:
My guess is that you aren't getting your resources. It would be very helpful
to print the master log. You can find it when the job
Hi all,
I have a Hadoop job which calls a Giraph job. I have used persistent
aggregators inside the Giraph job. After haltComputation() is done, I want
to get the aggregated values back into the Hadoop job.
As of now, before halting the Giraph job, I write the aggregated count
values to a file and my Hadoop job
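For what it's worth, here is a minimal sketch of that pattern (the
aggregator name "count", the LongSumAggregator type, and the fixed stopping
superstep are illustrative assumptions): register a persistent aggregator
in a MasterCompute, then read its final value on the master just before
calling haltComputation(), which is the last point where it can be written
to a side file for the enclosing Hadoop job.

import org.apache.giraph.aggregators.LongSumAggregator;
import org.apache.giraph.master.DefaultMasterCompute;
import org.apache.hadoop.io.LongWritable;

public class CountMaster extends DefaultMasterCompute {
  @Override
  public void initialize() throws InstantiationException, IllegalAccessException {
    // A persistent aggregator keeps its value across supersteps instead
    // of being reset after each one.
    registerPersistentAggregator("count", LongSumAggregator.class);
  }

  @Override
  public void compute() {
    if (getSuperstep() == 10) { // illustrative stopping condition
      long total = this.<LongWritable>getAggregatedValue("count").get();
      // Write 'total' to a side file on HDFS here so the enclosing
      // Hadoop job can read it back after the Giraph job halts.
      haltComputation();
    }
  }
}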
You should try to allocate as much of your RAM as possible to map tasks. You
can do that by changing the parameters in the mapred-site.xml file. Add the
statements below to set 6.5 GB of heap for map tasks:
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx6500m</value>
  <description>heap size for map tasks</description>
</property>
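For completeness, a minimal sketch of setting the same option
programmatically on the job's Configuration before submission (this reuses
the property name from the snippet above; on newer Hadoop versions the
equivalent property is mapreduce.map.java.opts):

import org.apache.hadoop.conf.Configuration;

public class MapHeapExample {
  public static void main(String[] args) {
    // Set the map-task heap to 6.5 GB on the job configuration instead
    // of editing mapred-site.xml.
    Configuration conf = new Configuration();
    conf.set("mapred.map.child.java.opts", "-Xmx6500m");
  }
}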