Hi Sai, your question is "the question" when it comes to using Giraph.

Those values depend on how much memory you have on each node, on whether
the cluster is being used by other users at the same time, on the type of
program you are running, and so on. The virtual memory limit can be raised
easily, but the physical memory limit is a real constraint.
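
For example (just an illustration, not values for Comet specifically), the
virtual memory check is controlled in yarn-site.xml; you can raise the
virtual/physical ratio or turn the check off:

  <!-- yarn-site.xml: example only, adjust to your cluster -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>          <!-- default is 2.1 -->
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>      <!-- disables the virtual memory check -->
  </property>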

I recommend that you post how much memory each node of your cluster makes
available to YARN; then someone may be able to give you more precise advice
on how to tune those parameters.
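
If you are not sure how much that is, it is the
yarn.nodemanager.resource.memory-mb value in yarn-site.xml on each node,
something like:

  <!-- yarn-site.xml on a worker node: the value below is only an example -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>57344</value>      <!-- e.g. 56 GB offered to YARN on a 64 GB node -->
  </property>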

You should look at some older discussions about these values, like this one:
https://www.mail-archive.com/user@giraph.apache.org/msg02628.html
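
Just to give an idea of the arithmetic (only a sketch, the numbers below are
invented): Giraph runs every worker, plus the master, as a map task, so with
W workers you need W + 1 map containers spread over your N nodes, and each
container has to fit inside the memory that a node offers to YARN. Assuming
nodes that give 57344 MB to YARN and 4 containers per node, mapred-site.xml
could look like this:

  <!-- mapred-site.xml: example only, derive the numbers from your own nodes -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>14336</value>      <!-- 57344 MB / 4 containers per node -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx12288m</value> <!-- JVM heap somewhat below the container limit -->
  </property>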

Bye

-- 
*José Luis Larroque*
Analista Programador Universitario - Facultad de Informática - UNLP
Java and .NET Developer at LIFIA

2017-02-16 7:32 GMT-03:00 Sai Ganesh Muthuraman <saiganesh...@gmail.com>:

> Hi,
>
> I am trying to run a Giraph application (computing betweenness centrality)
> in the XSEDE Comet cluster, but every time I get some error relating to
> container launch. Either the virtual memory or the physical memory is
> running out.
>
>
> To avoid this, it looks like the following parameters have to be set:
> i) The maximum memory yarn can utilize on every node
> ii) Breakup of total resources available into containers
> iii) Physical RAM limit for each Map And Reduce task
> iv) The JVM heap size limit for each task
> v) The amount of virtual memory each task will get
>
> If I were to use *N nodes* for computation, and I want to use *W workers*,
> what should the following parameters be?
>
> In mapred-site.xml
> mapreduce.map.memory.mb
> mapreduce.reduce.memory.mb
> mapreduce.map.cpu.vcores
> mapreduce.reduce.cpu.vcores
>
> In yarn-site.xml
> yarn.nodemanager.resource.memory-mb
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.minimum-allocation-vcores
> yarn.scheduler.maximum-allocation-vcores
> yarn.nodemanager.resource.cpu-vcores
>
> Sai Ganesh
>
>
>
