Hi Giraph User,

I have a Hadoop cluster of 10 nodes; each node has 32 GB of memory and 8
CPU cores.

I want to have 1 worker per machine and 8 compute threads for each worker.

To achieve this, I have specified the following values for these parameters:

mapreduce.map.cpu.vcores = 8
mapreduce.map.memory.mb = 32000

yarn.scheduler.minimum-allocation-vcores = 8
yarn.scheduler.maximum-allocation-vcores = 8

yarn.nodemanager.resource.memory-mb = 32000
yarn.nodemanager.resource.cpu-vcores = 8

I am running a custom application with 8 workers (-w 8) and the following
custom arguments:

giraph.numComputeThreads = 8
giraph.userPartitionCount = 64
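For completeness, the job submission looks roughly like the sketch below.
The jar name, computation class, and input/output formats and paths are
placeholders standing in for my custom application, not the real names:

# yarn.nodemanager.resource.* and yarn.scheduler.*-allocation-vcores are set
# cluster-side in yarn-site.xml with the values listed above; the job itself
# is submitted roughly like this:
hadoop jar my-giraph-app.jar org.apache.giraph.GiraphRunner \
  -D mapreduce.map.memory.mb=32000 \
  -D mapreduce.map.cpu.vcores=8 \
  com.example.MyComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /path/to/input \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /path/to/output \
  -w 8 \
  -ca giraph.numComputeThreads=8 \
  -ca giraph.userPartitionCount=64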

In the logs, I can see that 8 compute threads are running in each container.

INFO graph.GraphTaskManager: execute: *8 partitions to process with 8
compute thread(s), originally 8 thread(s) on superstep 0*

The ApplicationMaster logs show that each container has only one CPU core
assigned.

INFO yarn.GiraphApplicationMaster: Launching command on a new container.,
containerId=container_1482658643124_0002_01_000007,
containerNode=orion-09.local:50519, containerNodeURI=orion-09.local:8042,
containerResourceMemory=32000, *containerCPU=1*

The UI also shows that only 1 CPU core is being used, and the logs show that
only 8 partitions are created.

Could you please point out what changes need to be made so that each worker
uses all 8 cores and gets 8 partitions?

Thanks
Ravikant
