Hi,
Recently, I moved from a single-machine setup to a two-machine setup. I was
able to successfully run my job, which uses HDFS to get its data. I have
three trivial questions.

1- To access HDFS, I have to manually give the IP address of the server
running HDFS. I thought Hama would automatically pick it up from the
configuration, but it does not. I am probably doing something wrong. Right
now my code works by using the following:

FileSystem fs = FileSystem.get(new URI("hdfs://server_ip:port/"), conf);
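For reference, what I expected was that setting the default filesystem in a
config file on the classpath would let FileSystem.get(conf) resolve the
namenode by itself. Something like this (I am assuming core-site.xml is the
right file and that Hama reads it — please correct me if not):

```xml
<!-- core-site.xml (assumption: Hama picks this up from the classpath) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://server_ip:port/</value>
</property>
```

With that in place I would expect to be able to write simply
FileSystem fs = FileSystem.get(conf); without hard-coding the URI.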

2- On my master server, when I start Hama it automatically starts Hama on
the slave machine (all good). Both master and slave are set as
groomservers. This means I have two servers to run my job, which means I
can open more BSPPeerChild processes. If I submit my jar with 3 BSP tasks,
everything works fine. But when I move to 4 tasks, Hama freezes. Here is
the result of the jps command on the slave:

[jps output on slave — screenshot omitted]

Result of the jps command on the master:

[jps output on master — screenshot omitted]

You can see that it is only opening tasks on the slave but not on the master.

Note: I tried changing the bsp.tasks.maximum property in hama-default.xml
to 4, but I get the same result.
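For completeness, this is the property I changed (my assumption is that
such overrides may actually belong in hama-site.xml rather than
hama-default.xml, but I am not sure):

```xml
<!-- tried in hama-default.xml; should this go in hama-site.xml instead? -->
<property>
  <name>bsp.tasks.maximum</name>
  <value>4</value>
</property>
```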

3- I want my cluster to open as many BSPPeerChild processes as possible. Is
there any setting I can use to achieve that? Or does Hama just pick up the
values from hama-default.xml to decide how many tasks to open?


Regards,

Behroz Sikander
