Re: Unable to start namenode : Address already in use

2015-06-06 Thread Chandrashekhar Kotekar
1) Check whether the process is still running with the jps -V command.
2) If it is running, kill it with sudo kill -9 proc-id.
3) Run the name node start command again.
4) Go to the bottom of the name node log file and post it here.
Regards, Chandrash3khar Kotekar Mobile - +91 8600011455

Re: Map Reduce Help

2015-05-05 Thread Chandrashekhar Kotekar
Technically yes, you can keep all map-reduce jobs in a single jar file, because each map-reduce job is just a set of Java classes, but I think it is better to keep each map-reduce job isolated so that you can modify them independently in the future.
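The "many jobs, one jar" packaging described above can be sketched as a small dispatcher that selects a job driver by name; this is the same idea Hadoop's own hadoop-examples jar implements with org.apache.hadoop.util.ProgramDriver. The sketch below is plain Java with hypothetical job names standing in for real MapReduce drivers, so it illustrates only the dispatch pattern, not job submission:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class JobDispatcher {
    // Stand-in for a MapReduce job driver; a real one would build a
    // job configuration and submit it to the cluster.
    interface JobDriver {
        int run(String[] args);
    }

    private final Map<String, JobDriver> jobs = new LinkedHashMap<>();

    void register(String name, JobDriver driver) {
        jobs.put(name, driver);
    }

    // Pick the job named by argv[0] and pass it the remaining arguments.
    int dispatch(String[] argv) {
        if (argv.length == 0 || !jobs.containsKey(argv[0])) {
            System.err.println("Known jobs: " + jobs.keySet());
            return 1;
        }
        String[] rest = Arrays.copyOfRange(argv, 1, argv.length);
        return jobs.get(argv[0]).run(rest);
    }

    public static void main(String[] args) {
        JobDispatcher d = new JobDispatcher();
        // Hypothetical job names; each lambda stands in for a real driver class.
        d.register("wordcount", a -> { System.out.println("running wordcount"); return 0; });
        d.register("grep", a -> { System.out.println("running grep"); return 0; });
        System.exit(d.dispatch(args));
    }
}
```

With this layout you would launch any job from the single jar as hadoop jar myjobs.jar JobDispatcher jobname args..., while each driver class still lives in its own source file and can be changed independently.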

Re: Can we control data distribution and load balancing in Hadoop Cluster?

2015-05-03 Thread Chandrashekhar Kotekar
Your question is very vague. Can you give us more details about the problem you are trying to solve? On Sun, May 3, 2015 at 11:59 PM, Answer Agrawal yrsna.tse...@gmail.com wrote: Hi As I studied that data distribution, load balancing,

journal node shared edits directory should be present on HDFS or NAS or anything else?

2015-02-12 Thread Chandrashekhar Kotekar
Hi, I am trying to configure name node HA and I want to further configure automatic failover. I am confused about the 'dfs.namenode.shared.edits.dir' configuration. The documentation says that the active name node writes to shared storage. I would like to know if this means that name nodes write it on
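To answer the question in this thread: with the Quorum Journal Manager (the usual HA setup), the shared edits directory is neither on HDFS nor necessarily on NAS. Each JournalNode keeps its copy of the edit log on its own local disk, and dfs.namenode.shared.edits.dir points the name nodes at the quorum of JournalNodes. A minimal hdfs-site.xml sketch, where the hostnames and the 'mycluster' nameservice ID are placeholders for your own values:

```xml
<!-- The active name node writes edits to a quorum of JournalNodes -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>

<!-- Each JournalNode stores its copy of the edits on its local disk -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/hadoop/journal</value>
</property>
```

The older alternative is a shared NFS/NAS directory mounted on both name nodes, but QJM is what automatic failover deployments typically use.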

Re: journal node shared edits directory should be present on HDFS or NAS or anything else?

2015-02-12 Thread Chandrashekhar Kotekar
Thanks Regards Brahma Reddy Battula -- From: Chandrashekhar Kotekar [shekhar.kote...@gmail.com] Sent: Thursday, February 12, 2015 5:01 PM To: user@hadoop.apache.org Subject: journal node shared edits directory should be present on HDFS or NAS

Re: What happens to data nodes when name node has failed for long time?

2014-12-14 Thread Chandrashekhar Kotekar
) is restored, the cluster will resume healthy operation. This is part of Hadoop's ability to handle network partition events. Rich Haase | Sr. Software Engineer | Pandora m 303.887.1146 | rha...@pandora.com From: Chandrashekhar Kotekar shekhar.kote...@gmail.com Reply-To: user

What happens to data nodes when name node has failed for long time?

2014-12-12 Thread Chandrashekhar Kotekar
Hi, What happens if the name node has crashed for more than one hour but the secondary name node, all the data nodes, the job tracker, and the task trackers are running fine? Do those daemon services also shut down automatically after some time, or do they keep running, waiting for the name node to come back?

Fwd: Multiple ways to write Hadoop program driver - Which one to choose?

2013-04-23 Thread Chandrashekhar Kotekar
Hi, I have observed that there are multiple ways to write the driver method of a Hadoop program. The following method is given in the Hadoop Tutorial by Yahoo (http://developer.yahoo.com/hadoop/tutorial/module4.html): public void run(String inputPath, String outputPath) throws Exception { JobConf conf =
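For context, the truncated snippet above uses the old org.apache.hadoop.mapred API. A sketch of how that driver style typically continues is below; the class names (WordCountDriver, WordCountMapper, WordCountReducer) and the job name are illustrative placeholders, not the tutorial's exact code, and the mapper and reducer classes would have to exist in your project for this to compile:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
    public void run(String inputPath, String outputPath) throws Exception {
        // Bind the job to the jar that contains this driver class
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Placeholders for your own mapper and reducer classes
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);

        FileInputFormat.setInputPaths(conf, new Path(inputPath));
        FileOutputFormat.setOutputPath(conf, new Path(outputPath));

        // Submit the job and block until it completes
        JobClient.runJob(conf);
    }
}
```

The other common driver styles use the newer org.apache.hadoop.mapreduce.Job API, often combined with Tool and ToolRunner so that generic Hadoop options are parsed for you; the coexistence of the old and new APIs is the main reason multiple driver styles appear in tutorials.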