1) Check if, by any chance, the process is still running by using the jps -V command.
2) If it is running, kill it with sudo kill -9 <proc-id>.
3) Execute the name node start command again.
4) Go to the bottom of the name node log file and post it here (see the command sketch below).
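A rough sketch of those steps as commands (the daemon script location and the log path assume a typical Hadoop install and may differ on your cluster):

    jps | grep -i namenode                                 # 1) is a NameNode JVM still running?
    sudo kill -9 <proc-id>                                 # 2) replace <proc-id> with the pid shown by jps
    hadoop-daemon.sh start namenode                        # 3) script lives under $HADOOP_HOME/bin or sbin depending on version
    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # 4) last lines of the NameNode log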
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
Technically, yes, you can keep all MapReduce jobs in a single JAR file
because MapReduce jobs are nothing but Java classes, but I think it is
better to keep each MapReduce job isolated so that you will be able to
modify them easily in the future.
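As a rough illustration (the class and job names here are made up), one common way to package several isolated job classes in a single jar is to dispatch them by name through Hadoop's ProgramDriver:

    import org.apache.hadoop.util.ProgramDriver;

    public class JobLauncher {

        // Stand-ins for real job driver classes; each would configure and submit its own job.
        public static class WordCountJob {
            public static void main(String[] args) throws Exception {
                System.out.println("configure and submit the word count job here");
            }
        }

        public static class LogParserJob {
            public static void main(String[] args) throws Exception {
                System.out.println("configure and submit the log parser job here");
            }
        }

        public static void main(String[] args) {
            int exitCode = -1;
            ProgramDriver driver = new ProgramDriver();
            try {
                // Register each isolated job under its own command name.
                driver.addClass("wordcount", WordCountJob.class, "Counts words in the input");
                driver.addClass("logparser", LogParserJob.class, "Parses web server logs");
                driver.driver(args);   // runs the job named by args[0]
                exitCode = 0;
            } catch (Throwable t) {
                t.printStackTrace();
            }
            System.exit(exitCode);
        }
    }

You would then run, for example, 'hadoop jar myjobs.jar wordcount <input> <output>', while each job class stays isolated and easy to change.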
Regards,
Chandrash3khar Kotekar
Mobile - +91
Your question is very vague. Can you give us more details about the problem
you are trying to solve?
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
On Sun, May 3, 2015 at 11:59 PM, Answer Agrawal yrsna.tse...@gmail.com
wrote:
Hi
As I studied that data distribution, load balancing,
Hi,
I am trying to configure name node HA and I want to further configure
automatic failover. I am confused about the '*dfs.namenode.shared.edits.dir*'
configuration.
The documentation says that the active name node writes to shared storage. I
would like to know whether this means that the name nodes write the shared
edits to HDFS itself or to a NAS.
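For reference, the property usually takes one of these two shapes in hdfs-site.xml (the hostnames, mount point, and the nameservice id "mycluster" below are placeholders, and only one of the two would be used):

    <!-- NFS/NAS based HA: both NameNodes point at the same shared mount. -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>file:///mnt/nfs/hadoop/ha-edits</value>
    </property>

    <!-- Quorum Journal Manager based HA: edits are written to a set of JournalNodes. -->
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
    </property>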
Thanks Regards
Brahma Reddy Battula
--
*From:* Chandrashekhar Kotekar [shekhar.kote...@gmail.com]
*Sent:* Thursday, February 12, 2015 5:01 PM
*To:* user@hadoop.apache.org
*Subject:* journal node shared edits directory should be present on HDFS
or NAS
) is restored, the cluster will resume
healthy operation. This is part of Hadoop's ability to handle network
partition events.
*Rich Haase* | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com
From: Chandrashekhar Kotekar shekhar.kote...@gmail.com
Reply-To: user
Hi,
What happens if the name node has crashed for more than one hour but the secondary
name node, all the data nodes, the job tracker, and the task trackers are running fine?
Do those daemon services also automatically shut down after some time, or do
those services keep running, hoping for the name node to come back?
Hi,
I have observed that there are multiple ways to write the driver method of a
Hadoop program.
The following method is given in the Hadoop Tutorial by Yahoo
(http://developer.yahoo.com/hadoop/tutorial/module4.html):
public void run(String inputPath, String outputPath) throws Exception {
JobConf conf =
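For context, the full form of that JobConf-style driver method from the Yahoo tutorial looks roughly like this (WordCount, MapClass, and Reduce are placeholders for the tutorial's own classes, using the old org.apache.hadoop.mapred API):

    public void run(String inputPath, String outputPath) throws Exception {
        JobConf conf = new JobConf(WordCount.class);   // WordCount is the driver class itself
        conf.setJobName("wordcount");

        // Key/value types emitted by the job.
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Mapper and reducer implementations.
        conf.setMapperClass(MapClass.class);
        conf.setReducerClass(Reduce.class);

        // Input and output locations on HDFS.
        FileInputFormat.setInputPaths(conf, new Path(inputPath));
        FileOutputFormat.setOutputPath(conf, new Path(outputPath));

        // Submit the job and wait for it to finish.
        JobClient.runJob(conf);
    }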