I am running a Cloudera Hadoop cluster and I have noticed that some of my
services are showing a status of "Unknown Health". I have checked the
individual UIs (i.e., HBase, TaskTracker, DataNode, etc.) and all of them
appear to be healthy and running smoothly.
However, for example, when I look at the
I am trying to set up a 4-node Cloudera Hadoop cluster. However, two of my data
nodes are showing up as "dead" nodes. After looking at the logs, I found that
the dead nodes have a different ClusterID than my working/"live" nodes.
How do I configure the dead nodes with the correct ClusterID? I ca
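(A common cause of this is a reformatted NameNode. One minimal fix, sketched below, is to make each dead DataNode's VERSION file match the NameNode's clusterID and restart that DataNode. The paths are only examples; use your own dfs.namenode.name.dir and dfs.datanode.data.dir, and the CID value is a placeholder for whatever grep prints on the NameNode.)
  # On the NameNode, note the clusterID value:
  grep clusterID /dfs/nn/current/VERSION
  # On each dead DataNode, set the same value, then restart the DataNode
  # (via Cloudera Manager or the service script):
  sed -i 's/^clusterID=.*/clusterID=CID-value-from-namenode/' /dfs/dn/current/VERSION
Alternatively, if the dead DataNodes hold no data you need, wiping their data directories and restarting them lets them re-register with the current clusterID.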
I am trying to install Hue for Hadoop, but when I run "make install" I
receive the following error message. Any help would be greatly appreciated.
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m -unwind-t
Mohit,
Take a look at this article:
http://www.kernelhardware.org/how-should-run-fsck-linux-file-system/
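The short version, assuming the partition is a plain (non-LVM) ext filesystem: unmount it, run fsck, and remount. The device and mount point below are only examples:
  umount /dev/sdb1        # the filesystem must be unmounted first
  fsck -y /dev/sdb1       # check and automatically repair errors
  mount /dev/sdb1 /data   # remount once it finishes cleanly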
From: Mohit Vadhera [mailto:project.linux.p...@gmail.com]
Sent: Friday, March 01, 2013 1:52 PM
To:
Subject: hadoop filesystem corrupt
Hi,
While moving the data, my data folder didn't move
Are your partitions LVM or something else? If they're not LVM, you can use
GParted to re-configure your partition layout.
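A quick way to check, assuming the lvm2 tools are installed:
  sudo pvs    # lists LVM physical volumes, if any
  sudo lvs    # lists LVM logical volumes, if any
  df -hT      # LVM-backed mounts show up as /dev/mapper/* devices
If those report nothing and df shows plain /dev/sdXN devices, you're not on LVM.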
From: Jeffrey Buell [mailto:jbu...@vmware.com]
Sent: Monday, February 25, 2013 4:10 PM
To: user@hadoop.apache.org
Subject: Re: Format the harddrive
I've installed RHEL 6.1
Or Google's cousin.
http://www.lmgtfy.com/
From: Oleg Ruchovets [mailto:oruchov...@gmail.com]
Sent: Monday, February 25, 2013 8:38 AM
To: user@hadoop.apache.org
Subject: Re: Hadoop advantages vs Traditional Relational DB
Yes, sure, I asked uncle Google first :-). I already saw these links, but
d running? Also, you can try checking the job tracker
logs to see if they provide any information.
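On a stock Hadoop 1.x install the JobTracker log lives under the Hadoop log directory and the web UI listens on port 50030 by default; the path and host name below are only examples:
  less $HADOOP_HOME/logs/hadoop-*-jobtracker-*.log
  # or browse http://jobtracker-host:50030/ for the job and task status pages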
Regards,
Robert
On Tue, Dec 4, 2012 at 10:39 AM, Michael Namaiandeh
<mnamaian...@healthcit.com> wrote:
I started up Apache Hadoop version 1.0.4 and tried to submit a job, but I
noticed
Hi Hadoop user community,
I am trying to set up my first Hadoop cluster and I've found most of the
instructions a little confusing. I've seen how-tos that say core-site.xml
should have hdfs://localhost:8020 and others say hdfs://localhost:50030. Which
one is correct? Can someone please help
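For what it's worth, 8020 is the usual NameNode RPC port that belongs in core-site.xml, while 50030 is the JobTracker web UI port. A minimal single-node setup would look something like the sketch below (the property is named fs.defaultFS on Hadoop 2.x, and some guides use port 9000 instead):
  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:8020</value>
    </property>
  </configuration>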