Thanks, Amogh.
Best regards,
Michael
--- On Thu, 3/18/10, Amogh Vasekar wrote:
From: Amogh Vasekar
Subject: Re: java.lang.NullPointerException at
org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:102)
To: "common-user@hadoop.apache.org"
Date: Thursday, March 18, 2010, 11:34 PM
Hi,
http://h
Thanks, Ninad. This really helps.
Best regards,
Michael
--- On Fri, 3/19/10, Ninad Raut wrote:
From: Ninad Raut
Subject: Re: performance analysis?
To: common-user@hadoop.apache.org
Date: Friday, March 19, 2010, 12:02 AM
The best and easiest tool to configure is Ganglia. Hadoop has built-in support for it.
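For what it's worth, here is a rough sketch of what that built-in support looks like (assuming Hadoop 0.20; the gmond host and port below are placeholders): the metrics contexts in conf/hadoop-metrics.properties can be pointed at Ganglia like this:

    # send DFS, MapReduce and JVM metrics to gmond every 10 seconds
    dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    dfs.period=10
    dfs.servers=gmond-host:8649
    mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    mapred.period=10
    mapred.servers=gmond-host:8649
    jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    jvm.period=10
    jvm.servers=gmond-host:8649

After restarting the daemons, the metrics should show up in the Ganglia web frontend.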
I have logged a comment in https://issues.apache.org/jira/browse/HADOOP-4829, which is related to
the IllegalStateException I saw when Cache.remove() tried to remove a shutdown hook
while the JVM was shutting down.
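To illustrate the failure mode outside of Hadoop (a toy example, not the Hadoop code itself): Runtime.removeShutdownHook() throws IllegalStateException once the JVM has started shutting down, which is exactly the window Cache.remove() can end up running in.

    public class ShutdownHookDemo {
        public static void main(String[] args) {
            Thread hook = new Thread() {
                @Override
                public void run() {
                    try {
                        // We are already inside JVM shutdown here, so deregistering fails.
                        Runtime.getRuntime().removeShutdownHook(this);
                    } catch (IllegalStateException e) {
                        // Typically reported as "Shutdown in progress"
                        System.err.println("Caught: " + e.getMessage());
                    }
                }
            };
            Runtime.getRuntime().addShutdownHook(hook);
            // A normal exit runs the hook, which then tries (and fails) to remove itself.
        }
    }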
Cheers
On Wed, Mar 10, 2010 at 11:00 AM, Todd Lipcon wrote:
> Hi,
>
> The issue
If you don't want to wait, you can run
bin/hadoop dfsadmin -safemode leave.
And this might be useful for reference.
-safemode <enter | leave | get | wait>: Safe mode maintenance command.
Safe mode is a Namenode state in which it
1. does not accept changes to the name space (read-only)
2. does not replicate or delete blocks.
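For completeness, the other -safemode subcommands are handy here as well (these are standard dfsadmin options in 0.20, as far as I know):

    bin/hadoop dfsadmin -safemode get    # report whether safe mode is currently ON or OFF
    bin/hadoop dfsadmin -safemode wait   # block until the Namenode leaves safe mode (useful in scripts)
    bin/hadoop dfsadmin -safemode enter  # put the Namenode back into safe mode manually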
There's a bit of an issue if you have no data in your HDFS -- 0 blocks out
of 0 is considered 100% reported, so NN leaves safe mode even if there are
no DNs talking to it yet.
For a fix, please see HDFS-528, included in Cloudera's CDH2.
Thanks
-Todd
On Fri, Mar 19, 2010 at 10:29 AM, Bill Haber wrote:
At startup, the namenode goes into 'safe' mode to wait for all data nodes to
send block reports on data they are holding. This is normal for Hadoop and
necessary to make sure all replicated data is accounted for across the
cluster. It is the nature of the beast to work this way for good reasons.
What is the namenode doing upon startup? I have to wait about 1 minute
and watch the namenode DFS usage drop from 100%; otherwise the install
is unusable. Is this typical? Is something wrong with my install?
I've been attempting the pseudo-distributed tutorial example for a
while, trying to
Hi Utku,
hiho currently works only with MySQL, and it works with Hadoop 0.20. It uses
features of MySQL Connector/J to load data into the database in parallel
through multiple map tasks. The number of tasks is determined by the
number of files you wish to load into the database. The current rele
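In case it helps to picture the approach, below is a rough, generic sketch (not hiho's actual code; the table, columns, JDBC URL and credentials are all made up) of how a map task can batch-insert its input split into MySQL through Connector/J, so the load runs in parallel across map tasks:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Each map task processes one file (or split), opens its own JDBC connection,
    // and batch-inserts its records, so parallelism is simply the number of map tasks.
    public class MySqlLoadMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

        private Connection conn;
        private PreparedStatement insert;

        @Override
        protected void setup(Context context) throws IOException {
            try {
                Class.forName("com.mysql.jdbc.Driver");
                // Hypothetical connection details; real ones would come from the job configuration.
                conn = DriverManager.getConnection("jdbc:mysql://dbhost:3306/mydb", "user", "pass");
                conn.setAutoCommit(false);
                insert = conn.prepareStatement("INSERT INTO my_table (col1, col2) VALUES (?, ?)");
            } catch (Exception e) {
                throw new IOException(e);
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException {
            String[] fields = value.toString().split(",", -1);
            try {
                insert.setString(1, fields[0]);
                insert.setString(2, fields[1]);
                insert.addBatch();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException {
            try {
                insert.executeBatch();
                conn.commit();
                conn.close();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }

The connection is opened once per task in setup() and the batch is flushed in cleanup(), so each input file is pushed to the database by its own map task.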
Thank you both, Aaron and Sonal, for your valuable comments and contributions.
I'll check out both projects and try to make a design decision.
I'm familiar with sqoop and have just heard about hiho.
Sonal: I guess hiho is a single map/reduce job handling the MySQL-Hadoop
integration. Is it als