Hi,
I am a beginner in Hadoop MapReduce. Please redirect me if I am not posting
in the correct forum.
I have created my own key type, which implements WritableComparable. I
would like to use TotalOrderPartitioner with this key and Text as the value. But
I keep encountering errors when the Tota
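The class below is a stdlib-only sketch of such a key (the name CompositeKey and its fields are made up); in real code it would declare `implements org.apache.hadoop.io.WritableComparable<CompositeKey>`, but the three methods Hadoop calls are exactly these. A frequent source of errors with TotalOrderPartitioner is a key whose readFields does not fully overwrite reused object state, or whose compareTo disagrees with the serialized ordering used to build the partition file.

```java
import java.io.*;

// Hypothetical key type mirroring the WritableComparable contract
// (in real code: implements org.apache.hadoop.io.WritableComparable<CompositeKey>).
public class CompositeKey implements Comparable<CompositeKey> {
    private String group = "";
    private long timestamp;

    public CompositeKey() {}   // Hadoop requires a no-arg constructor
    public CompositeKey(String group, long ts) { this.group = group; this.timestamp = ts; }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(group);          // serialize every field...
        out.writeLong(timestamp);
    }

    public void readFields(DataInput in) throws IOException {
        group = in.readUTF();         // ...and read them back in the same order,
        timestamp = in.readLong();    // overwriting ALL state (key objects are reused)
    }

    @Override
    public int compareTo(CompositeKey o) {   // must impose a total order
        int c = group.compareTo(o.group);
        return c != 0 ? c : Long.compare(timestamp, o.timestamp);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof CompositeKey && compareTo((CompositeKey) o) == 0;
    }

    @Override
    public int hashCode() { return group.hashCode() * 31 + (int) timestamp; }
}
```

The partition file written by InputSampler is read back through the same write/readFields pair, so the key must round-trip exactly.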
>
>> Hi,
>>
>> When I start up Hadoop, the namenode log shows "STATE* Safe mode ON";
>> how do I turn it off?
> I can turn it off with the command "hadoop dfsadmin -safemode leave" after
> start-up, but how can I just start HDFS
> out of safe mode?
>> Thanks.
>>
>> Ring
>>
>> the s
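For what it's worth, to keep the namenode from sitting in safe mode at startup on a test cluster, you can lower the block-report threshold in hdfs-site.xml; a value <= 0 means the namenode does not wait for any blocks to be reported. A sketch (sensible only for single-node or test setups; in normal operation safe mode exits on its own once enough blocks are reported):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <!-- fraction of blocks that must be reported before leaving safe mode;
       values <= 0 mean: do not wait (do not use on a production cluster) -->
  <value>0</value>
</property>
```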
Hi Adarsh,
Try this link
http://shuyo.wordpress.com/2011/03/08/hadoop-development-environment-with-eclipse/
regards
Sagar
From: Adarsh Sharma [adarsh.sha...@orkash.com]
Sent: Friday, April 08, 2011 9:45 AM
To: common-user@hadoop.apache.org
Subject: Configu
I have a 0.20.2 cluster. I notice that our nodes with 2 TB disks waste
tons of disk I/O doing a 'du -sk' of each data directory. Instead of
'du -sk', why not just do this with java.io.File? How is this going to
work with 4 TB, 8 TB disks and up? It seems like calculating used and
free disk space could
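Since Java 6, java.io.File can report space figures directly, as sketched below. One caveat (and possibly a reason Hadoop shells out to du): these calls describe the whole partition, while 'du -sk' measures what one data directory actually uses, which is a different number when the partition is shared.

```java
import java.io.File;

// A sketch of partition-level space queries (Java 6+). Unlike 'du -sk',
// these are cheap calls into the filesystem, but they describe the whole
// partition, not the usage of a single data directory on it.
public class SpaceCheck {
    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : ".");
        long total  = dir.getTotalSpace();   // size of the partition, bytes
        long free   = dir.getFreeSpace();    // unallocated bytes
        long usable = dir.getUsableSpace();  // bytes available to this JVM
        System.out.printf("total=%d free=%d usable=%d%n", total, free, usable);
    }
}
```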
Dear all,
I am following the links below to configure Eclipse with a Hadoop
environment, but I am not able to find the Map/Reduce perspective under Open
Perspective > Other.
http://developer.yahoo.com/hadoop/tutorial/module3.html#eclipse
http://wiki.apache.org/hadoop/EclipseEnvironment
I copi
To Whom It May Concern,
When trying to run Hadoop 0.21 with JDK 1.6_23 I get an error:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName.
The full error log is in the attached .png
Can you help me? I'd be grateful.
Yours faithfully,
Witold Januszewski
We have fairly good evidence that, as of 0.20.2, Hadoop does not set
the thread context class loader to the class loader that includes all
the .jar files from the lib subdirectory of a job jar.
Code we wrote (which is sitting in the 'main' part of the job jar)
calls a class in Mahout (which is sit
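A minimal sketch of the usual workaround: early in the job driver (or at the top of Mapper.setup), point the thread context class loader at the loader that loaded the job's own classes, since that loader can see the lib/*.jar entries inside the job jar even when the default context loader cannot.

```java
// Workaround sketch: before calling into libraries that load classes via
// the thread context class loader (many do), swap in the loader that
// loaded the job jar's own classes.
public class Driver {
    public static void install() {
        ClassLoader jobLoader = Driver.class.getClassLoader();
        Thread.currentThread().setContextClassLoader(jobLoader);
    }

    public static void main(String[] args) {
        install();
        // ... Class.forName / ServiceLoader lookups made by third-party code
        // (e.g. Mahout) now resolve against the job jar's classpath ...
    }
}
```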
We are in the late stages of a research project on the usage of Apache Hadoop
for managing and analyzing large scale data. The purpose is to assess the state
and maturity of the management of large-scale data to determine best practices
and trends which would lead to further adoption of Hadoop a
> But when I tried to implement a real-life project, things became too
> complicated for me; things didn't go the way I expected them to go,
> and I had to implement it using the plain map/red API
As with any framework (in Java), it takes time to find the best practices, many
of which are docum
Thanks for your answers,
I checked out Cascading for a while.
It was easy to get started and to do the tutorial; I really liked the modeling
of pipes, cogroups, and so on...
But when I tried to implement a real-life project, things became too
complicated for me; things didn't go the way I expec
Hi,
Can the copyMerge method of class FileUtil only merge files of the
exact same size in bytes into one file, or does the size of the source
files not matter?
Regards,
Xiaobo Gu
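As far as I can tell, source sizes do not matter: copyMerge simply streams every file under the source directory into the destination, one after another. Below is a local-filesystem sketch of those semantics (the real method is FileUtil.copyMerge and operates on Hadoop FileSystem objects; the sorted order here is a simplification for determinism).

```java
import java.io.*;
import java.util.Arrays;

// Local sketch of what copyMerge does conceptually: concatenate every file
// in a directory into one output file. File sizes play no role - any mix
// of sizes merges fine.
public class MergeSketch {
    public static void merge(File srcDir, File dst) throws IOException {
        File[] parts = srcDir.listFiles();
        Arrays.sort(parts);   // deterministic order, like sorted part-* names
        try (OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[8192];
            for (File part : parts) {
                try (InputStream in = new FileInputStream(part)) {
                    int n;
                    while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                }
            }
        }
    }
}
```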
> How do you test your code, which unit test libraries are you using, and how do
> you run your automated tests after you have finished development?
> Do you have test/QA/staging environments besides dev and production?
> How do you keep it similar to production?
> Code reuse - how do you
On 04/07/2011 03:39 AM, Guy Doulberg wrote:
Hey,
I have been developing Map/Red jars for a while now, and I am still not
comfortable with the development environment I have put together for myself (and the team).
I am curious how other Hadoop developers out there are developing their jobs...
What IDE y
On 04/06/2011 08:40 PM, Haruyasu Ueda wrote:
Hi all,
I'm writing an M/R Java program.
I want to abort the job itself from a map task when the map task finds
irregular data.
I have two ideas for doing so:
1. execute "bin/hadoop job -kill jobID" in the map task, from the slave machine.
2. raise an IOException to
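Of the two ideas, raising an exception (idea 2) is the usual pattern: an exception thrown from map() fails the task attempt, the framework retries it (mapred.map.max.attempts, default 4), and once the retries are exhausted the whole job fails, so there is no need to shell out and kill the job yourself. A stdlib-only sketch of the map-side check (the three-field rule is made up):

```java
// Sketch of idea 2: validate each record in map() and throw when the data
// is irregular; the failed task is retried and then the job is failed.
public class RecordValidator {
    public static void check(String line) {
        // hypothetical rule: every record must have exactly 3 tab-separated fields
        if (line.split("\t", -1).length != 3) {
            throw new IllegalArgumentException("irregular record: " + line);
        }
    }
    // inside a real Mapper.map(): check(value.toString());
}
```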
Using 0.21.0, I have implemented a custom InputFormat. The RecordReader
extends org.apache.hadoop.mapreduce.RecordReader.
The sample I looked at threw an IOException when there was an incompatible
input line, but I am not sure who is supposed to catch and handle this
exception. The task just failed wh
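Nothing above the RecordReader catches that exception for you: anything thrown out of nextKeyValue() fails the task attempt (and eventually the job). If malformed lines should be tolerated instead, catch the parse error inside the reader, count it, and advance. A stdlib sketch of the idea (the key=value format is made up; in a real RecordReader this logic sits in nextKeyValue() and the count would be reported through a Counter):

```java
import java.io.*;

// Tolerant record reading: instead of letting one malformed line fail the
// task, catch the parse error inside the reader, count it, and move on.
public class TolerantReader {
    private final BufferedReader in;
    public long badRecords = 0;
    public String current;

    public TolerantReader(Reader r) { in = new BufferedReader(r); }

    public boolean next() throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            try {
                current = parse(line);
                return true;
            } catch (IllegalArgumentException e) {
                badRecords++;            // skip bad line, keep reading
            }
        }
        return false;                    // end of input
    }

    // hypothetical format: "key=value"
    private static String parse(String line) {
        if (!line.contains("=")) throw new IllegalArgumentException(line);
        return line;
    }
}
```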
fs -copyFromLocal is creating an empty file on the localhost when I am trying
to set up my localhost in pseudo-distributed mode in Hadoop.
Hey,
I have been developing Map/Red jars for a while now, and I am still not
comfortable with the development environment I have put together for myself (and the team).
I am curious how other Hadoop developers out there are developing their jobs...
What IDE are you using,
What plugins for the IDE you are
Or set the Main-Class in the manifest of the jar.
-Original Message-
From: Bill Graham [mailto:billgra...@gmail.com]
Sent: Wednesday, April 06, 2011 11:17 PM
To: Shuja Rehman
Cc: common-user@hadoop.apache.org
Subject: Re: Including Additional Jars
You need to pass the mainClass aft
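For the manifest route mentioned above, a sketch (the class name is hypothetical); with a Main-Class entry in the jar's manifest, `hadoop jar myjob.jar <args>` no longer needs the class name on the command line:

```
Main-Class: com.example.MyDriver
```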