Hi fellow users,
Do you know which methods in the new mapreduce package I should
use as replacements for the following old-API methods in JobConf?
conf.setOutputKeyComparatorClass
conf.setOutputValueGroupingComparator
Thank you,
Wei Shung
Hi,
On Sun, Mar 11, 2012 at 10:57 PM, Weishung Chung weish...@gmail.com wrote:
Do you know which methods in the new mapreduce package I should
use as replacements for the following old-API methods in JobConf?
The names have been simplified in the new API.
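For reference, the new-API equivalents on org.apache.hadoop.mapreduce.Job are setSortComparatorClass (replacing JobConf.setOutputKeyComparatorClass) and setGroupingComparatorClass (replacing JobConf.setOutputValueGroupingComparator). As a Hadoop-free sketch of what the two comparators control — the sort comparator orders all keys reaching a reducer, the grouping comparator decides which adjacent keys share one reduce() call — using plain java.util.Comparator (the Key record and sortAndGroup helper below are illustrative stand-ins, not Hadoop types):

```java
import java.util.*;

// Hadoop-free sketch: in the real new API these comparators are set with
// job.setSortComparatorClass(...) and job.setGroupingComparatorClass(...).
public class ComparatorDemo {
    // Composite key: natural part decides the group, secondary part the order.
    record Key(String natural, int secondary) {}

    // Sort comparator: total order over all keys fed to a reducer.
    static final Comparator<Key> SORT =
        Comparator.comparing(Key::natural).thenComparingInt(Key::secondary);

    // Grouping comparator: adjacent keys that compare equal here land in
    // the same reduce() call.
    static final Comparator<Key> GROUP = Comparator.comparing(Key::natural);

    static List<List<Key>> sortAndGroup(List<Key> input) {
        List<Key> keys = new ArrayList<>(input);
        keys.sort(SORT);
        List<List<Key>> groups = new ArrayList<>();
        for (Key k : keys) {
            if (groups.isEmpty()
                    || GROUP.compare(groups.get(groups.size() - 1).get(0), k) != 0) {
                groups.add(new ArrayList<>());
            }
            groups.get(groups.size() - 1).add(k);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Key> keys = List.of(new Key("b", 2), new Key("a", 2),
                                 new Key("a", 1), new Key("b", 1));
        // Two groups: the "a" keys (sorted by secondary), then the "b" keys.
        System.out.println(sortAndGroup(keys));
    }
}
```

This is the usual secondary-sort pattern: sort on the full composite key, group on the natural key only.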
Thank you so much :D
On Sun, Mar 11, 2012 at 12:33 PM, Harsh J ha...@cloudera.com wrote:
Hi All,
I've been trying to set up Cloudera's CDH4 Beta 1 release of MapReduce 2.0 on
a small cluster for testing, but I'm not having much luck getting things
running. I've been following the guide at
http://hadoop.apache.org/common/docs/r0.23.1/hadoop-yarn/hadoop-yarn-site/ClusterSetup.html to
Hi all, I've been using Hadoop-0.20-append for some time and I plan to upgrade
to the 1.0.0 release, but I find there are many people talking about the NEW API, so
I'm lost. Can anyone please tell me what the new API is? Is the OLD one still
available in the 1.0.0 release? Thanks. Cheers, Ramon
there are many people talking about the NEW API
This might be related to releases 0.21 or later, where append and related
functionality were re-implemented.
1.0 comes from 0.20.205 and has the same API as 0.20-append.
Sent from phone
On Mar 11, 2012, at 6:27 PM, WangRamon ramon_w...@hotmail.com
Hi People,
I hope you can help me. I started a single node for testing, following
this post (
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
Now I have everything running and I have put in some files from the command
line, but now I have some questions in mind.
1º - Can
As far as I know, the new API is the stuff in the Java package named
org.apache.hadoop.mapreduce, while the old API is in
org.apache.hadoop.mapred.
And, just to keep you on your toes, I am told the new API is deprecated;
it is the old API that is currently favored.
Regards,
Mike
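To make the package split above concrete, a Hadoop-free sketch of the two map() styles. In the real APIs, org.apache.hadoop.mapred.Mapper (old) is an interface whose map() writes through an OutputCollector and receives a Reporter, while org.apache.hadoop.mapreduce.Mapper (new) is a class whose map() writes through a single Context object. The types below are simplified stand-ins for illustration, not the actual Hadoop classes:

```java
import java.util.*;

// Simplified stand-ins contrasting the two Hadoop map() styles
// (not the real org.apache.hadoop.mapred / mapreduce types).
public class ApiSketch {

    // Old-API style: output is pushed through a collector the caller supplies.
    interface OldStyleMapper<K, V> {
        void map(K key, V value, Map<K, V> collector);
    }

    // New-API style: the mapper writes through a single context object.
    static abstract class NewStyleMapper<K, V> {
        final Map<K, V> context = new HashMap<>();
        abstract void map(K key, V value);
    }

    // Run both styles on the same record and report whether they agree.
    static boolean demo() {
        // Old style: caller owns the output collector.
        OldStyleMapper<String, Integer> oldMapper =
            (key, value, collector) -> collector.put(key.toUpperCase(), value);
        Map<String, Integer> out = new HashMap<>();
        oldMapper.map("word", 1, out);

        // New style: the mapper writes to its own context.
        NewStyleMapper<String, Integer> newMapper = new NewStyleMapper<>() {
            void map(String key, Integer value) {
                context.put(key.toUpperCase(), value);
            }
        };
        newMapper.map("word", 1);

        return out.equals(newMapper.context);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The behavioral difference is cosmetic here; the practical point is that the two Mapper types live in different packages and are not interchangeable in a job.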
Hey Keith,
You're most likely missing the HDFS jar somehow. I use the package
installation and am able to run the following successfully:
hadoop jar /usr/lib/hadoop/hadoop-mapreduce-examples.jar randomwriter
-Dmapreduce.job.user.name=$USER
The new API isn't 'deprecated'; it is just marked evolving and unstable
(things may change between releases).
Using the old, stable API is a good idea for the 0.20.x/1.x stable
releases, as they are more complete in it. However, we've added some
more backports to the new API in 1.0.1 to help those who use
Hi Harsh,
Thanks for getting back to me on this on a Sunday.
Your guess is the same as mine, but I'm not sure where this is happening
or how.
I installed this manually using the tarballs because the cluster I'm
working on is mostly cut off from the internet. I also can't seem to
install
Dear Patai,
Thanks for your reply.
We need to install only Hadoop, with no HBase or other tools.
Could you please point me to some useful sites or docs on using Puppet to
set up a Hadoop cluster?
Thanks.
Masoud.
On 03/10/2012 04:30 PM, Patai Sangbutsarakum wrote:
We deployed 2 PB clusters with Puppet.
Hello,
I am quite a newbie with Hadoop, but I have done two years of Java
programming. Anyway, I have a question:
1. Once a job is started, does the map method read the data it needs from
the input HDFS folder (for example) only once, or does it re-read the data
to be processed from the input folder every now and then?