Hi all !
I have downloaded hadoop-0.21. I am behind my college proxy.
Installed :-
ivy version : 2.1.0~rc2-3ubuntu1
Ant version : 1.7.1-4ubuntu1.1
I get the following error while building mumak :
$ cd /home/arun/Documents/hadoop-0.21.0/mapred
$ ant package
Buildfile: build.xml
clover.setup:
clover.info:
[echo]
[echo] Clover not found. Code coverage
When I run the job, it throws the following error.
11/07/29 22:22:22 INFO mapred.JobClient: Task Id :
attempt_201107292131_0011_m_00_2, Status : FAILED
java.io.IOException: Type mismatch in value from map: expected
org.apache.hadoop.io.IntWritable, recieved org.apache.hadoop.io.Text
But
If you want to use a combiner, your map has to output the same types
as your combiner outputs. In your case, modify your map to look like
this:
public static class TokenizerMapper
    extends Mapper<Text, Text, Text, IntWritable> {
  public void map(Text key, Text value, Context context
So set a proxy?
http://ant.apache.org/manual/proxy.html
On Fri, Jul 29, 2011 at 3:32 PM, Arun K arunk...@gmail.com wrote:
Hi all !
I have downloaded hadoop-0.21.I am behind my college proxy.
Installed :-
ivy version : 2.1.0~rc2-3ubuntu1
Ant version : 1.7.1-4ubuntu1.1
I get the
0.21.0 hasn't even reached stable yet. The most recent stable version is
0.20.203
On Thu, Jul 28, 2011 at 3:30 PM, Aaron Baff aaron.b...@telescope.tv wrote:
Does this mean 0.22.0 has reached stable and will be released as the stable
version soon?
--Aaron
-Original Message-
From:
Hi all,
Does anybody have examples of how one moves files from the local
filestructure/HDFS to the distributed cache in MapReduce? A Google search
turned up examples in Pig but not MR.
--
Roger Chen
UC Davis Genome Center
I'm running into a wall with one of my MapReduce jobs (actually it's 7
jobs, chained together). I get to the 5th MR job, which takes as input the
output from the 3rd MR job, and right off the bat I start getting "Lost task
tracker" and "Could not obtain block..." errors. Eventually I get enough of
It was my understanding (could easily be wrong) that 0.21.0 was never going to
be considered a stable, production version and 0.22.0 was going to be the next
big stable revision.
--Aaron
-Original Message-
From: Roger Chen [mailto:rogc...@ucdavis.edu]
Sent: Friday, July 29, 2011 10:20
Good evening,
does anyone have an example of how I can use the TotalOrderPartitioner (with
InputSampler)? It seems that I have to use a patch before I can use it
(on hadoop 0.20.2), but the patch fails with some errors.
In addition, I have searched for hours but haven't found an example
Slight modification: I now know how to add files to the distributed file
cache, which can be done via this command placed in the main or run class:
DistributedCache.addCacheFile(new URI("/user/hadoop/thefile.dat"),
conf);
However I am still having trouble locating the file in the
Did you try using -files option in your hadoop jar command as:
/usr/bin/hadoop jar <jar name> <main class name> -files <absolute path of
file to be added to distributed cache> <input dir> <output dir>
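Concretely, with hypothetical jar, class, and path names filled in for the placeholders, the invocation might look like this:

```shell
# Hypothetical names; -files copies the listed local file into the
# distributed cache before the job starts.
/usr/bin/hadoop jar wordcount.jar com.example.WordCount \
  -files /home/hadoop/thefile.dat \
  /user/hadoop/input /user/hadoop/output
```

Note that -files is parsed by GenericOptionsParser, so it only takes effect if the driver runs through ToolRunner (or otherwise passes its arguments through GenericOptionsParser).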
On Fri, Jul 29, 2011 at 11:05 AM, Roger Chen rogc...@ucdavis.edu wrote:
Slight modification: I now
After moving it to the distributed cache, how would I call it within my
MapReduce program?
On Fri, Jul 29, 2011 at 11:09 AM, Mapred Learn mapred.le...@gmail.comwrote:
Did you try using -files option in your hadoop jar command as:
/usr/bin/hadoop jar <jar name> <main class name> -files <absolute
OK, for accessing it in mapper code, you can do something like:
On Fri, Jul 29, 2011 at 11:09 AM, Mapred Learn mapred.le...@gmail.comwrote:
Did you try using -files option in your hadoop jar command as:
/usr/bin/hadoop jar <jar name> <main class name> -files <absolute path of
file to be added to
I hope my previous reply helps...
On Fri, Jul 29, 2011 at 11:11 AM, Roger Chen rogc...@ucdavis.edu wrote:
After moving it to the distributed cache, how would I call it within my
MapReduce program?
On Fri, Jul 29, 2011 at 11:09 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Did you try
Please unsubscribe me.
On Jul 29, 2011, at 1:18 PM, Mapred Learn wrote:
I hope my previous reply helps...
On Fri, Jul 29, 2011 at 11:11 AM, Roger Chen rogc...@ucdavis.edu
wrote:
After moving it to the distributed cache, how would I call it
within my
MapReduce program?
On Fri, Jul 29,
If you already have your ivy2 cache with all the artifacts
then you can manually download/copy the ivy.jar and place that in the ivy
folder and do an ant build with -Doffline=true.
-Giri
On Fri, Jul 29, 2011 at 3:02 AM, Arun K arunk...@gmail.com wrote:
Hi all !
I have downloaded
Thanks for the response! However, I'm having an issue with this line
Path[] cacheFiles = DistributedCache.getLocalCacheFiles(conf);
because conf has private access in org.apache.hadoop.conf.Configured
On Fri, Jul 29, 2011 at 11:18 AM, Mapred Learn mapred.le...@gmail.comwrote:
I hope my previous
Is this what you are looking for?
http://hadoop.apache.org/common/docs/current/mapred_tutorial.html
search for jobConf
On Fri, Jul 29, 2011 at 1:51 PM, Roger Chen rogc...@ucdavis.edu wrote:
Thanks for the response! However, I'm having an issue with this line
Path[] cacheFiles =
jobConf is deprecated in 0.20.2 I believe; you're supposed to be using
Configuration for that
On Fri, Jul 29, 2011 at 1:59 PM, Mohit Anchlia mohitanch...@gmail.comwrote:
Is this what you are looking for?
http://hadoop.apache.org/common/docs/current/mapred_tutorial.html
search for jobConf
Thanks Joey,
It works, but one place I don't understand:
1: in the map
extends Mapper<Text, Text, Text, IntWritable>
so the output value is of type IntWritable
2: in the reduce
extends Reducer<Text, Text, Text, IntWritable>
So input value is of type Text.
type of map output should be the same as
Hi all, I have now resolved my issue by doing a try/catch statement. Thanks
for all the help!
On Fri, Jul 29, 2011 at 2:51 PM, Roger Chen rogc...@ucdavis.edu wrote:
jobConf is deprecated in 0.20.2 I believe; you're supposed to be using
Configuration for that
On Fri, Jul 29, 2011 at 1:59 PM,
Has anyone had experience with chaining map jobs in Hadoop framework 0.20.2?
Thanks.
--
Roger Chen
UC Davis Genome Center
Moving to mapreduce-user@, bcc common-user@.
Use JobControl:
http://hadoop.apache.org/common/docs/r0.20.0/mapred_tutorial.html#Job+Control
Arun
On Jul 29, 2011, at 4:24 PM, Roger Chen wrote:
Has anyone had experience with chaining map jobs in Hadoop framework 0.20.2?
Thanks.
--
Roger
I could have sworn that I gave an example earlier this week on how to push and
pull stuff from distributed cache.
Date: Fri, 29 Jul 2011 14:51:26 -0700
Subject: Re: Moving Files to Distributed Cache in MapReduce
From: rogc...@ucdavis.edu
To: common-user@hadoop.apache.org
jobConf is
Here's the meat of my post earlier...
Sample code on putting a file on the cache:
DistributedCache.addCacheFile(new URI(path + MyFileName), conf);
Sample code in pulling data off the cache:
private Path[] localFiles =
DistributedCache.getLocalCacheFiles(context.getConfiguration());
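To illustrate the pull side: once getLocalCacheFiles() hands back a local path, reading it is ordinary Java I/O. Here is a minimal sketch with no Hadoop dependency (the temp file stands in for the localized cache file; names and contents are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CacheFileDemo {
    // Reads every line of a local file, the same way a mapper's setup()
    // would read the path returned by DistributedCache.getLocalCacheFiles().
    static List<String> readCachedFile(Path localPath) throws IOException {
        return Files.readAllLines(localPath);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the localized cache file (hypothetical contents).
        Path demo = Files.createTempFile("thefile", ".dat");
        Files.write(demo, List.of("alpha", "beta"));
        System.out.println(readCachedFile(demo)); // prints [alpha, beta]
    }
}
```

In a real mapper you would typically do this once in setup() and keep the parsed contents in a field, rather than re-reading the file per record.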
Hi all !
I have added the following code to build.xml and tried to build: $ ant
package.
I have also tried removing the entire ivy2 (~/.ivy2/*) directory
and rebuilding, but couldn't succeed.
<setproxy proxyhost="192.168.0.90" proxyport="8080"
proxyuser="ranam" proxypassword="passwd"/>
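For reference, that setproxy call as a well-formed fragment in build.xml (the host, port, and credentials are the poster's own values; the target name is hypothetical). The setproxy task ships with Ant's optional tasks, and it has to execute before any target that makes Ivy resolve over the network:

```xml
<!-- Run this before Ivy resolution, e.g. "ant proxy package",
     or add "proxy" to the depends list of the resolve target. -->
<target name="proxy">
  <setproxy proxyhost="192.168.0.90" proxyport="8080"
            proxyuser="ranam" proxypassword="passwd"/>
</target>
```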