Hi!
If you are interested in Cascading, I recommend asking on the Cascading
mailing list or coming to ask in the IRC channel.
The mailing list can be found at the bottom left corner of www.cascading.org.
Regards Erik
Hi guys!
Thanks for your help, but still no luck. I did try to set it up on a
different machine with Eclipse 3.2.2 and the IBM plugin instead of the
Hadoop one; there I only needed to fill out the install directory and the
host, and that worked just fine.
I have filled out the ports correctly.
Hey Philipp!
Not sure about your time tracking thing; it probably works. I've just used a
bash script to start the jar, and then you can do the timing in the script.
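For example, something as simple as this (the jar, class, and path names are
just placeholders):

  time ./bin/hadoop jar myjob.jar org.example.MyJob input output

run from the Hadoop directory will print how long the whole job took.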
About how to compile the jars: you need to include the dependencies too, but
you will see what you are missing when you run the job.
Regards Erik
Hey Philipp!
MR jobs are run locally if you just run the Java file; to get them running in
distributed mode you need to create a job jar and run it like
./bin/hadoop jar ...
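For example, something along these lines (the jar name, class name, and paths
here are placeholders):

  jar cvf myjob.jar -C build/classes .
  ./bin/hadoop jar myjob.jar org.example.MyJob input output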
Regards Erik
Thanks guys!
Running Linux and the remote cluster is also Linux.
I have the properties set up like that already on my remote cluster, but I'm
not sure where to input this info into Eclipse.
And when changing the ports to 9000 and 9001 I get:
Error: java.io.IOException: Unknown protocol to job tracker:
I'm using Eclipse 3.3.2 and want to view my remote cluster using the Hadoop
plugin. Everything shows up and I can see the map/reduce perspective, but
when trying to connect to a location I get:
"Error: Call failed on local exception"
I've set the host to, for example, xx0, where xx0 is a remote machine
Hi!
I have been trying to get the logs from Hadoop to redirect to a remote log
server. I tried to add the socket appender in the log4j.properties file in
the conf directory, and also to add the commons-logging and log4j jars plus
the same log4j.properties file into the WEB-INF of the master, but I still
get nothing
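For reference, this is the kind of SocketAppender entry I mean in
conf/log4j.properties (the host name and port below are just placeholders):

  log4j.appender.remote=org.apache.log4j.net.SocketAppender
  log4j.appender.remote.RemoteHost=loghost.example.com
  log4j.appender.remote.Port=4560
  log4j.appender.remote.LocationInfo=true

plus adding ",remote" to the existing rootLogger line, with something like
org.apache.log4j.net.SimpleSocketServer listening on that port on the log
server.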
> Thanks,
> Lohit
>
> - Original Message
> From: Erik Holstad <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Friday, November 14, 2008 5:08:03 PM
> Subject: Cleaning up files in HDFS?
>
> Hi!
> We would like to run a delete script t
Hi!
We would like to run a delete script that deletes all files older than
x days that are stored in lib l in HDFS. What is the best way of doing that?
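One approach I can think of is a small client against the FileSystem API,
roughly like this (just a sketch; the directory name and day count are
placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class DeleteOldFiles {
    public static void main(String[] args) throws Exception {
      long days = 7;                                       // "x days", placeholder
      long cutoff = System.currentTimeMillis() - days * 24L * 60 * 60 * 1000;
      FileSystem fs = FileSystem.get(new Configuration());
      FileStatus[] files = fs.listStatus(new Path("/user/erik/lib"));  // placeholder dir
      if (files != null) {
        for (FileStatus f : files) {
          if (!f.isDir() && f.getModificationTime() < cutoff) {
            fs.delete(f.getPath(), false);                 // false = not recursive
          }
        }
      }
    }
  }

Is there a better or more standard way?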
Regards Erik
Hi!
Is there a way of using the value read in configure() in the Map or
Reduce phase?
Erik
On Thu, Oct 23, 2008 at 2:40 AM, Aaron Kimball <[EMAIL PROTECTED]> wrote:
> See Configuration.setInt() in the API. (JobConf inherits from
> Configuration). You can read it back in the configure() metho
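A minimal sketch of the pattern Aaron describes, using the old mapred API
(the key name and class names here are placeholders, not anything from the
actual job):

  import java.io.IOException;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;

  // At job-setup time: conf.setInt("my.example.threshold", 10);
  public class MyMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {

    private int threshold;

    public void configure(JobConf job) {
      // Read the value back and keep it in an instance field...
      threshold = job.getInt("my.example.threshold", 0);
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // ...so it is available here in the map phase (same idea in a reducer).
    }
  }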
Hi Steve!
You can pass -jobconf mapred.map.tasks=$MAPPERS -jobconf
mapred.reduce.tasks=$REDUCERS
to the streaming job to set the number of mappers and reducers.
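For example (the streaming jar path and the mapper/reducer scripts are
placeholders; the exact jar name depends on your Hadoop version):

  ./bin/hadoop jar contrib/streaming/hadoop-*-streaming.jar \
      -input in -output out \
      -mapper mymapper.py -reducer myreducer.py \
      -jobconf mapred.map.tasks=10 \
      -jobconf mapred.reduce.tasks=4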
Regards Erik
On Wed, Oct 15, 2008 at 4:25 PM, Steve Gao <[EMAIL PROTECTED]> wrote:
> Is there a way to change number of mappers in H
Hi!
I'm trying to run an MR job, but it keeps failing and I can't understand
why. Sometimes it shows output at 66% and sometimes at 98% or so.
I had a couple of exceptions before that I didn't catch, which made the job
fail.
The log file from the task can be found at:
http://pastebin.com/m4414d369
Hi!
I'm writing a mapreduce job where I want the output from the mapper to go
straight to HDFS without passing through the reduce method. I have been told
that I can do:
c.setOutputFormat(TextOutputFormat.class); and I also added
Path path = new Path("user");
FileOutputFormat.setOutputPath(c, path);
But I still
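For context, a sketch of what I understand a full map-only setup to look like
(old mapred API; MyJob, MyMapper, and the input path are placeholders, and
setNumReduceTasks(0) is the part that is supposed to skip the reduce phase so
mapper output goes straight to HDFS through the configured OutputFormat):

  JobConf c = new JobConf(MyJob.class);
  c.setMapperClass(MyMapper.class);
  c.setNumReduceTasks(0);                      // no reduce phase at all
  c.setOutputFormat(TextOutputFormat.class);
  FileInputFormat.setInputPaths(c, new Path("input"));
  FileOutputFormat.setOutputPath(c, new Path("user"));
  JobClient.runJob(c);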