Re: Reduce gets stuck at 99%

2010-04-08 Thread Eric Arenas
Yes Raghava,

I have experienced that issue before, and the solution that you mentioned also 
solved my issue (adding a context.progress() or context.setStatus() call to tell 
the JobTracker that my jobs are still running).

regards
 Eric Arenas





From: Raghava Mutharaju 
To: common-user@hadoop.apache.org; mapreduce-u...@hadoop.apache.org
Sent: Thu, April 8, 2010 10:30:49 AM
Subject: Reduce gets stuck at 99%

Hello all,

 I got the time out error mentioned below -- after 600 seconds, the 
attempt was killed and deemed a failure. I searched around about this error, and 
one of the suggestions was to include "progress" statements in the reducer -- it 
might be taking longer than 600 seconds and so is timing out. I added calls to 
context.progress() and context.setStatus(str) in the reducer. Now, it works fine 
-- there are no timeout errors.
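
In case it helps anyone else, the pattern I added looks roughly like this (the 
class name and the per-value work below are illustrative, not my actual reducer):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class LongRunningReducer extends Reducer<Text, Text, Text, Text> {
  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    long seen = 0;
    for (Text value : values) {
      // ... expensive per-value work goes here ...
      if (++seen % 1000 == 0) {
        context.progress();   // tells the framework the task is still alive
        context.setStatus("processed " + seen + " values for key " + key);
      }
    }
    context.write(key, new Text(Long.toString(seen)));
  }
}

Without those calls, a reduce() invocation that runs longer than 
mapred.task.timeout (600,000 ms by default) without reporting progress or writing 
output gets killed, which matches the message below.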

 But, for a few jobs, it takes an awfully long time to move from "Map 
100%, Reduce 99%" to "Reduce 100%". For some jobs it's 15 minutes and for some it 
was more than an hour. The reduce code is not complex -- a 2-level loop and a 
couple of if-else blocks. The input size is also not huge; for the job that gets 
stuck for an hour at reduce 99%, it would take in 130. Some of them are 1-3 MB in 
size and a couple of them are 16 MB in size. 

 Has anyone encountered this problem before? Any pointers? I use Hadoop 
0.20.2 on a Linux cluster of 16 nodes.

Thank you.

Regards,
Raghava.


On Thu, Apr 1, 2010 at 2:24 AM, Raghava Mutharaju  
wrote:

Hi all,
>
>   I am running a series of jobs one after another. While executing the 
> 4th job, it fails in the reducer --- the progress percentage would be 
> map 100%, reduce 99%. It gives the following message:
>
>10/04/01 01:04:15 INFO mapred.JobClient: Task Id : 
>attempt_201003240138_0110_r_18_1, Status : FAILED 
>Task attempt_201003240138_0110_r_18_1 failed to report status for 602 
>seconds. Killing!
>
>It makes several attempts to execute it but fails with a similar message. 
>I couldn't get anything from this error message and wanted to look at the logs 
>(located in the default dir, ${HADOOP_HOME}/logs). But I don't find any 
>files which match the timestamp of the job. Also, I did not find history and 
>userlogs in the logs folder. Should I look at some other place for the logs? 
>What could be the possible causes of the above error?
>
>   I am using Hadoop 0.20.2 and I am running it on a cluster with 16 nodes.
>
>Thank you.
>
>Regards,
>Raghava.
>


Re: basic hadoop job help

2010-02-18 Thread Eric Arenas
Hi Cory,

regarding the part that you are not sure about:


String inputdir  = args[0];
String outputdir = args[1];
int numberReducers = Integer.parseInt(args[2]);
// it is better to at least pass the number of reducers as a parameter,
// or read it from the XML job config file if you want

// setting the number of reducers to 1, as you had in your code, *might*
// make it slower to process and generate the output
// if you are trying to sell the idea of Hadoop as a new ETL tool, you want it
// to be as fast as it can be

...

job2.setNumReduceTasks(numberReducers);
FileInputFormat.setInputPaths(job2, inputdir);
FileOutputFormat.setOutputPath(job2, new Path(outputdir));

return job2.waitForCompletion(true) ? 0 : 1;

  } //end of run method


Unless there is something in the code you did not paste, I do not see why you 
need to call setWorkingDirectory() in your M/R job.

Give this a try and let me know,

regards,
Eric Arenas



- Original Message 
From: Cory Berg 
To: common-user@hadoop.apache.org
Sent: Thu, February 18, 2010 9:07:54 AM
Subject: basic hadoop job help

Hey all,

I'm trying to get Hadoop up and running as a proof of concept, to make an 
argument for moving away from a big RDBMS.  I'm having some challenges just 
getting a really simple demo MapReduce job to run.  The examples I have seen on 
the web tend to use classes that are now deprecated in the latest Hadoop 
(0.20.1), and it is not clear what the equivalent newer classes are in some cases.

Anyway, I am stuck at this exception - here it is start to finish:
---
$ ./bin/hadoop jar ./testdata/RetailTest.jar RetailTest testdata outputdata
10/02/18 09:24:55 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
10/02/18 09:24:55 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/02/18 09:24:55 INFO input.FileInputFormat: Total input paths to process : 5
10/02/18 09:24:56 INFO input.FileInputFormat: Total input paths to process : 5
Exception in thread "Thread-13" java.lang.IllegalStateException: Shutdown in progress
        at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:39)
        at java.lang.Runtime.addShutdownHook(Runtime.java:192)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1387)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:61)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:245)


Now here is the code that actually starts things up (not including the actual 
mapreduce code).  I initially suspected this code because I was guessing at the 
correct non-deprecated classes to use:

  public int run(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job2 = new Job(conf);
job2.setJobName("RetailTest");
job2.setJarByClass(RetailTest.class);
job2.setMapperClass(RetailMapper.class);
job2.setReducerClass(RetailReducer.class);
job2.setOutputKeyClass(Text.class);
job2.setOutputValueClass(Text.class);
job2.setNumReduceTasks(1);
// this was a guess on my part as I could not find out the "recommended way"
job2.setWorkingDirectory(new Path(args[0]));
FileInputFormat.setInputPaths(job2, new Path(args[0]));
FileOutputFormat.setOutputPath(job2, new Path(args[1]));
job2.submit();
return 0;
  }

  /**
   * @param args
   */
  public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new RetailTest(), args);
System.exit(res);
  }

Can someone sanity check me here?  Much appreciated.

Regards,

Cory


Re: configuration file

2010-02-04 Thread Eric Arenas
Hi Gang,

You have to load the XML config file in your M/R code.

Something like this:
FSDataInputStream inS = fs.open(in);
conf.addResource(inS); 

 
Where "conf" is your Configuration.

This will in effect read all the parameters from that XML and override anything 
that you have previously set with:
conf.set("parameter",parameterValue);

regards,
Eric Arenas



- Original Message 
From: Gang Luo 
To: common-user@hadoop.apache.org
Sent: Thu, February 4, 2010 6:14:54 AM
Subject: Re: configuration file

I gave the path to that xml file in the command. Do I need to add that path to 
the classpath? I tried giving a wrong path, and no error was reported.

Aren't those parameters all configurable, like io.sort.mb, mapred.reduce.tasks, 
io.sort.factor, etc.? 

Thanks.
-Gang




- Original Message 
From: Amogh Vasekar 
To: "common-user@hadoop.apache.org" 
Sent: Thu, February 4, 2010 6:09:04 AM
Subject: Re: configuration file

Hi,
A shot in the dark, is the conf file in your classpath? If yes, are the 
parameters you are trying to override marked final?
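
For example (an illustrative snippet, not your actual config), a property 
declared like this on the cluster side (e.g. in mapred-site.xml) cannot be 
overridden from client code or a -conf file:

<property>
  <name>mapred.reduce.tasks</name>
  <value>4</value>
  <final>true</final>
</property>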

Amogh


On 2/4/10 3:18 AM, "Gang Luo"  wrote:

Hi,
I am writing a script to run a whole bunch of jobs automatically, but the 
configuration file doesn't seem to be working. I think there is something wrong 
in my command.

The command in my script is like:
bin/hadoop jar myJarFile myClass -conf myConfigurationFile.xml  arg1  arg2 

I use conf.get() to show the value of some parameters, but the values are not 
what I define in that xml file.  Is there something wrong?

Thanks.
-Gang

