Hi,

Thank you for the reply. However, I now have a follow-up question: what is the recommended way to get Hadoop working with this fix? Is there documentation for this?

So my questions right now are:

- Does the svn head revision usually work?
- If not, is there a specific revision that is known to work (at least with regard to this bug)?
- Or do I need to apply the patch myself?
- Do I need to do any special post-configuration after checking the three sub-projects (common, mapred, hdfs) out of svn?
- Is there an estimate for the next release?

Kind regards,
Claus




On 04/21/2011 05:17 AM, Amareshwari Sri Ramadasu wrote:
Hmm.. This has been fixed in MAPREDUCE-1905, in 0.21.1

Thanks
Amareshwari
On 4/21/11 7:27 AM, "Claus Stadler" <cstad...@informatik.uni-leipzig.de> wrote:

Hi,

I guess I am not the first one to see the following exception when
trying to initialize a LineRecordReader. However, so far I couldn't
figure out a workaround for this problem.

I saw that this problem was fixed in svn, but when I checked out one
of the 0.23.0 versions (I can't remember which), I couldn't get the
servers started. (I am not sure whether that was due to the revision
or to me messing up the setup.)

So maybe someone can point me to a known workaround for this problem,
or at least hint at a revision that other people got to work?

Kind regards,
Claus


ps: Essentially I am just doing this:

      public static class MyInputFormat extends
              FileInputFormat<Text, ByteWritable>
      {
          @Override
          public RecordReader<Text, ByteWritable> createRecordReader(
                  InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
              throws IOException, InterruptedException
          {
              MyRecordReader result = new MyRecordReader();
              result.initialize(inputSplit, taskAttemptContext);
              return result;
          }
      }

      public static class MyRecordReader extends
              RecordReader<Text, ByteWritable>
      {
          LineRecordReader myReader = new LineRecordReader();
          ...
          @Override
          public void initialize(InputSplit inputSplit,
                  TaskAttemptContext taskAttemptContext)
              throws IOException, InterruptedException
          {
              myReader.initialize(inputSplit, taskAttemptContext); // EXCEPTION THROWN HERE
          }
          ...
      }

     job.setInputFormatClass(MyInputFormat.class);
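
By the way, one thing I am wondering about (untested on my side, so just a sketch): since the framework apparently calls initialize() on the returned reader itself, with a context that implements MapContext, maybe dropping the manual initialize() call in createRecordReader avoids the cast entirely:

      public static class MyInputFormat extends
              FileInputFormat<Text, ByteWritable>
      {
          @Override
          public RecordReader<Text, ByteWritable> createRecordReader(
                  InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
              throws IOException, InterruptedException
          {
              // Return the reader uninitialized; the framework should call
              // initialize() on it later with its own context, which
              // (unlike the TaskAttemptContextImpl passed to this method)
              // implements MapContext.
              return new MyRecordReader();
          }
      }

I have not verified whether this actually sidesteps the bug on 0.21.0 or merely moves it, though.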


The exception is:

java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl cannot be cast to org.apache.hadoop.mapreduce.MapContext
      at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:371) ~[hadoop-mapred-0.21.0.jar:na]
java.lang.ClassCastException: org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl cannot be cast to org.apache.hadoop.mapreduce.MapContext
      at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:75) ~[hadoop-mapred-0.21.0.jar:na]
(rest of the output is within my code)
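
For completeness, my understanding of the failure (an assumption on my part; I have not checked what LineRecordReader.java:75 actually does in the exact 0.21.0 revision) is that initialize() casts the passed context to MapContext. If that is right, a standalone snippet along these lines should reproduce the exception without any of my code (CastRepro and input.txt are just made-up names for illustration):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.mapreduce.TaskAttemptContext;
      import org.apache.hadoop.mapreduce.TaskAttemptID;
      import org.apache.hadoop.mapreduce.lib.input.FileSplit;
      import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
      import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;

      public class CastRepro {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // A plain TaskAttemptContextImpl does not implement MapContext.
              TaskAttemptContext ctx =
                      new TaskAttemptContextImpl(conf, new TaskAttemptID());
              // Any small local text file should do here.
              FileSplit split = new FileSplit(new Path("input.txt"), 0, 10, null);
              LineRecordReader reader = new LineRecordReader();
              reader.initialize(split, ctx); // should throw the ClassCastException on 0.21.0
          }
      }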



