There seems to be a problem in the mapper logic. Either the input needs to 
match what your code expects, or the code needs to be updated to handle cases 
such as a line containing an odd number of tokens.

Before fetching a second token, check whether the tokenizer still has more 
elements. If each line contains only two words, you can also read them 
directly instead of iterating; see the sketches below.
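
For example, a minimal sketch of the guarded loop (illustrative only; your 
summation/counter aggregation logic is elided):

    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
        String curWord = tokenizer.nextToken();
        // Guard: a line with an odd number of tokens would otherwise
        // throw NoSuchElementException on the next call.
        if (!tokenizer.hasMoreTokens()) {
            break; // or log/skip the malformed record
        }
        int curValue = Integer.parseInt(tokenizer.nextToken());
        // ... your aggregation logic ...
    }

And if every line is guaranteed to be exactly "word value", a split is 
simpler (assuming whitespace-separated fields):

    String[] parts = value.toString().trim().split("\\s+");
    if (parts.length == 2) {
        String curWord = parts[0];
        int curValue = Integer.parseInt(parts[1]);
        // ... your aggregation logic ...
    }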

Thanks
Devaraj k

From: jamal sasha [mailto:jamalsha...@gmail.com]
Sent: 31 July 2013 23:40
To: user@hadoop.apache.org
Subject: java.util.NoSuchElementException

Hi,
  I am getting this error:

13/07/31 09:29:41 INFO mapred.JobClient: Task Id : attempt_201307102216_0270_m_000002_2, Status : FAILED
java.util.NoSuchElementException
    at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
    at java.util.StringTokenizer.nextElement(StringTokenizer.java:390)
    at org.mean.Mean$MeanMapper.map(Mean.java:60)
    at org.mean.Mean$MeanMapper.map(Mean.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)



public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException, NoSuchElementException {
    initialize(context);
    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreElements()) {
        String curWord = tokenizer.nextElement().toString();
        // The line which causes this error:
        Integer curValue = Integer.parseInt(tokenizer.nextElement().toString());

        Integer sum = summation.get(curWord);
        Integer count = counter.get(curWord);

        // ...
    }

    close(context);
}


What am I doing wrong?

My data looks like:

//word count
foo 20
bar  21
and so on.

The code works fine if I strip out the Hadoop part and run it as plain Java.


