Thanks for your help. I finally got my program running, although I still
don't have a clue why I was having that issue.
I just removed everything that had to do with Hadoop, started over with a
fresh copy of Hadoop, and it just worked.
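For anyone who lands on this thread with the same OutOfMemoryError from the local job runner, the HADOOP_HEAPSIZE suggestion from the quoted messages below can be applied roughly like this. This is only a sketch for Hadoop 1.x; the 1500 MB figure is an illustrative value, not a recommendation:

```shell
# Raise the heap of the JVM that bin/hadoop launches (the value is in MB
# in Hadoop 1.x). In local/pseudo-distributed runs the map task executes
# inside this same JVM, so this is the heap that io.sort.mb must fit into.
export HADOOP_HEAPSIZE=1500
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

# Or make it permanent in conf/hadoop-env.sh:
# export HADOOP_HEAPSIZE=1500
```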

On Thu, Apr 12, 2012 at 11:56 AM, SRIKANTH KOMMINENI (RIT Student) <
sxk7...@rit.edu> wrote:

> The following is the error that I get:
>
>
>  bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
> 12/04/11 23:52:50 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library
> 12/04/11 23:52:50 WARN mapred.JobClient: No job jar file set.  User
> classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 12/04/11 23:52:50 WARN snappy.LoadSnappy: Snappy native library not loaded
> 12/04/11 23:52:50 INFO mapred.FileInputFormat: Total input paths to
> process : 7
> 12/04/11 23:52:51 INFO mapred.JobClient: Running job: job_local_0001
> 12/04/11 23:52:51 INFO util.ProcessTree: setsid exited with exit code 0
> 12/04/11 23:52:51 INFO mapred.Task:  Using ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@59a34
> 12/04/11 23:52:51 INFO mapred.MapTask: numReduceTasks: 1
> 12/04/11 23:52:51 INFO mapred.MapTask: io.sort.mb = 100
> 12/04/11 23:52:51 WARN mapred.LocalJobRunner: job_local_0001
> java.lang.OutOfMemoryError: Java heap space
> at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:949)
>  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
>  at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
> 12/04/11 23:52:52 INFO mapred.JobClient:  map 0% reduce 0%
> 12/04/11 23:52:52 INFO mapred.JobClient: Job complete: job_local_0001
> 12/04/11 23:52:52 INFO mapred.JobClient: Counters: 0
> 12/04/11 23:52:52 INFO mapred.JobClient: Job Failed: NA
> java.io.IOException: Job failed!
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
>  at org.apache.hadoop.examples.Grep.run(Grep.java:69)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>  at org.apache.hadoop.examples.Grep.main(Grep.java:93)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>  at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
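For context, the allocation that fails above is the map-side sort buffer: with io.sort.mb = 100, MapOutputBuffer tries to reserve roughly 100 MB of heap up front, which cannot fit when the JVM's maximum heap is small (older client JVMs often defaulted to well under 100 MB). A back-of-the-envelope check, using a hypothetical 64 MB heap purely for illustration:

```shell
# Back-of-the-envelope: does a hypothetical 64 MB heap fit a 100 MB buffer?
buffer_mb=100        # io.sort.mb from the log above
heap_mb=64           # hypothetical small -Xmx, for illustration only
if [ "$buffer_mb" -gt "$heap_mb" ]; then
  echo "buffer exceeds heap: OutOfMemoryError is expected"
fi
```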
>
>
> On Thu, Apr 12, 2012 at 10:37 AM, Mapred Learn <mapred.le...@gmail.com> wrote:
>
>> Can you share your error as well?
>>
>> Sent from my iPhone
>>
>> On Apr 12, 2012, at 7:16 AM, Marcos Ortiz <mlor...@uci.cu> wrote:
>>
>> Can you show to us the logs of your NN/DN?
>>
>> On 04/12/2012 03:28 AM, SRIKANTH KOMMINENI (RIT Student) wrote:
>>
>> Tried that; it didn't work for a lot of combinations of values.
>>
>> On Thu, Apr 12, 2012 at 3:25 AM, Mapred Learn <mapred.le...@gmail.com> wrote:
>>
>>>  Try exporting HADOOP_HEAPSIZE to a bigger value, like 1500 (1.5 GB),
>>> before running the program, or change it in hadoop-env.sh.
>>>
>>>  If it still gives an error, you can try an even bigger value.
>>>
>>> Sent from my iPhone
>>>
>>> On Apr 12, 2012, at 12:10 AM, "SRIKANTH KOMMINENI (RIT Student)"
>>> <sxk7...@rit.edu> wrote:
>>>
>>>    Hello,
>>>
>>>  I have searched a lot and still can't find any solution that fixes my
>>> problem.
>>>
>>>  I am using the basic downloaded version of hadoop-1.0.2. I have edited
>>> only what the Hadoop setup page asks for, and I have set it up to run in
>>> pseudo-distributed mode.
>>>
>>>  My JAVA_HOME is set to /usr/lib/jvm/java-6-sun. I tried editing the
>>> heap size in hadoop-env.sh; that didn't work. I tried setting CHILD_OPTS;
>>> that didn't work. I found that there was another hadoop-env.sh in
>>> /etc/hadoop/, as per the recommendations in the mailing list archives;
>>> that didn't work either. I tried increasing io.sort.mb; that didn't work.
>>> I am totally frustrated. Please help.
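One note on why setting the child options may not have helped: with the LocalJobRunner (as in the log above), the map task runs inside the client JVM itself, so as far as I understand it mapred.child.java.opts is never consulted there; the client heap is what matters. On a real cluster, the same knobs can also be passed per invocation as generic options, since the example driver goes through ToolRunner. A sketch using Hadoop 1.x property names with purely illustrative values:

```shell
# Pass the task heap and sort-buffer size on the command line (Hadoop 1.x
# property names). -Xmx512m and io.sort.mb=50 are illustrative values only.
bin/hadoop jar hadoop-examples-*.jar grep \
  -Dmapred.child.java.opts=-Xmx512m \
  -Dio.sort.mb=50 \
  input output 'dfs[a-z.]+'
```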
>>>
>>>
>>>  --
>>> Srikanth Kommineni,
>>> Graduate Assistant,
>>> Dept of Computer Science,
>>> Rochester Institute of Technology.
>>>
>>>
>>
>>
>>  --
>> Srikanth Kommineni,
>> Graduate Assistant,
>> Dept of Computer Science,
>> Rochester Institute of Technology.
>>
>>
>> --
>> Marcos Luis Ortíz Valmaseda (@marcosluis2186)
>>  Data Engineer at UCI
>>   http://marcosluis2186.posterous.com
>>
>>
>>
>>
>>
>
>
> --
> Srikanth Kommineni,
> Graduate Assistant,
> Dept of Computer Science,
> Rochester Institute of Technology.
>
>


-- 
Srikanth Kommineni,
Graduate Assistant,
Dept of Computer Science,
Rochester Institute of Technology.
