Hi,

Thanks Rohith. I am running this job on a virtual machine that does not have 
much memory.
Can I modify some configuration options to let it run with less memory?
If not, I will have to try it in a new environment.
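
To make my question concrete, this is a sketch of the lower-memory settings I 
was thinking of trying. The property names are the standard Hadoop 2.x ones; 
the values are only my guesses for a small VM, so please correct me if they 
are wrong:

```xml
<!-- yarn-site.xml: total memory the NodeManager may hand out
     (guessed value for a small VM) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: shrink the per-task containers so they fit
     (guessed values) -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>
<!-- the JVM heap must fit inside the container, so keep -Xmx below
     the memory.mb values above -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx400m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx400m</value>
</property>
```

If I understand correctly, the key constraint is that the ApplicationMaster 
and task container sizes must each fit within 
yarn.nodemanager.resource.memory-mb, and each -Xmx must be smaller than its 
container's memory.mb.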

Thank you again!

On Oct 23, 2014, at 14:18, Rohith Sharma K S <rohithsharm...@huawei.com> wrote:

> Hi,
>  
> This is a problem with the memory configuration of your cluster. You have 
> configured “yarn.nodemanager.resource.memory-mb” as 64MB, which is too low.
>  
> 1.       The ApplicationMaster requires 2GB to launch its container, but the 
> cluster itself has only 64MB, so the container is never assigned.
> 2.       Furthermore, map memory is 64MB, but map.opts is 1024MB in 
> mapred-site.xml, which is again contradictory.
>  
> Change the NodeManager memory to 8GB and the map/reduce memory to 2GB, then try running the job again.
>  
> Thanks & Regards
> Rohith Sharma K S
>  
>  
> From: mail list [mailto:louis.hust...@gmail.com] 
> Sent: 23 October 2014 07:55
> To: user@hadoop.apache.org
> Subject: mapred job pending at "Starting scan to move intermediate done files"
>  
> hi, all,
>  
> I am new to Hadoop, and I installed hadoop-2.5.1 on Ubuntu in 
> pseudo-distributed mode.
> When I run a MapReduce job, it outputs the following logs:
>  
> louis@ubuntu:~/src/hadoop-book$ hadoop jar hadoop-examples.jar 
> v3.MaxTemperatureDriver input/ncdc/all max-temp
> 14/10/22 19:09:56 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 14/10/22 19:09:57 INFO input.FileInputFormat: Total input paths to process : 2
> 14/10/22 19:09:58 INFO mapreduce.JobSubmitter: number of splits:2
> 14/10/22 19:09:58 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1414030015373_0001
> 14/10/22 19:09:58 INFO impl.YarnClientImpl: Submitted application 
> application_1414030015373_0001
> 14/10/22 19:09:58 INFO mapreduce.Job: The url to track the 
> job:http://localhost:8088/proxy/application_1414030015373_0001/
> 14/10/22 19:09:58 INFO mapreduce.Job: Running job: job_1414030015373_0001
>  
> As you can see, the job hangs. Then I checked the jps output:
>  
> louis@ubuntu:~/src/hadoop-2.5.1$ jps
> 22433 SecondaryNameNode
> 22716 NodeManager
> 22240 DataNode
> 22577 ResourceManager
> 23083 JobHistoryServer
> 23148 Jps
> 22080 NameNode
>  
> Nothing seems wrong there, so I then checked 
> mapred-louis-historyserver-ubuntu.log:
>  
> 2014-10-22 19:09:03,831 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> History Cleaner started
> 2014-10-22 19:09:03,837 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> History Cleaner complete
> 2014-10-22 19:11:33,830 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> Starting scan to move intermediate done files
> 2014-10-22 19:14:33,830 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> Starting scan to move intermediate done files
> 2014-10-22 19:17:33,832 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> Starting scan to move intermediate done files
> 2014-10-22 19:20:33,830 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: 
> Starting scan to move intermediate done files
>  
> Then I checked the web UI: it seems the job is stuck pending!
>  
> The attachment contains some configuration files from etc/hadoop/.
> Any ideas will be appreciated!
