Thanks for the link ... but I am still unable to find how to resolve the
issue with the heart beat ...

Date: Wed, 10 Sep 2014 09:52:19 -0400
Subject: Re: PIG heart beat freeze using hue + cdh 5.1
From: zenon...@gmail.com
To: user@hive.apache.org

Take a look at this link
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/

Thanks
On Tue, Sep 9, 2014 at 8:53 PM, Amit Dutta <amitkrdu...@outlook.com> wrote:

Thanks a lot for your reply. I changed the following parameters from Cloudera
Manager:
mapred.tasktracker.map.tasks.maximum = 2 (it was 1 before)
mapred.tasktracker.reduce.tasks.maximum = 2 (it was 1 before)
Could you please tell me which parameters to set and how to change them ...
Regards,
Amit
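(Those two properties are MRv1 TaskTracker slot counts. As the reply below
explains, CDH 5.1 runs MapReduce 2 on YARN, where they have no effect; the
YARN-era parameters are sketched after that reply.)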
Subject: Re: PIG heart beat freeze using hue + cdh 5.1
From: zenon...@gmail.com
Date: Tue, 9 Sep 2014 20:34:19 -0400
To: user@hive.apache.org

It uses YARN now. You need to set your container resource memory and CPU,
then set the MapReduce physical memory and CPU cores. The number of mappers
and reducers is calculated from the resources you give to your mapper and
reducer.
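
A rough sketch of the settings being described, in the same name = value form
used earlier in the thread; the property names are the standard Hadoop 2
ones, but the values are illustrative assumptions for a small node, not
recommendations:

  # yarn-site.xml: resources each NodeManager offers to containers
  yarn.nodemanager.resource.memory-mb = 8192
  yarn.nodemanager.resource.cpu-vcores = 4
  # yarn-site.xml: smallest/largest container the scheduler will grant
  yarn.scheduler.minimum-allocation-mb = 1024
  yarn.scheduler.maximum-allocation-mb = 8192

  # mapred-site.xml: per-task container sizes; YARN fits as many
  # mappers/reducers as these allow, replacing fixed MRv1 slot counts
  mapreduce.map.memory.mb = 1024
  mapreduce.reduce.memory.mb = 2048
  # heap inside each container, conventionally about 80% of the container
  mapreduce.map.java.opts = -Xmx819m
  mapreduce.reduce.java.opts = -Xmx1638m
  # container for the MapReduce ApplicationMaster
  yarn.app.mapreduce.am.resource.mb = 1024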

Pengcheng

Sent from my iPhone
On Sep 9, 2014, at 7:55 PM, Amit Dutta <amitkrdu...@outlook.com> wrote:

I think one of the issues is the number of MapReduce slots for the cluster...
Can anyone please let me know how to increase the MapReduce slots?

From: amitkrdu...@outlook.com
To: user@hive.apache.org
Subject: PIG heart beat freeze using hue + cdh 5.1
Date: Tue, 9 Sep 2014 17:55:01 -0500

Hi, I have only 604 rows in the Hive table. When I run

  A = LOAD 'revenue' USING org.apache.hcatalog.pig.HCatLoader();
  DUMP A;

it starts printing "Heart beat" repeatedly and never leaves that state. Can
someone please help? I am getting the following exception:
  2014-09-09 17:27:45,844 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter  - Kind: RM_DELEGATION_TOKEN, Service: 10.215.204.182:8032, Ident: (owner=cloudera, renewer=oozie mr token, realUser=oozie, issueDate=1410301632571, maxDate=1410906432571, sequenceNumber=14, masterKeyId=2)
  2014-09-09 17:27:46,709 [JobControl] WARN  org.apache.hadoop.mapreduce.v2.util.MRApps  - cache file (mapreduce.job.cache.files) hdfs://txwlcloud2:8020/user/oozie/share/lib/lib_20140820161455/pig/commons-httpclient-3.1.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://txwlcloud2:8020/user/oozie/share/lib/lib_20140820161455/hcatalog/commons-httpclient-3.1.jar This will be an error in Hadoop 2.0
  2014-09-09 17:27:46,712 [JobControl] WARN  org.apache.hadoop.mapreduce.v2.util.MRApps  - cache file (mapreduce.job.cache.files) hdfs://txwlcloud2:8020/user/oozie/share/lib/lib_20140820161455/pig/commons-io-2.1.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://txwlcloud2:8020/user/oozie/share/lib/lib_20140820161455/hcatalog/commons-io-2.1.jar This will be an error in Hadoop 2.0
  2014-09-09 17:27:46,894 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl  - Submitted application application_1410291186220_0006
  2014-09-09 17:27:46,968 [JobControl] INFO  org.apache.hadoop.mapreduce.Job  - The url to track the job: http://txwlcloud2:8088/proxy/application_1410291186220_0006/
  2014-09-09 17:27:46,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - HadoopJobId: job_1410291186220_0006
  2014-09-09 17:27:46,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Processing aliases A
  2014-09-09 17:27:46,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - detailed locations: M: A[1,4] C:  R:
  2014-09-09 17:27:46,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - More information at: http://txwlcloud2:50030/jobdetails.jsp?jobid=job_1410291186220_0006
  2014-09-09 17:27:47,019 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 0% complete
  Heart beat
  Heart beat
  Heart beat
  Heart beat
  Heart beat
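
For reference, here is the same load as a standalone script, which can help
separate a Pig/HCatalog problem from a Hue/Oozie submission problem. This is
a sketch: the file name is hypothetical, and the -useHCatalog flag (which
puts the HCatalog jars on Pig's classpath) assumes HCatalog is installed on
the host.

  -- revenue_dump.pig (hypothetical file name)
  A = LOAD 'revenue' USING org.apache.hcatalog.pig.HCatLoader();
  DUMP A;

Run it with: pig -useHCatalog revenue_dump.pig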