I would guess it has something to do with container allocation
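In particular (an assumption based only on the values quoted below): mapreduce.map.memory.mb is set to 4096 MB while yarn.nodemanager.resource.memory-mb is only 2048 MB, so no NodeManager can ever host the map container, and the job would sit at map = 0% waiting for an allocation that never comes. A consistent sketch, keeping the 2048 MB node limit, might look like:

```xml
<!-- yarn-site.xml: total memory YARN may hand out per node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: each container request must fit inside the node
     limit above, and the JVM heap (-Xmx) must fit inside its container -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <!-- roughly 80% of the container size is a common rule of thumb -->
  <value>-Xmx819m</value>
</property>
```

Note that the ApplicationMaster container (yarn.app.mapreduce.am.resource.mb) also has to fit on a node alongside the task containers.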

Daniel

> On 8 Apr 2015, at 20:26, Alan Gates <alanfga...@gmail.com> wrote:
> 
> If you're seeing it list progress (or attempted progress) as here, this isn't 
> a locking issue.  All locks are obtained before the job is submitted to 
> Hadoop.
> 
> Alan.
> 
>> Mich Talebzadeh April 7, 2015 at 14:09
>> Hi,
>>  
>> Today I have noticed the following issue.
>>  
>> A simple insert into a table is sitting there stuck, showing the following
>>  
>> hive> insert into table mytest values(1,'test');
>> Query ID = hduser_20150407215959_bc030fac-258f-4996-b50f-3d2d49371cca
>> Total jobs = 3
>> Launching Job 1 out of 3
>> Number of reduce tasks is set to 0 since there's no reduce operator
>> Starting Job = job_1428439695331_0002, Tracking URL = 
>> http://rhes564:8088/proxy/application_1428439695331_0002/
>> Kill Command = /home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job  -kill 
>> job_1428439695331_0002
>> Hadoop job information for Stage-1: number of mappers: 1; number of 
>> reducers: 0
>> 2015-04-07 21:59:35,068 Stage-1 map = 0%,  reduce = 0%
>> 2015-04-07 22:00:35,545 Stage-1 map = 0%,  reduce = 0%
>> 2015-04-07 22:01:35,832 Stage-1 map = 0%,  reduce = 0%
>> 2015-04-07 22:02:36,058 Stage-1 map = 0%,  reduce = 0%
>> 2015-04-07 22:03:36,279 Stage-1 map = 0%,  reduce = 0%
>> 2015-04-07 22:04:36,486 Stage-1 map = 0%,  reduce = 0%
>>  
>> I have been messing around with concurrency for Hive. That did not work. My 
>> metastore is built in Oracle, so I dropped that schema and recreated it from 
>> scratch, and got rid of the concurrency parameters. First I was getting “container 
>> is running beyond virtual memory limits” for the task. I changed the 
>> following parameters in yarn-site.xml
>>  
>>  
>> <property>
>>   <name>yarn.nodemanager.resource.memory-mb</name>
>>   <value>2048</value>
>>   <description>Amount of physical memory, in MB, that can be allocated for 
>> containers.</description>
>> </property>
>> <property>
>>   <name>yarn.scheduler.minimum-allocation-mb</name>
>>   <value>1024</value>
>> </property>
>>  
>> and mapred-site.xml
>>  
>> <property>
>> <name>mapreduce.map.memory.mb</name>
>> <value>4096</value>
>> </property>
>> <property>
>> <name>mapreduce.reduce.memory.mb</name>
>> <value>4096</value>
>> </property>
>> <property>
>> <name>mapreduce.map.java.opts</name>
>> <value>-Xmx3072m</value>
>> </property>
>> <property>
>> <name>mapreduce.reduce.java.opts</name>
>> <value>-Xmx6144m</value>
>> </property>
>> <property>
>> <name>yarn.app.mapreduce.am.resource.mb</name>
>> <value>400</value>
>> </property>
>>  
>> However, nothing has helped, except that the virtual memory error has gone. Any 
>> ideas appreciated.
>>  
>> Thanks
>>  
>> Mich Talebzadeh
>>  
>> http://talebzadehmich.wordpress.com
>>  
>> Publications due shortly:
>> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and 
>> Coherence Cache
>>  
>> NOTE: The information in this email is proprietary and confidential. This 
>> message is for the designated recipient only; if you are not the intended 
>> recipient, you should destroy it immediately. Any information in this 
>> message shall not be understood as given or endorsed by Peridale Ltd, its 
>> subsidiaries or their employees, unless expressly so stated. It is the 
>> responsibility of the recipient to ensure that this email is virus free; 
>> therefore neither Peridale Ltd, its subsidiaries nor their employees accept 
>> any responsibility.
