Hi guys,
I have a situation in which I have a machine with 4 processors and 5 containers. Does that mean I can have only 4 mappers running in parallel at a time?
And if the number of mappers is not dependent on the number of containers on a machine, then what is the use of the container concept?
Sorry
I'm trying to execute a job in YARN but I get an error. I have checked the yarn-site.xml, and the classpath seems to be alright.
I read https://issues.apache.org/jira/browse/YARN-1473 as well, but it didn't work for me either.
Container for appattempt_1413373500815_0001_02 exited with
It depends on the memory settings as well, that is, how many resources you want to assign to each container. YARN will then run as many mappers in parallel as possible.
See this:
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
It cannot run more mappers (tasks) in parallel than there are underlying cores available, just as it cannot run multiple mappers in parallel if each mapper's (task's) memory requirement is greater than the allocated and available container size configured on each node.
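That scheduling limit can be sketched in Python. This is only an illustration of the point above, with hypothetical numbers (4 vcores, 8192 MB of NodeManager memory for containers, and example container sizes), not actual YARN scheduler code:

```python
# Rough sketch: how many map containers can run in parallel on one node.
# All numbers below are hypothetical examples, not Hadoop defaults.

def max_parallel_containers(node_vcores, node_memory_mb, container_memory_mb):
    """A node can run no more containers than it has cores,
    and no more than fit in its memory available for containers."""
    by_cores = node_vcores
    by_memory = node_memory_mb // container_memory_mb
    return min(by_cores, by_memory)

# 4 cores, 8192 MB for containers, 2048 MB per mapper:
# memory allows 4 containers, cores allow 4 -> 4 mappers in parallel.
print(max_parallel_containers(4, 8192, 2048))  # 4

# Same node but 4096 MB per mapper: memory now limits us to 2.
print(max_parallel_containers(4, 8192, 4096))  # 2
```

Note that whether vcores are actually enforced depends on the scheduler's resource calculator; by default some YARN schedulers consider only memory.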
The links that I provided
Hi gortiz,
Please have a look at the ApplicationMaster logs (you can get them from the RM UI, from the JHS (if aggregation is enabled), or from the yarn-nodemanager-logs-dir (if aggregation is disabled)) and the NodeManager logs to get the exact cause.
Thanks and Regards,
Brahma Reddy Battula
I have one more doubt. I was reading this:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
There are properties such as:
mapreduce.map.memory.mb = 2*1024 MB
mapreduce.reduce.memory.mb = 2 * 2 = 4*1024 MB
What are these?
Explanation here:
http://stackoverflow.com/questions/24070557/what-is-the-relation-between-mapreduce-map-memory-mb-and-mapred-map-child-jav
https://support.pivotal.io/hc/en-us/articles/201462036-Mapreduce-YARN-Memory-Parameters
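As those links explain, mapreduce.map.memory.mb is the size of the YARN container requested for a map task, while the JVM heap for the task (set via the java.opts properties, i.e. -Xmx) must fit inside that container. A common rule of thumb is to set the heap to roughly 80% of the container, leaving headroom for non-heap JVM memory. A small sketch (the 0.8 factor is just that rule of thumb, not a fixed requirement):

```python
# Sketch: derive a JVM heap setting (-Xmx) from a YARN container size.
# The 0.8 fraction is a commonly cited rule of thumb, not a hard rule.

def heap_opts_for_container(container_mb, heap_fraction=0.8):
    """Return a -Xmx string that leaves headroom inside the YARN
    container for non-heap JVM memory (stacks, metaspace, buffers)."""
    heap_mb = int(container_mb * heap_fraction)
    return "-Xmx%dm" % heap_mb

# mapreduce.map.memory.mb = 2048 -> map task java.opts around -Xmx1638m
print(heap_opts_for_container(2048))  # -Xmx1638m

# mapreduce.reduce.memory.mb = 4096 -> reduce task java.opts around -Xmx3276m
print(heap_opts_for_container(4096))  # -Xmx3276m
```

If the heap is set larger than the container, YARN will kill the task's container for exceeding its memory allocation.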
Hi Guys,
I have an issue with one of my JournalNodes; it's not syncing properly with the other JNs. How can I sync the JN? Alternatively, can I restart the JN service so that it automatically syncs with the edit logs?
-Dhanasekaran.
Did I learn something today? If not, I wasted it.
It is still not clear to me.
Let's suppose the block size of my HDFS is 128 MB, so every mapper will process only 128 MB of data.
Then what is the meaning of setting the property mapreduce.map.memory.mb? That is already known from the block size, so why this property?
On Wednesday 15 October 2014
The data that each map task will process is different from the memory the task itself might require, depending upon whatever processing you plan to do in the task.
Very trivial example: let us say your map gets 128 MB of input data, but your task logic is such that it creates lots of String
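To illustrate the point (a contrived sketch, not from the thread): the same input can need far more memory than its on-disk size if the task logic accumulates derived objects instead of streaming over records one at a time:

```python
# Contrived sketch: input size != memory the task logic needs.
# Streaming over records uses roughly constant memory;
# accumulating derived objects grows memory with the input.

def stream_count(lines):
    """Constant memory: processes one record at a time."""
    count = 0
    for line in lines:
        if "ERROR" in line:
            count += 1
    return count

def accumulate(lines):
    """Memory grows with input: keeps every derived string alive,
    each roughly 3x the size of its input record."""
    return [line.upper() * 3 for line in lines]

records = ["ERROR disk full", "INFO ok", "ERROR timeout"]
print(stream_count(records))     # 2
print(len(accumulate(records)))  # 3, but each element is ~3x its input
```

A 128 MB split processed the second way could easily need several hundred MB of heap, which is what mapreduce.map.memory.mb has to account for.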
Has anyone succeeded in configuring a failover NameNode with QJM and federated NameNodes? I configured 3 NameNodes (hadoop1, hadoop2, hadoop3), with hadoop1 as active and hadoop2 as standby for failover, and hadoop3 as another federated NameNode alongside the previous NameNodes (hadoop1 and hadoop2), but
Thanks for the reply.
I have one more doubt: are there three kinds of containers with different memory sizes in Hadoop 2?
1. normal container
2. map task container
3. reduce task container
On Wednesday 15 October 2014 07:33 PM, Shahab Yunus wrote:
Has anybody been able to run Hadoop 2 on a Windows machine in pseudo-distributed or cluster mode? I am able to run it on a single machine but have not been able to deploy it across multiple machines.
Wadood
On the status page, I see a table with the columns Attempt, Progress, State, Status, Node, Logs, Started, Finished, Elapsed, and Note.
For the Status field, it seems to be a free
Hi Goritz,
It is most likely a failure of the application you are trying to run, and you can find the exact cause of the failure in your logs. To find these logs, check your yarn-site.xml for *yarn.log-aggregation-enable*.
If *yarn.log-aggregation-enable* is set to true, then your logs should be on
Thanks, Gogate, for the help.
I changed hive.input.format to org.apache.hadoop.hive.ql.io.HiveInputFormat, and it works.
So I think CombineHiveInputFormat in Hive 0.9 works with Hadoop 2.0.0 but has an incompatibility problem with Hadoop 2.4. Maybe some related code changed between 2.0 and 2.4.