</fairSharePreemptionTimeout>
</allocations>
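For reference, the lines above are the tail of a fair-scheduler allocations file. A minimal example might look like the following (the pool name and timeout value are illustrative, not from the original thread):

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="research">
    <minMaps>10</minMaps>
    <minReduces>5</minReduces>
  </pool>
  <!-- seconds a pool may sit below its fair share before it preempts -->
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>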
regards,
2012-03-07
hao.wang
Hi, Thanks for your reply!
I have solved this problem by setting mapred.fairscheduler.preemption.only.log
to false. The preemption works!
But I don't know why mapred.fairscheduler.preemption.only.log cannot be set to
true. Is it a bug?
regards,
2012-03-07
hao.wang
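If I recall the fair-scheduler documentation correctly, this is not a bug: mapred.fairscheduler.preemption.only.log is a dry-run switch — when true, the scheduler only logs the tasks it would have preempted instead of actually killing them. A hedged mapred-site.xml sketch of the working setup described above:

```xml
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>
<property>
  <!-- true = dry-run: log intended preemptions but do not kill tasks -->
  <name>mapred.fairscheduler.preemption.only.log</name>
  <value>false</value>
</property>
```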
From: Harsh J
Sent:
Hi,
Thanks for your help; your suggestion is very useful.
I have another question: should the sum of map and reduce slots equal the
total number of cores?
regards!
2012-01-10
hao.wang
From: Harsh J
Sent: 2012-01-10 16:44:07
To: common-user
Cc:
Subject: Re: how to set
regards!
2012-01-09
hao.wang
2012-01-10
hao.wang
From: Harsh J
Sent: 2012-01-09 23:19:21
To: common-user
Cc:
Subject: Re: how to set mapred.tasktracker.map.tasks.maximum and
mapred.tasktracker.reduce.tasks.maximum
Hi,
Please read http://hadoop.apache.org/common/docs/current/single_node_setup.html
to learn how
hao.wang
From: Harsh J
Sent: 2012-01-10 11:33:38
To: common-user
Cc:
Subject: Re: how to set mapred.tasktracker.map.tasks.maximum and
mapred.tasktracker.reduce.tasks.maximum
Hello again,
Try a 4:3 ratio between maps and reduces, against a total # of available CPUs
per node (minus one
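As a concrete sketch of that advice: on a hypothetical 8-core node, reserving one core for the DataNode/TaskTracker daemons leaves 7 slots, which a 4:3 split assigns as 4 map slots and 3 reduce slots (the values below are illustrative, not from the thread):

```xml
<!-- illustrative values for an 8-core node: 7 usable slots split 4:3 -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>3</value>
</property>
```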
Hi All:
I have lots of small files stored in HDFS, and my HDFS block size is 128 MB.
Each file is significantly smaller than the block size. Does each small file
still occupy a full 128 MB in HDFS?
regards
2011-09-21
hao.wang
Hi, Joey:
Thanks for your help!
2011-09-21
hao.wang
From: Joey Echeverria
Sent: 2011-09-21 10:10:54
To: common-user
Cc:
Subject: Re: block size
HDFS blocks are stored as files in the underlying filesystem of your
datanodes. Those files do not take a fixed amount of space, so if you
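In other words, a block only consumes as much disk as the data actually written into it. A quick way to see this on a running cluster (paths here are hypothetical):

```
# put a small (e.g. 1 MB) file into HDFS
hadoop fs -put small.dat /tmp/small.dat
# -du reports the file's actual size, roughly 1 MB, not 128 MB;
# the 128 MB block size is an upper bound per block, not a fixed allocation
hadoop fs -du /tmp/small.dat
```

The block size does still matter for small files in one respect: each file and block costs memory in the NameNode, so millions of tiny files strain the NameNode even though they waste no datanode disk.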