Hi,
I see a tuning tool, Vaidya, mentioned by Amogh, but there is not enough information about it.
Is there any tool in Hadoop for profiling MR jobs? Thank you!
Best Wishes,
Evan
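For reference, Hadoop's old mapred API does ship a built-in HPROF-based task profiler on JobConf. A minimal sketch (the class name is made up; the property written is the stock mapred.task.profile family):

```java
import org.apache.hadoop.mapred.JobConf;

// Hedged sketch: enable Hadoop's built-in HPROF task profiling.
// ProfiledJobSetup is a placeholder name, not a thread-provided class.
public class ProfiledJobSetup {
    public static JobConf configure() {
        JobConf conf = new JobConf();
        conf.setProfileEnabled(true);           // sets mapred.task.profile=true
        conf.setProfileTaskRange(true, "0-1");  // profile first two map attempts
        conf.setProfileTaskRange(false, "0-1"); // ...and first two reduce attempts
        conf.setProfileParams("-agentlib:hprof=cpu=samples,heap=sites,"
            + "depth=6,force=n,thread=y,verbose=n,file=%s");
        return conf;
    }
}
```

The profiler output lands next to each profiled task's logs and can be pulled back to the client for inspection.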
Great. Just thought I'd be missing something :)
On Mon, Jun 7, 2010 at 01:55, Aaron Kimball wrote:
> Yes. LJR sets your number of reduce tasks to 1 if the number is >= 1.
> Subnote: I've posted a patch to fix this at MAPREDUCE-434, but it's not
> committed.
> - Aaron
>
> On Mon, Jun 7, 2010 at 1:42 AM, Torsten Curdt wrote:
Yes. LJR sets your number of reduce tasks to 1 if the number is >= 1.
Subnote: I've posted a patch to fix this at MAPREDUCE-434, but it's not
committed.
- Aaron
On Mon, Jun 7, 2010 at 1:42 AM, Torsten Curdt wrote:
> I see only one.
>
> Could it be that using the LocalJobRunner interferes here?
Quite a reasonable workaround to my mind, though having each mapper
calculate the list of input splits may be costly.
Instead, consider this:
On your client, configure the Job instance.
Call new FooInputFormat().getSplits(theJob) and save the resulting
List in serialized form to a file -- inject t
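A rough sketch of that client-side step, assuming the new (org.apache.hadoop.mapreduce) API; FooInputFormat, the input path, and the output file name are placeholders from this thread, not real classes or paths:

```java
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Hedged sketch: compute the splits once on the client and serialize them.
public class SplitSerializer {
    public static void main(String[] args) throws Exception {
        Job theJob = new Job(new Configuration());
        FileInputFormat.addInputPath(theJob, new Path("/user/me/input"));

        // Compute the split list once, on the client.
        List<InputSplit> splits = new FooInputFormat().getSplits(theJob);

        // Save the list in serialized form; concrete splits such as
        // FileSplit implement Writable and can write themselves out.
        DataOutputStream out =
            new DataOutputStream(new FileOutputStream("splits.bin"));
        out.writeInt(splits.size());
        for (InputSplit split : splits) {
            out.writeUTF(split.getClass().getName());
            ((Writable) split).write(out);
        }
        out.close();
    }
}
```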
I see only one.
Could it be that using the LocalJobRunner interferes here?
On Mon, Jun 7, 2010 at 01:31, Eric Sammer wrote:
> Torsten:
>
> To clarify, how many reducers do you actually see? (i.e. Do you see 4
> reducers or 1?) It should work as you expect.
>
> On Sun, Jun 6, 2010 at 1:33 PM, Torsten Curdt wrote:
Torsten:
To clarify, how many reducers do you actually see? (i.e. Do you see 4
reducers or 1?) It should work as you expect.
On Sun, Jun 6, 2010 at 1:33 PM, Torsten Curdt wrote:
> When I set
>
> job.setPartitionerClass(MyPartitioner.class);
> job.setNumReduceTasks(4);
>
> I would expect to see
When I set
job.setPartitionerClass(MyPartitioner.class);
job.setNumReduceTasks(4);
I would expect to see my MyPartitioner get called with
getPartition(key, value, 4)
but still I see it only get called with 1.
I've also tried setting
conf.set("mapred.map.tasks.speculative.exe
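For illustration: the framework passes the configured reduce count straight into getPartition as numPartitions, so once LocalJobRunner clamps the reduces to 1, the partitioner always sees 1 and returns 0. A minimal, Hadoop-free sketch using the same formula as the default HashPartitioner (class name and keys are made up):

```java
// PartitionDemo is a made-up name; the formula matches Hadoop's
// default HashPartitioner.
public class PartitionDemo {
    static int getPartition(String key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        for (String k : new String[] {"alpha", "beta", "gamma", "delta"}) {
            // With 4 reduce tasks, partitions land somewhere in 0..3.
            System.out.println(k + " -> " + getPartition(k, 4));
            // With LocalJobRunner's single reducer, always partition 0.
            if (getPartition(k, 1) != 0) throw new AssertionError();
        }
    }
}
```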