On Mar 6, 2008, at 5:21 AM, Enis Soztutar wrote:
Hi,
LocalJobRunner uses just 0 or 1 reduce. This is because running in
local mode is only supported for testing purposes.
This is a very long standing design bug. (HADOOP-206)
-- Owen
On Mar 6, 2008, at 5:23 AM, Naama Kraus wrote:
Thanks, that's fine with me then. Naama
Please note that you can also run Hadoop in a 'pseudo-distributed'
mode on a single node where you can mimic all characteristics (all?
except, err... performance of course!) of a large cluster:
http:
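As a sketch of what pseudo-distributed mode looks like in this era of Hadoop (circa 0.16), you point both the namenode and the jobtracker at localhost in conf/hadoop-site.xml; the host/port values below are the conventional quickstart examples, not requirements:

```xml
<!-- conf/hadoop-site.xml: single-node pseudo-distributed setup (illustrative) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

With this in place, jobs go through the real JobTracker rather than LocalJobRunner, so multiple reduce tasks behave as they would on a cluster.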
On Thu, Mar 6, 2008 at 3:21 PM, Enis Soztutar <[EMAIL PROTECTED]>
wrote:
Hi,
LocalJobRunner uses just 0 or 1 reduce. This is because running in local
mode is only supported for testing purposes.
You can, however, simulate distributed mode locally by using
MiniMRCluster and MiniDFSCluster under src/test.
Best wishes
Enis
Naama Kraus wrote:
Hi,
I ran a simple MapReduce job which defines 3 reducers:
*conf.setNumReduceTasks(3);*
When running on top of HDFS (distributed mode), I got 3 out files as I
expected.
When running on top of a local file system (local mode), I got 1 file and
not 3.
My question is whether the behavior in the local mode is expected.
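For context on why distributed mode yields one output file per reducer: Hadoop's default HashPartitioner hashes each key to one of the configured reduce tasks, and each reduce task writes its own part-NNNNN file, so setNumReduceTasks(3) gives three files. A minimal dependency-free sketch of that partitioning arithmetic (the class and method names here are illustrative, not Hadoop's own):

```java
// Illustrative sketch, not Hadoop code: how the default HashPartitioner
// decides which reduce task -- and therefore which output file -- a key
// goes to. With 3 reduce tasks there are 3 part-NNNNN files.
public class PartitionSketch {
    // Same arithmetic as HashPartitioner.getPartition: clear the sign bit
    // so the result is non-negative, then take the remainder modulo the
    // number of reduce tasks.
    static int partition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        for (String k : new String[] {"apple", "banana", "cherry"}) {
            System.out.println(k + " -> reduce task " + partition(k, 3));
        }
    }
}
```

LocalJobRunner sidesteps this entirely by funneling everything through at most one reduce, which is why the three requested reducers collapse to a single output file locally.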