So it can't figure out an appropriate number of reducers the way it does
for mappers? In my case Hadoop is using 2100+ mappers and then only 1
reducer. Since I'm overriding the partitioner class, shouldn't that decide
how many reducers there should be, based on how many different partition
values are returned by the custom partitioner?
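
Just to clarify the mechanics for myself (rough sketch below -- CompositeKey,
getNaturalKey() and NaturalKeyPartitioner are made-up names for illustration,
not from my actual job): the framework hands the partitioner numPartitions,
which is just the reducer count the job was configured with, and
getPartition() can only pick a bucket in [0, numPartitions). The partitioner
never gets to decide the count itself.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical composite key: natural key plus a secondary sort field.
    // (In practice these classes would live in separate source files.)
    public class CompositeKey implements WritableComparable<CompositeKey> {
        private Text naturalKey = new Text();
        private Text sortKey = new Text();

        public Text getNaturalKey() { return naturalKey; }

        public void write(DataOutput out) throws IOException {
            naturalKey.write(out);
            sortKey.write(out);
        }

        public void readFields(DataInput in) throws IOException {
            naturalKey.readFields(in);
            sortKey.readFields(in);
        }

        // Sort by natural key first, then by the secondary field.
        public int compareTo(CompositeKey other) {
            int cmp = naturalKey.compareTo(other.naturalKey);
            return (cmp != 0) ? cmp : sortKey.compareTo(other.sortKey);
        }
    }

    // Partitions on the natural key only, so every record with the same
    // natural key lands on the same reducer.
    public class NaturalKeyPartitioner extends Partitioner<CompositeKey, Text> {
        @Override
        public int getPartition(CompositeKey key, Text value, int numPartitions) {
            // numPartitions comes from the framework (== reducer count);
            // we only choose a bucket here, we never set the count itself.
            return (key.getNaturalKey().hashCode() & Integer.MAX_VALUE)
                    % numPartitions;
        }
    }

And in the driver the reducer count has to be set explicitly:

    job.setNumReduceTasks(50);   // e.g. 50 -- tune to the cluster
    job.setPartitionerClass(NaturalKeyPartitioner.class);

Without that call (or mapred.reduce.tasks set in the config / via -D on the
command line), the job falls back to the default of 1, no matter how many
distinct partition values the partitioner could produce.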


On Thu, Aug 29, 2013 at 7:38 PM, Ian Wrigley <i...@cloudera.com> wrote:

> If you don't specify the number of Reducers, Hadoop will use the default
> -- which, unless you've changed it, is 1.
>
> Regards
>
> Ian.
>
> On Aug 29, 2013, at 4:23 PM, Adeel Qureshi <adeelmahm...@gmail.com> wrote:
>
> I have implemented secondary sort in my MR job, and for some reason if I
> don't specify the number of reducers it uses 1, which doesn't seem right
> because I'm working with 800M+ records and one reducer slows things down
> significantly. Is this some kind of limitation of secondary sort, that it
> has to use a single reducer? That would kind of defeat the purpose of
> having a scalable solution such as secondary sort. I would appreciate any
> help.
>
> Thanks
> Adeel
>
>
>
> ---
> Ian Wrigley
> Sr. Curriculum Manager
> Cloudera, Inc
> Cell: (323) 819 4075
>
>
