Harsh, thanks for the quick reply. While I am a Hadoop newbie, I find myself explaining Hadoop installation, configuration, and job processing to even newer newbies, hence the desire and need for more details.

John
> From: ha...@cloudera.com
> Date: Tue, 9 Apr 2013 09:16:49 +0530
> Subject: Re: mr default=local?
> To: user@hadoop.apache.org
> 
> Hey John,
> 
> Sorta unclear on what is prompting this question (to answer it more
> specifically) but my response below:
> 
> On Tue, Apr 9, 2013 at 9:05 AM, John Meza <j_meza...@hotmail.com> wrote:
> > Hadoop's modes are Standalone, Pseudo-Distributed, and Fully
> > Distributed. It is configured for Pseudo- or Fully Distributed mode via
> > configuration files, but defaults to Standalone otherwise (correct?).
> 
> The mapred-default.xml we ship has "mapred.job.tracker"
> (0.20.x/1.x/0.22.x) set to local, or "mapreduce.framework.name"
> (0.23.x, 2.x, trunk) set to local. This is why, without reconfiguring
> an installation to point at a proper cluster (JT or YARN), the local
> job runner is activated.
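
For example, to move off the local runner on a 2.x install, a mapred-site.xml fragment like the following would be placed on the classpath (an illustrative snippet; it assumes a YARN cluster is already configured and reachable):

```xml
<?xml version="1.0"?>
<!-- mapred-site.xml: site settings applied on top of the shipped
     mapred-default.xml, which sets this property to "local" -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```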
> 
> > Question about the -defaulting- mechanism:
> >     -Does it get the -default- configuration via one of the config files?
> 
> For any Configuration type of invocation:
> 1. First level of defaults come from *-default.xml embedded inside the
> various relevant jar files.
> 2. Configurations further found in a classpath resource XML
> (core-, mapred-, hdfs-, yarn-site.xml) are applied on top of the
> defaults.
> 3. User applications' code may then override this set, with any
> settings of their own, if needed.
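
The three levels above can be sketched with plain `java.util.Properties` defaults chaining. This is a hypothetical illustration of the lookup order only, not Hadoop's actual `org.apache.hadoop.conf.Configuration` implementation:

```java
import java.util.Properties;

// Sketch of Hadoop's three-level configuration precedence using
// java.util.Properties defaults chaining: jar-embedded *-default.xml
// values < classpath *-site.xml values < user application overrides.
public class ConfigLayering {

    // Returns the effective value of a key after applying all three layers.
    public static String effective(String key, Properties siteXml,
                                   Properties userOverrides) {
        // 1. Defaults embedded inside the jars (*-default.xml)
        Properties defaults = new Properties();
        defaults.setProperty("mapreduce.framework.name", "local");

        // 2. Classpath *-site.xml resources are applied on top of the defaults
        Properties site = new Properties(defaults);
        site.putAll(siteXml);

        // 3. User application code may override both lower layers
        Properties user = new Properties(site);
        user.putAll(userOverrides);

        // getProperty falls through the chain until a value is found
        return user.getProperty(key);
    }

    public static void main(String[] args) {
        Properties none = new Properties();
        // No site or user override: the shipped default wins
        System.out.println(effective("mapreduce.framework.name", none, none)); // local

        Properties site = new Properties();
        site.setProperty("mapreduce.framework.name", "yarn");
        // A site-file setting overrides the shipped default
        System.out.println(effective("mapreduce.framework.name", site, none)); // yarn
    }
}
```

The same fall-through explains the original question: with no site file present, nothing masks the "local" value shipped in mapred-default.xml, so the local job runner is what you get.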
> 
> >     -Or does it get the -default- configuration via hard-coded values?
> 
> There may be a few cases of hardcodes, missing documentation and
> presence in *-default.xml, but they should still be configurable via
> (2) and (3).
> 
> >     -Or another mechanism?
> 
> --
> Harsh J