Arun,

There are instructions on how to run 0.23/trunk from the dev tree here:

http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment#Run_HDFS_in_pseudo-distributed_mode_from_the_dev_tree
http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment#Run_MapReduce_in_pseudo-distributed_mode_from_the_dev_tree
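
For reference, the HDFS steps boil down to something like the following
(a rough sketch only; the wiki pages above are authoritative, and the
config path below is a placeholder):

    # Build everything from the top of the source tree, skipping tests.
    mvn clean install -DskipTests

    # Point the scripts at a pseudo-distributed config directory
    # (placeholder path; use whatever the wiki instructions set up).
    export HADOOP_CONF_DIR=/path/to/pseudo-dist/conf

    # Format the namenode once, then run each daemon in the foreground.
    bin/hdfs namenode -format
    bin/hdfs namenode
    bin/hdfs datanode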

I think it's useful to be able to do this, since the feedback cycle
between making a change and seeing it reflected in running code is
shorter. I don't think it's a case of either one or the other: I use
tarballs too in other circumstances, so it's useful to have both.

To prevent the packages from being broken we should have automated tests
that run on nightly builds:
https://issues.apache.org/jira/browse/HADOOP-7650.
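
Even a trivial smoke test would catch most of this breakage. Something
along these lines would do (a sketch; the tarball name is illustrative):

    # Unpack the nightly tarball and check the basic scripts still run.
    tar xzf hadoop-0.23.0-SNAPSHOT.tar.gz
    cd hadoop-0.23.0-SNAPSHOT
    bin/hadoop version || exit 1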

Cheers,
Tom

On Wed, Jan 18, 2012 at 7:16 AM, Arun C Murthy <a...@hortonworks.com> wrote:
> Folks,
>
>  Somewhere between MR-279 and mavenization we have broken the support for
> allowing _developers_ to run single-node installs from the non-packaged
> 'build', i.e. the ability to run single-node clusters without the need to
> use a tarball/rpm etc. (I fully suspect MR-279 is to blame as much as
> anyone else! *smile*)
>
>  I propose we go ahead and officially stop support for this, to prevent the
> confusion I already see among several folks (this has come up several times
> on our lists in the context of hadoop-0.23).
>
>  Some benefits I can think of:
>  a) Focus on fewer 'features' in the core.
>  b) Reduce maintenance/complexity in our scripts (bin/hadoop, bin/hdfs etc.).
>  c) Force us devs to eat our own dogfood when it comes to packaging etc. (I
> can think of numerous cases where devs have broken the tarball/rpm generation 
> since they don't use it all the time.)
>
>  Clearly, we *should* back this up by improving our docs for new devs (wiki
> and/or the site:
> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/SingleCluster.html),
> and I'm happy to volunteer.
>
>  Thoughts?
>
> thanks,
> Arun
