Hi Bhupendra,

The Apache BigTop project was created to solve the general problem of packaging, 
integrating, and verifying the functionality of the various components in the 
Hadoop ecosystem.

It also creates rpm and apt repositories for installing Hadoop, along with 
Puppet recipes for initializing the file system and installing components in a 
clear, dependency-aware manner. And we have smoke tests to validate that Hive, 
Pig, and so on are all working.
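
For example, once you point yum (or apt) at a BigTop repo file, bringing a 
component onto a node is just a package install. A rough sketch, assuming a 
CentOS 6 box and the 0.8.0 repo layout under archive.apache.org (substitute the 
repo file for your BigTop release and distro, and note that package names can 
vary by release):

    # Illustrative only: repo URL and package names depend on release/distro
    sudo wget -O /etc/yum.repos.d/bigtop.repo \
      http://archive.apache.org/dist/bigtop/bigtop-0.8.0/repos/centos6/bigtop.repo
    sudo yum install -y hadoop hive pig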

You should definitely consider checking it out if you're building a Hadoop 
environment or big data stack.

The best way to get started is with the Vagrant recipes, which spin up a 
cluster from scratch for you.
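
Concretely, that flow looks something like the sketch below. The vagrant-puppet 
directory reflects the layout around the 0.8/0.9 era and has moved between 
releases, so treat the path as an assumption and check the README in your 
checkout:

    # Sketch: spins up a BigTop cluster in VirtualBox via Vagrant
    git clone https://github.com/apache/bigtop.git
    cd bigtop/bigtop-deploy/vm/vagrant-puppet
    vagrant up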

Once that works, you can take the Puppet code and run it on bare metal.
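
On each node, that boils down to a puppet apply against the BigTop manifests. 
The modulepath and manifest location below are from memory of the 0.8-era tree, 
so verify them against bigtop-deploy/puppet/README in your release:

    # Sketch: run from the top of a BigTop checkout, after filling in the
    # site configuration (cluster topology, components) the manifests read
    puppet apply -d \
      --modulepath="bigtop-deploy/puppet/modules:/etc/puppet/modules" \
      bigtop-deploy/puppet/manifests/site.pp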

One advantage of this approach is that you are using bits the community tests 
for you, and you avoid reinventing the wheel of writing a bunch of shell 
scripts for things like synchronizing config files, yum-installing components 
across a cluster, and running smoke tests.
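
The smoke tests ship in the same source tree. The harness has changed between 
releases (the 0.8-era tests were driven by Maven from bigtop-tests/test-execution), 
so take this as a sketch and check the wiki for your version:

    # Sketch: run against an already-provisioned cluster, with HADOOP_HOME,
    # HADOOP_CONF_DIR, etc. exported as the test docs describe
    cd bigtop/bigtop-tests/test-execution/smokes
    mvn verify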

> On Dec 16, 2014, at 9:05 AM, GUPTA <bhupendra1...@gmail.com> wrote:
> 
> Hello all,
> 
> Firstly, I am a neophyte in the world of Hadoop.
> 
> So far, I have got a Hadoop single-node cluster running on Ubuntu.
> The end state of this was that the datanode and namenode servers were running.
> 
> But from here, I am not sure how to proceed, in the sense of how I get the 
> other pieces of the Hadoop ecosystem installed and working.
> 
> Like Hive, Pig, HBase, and maybe Ambari as well, set up and running.
> 
> Would appreciate it if I could get access to materials that say "these are 
> MUST HAVEs for any Hadoop project".
> 
> 
> Just trying to get all the pieces together...
> 
> Regards
> Bhupendra
