On Sun, Nov 15, 2009 at 8:39 PM, Mark Kerzner <markkerz...@gmail.com> wrote:
> Tom,
>
> do I understand correctly that with these scripts I can use the Apache
> Hadoop configuration as I am used to, and run an EC2 image that contains
> the Cloudera Hadoop distribution?
Yes, you can run Apache Hadoop with your existing configuration.

> PS. I could not download them from here,
> http://issues.apache.org/jira/secure/attachment/12422889/HADOOP-6108.patch,
> I was getting a "too many open files" error.

I think this may be a transient problem (if it recurs you can report it
to in...@apache.org).

> Thank you,
> Mark
>
> On Sun, Nov 15, 2009 at 10:29 PM, Tom White <t...@cloudera.com> wrote:
>
>> Hi Mark,
>>
>> HADOOP-6108 will add Cloudera's EC2 scripts to the Apache
>> distribution, with the difference that they will run Apache Hadoop.
>> The same scripts will also support Cloudera's Distribution for Hadoop,
>> simply by using a different boot script on the instances. So I would
>> suggest you use these scripts, since they are more flexible than the
>> existing bash-based ones in Apache (e.g. they also support EBS), are
>> likely to have more features added, and will support more cloud
>> providers over time.
>>
>> Hope this helps.
>>
>> Tom
>>
>> On Sun, Nov 15, 2009 at 7:31 PM, Mark Kerzner <markkerz...@gmail.com>
>> wrote:
>> > Hi, guys,
>> >
>> > Sorry for kind of making you do my work, but I have a conundrum. I have
>> > been developing on Ubuntu, and I preferred to run the same Ubuntu Linux
>> > on EC2; indeed, that is what Amazon Elastic MapReduce was giving me.
>> >
>> > But now I am running my own cluster on EC2, and the Apache Hadoop images
>> > are all on Fedora. I have already figured out the scripts and it all
>> > works, except that I have not tested on Fedora, and I do use Linux
>> > packages.
>> >
>> > Alternatively, I could run Cloudera's Hadoop, and they have Ubuntu. But
>> > I would probably have to switch to their distribution in my code and
>> > learn their startup scripts.
>> >
>> > Which way is better?
>> >
>> > Thank you,
>> > Mark