Hi Phil,

Yeah, could be. I often use lynx (ahem) to check those pages. Next
time I will try Safari/Chrome to see how it behaves; good to know where
to tweak the proxy settings when it does not work.
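
For the record, that tweak can also be scripted. A rough sketch, assuming
the network service is called "Wi-Fi" (on older Macs it may be "AirPort"
or "Ethernet") and that the hadoop-proxy.sh tunnel listens on
localhost:6666; check the script for the actual port:

    # list the service names this Mac knows about
    networksetup -listallnetworkservices
    # point the system-wide SOCKS proxy at the SSH tunnel
    networksetup -setsocksfirewallproxy "Wi-Fi" localhost 6666
    networksetup -setsocksfirewallproxystate "Wi-Fi" on
    # switch it off again once the cluster is gone
    networksetup -setsocksfirewallproxystate "Wi-Fi" off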

Thanks!

Lars

On Wed, Jan 5, 2011 at 1:29 AM, Phil Whelan <phil...@gmail.com> wrote:
> Hi Lars,
>
> Thanks for reading the post and the feedback.
>
> Sorry for the slow reply. I was running through it again to check. If I
> do not enable the SOCKS proxy in the Mac OS X Network Settings, then my
> web browser cannot resolve the AWS internal domain names when I am
> browsing HDFS.
>
> e.g. 
> http://ip-10-245-121-242.ec2.internal:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
>
> I'm guessing you do not need it if you do not browse the file-system
> through the web browser. Otherwise, it's possible I missed a step that
> then required me to do this.
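
If changing the system-wide Network Settings feels too heavy, a
per-browser proxy is another option. A rough sketch, assuming Chrome and
a tunnel on localhost:6666; the port and the profile directory are
placeholders, not taken from the post:

    # start a separate Chrome instance that routes its traffic (and, with
    # SOCKS5, the lookups for the ec2.internal names) through the tunnel
    /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
        --proxy-server="socks5://localhost:6666" \
        --user-data-dir="$HOME/chrome-hadoop-proxy"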
>
> Cheers,
> Phil
>
> On Tue, Jan 4, 2011 at 3:23 PM, Lars George <lars.geo...@gmail.com> wrote:
>> Hi Phil,
>>
>> Very nice post indeed! Awesome stuff.
>>
>> One question I have is re: adding the SOCKS proxy to MacOS. I am on
>> MacOS too and did not do that but simply ran the hadoop-proxy.sh
>> script to set up the tunnel. What is that option needed for?
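
One way to see what the tunnel alone provides, without touching the
Network Settings or any browser, is to go through it explicitly with
curl; assuming the tunnel listens on localhost:6666, and using the URL
from Phil's example above:

    # let the SOCKS proxy resolve the internal EC2 hostname and fetch the page
    curl --socks5-hostname localhost:6666 \
        "http://ip-10-245-121-242.ec2.internal:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/"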
>>
>> Thanks,
>> Lars
>>
>>
>> On Tue, Jan 4, 2011 at 11:26 PM, Phil Whelan <phil...@gmail.com> wrote:
>>> Hi guys,
>>>
>>> My recent blog post goes through in detail the steps I took to use
>>> Whirr to start up a CDH-based Hadoop cluster on AWS. I'd really
>>> appreciate any feedback you have on it. I'm planning to write a
>>> similar one on HBase.
>>>
>>> http://www.philwhln.com/map-reduce-with-ruby-using-hadoop
>>>
>>> Thanks,
>>> Phil
>>>
>>> On Tue, Jan 4, 2011 at 12:23 AM, Lars George <lars.geo...@gmail.com> wrote:
>>>> Hi Otis,
>>>>
>>>> It also supports CDH, although it only starts Hadoop
>>>> (HDFS/MapReduce). I am going to open a JIRA to facilitate the startup
>>>> of other services on top of that.
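
For anyone following along, a minimal sketch of a CDH-flavoured Whirr
recipe from around the 0.2/0.3 releases; role and property names changed
between Whirr versions, so treat the CDH run-url lines in particular as
assumptions and check the docs for your release:

    # hadoop.properties
    whirr.cluster-name=myhadoopcluster
    whirr.instance-templates=1 jt+nn,3 dn+tt
    whirr.provider=ec2
    whirr.identity=${env:AWS_ACCESS_KEY_ID}
    whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
    # pull CDH instead of the stock Apache tarballs
    whirr.hadoop-install-runurl=cloudera/cdh/install
    whirr.hadoop-configure-runurl=cloudera/cdh/post-configure

    # then, from the Whirr directory:
    bin/whirr launch-cluster --config hadoop.properties
    # and when done:
    bin/whirr destroy-cluster --config hadoop.properties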
>>>>
>>>> Lars
>>>>
>>>> On Tue, Jan 4, 2011 at 4:31 AM, Otis Gospodnetic
>>>> <otis_gospodne...@yahoo.com> wrote:
>>>>> Ah, Whirr and Hadoop/HBase on AWS. Is this only applicable to using
>>>>> "raw" HBase/Hadoop from ASF or can one use Whirr support for
>>>>> HBase/Hadoop deployment to AWS with CDH?
>>>>>
>>>>> Thanks,
>>>>> Otis
>>>>> ----
>>>>> Sematext :: http://sematext.com/ :: Solr - Lucene - Hadoop - HBase
>>>>> Hadoop ecosystem search :: http://search-hadoop.com/
>>>>>
>>>>>
>>>>>
>>>>> ----- Original Message ----
>>>>>> From: Lars George <lars.geo...@gmail.com>
>>>>>> To: user@hbase.apache.org
>>>>>> Sent: Mon, January 3, 2011 12:32:11 PM
>>>>>> Subject: Re: Hbase/Hadoop cluster setup on AWS
>>>>>>
>>>>>> Hi H,
>>>>>>
>>>>>> While you can do that by hand, I strongly recommend using Apache Whirr
>>>>>> (http://incubator.apache.org/projects/whirr.html), which has Hadoop and
>>>>>> (in trunk now) also HBase support, straight from the Apache tarballs.
>>>>>>
>>>>>> If you want to set them up manually then you simply spin up N machines
>>>>>> and follow the normal guides for either project to set them up
>>>>>> appropriately, no extra magic needed.
>>>>>>
>>>>>> Lars
>>>>>>
>>>>>> On Mon, Jan 3, 2011 at 3:26 PM, h <hel...@gmail.com> wrote:
>>>>>> > Are there any good tutorials on cluster setup on AWS?
>>>>>> > I'd prefer not to use third-party scripts/frameworks, just a simple
>>>>>> > sequence of steps/one-liner commands, using the original hadoop/hbase
>>>>>> > tarballs
>>>>>> >
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Cell : +1 (778) 233-4935
>>> Twitter : http://www.twitter.com/philwhln
>>> LinkedIn : http://ca.linkedin.com/in/philwhln
>>> Blog : http://www.philwhln.com
>>> Skype : philwhelan76
>>>
>>
>
>
>
> --
> Cell : +1 (778) 233-4935
> Twitter : http://www.twitter.com/philwhln
> LinkedIn : http://ca.linkedin.com/in/philwhln
> Blog : http://www.philwhln.com
> Skype : philwhelan76
>
