Well, here's another possible option: Rackspace. If EC2 is such a big headache
to set up for external access, would Rackspace be a good alternative (cost
permitting)? What has people's experience been running on it? It seems that
they support non-NAT public IPs:
http://www.rackspacecloud.com/cloud_hosting_products/servers/compare

-GS

On Fri, Mar 19, 2010 at 2:21 PM, George Stathis <gstat...@gmail.com> wrote:

> Andy, thanks for the response.
>
> Switching to a connector would mean a significant DAO re-write for us, so
> that's out, plus we would not use connectors in production. The DNS approach
> is probably out as well, since I don't think we have that much control with
> the ISP that hosts our build server (we are a small startup, so we are on
> the cheap right now). Re-writing the Java client is not an option either.
>
> So I guess moving the build machine to EC2 might be the best option for
> us. This definitely helps us move on. Thanks again for taking the time.
>
> -GS
>
>
>
> On Fri, Mar 19, 2010 at 1:16 PM, Andrew Purtell <apurt...@apache.org> wrote:
>
>> Expanding on my point #3, if you run your own DNS that accepts updates,
>> you can use nsupdate to maintain a dynamic shadow of the internal zone with
>> mappings to public IPs. Update records when the cluster is up. Remove them
>> when the cluster is terminated.
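>>
>> For example (the key file, zone, and names below are made up; adjust them
>> to your setup), each update could be a small nsupdate script:
>>
>>    nsupdate -k /etc/bind/ddns.key <<EOF
>>    server ns1.example.com
>>    zone cluster.example.com
>>    update delete slave1.cluster.example.com A
>>    update add slave1.cluster.example.com 300 A <public-ip>
>>    send
>>    EOF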
>>
>> You would also need to figure out how best to update should an instance
>> fail and be replaced, but this should hopefully be a rare event, and
>> Elastic IPs can help, though each account only gets 5 of them without
>> justification to AWS.
>>
>>    - Andy
>>
>> On Fri Mar 19th, 2010 9:45 AM PDT Andrew Purtell wrote:
>>
>> >The IP addresses assigned on the cluster are all internal ones, so when
>> >the regionservers do a reverse lookup, they get something like
>> >foo.internal. They then report this to the master, which hands these names
>> >out to the client library as region locations. So while you can telnet to
>> >port 60020 on the slaves because you know the public DNS names, the client
>> >library can only learn of the internal ones.
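>> >
>> >For illustration (the names below are made up), the two views of a host
>> >differ; from on the instance itself:
>> >
>> >   curl http://169.254.169.254/latest/meta-data/local-hostname
>> >   # e.g. ip-10-251-27-12.ec2.internal -- what the regionserver reports
>> >   curl http://169.254.169.254/latest/meta-data/public-hostname
>> >   # e.g. ec2-75-101-89-32.compute-1.amazonaws.com -- what you can reach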
>> >
>> >Some options:
>> >
>> >1) Run your clients up in the EC2 cloud also
>> >
>> >2) Use a connector like Stargate or the Thrift server, which can in effect
>> >proxy your requests to the EC2-hosted cluster.
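>> >
>> >For example (assuming Stargate on its default port of 8080; the names are
>> >placeholders), from your client machine:
>> >
>> >   curl http://<master-public-dns>:8080/version
>> >   curl http://<master-public-dns>:8080/<table>/<row>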
>> >
>> >3) Grab the latest scripts from the 0.20 branch in SVN.
>> >$HOME/.hbase-<cluster>-instances will contain the list of instance
>> >identifiers of the slaves. Do:
>> >
>> >   ec2-describe-instances `cat ~/.hbase-<cluster>-instances` | \
>> >     grep INSTANCE | grep running | awk '{print $4, $5}'
>> >
>> >This will give you a mapping between private and public names. Dump
>> >entries into your /etc/hosts that map each public IP (use dig to look it
>> >up) to the corresponding private name. Yes, it's a hack, and not a nice
>> >one.
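>> >
>> >For example (addresses and names below are made up):
>> >
>> >   dig +short <public-dns>      # say it returns 75.101.89.32
>> >
>> >then add to /etc/hosts on the client machine:
>> >
>> >   75.101.89.32   ip-10-251-27-12.ec2.internal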
>> >
>> >4) You can use SSH as a SOCKS5 proxy (ssh -f -N -D <local-port>
>> ><remote>), which will also forward DNS requests, but to do it that way
>> >you'd have to hack the client library some.
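>> >
>> >A rough sketch (port and names are placeholders; note that the stock Java
>> >client resolves hostnames locally rather than through the proxy, which is
>> >the part you would have to hack):
>> >
>> >   ssh -f -N -D 1080 <user>@<master-public-dns>
>> >   java -DsocksProxyHost=localhost -DsocksProxyPort=1080 <your-client-class>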
>> >
>> >   - Andy
>> >
>> >> From: George Stathis
>> >> Subject: Remote Java client connection into EC2 instance
>> >> To: hbase-user@hadoop.apache.org
>> >> Date: Friday, March 19, 2010, 8:00 AM
>> >> This has come up before
>> >> (http://mail-archives.apache.org/mod_mbox/hadoop-hbase-user/200909.mbox/%3c587903.6843...@web65509.mail.ac4.yahoo.com%3e)
>> >> but I'm still unclear as to whether this is possible or not:
>> >> remotely connecting to an EC2 instance using the Java client
>> >> library.
>> >[...]
>> >> Now, I have gone through a lot of threads and posts and have
>> >> opened up all required ports (I think) on EC2: 60000, 60020,
>> >> and 2181 (I can telnet into them). I have one test EC2
>> >> instance running in pseudo-distributed mode to
>> >> test the remote connection. I attempt to run a single unit
>> >> test.
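>> >>
>> >> E.g., with the EC2 API tools, opening those ports looks something like
>> >> this (<group> being the security group in use):
>> >>
>> >>    ec2-authorize <group> -p 60000
>> >>    ec2-authorize <group> -p 60020
>> >>    ec2-authorize <group> -p 2181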
>> >[...]
