[
https://issues.apache.org/jira/browse/HBASE-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778715#action_12778715
]
stack commented on HBASE-1961:
------------------------------
@apurtell: I'm having a bit of trouble making this stuff work. I've edited
hbase-ec2-env.sh adding in my vitals, but it seems like I need to export
EC2_PRIVATE_KEY, etc., otherwise the ec2 programs don't see their content. For
example, I seem to have to do this to get the private key and cert into place
so the amz programs will pick them up:
{code}
Index: bin/list-hbase-clusters
===================================================================
--- bin/list-hbase-clusters (revision 881125)
+++ bin/list-hbase-clusters (working copy)
@@ -23,7 +23,7 @@
. "$bin"/hbase-ec2-env.sh
# Finding HBase clusters
-CLUSTERS=`ec2-describe-instances | awk '"RESERVATION" == $1 && $4 ~ /-master$/, "INSTANCE" == $1' | tr '\n' '\t' | grep running | cut -f4 | rev | cut -d'-' -f2- | rev`
+CLUSTERS=`ec2-describe-instances -K $EC2_PRIVATE_KEY | awk '"RESERVATION" == $1 && $4 ~ /-master$/, "INSTANCE" == $1' | tr '\n' '\t' | grep running | cut -f4 | rev | cut -d'-' -f2- | rev`
[ -z "$CLUSTERS" ] && echo "No running clusters." && exit 0
{code}
(See how I added the -K above).
... or if I export the keys, it works as follows:
{code}
# Your AWS private key file -- must begin with 'pk' and end with '.pem'
-EC2_PRIVATE_KEY=
+export EC2_PRIVATE_KEY=/Users/stack/.ec2/....
{code}
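Something like the following in hbase-ec2-env.sh (a sketch only; the pk-*.pem and cert-*.pem filenames are placeholders for the account-specific files Amazon issues) makes both credentials visible to the ec2 tools:
{code}
# Export both credentials so the ec2-api-tools pick them up from the
# environment, instead of passing -K/-C on every command.
# Placeholder filenames -- substitute your own pk-*.pem and cert-*.pem.
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-PLACEHOLDER.pem
export EC2_CERT=$HOME/.ec2/cert-PLACEHOLDER.pem
{code}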
Maybe it's because I'm on a Macintosh? I see that the hadoop scripts don't
export the variables, nor pass -K or -C. Let me try over on linux.
> HBase EC2 scripts
> -----------------
>
> Key: HBASE-1961
> URL: https://issues.apache.org/jira/browse/HBASE-1961
> Project: Hadoop HBase
> Issue Type: New Feature
> Environment: Amazon AWS EC2
> Reporter: Andrew Purtell
> Assignee: Andrew Purtell
> Priority: Minor
> Fix For: 0.21.0, 0.20.3
>
> Attachments: ec2-contrib.tar.gz
>
>
> Attached tarball is a clone of the Hadoop EC2 scripts, modified significantly
> to start up an HBase storage-only cluster on top of HDFS backed by instance
> storage.
> Tested with the HBase 0.20 branch but should work with trunk also. Only the
> AMI create and launch scripts are tested. Will bring up a functioning HBase
> cluster.
> Do "create-hbase-image c1.xlarge" to create an x86_64 AMI, or
> "create-hbase-image c1.medium" to create an i386 AMI. Public Hadoop/HBase
> 0.20.1 AMIs are available:
> i386: ami-c644a7af
> x86_64: ami-f244a79b
> launch-hbase-cluster brings up the cluster: First, a small dedicated ZK
> quorum, specifiable in size, default of 3. Then, the DFS namenode (formatting
> on first boot) and one datanode and the HBase master. Then, a specifiable
> number of slaves, instances running DFS datanodes and HBase region servers.
> For example:
> {noformat}
> launch-hbase-cluster testcluster 100 5
> {noformat}
> would bring up a cluster with 100 slaves supported by a 5 node ZK ensemble.
> We must colocate a datanode with the namenode because currently the master
> won't tolerate a brand new DFS with only namenode and no datanodes up yet.
> See HBASE-1960. By default the launch scripts provision ZooKeeper as
> c1.medium and the HBase master and region servers as c1.xlarge. The result is
> an HBase cluster supported by a ZooKeeper ensemble. ZK ensembles are not
> dynamic, but HBase clusters can be grown by simply starting up more slaves,
> just like Hadoop.
> hbase-ec2-init-remote.sh can be trivially edited to bring up a jobtracker on
> the master node and task trackers on the slaves.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.