Re: [OpenIndiana-discuss] Amazon EC2 and OpenIndiana

2010-11-10 Thread Jerry Kemp
I have never actually tried it myself, but I do have these two URLs
archived.

Hope these help.

http://blogs.sun.com/angelo/entry/mounting_amazon_s3_buckets_as

http://blogs.sun.com/skr/entry/sun_ray_in_opensolaris_2009

Jerry Kemp


On 11/10/10 12:29, Alex Smith (K4RNT) wrote:
 Has anyone here used EC2 with OpenSolaris or OpenIndiana? If so,
 please contact me off-list. I'm not sure how to do it allowing use of
 Elastic Block Storage.
 
 Thanks!
 

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Amazon EC2 and OpenIndiana

2010-11-10 Thread Gary
You definitely _don't_ want to use S3 for raw volume storage -- that's
why they released EBS in the first place. I would start with the PDF
linked in the first URL below. I don't know if anyone's created an OI
AMI yet, but I'm still using hardened OpenSolaris images without issue.

http://blogs.sun.com/ec2/entry/ebs_is_supported_on_opensolaris
http://blogs.sun.com/prateek/entry/using_ebs_with_opensolaris_2008
q.v. http://blogs.sun.com/ec2

-Gary



Re: [OpenIndiana-discuss] Amazon EC2 and OpenIndiana

2010-11-10 Thread Gary
Here's a brief document I wrote with the assistance of the previously
referenced PDF -- note that the commands used do require having
Amazon's EC2 and ELB management tools installed and in your path.
Also, pfexec may be substituted for sudo, mount locations changed,
different types/sizes of pools used, etc. It's just a sample walkthrough...


HOWTO create a ZFS mirror on OpenSolaris with Amazon Elastic Block Store volumes

set up your environment

$ cat ~/.bash_profile

if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

EC2_CERT=$HOME/.ec2/cert-FPGAG6000DYMT5SPWUS4CNMGVND3WF7Y.pem
EC2_PRIVATE_KEY=$HOME/.ec2/pk-FPGAG6000DYMT5SPWUS4CNMGVND3WF7Y.pem
PATH=/usr/gnu/bin:/usr/bin:/usr/X11/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/ec2/bin:/opt/ec2/sbin:/opt/elb/bin
MANPATH=/usr/gnu/share/man:/usr/share/man:/usr/X11/share/man
PAGER="/usr/bin/less -ins"
AWS_ELB_HOME=/opt/elb
EC2_HOME=/opt/ec2
JAVA_HOME=/usr/java
export PATH MANPATH PAGER AWS_ELB_HOME EC2_HOME JAVA_HOME EC2_CERT EC2_PRIVATE_KEY
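A quick sanity check before running any of the commands below can save confusion; this is just an illustrative sketch (the paths and placeholder key names are examples, not your real filenames):

```shell
# Sanity-check sketch: verify that the variables the EC2 tools rely on
# are set. Values here are examples only.
EC2_HOME=/opt/ec2
EC2_CERT=$HOME/.ec2/cert-EXAMPLE.pem
EC2_PRIVATE_KEY=$HOME/.ec2/pk-EXAMPLE.pem
export EC2_HOME EC2_CERT EC2_PRIVATE_KEY

for var in EC2_HOME EC2_CERT EC2_PRIVATE_KEY; do
    eval "val=\$$var"
    if [ -z "$val" ]; then
        echo "$var is NOT set" >&2
    else
        echo "$var is set"
    fi
done
```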


look at your instances, note their zone

$ ec2-describe-instances
RESERVATION  r-7ef60316  164967591565  default

INSTANCE     i-86d861ee  ami-e56e8f8c
             ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com
             domU-XXX-XXX-XXX-XXX.compute-1.internal
             running  gd  0  m1.small  2009-10-21T16:47:10+  us-east-1a
             aki-1783627e  ari-9d6889f4  monitoring-enabled

RESERVATION  r-eb78b183  164967591565  default

INSTANCE     i-7fce5417  ami-e56e8f8c
             ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com
             ip-XXX-XXX-XXX-XXX.ec2.internal
             running  gd  0  m1.small  2009-11-12T17:37:48+  us-east-1d
             aki-1783627e  ari-9d6889f4  monitoring-enabled


check volume availability, note their zone

$ ec2dvol -H
        VolumeId      Size  SnapshotId  AvailabilityZone  Status     CreateTime
VOLUME  vol-d18c75b8  16                us-east-1d        available  2009-11-12T17:39:17+
VOLUME  vol-19956c70  16                us-east-1a        available  2009-11-12T04:16:04+
VOLUME  vol-d08c75b9  16                us-east-1d        available  2009-11-12T17:39:29+
VOLUME  vol-dc8c75b5  16                us-east-1a        available  2009-11-12T17:38:45+
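If no volumes exist in the right zone yet, they must be created there first, since an EBS volume can only attach to instances in its own availability zone. Here's a dry-run sketch that just prints the commands (the -s size and -z zone flags are the usual EC2 API tools options; verify against your installed version):

```shell
# Dry-run sketch: print the commands that would create two 16 GB EBS
# volumes in the instance's zone. Remove the 'echo' to actually run.
zone=us-east-1a
size=16
for i in 1 2; do
    echo ec2-create-volume -s $size -z $zone
done
```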


create a script to attach volumes for the zone your instance resides in.

$ more attach-vols
#!/usr/bin/bash
# usage: attach-vols instance-id starting-dev number-of-vols
# instance to attach volumes to
inst=$1
# starting device number
dev=$2
# number of volumes to attach
num=$3
let count=0
# get a list of available volumes (note: the zone is hardcoded here)
for vol in `ec2-describe-volumes | egrep -i available | egrep -i us-east-1a | cut -f2`
do
    # attach the volume to the next device
    echo ec2-attach-volume -i $inst -d $dev $vol
    ec2-attach-volume -i $inst -d $dev $vol
    # increment the device number
    let dev=dev+1
    let count=count+1
    # if the specified number have been attached then exit
    if (( count == num ))
    then
        exit 0
    fi
done

$ ./attach-vols i-86d861ee 2 3
ec2-attach-volume -i i-86d861ee -d 2 vol-19956c70
ATTACHMENT  vol-19956c70  i-86d861ee  2  attaching  2009-11-13T18:54:26+
ec2-attach-volume -i i-86d861ee -d 3 vol-dc8c75b5
ATTACHMENT  vol-dc8c75b5  i-86d861ee  3  attaching  2009-11-13T18:54:35+

$ ec2-describe-volumes | egrep -i attached | cut -f2,3,4,5
vol-19956c70  i-86d861ee  2  attached
vol-dc8c75b5  i-86d861ee  3  attached
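Note that attach-vols above hardcodes us-east-1a in its filter. The filtering step is easy to parameterize; a small sketch, run here against a saved copy of the describe-volumes listing rather than the live command (the avail_vols name and the temp file are mine, not part of the original script):

```shell
# Sketch: filter available volume IDs by availability zone, reading
# ec2-describe-volumes output (tab-separated VOLUME lines) on stdin.
avail_vols() {
    zone=$1
    egrep -i available | egrep -i "$zone" | cut -f2
}

# A saved two-line sample of the describe-volumes listing above:
printf 'VOLUME\tvol-19956c70\t16\t\tus-east-1a\tavailable\n'  > /tmp/vols.txt
printf 'VOLUME\tvol-d18c75b8\t16\t\tus-east-1d\tavailable\n' >> /tmp/vols.txt

avail_vols us-east-1a < /tmp/vols.txt   # prints vol-19956c70
```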

find out what devices they've attached as (the first two are local EC2 volumes),
then create a ZFS mirror, check its status, and mount it

$ sudo format
Password:
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c7d0 <DEFAULT cyl 1274 alt 0 hd 255 sec 63>
      /xpvd/x...@0
   1. c7d1 <DEFAULT cyl 19464 alt 0 hd 255 sec 63>
      /xpvd/x...@1
   2. c7d2 <DEFAULT cyl 2088 alt 0 hd 255 sec 63>
      /xpvd/x...@2
   3. c7d3 <DEFAULT cyl 2088 alt 0 hd 255 sec 63>
      /xpvd/x...@3
Specify disk (enter its number): ^C

$ sudo zpool create logs mirror c7d2 c7d3

$ sudo zpool status
Password:
  pool: logs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        logs        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7d2    ONLINE       0     0     0
            c7d3    ONLINE       0     0     0

errors: No known data errors

  pool: mnt
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mnt         ONLINE       0     0     0
          c7d1p0    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7d0s0    ONLINE       0     0     0

errors: No known data errors

$ df -k
Filesystem   1K-blocks  Used Available Use% Mounted on
rpool/ROOT/opensolaris
   8319892   3728123   4591770  
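For completeness: tearing this down is the reverse order -- export the pool first so ZFS isn't writing to disks that vanish, then detach the volumes. A dry-run sketch (commands printed, not executed; check ec2-detach-volume's exact options on your version of the tools):

```shell
# Dry-run teardown sketch: export the pool, then detach each EBS
# volume. Remove the 'echo' prefixes to actually run the commands.
inst=i-86d861ee
vols="vol-19956c70 vol-dc8c75b5"

echo sudo zpool export logs
for vol in $vols; do
    echo ec2-detach-volume $vol -i $inst
done
```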

Re: [OpenIndiana-discuss] Amazon EC2 and OpenIndiana

2010-11-10 Thread Alex Smith (K4RNT)
So should I create the ZFS mirror on my local machine, or use one of
the pre-existing instance-storage AMIs and then move to EBS?

On Wed, Nov 10, 2010 at 14:09, Gary gdri...@gmail.com wrote:
 [quoted walkthrough trimmed]