> I'd have to link /lib/ etc to /mnt or something.

Yes. Copy first. Then use bind mounts ('mount --bind ...') to overlay the additional storage wherever you prefer in the filesystem hierarchy. Then invoke yum. I am sure there are other approaches, but the above can be scripted easily enough.
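Something along these lines would do it - a rough sketch, untested on these AMIs, and which trees you relocate depends on where yum and the packages actually need the space (/usr and /var below are just illustrative):

    # copy the space-hungry trees onto the big instance-store volume,
    # then bind mount them back over their original locations
    mkdir -p /mnt/usr /mnt/var
    cp -a /usr/. /mnt/usr/
    cp -a /var/. /mnt/var/
    mount --bind /mnt/usr /usr
    mount --bind /mnt/var /var
    # yum now writes onto /mnt's 420 GB rather than the 3 GB root volume
    yum install -y R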
If you want an elegant solution, make your own HBase+Hadoop AMIs that have an EBS root volume instead. But you will have to modify the scripts.

Best regards,

   - Andy

--- On Fri, 11/19/10, Saptarshi Guha <saptarshi.g...@gmail.com> wrote:

> From: Saptarshi Guha <saptarshi.g...@gmail.com>
> Subject: Re: Adding packages to the HBase AMI (ami-fe698397 and ami-c86983a1) - no space on device
> To: apurt...@apache.org
> Cc: hbase-u...@hadoop.apache.org, user@hbase.apache.org
> Date: Friday, November 19, 2010, 2:31 PM
>
> True, and thanks for putting the scripts together. But doesn't yum
> demand free space on the / device? There is plenty of space on /mnt,
> but then how do I instruct yum to install packages elsewhere? I'd
> have to link /lib/ etc to /mnt or something.
>
> Cheers
> J
>
> On Fri, Nov 19, 2010 at 2:27 PM, Andrew Purtell <apurt...@apache.org> wrote:
>
> > The root device on instance-store type instances is small, but
> > there are several additional disk volumes supplied depending on the
> > instance size: two 420 GB volumes for m1.xlarge and four 420 GB
> > volumes for c1.xlarge.
> >
> > Our ec2 scripts mount them as /mnt, /mnt2, /mnt3, etc. and
> > configure the Hadoop DataNode to use them for block storage. You
> > can change this and use this storage for something else.
> >
> > Or, you can always create EBS volumes on demand and attach them to
> > your instance, and use bind mounts ('mount --bind ...') to hang the
> > additional storage off arbitrary points in your filesystem
> > hierarchy.
> >
> > Best regards,
> >
> >    - Andy
> >
> > --- On Fri, 11/19/10, Saptarshi Guha <saptarshi.g...@gmail.com> wrote:
> >
> > > From: Saptarshi Guha <saptarshi.g...@gmail.com>
> > > Subject: Re: Adding packages to the HBase AMI (ami-fe698397 and ami-c86983a1) - no space on device
> > > To: hbase-u...@hadoop.apache.org
> > > Date: Friday, November 19, 2010, 2:18 PM
> > >
> > > I think I found the answer. It appears this AMI was bundled with
> > > 3 GB, but there is no compelling reason to do so. I can recreate
> > > an AMI from the c1.xlarge AMI, bundling it with e.g. 5 GB. That
> > > should cover my needs.
> > > I honestly don't mind 5 minute start-up times - coffee and a
> > > cigarette - or is there something more to it than just longer
> > > start-up times?
> > >
> > > Cheers and thanks
> > >
> > > Joy
> > >
> > > On Fri, Nov 19, 2010 at 2:11 PM, Saptarshi Guha <saptarshi.g...@gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > Both AMIs have HBase 0.89.20100726 installed (the former is
> > > > c1.xlarge and the latter is medium). I'm trying to install some
> > > > extra packages (see [1]).
> > > >
> > > > By the time I've come to install R, I'm almost out of space on
> > > > the root device. I would like to add some packages to the task
> > > > nodes (which are launched from the c1.xlarge instance) and
> > > > install thrift - but I can't even get to that point.
> > > >
> > > > The images start off with 3 GB on the root device and 2.3 GB
> > > > used. By the time it comes to the last yum, there is << 200 MB
> > > > left. R requires 444 MB.
> > > >
> > > > Q1: Are there ways to increase the size of the root device?
> > > > Q2: Any other ideas?
> > > >
> > > > Thanks for all your thoughts.
> > > >
> > > > Cheers
> > > > Joy
> > > >
> > > > [1]
> > > >
> > > > yum update
> > > > yum upgrade
> > > >
> > > > yum install -y gcc-c++.x86_64
> > > > yum install -y ant automake libtool flex bison pkgconfig libevent-devel ruby-devel php php-devel
> > > > yum install -y boost-devel libevent-devel libtool
> > > > yum install -y atlas-devel atlas-sse-devel atlas-sse2-devel
> > > > yum install -y R
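P.S. If you go the attach-an-EBS-volume route instead of rebundling, the dance looks roughly like this - a sketch only; the size, zone, volume/instance IDs, and device name are placeholders you would substitute:

    # from a machine with the EC2 API tools configured
    ec2-create-volume --size 10 --availability-zone us-east-1a
    ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

    # then, on the instance, once the device shows up
    mkfs.ext3 /dev/sdf
    mkdir -p /ebs
    mount /dev/sdf /ebs
    # copy and bind mount as above, e.g.:
    mkdir -p /ebs/usr
    cp -a /usr/. /ebs/usr/
    mount --bind /ebs/usr /usr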