Andrei Savu wrote:
There is an open issue for this, WHIRR-207 [1], and unfortunately no patch is available yet.

Thanks, I did not see that.

I have tried again... this time, HBase has been downloaded successfully.

But the HBase master failed to start. From /var/log/hbase/logs/hbase-hadoop-master-ip-10-48-15-239.log:

2011-02-01 21:08:06,352 ERROR org.apache.hadoop.hbase.master.HMaster: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
Call to ec2-46-137-20-253.eu-west-1.compute.amazonaws.com/10.48.15.239:8020 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1232)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1338)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1389)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1230)
        ... 2 more

I think Hadoop nn and jt are not running.
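
To double-check, I will probably SSH to that machine and run something like
this (the Hadoop log path is a guess on my side, it may be different on this image):

  jps                                  # NameNode / JobTracker should show up here
  sudo netstat -tlnp | grep 8020       # is anything listening on the namenode port?
  ls /var/log/hadoop*                  # look at the namenode/jobtracker logs for the reason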

Paolo



[1] https://issues.apache.org/jira/browse/WHIRR-207

On Tue, Feb 1, 2011 at 10:36 PM, Paolo Castagna <castagna.li...@googlemail.com> wrote:

    Tom White wrote:

        Paolo,

        You can find debug output on the instances in subdirectories of
        /tmp.


    Hi Tom,
    thanks, good to know.

    This is what I see on the machine which should run nn+jt+hbase-master:

    /tmp/runscript/stderr.log:

    [...]
    + curl --retry 3 --silent --show-error --fail -O http://archive.apache.org/dist/hbase/hbase-0.89.20100924/hbase-0.89.20100924-bin.tar.gz
    curl: (18) transfer closed with 27212334 bytes remaining to read


    /tmp/computeserv/stderr.log:

    [...]

    + mkdir /etc/hbase
    + ln -s /usr/local/hbase-0.89.20100924/conf /etc/hbase/conf
    + cat
    /tmp/runurl.Vr6fC0/runfile: line 108: /usr/local/hbase-0.89.20100924/conf/hbase-site.xml: No such file or directory


    It seems to me it is failing to download the HBase tar.gz, so the HBase
    directory is never unpacked and writing hbase-site.xml then fails. :-/
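
    As a sanity check I will probably try fetching the tarball by hand on the
    instance, something along these lines (same URL the install script uses):

        curl --retry 10 -O http://archive.apache.org/dist/hbase/hbase-0.89.20100924/hbase-0.89.20100924-bin.tar.gz
        tar tzf hbase-0.89.20100924-bin.tar.gz > /dev/null && echo "archive OK"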

    Paolo




        Cheers,
        Tom

        On Tue, Feb 1, 2011 at 8:49 AM, Paolo Castagna
        <castagna.li...@googlemail.com> wrote:

            Lars George wrote:

                Oh darn, spot on, didn't see those. Yeah, CDH support for
                HBase is still pending (but coming for sure)!

            Sorry, I read the comment "uncomment out these lines to run
            CDH". ;-)

            Anyway, I still have problems. This is the latest recipe I
            have tried:

            ------------
            whirr.cluster-name=myhbase
            whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master,3 hadoop-datanode+hadoop-tasktracker+hbase-regionserver
            whirr.provider=ec2
            whirr.identity=${env:AWS_ACCESS_KEY_ID}
            whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
            whirr.hardware-id=m1.large
            whirr.location-id=eu-west-1
            whirr.image-id=eu-west-1/ami-0d9ca979
            whirr.private-key-file=${sys:user.home}/.ssh/whirr
            whirr.public-key-file=${sys:user.home}/.ssh/whirr.pub
            ------------
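
            For the record, I launch it with the Whirr CLI, roughly:

                bin/whirr launch-cluster --config hbase.properties

            (hbase.properties is just the local name I gave to the recipe above.)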

            If I connect with SSH to the machine which should run nn, jt and
            hbase-master, I see only ZooKeeper running.

            I'll try again later or tomorrow.

            Thanks,
            Paolo

                On Tue, Feb 1, 2011 at 5:05 PM, Tom White
                <tom.e.wh...@gmail.com> wrote:

                    Try removing the CDH lines. I don't think that this
                    combination works yet.

                    Tom

                    On Feb 1, 2011 7:53 AM, "Paolo Castagna"
                    <castagna.li...@googlemail.com> wrote:

                        Andrei Savu wrote:

                            Could you share the recipe? I want to try to
                            replicate the issue on my
                            computer.

                        ------------------------
                        whirr.cluster-name=myhbase
                        whirr.instance-templates=1 zk+nn+jt+hbase-master,3 dn+tt+hbase-regionserver
                        whirr.hadoop-install-runurl=cloudera/cdh/install
                        whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
                        whirr.provider=ec2

                        whirr.identity=XXXXXXXXXXXXXXXXXXXX
                        whirr.credential=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

                        # See also: http://aws.amazon.com/ec2/instance-types/
                        # t1.micro, m1.small, m1.large, m1.xlarge, m2.xlarge, m2.2xlarge, m2.4xlarge, c1.medium, c1.xlarge, cc1.4xlarge
                        # whirr.hardware-id=m1.large
                        # Ubuntu 10.04 LTS Lucid. See also: http://alestic.com/
                        # whirr.image-id=eu-west-1/ami-0d9ca979
                        # If you choose a different location, make sure whirr.image-id is updated too
                        # whirr.location-id=eu-west-1

                        #whirr.hardware-id=m1.large
                        #whirr.location-id=us-east-1
                        #whirr.image-id=us-east-1/ami-f8f40591

                        whirr.hardware-id=m1.xlarge
                        whirr.image-id=us-east-1/ami-da0cf8b3
                        whirr.location-id=us-east-1

                        whirr.private-key-file=${sys:user.home}/.ssh/whirr
                        whirr.public-key-file=${sys:user.home}/.ssh/whirr.pub
                        ------------------------

                        I made several attempts (you can see them commented
                        out). In the last one I was using m1.xlarge with
                        us-east-1/ami-da0cf8b3.

                        Paolo

                            On Tue, Feb 1, 2011 at 5:32 PM, Paolo Castagna
                            <castagna.li...@googlemail.com> wrote:

                                Hi,
                                I am trying to run a small HBase cluster using
                                Whirr 0.3.0-incubating and, since that does not
                                start the HBase master or does not install
                                Hadoop correctly, also Whirr from trunk.

                                When I run it from trunk with a recipe very
                                similar to the one provided in the recipes
                                folder, I see these errors in whirr.log:


                                2011-02-01 15:11:58,484 DEBUG [jclouds.compute] (user thread 9) << stderr from runscript as ubuntu@50.16.158.231
                                + [[ hbase != \h\b\a\s\e ]]
                                + HBASE_HOME=/usr/local/hbase-0.89.20100924
                                + HBASE_CONF_DIR=/usr/local/hbase-0.89.20100924/conf
                                + update_repo
                                + which dpkg
                                + sudo apt-get update
                                + install_hbase
                                + id hadoop
                                + useradd hadoop
                                useradd: group hadoop exists - if you want to add this user to that group, use -g

                                [...]

                                2011-02-01 15:12:26,370 DEBUG [jclouds.compute] (user thread 2) << stderr from computeserv as ubuntu@50.16.158.231
                                + HBASE_VERSION=hbase-0.89.20100924
                                + [[ hbase != \h\b\a\s\e ]]
                                + HBASE_HOME=/usr/local/hbase-0.89.20100924
                                + HBASE_CONF_DIR=/usr/local/hbase-0.89.20100924/conf
                                + configure_hbase
                                + case $CLOUD_PROVIDER in
                                + MOUNT=/mnt
                                + mkdir -p /mnt/hbase
                                + chown hadoop:hadoop /mnt/hbase
                                chown: invalid user: `hadoop:hadoop'

                                Is this a known problem?
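
                                It looks like "useradd hadoop" bails out because
                                the group already exists, so the hadoop user is
                                never created and the chown below then fails. I
                                would guess the install script needs something
                                like this instead (just a sketch on my side):

                                    id hadoop || useradd -g hadoop hadoop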

                                Paolo








--
Andrei Savu -- andreisavu.ro

