You're using a very old version of HBase that is no longer supported and can be a pain to run today. Try using the latest 0.94 release at the very least.
The problem you face, in any case, is that 0.90.x ships with a bundled version of Apache Hadoop older than the 1.x you've set up, making the two incompatible; the errors you see are expected unless you follow the steps documented at http://hbase.apache.org/book.html#trouble.versions and http://hbase.apache.org/book.html#hadoop. Essentially, replace the hadoop-core jar in $HBASE_HOME/lib/ with the one from your actual $HADOOP_PREFIX (or $HADOOP_HOME).

On Wed, Jan 1, 2014 at 11:00 PM, Law-Firms-In.com <[email protected]> wrote:
> I have trouble getting HBase 0.90.6 to work together with Hadoop 1.2.1.
> Hadoop is 100% working (tested with the wordcount MapReduce job), and
> HBase had been working in standalone mode for several months.
>
> But due to performance problems I am now switching HBase to
> pseudo-distributed mode, and I am stuck. I have followed almost all the
> tutorials I could find for my problem, but still no luck (example tutorial:
> http://cloudfront.blogspot.in/2012/06/how-to-configure-habse-in-pseudo.html#.UsROKKHNRkr).
>
> My mapred-site.xml file:
>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://localhost:9000/hbase</value>
> </property>
>
> <property>
>   <name>hbase.cluster.distributed</name>
>   <value>true</value>
> </property>
>
> <property>
>   <name>hbase.zookeeper.quorum</name>
>   <value>localhost</value>
> </property>
>
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
>
> <property>
>   <name>hbase.zookeeper.property.clientPort</name>
>   <value>2180</value>
>   <description>was 2181, but since my zoo.cfg file has "clientPort=2180"
>   ("the port at which the clients will connect") I adjusted this; neither
>   version brings my HMaster alive</description>
> </property>
>
> <property>
>   <name>hbase.zookeeper.property.dataDir</name>
>   <value>/var/lib/zookeeper</value>
> </property>
>
> My hbase-env.xml file:
>
> export JAVA_HOME=/usr/java/jdk1.7.0_40
> export HBASE_REGIONSERVERS=/srv/vhosts/search.sh/htdocs/nutch/hbase-0.90.6/conf$
> export HBASE_MANAGES_ZK=true
> export HBASE_OPTS="-ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
>
> My Hadoop core-site.xml:
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://localhost:9000</value>
> </property>
>
> Master Log:
>
> 2014-01-01 18:16:06,093 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=localhost:60000
> 2014-01-01 18:16:06,453 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
> java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
>     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
>     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:344)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:281)
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readInt(DataInputStream.java:392)
>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> 2014-01-01 18:16:06,458 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
> 2014-01-01 18:16:06,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
> 2014-01-01 18:16:06,459 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2014-01-01 18:16:06,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
> 2014-01-01 18:16:06,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
> 2014-01-01 18:16:06,459 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
> 2014-01-01 18:16:06,458 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
> 2014-01-01 18:16:06,619 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1434ece13730000 closed
> 2014-01-01 18:16:06,619 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
> 2014-01-01 18:16:06,619 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down

--
Harsh J
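P.S. The jar swap suggested above can be sketched as a shell snippet. The sandbox directories and jar file names below are placeholders for illustration (the exact bundled hadoop-core jar name varies by HBase release); on a real box, skip the sandbox setup and point HBASE_HOME and HADOOP_HOME at your actual installs before running the rm/cp steps.

```shell
set -e

# Placeholder sandbox standing in for real installs; replace with
# your actual $HBASE_HOME and $HADOOP_HOME on a real machine.
SANDBOX=$(mktemp -d)
HBASE_HOME="$SANDBOX/hbase-0.90.6"
HADOOP_HOME="$SANDBOX/hadoop-1.2.1"

# Fake jars for the demo: HBase 0.90.x bundles an older hadoop-core
# than the Hadoop 1.x cluster it needs to talk to.
mkdir -p "$HBASE_HOME/lib" "$HADOOP_HOME"
touch "$HBASE_HOME/lib/hadoop-core-0.20-append.jar"   # placeholder name
touch "$HADOOP_HOME/hadoop-core-1.2.1.jar"

# Step 1: remove the bundled hadoop-core jar from HBase's lib/.
rm -f "$HBASE_HOME"/lib/hadoop-core-*.jar

# Step 2: copy in the hadoop-core jar matching the running cluster.
cp "$HADOOP_HOME"/hadoop-core-*.jar "$HBASE_HOME/lib/"

# Show what HBase will now load (restart HBase afterwards).
ls "$HBASE_HOME/lib"
```

After the swap, restart HBase so the Master and RegionServers pick up the matching RPC version; the EOFException on the NameNode call should go away.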
