As you can guess from the 0.89 dependency, there has been a lot of water under 
the bridge since this integration was developed.  If someone would like to take 
on bringing it up to date, that would be great.

Note that --auxpath is there to make the jars available in the map/reduce 
task VMs (we don't put everything from lib there automatically).
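For reference, the invocation pattern quoted later in this thread can be assembled like so (the jar file names and versions below are illustrative; substitute whatever is actually in your lib directory):

```shell
# Assemble the comma-separated --auxpath value from the three jars the
# HBase storage handler needs. Jar names/versions here are illustrative.
HIVE_HOME=${HIVE_HOME:-/opt/hive}
AUXPATH="$HIVE_HOME/lib/hive-hbase-handler-0.9.0.jar"
AUXPATH="$AUXPATH,$HIVE_HOME/lib/hbase-0.90.4.jar"
AUXPATH="$AUXPATH,$HIVE_HOME/lib/zookeeper-3.3.1.jar"
echo "$AUXPATH"

# Then launch Hive so those jars are shipped to the map/reduce task VMs:
# $HIVE_HOME/bin/hive --auxpath "$AUXPATH" -hiveconf hbase.master=localhost:60000
```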

JVS

On Dec 2, 2011, at 10:39 AM, <jcfol...@pureperfect.com> wrote:

> 
> I am having the same issue. Hive won't connect to HBase and throws
> org.apache.hadoop.hbase.MasterNotRunningException despite the fact that
> the master is up and running. It may only work if HBase is in
> distributed or pseudo-distributed mode, since HBase doesn't put
> files into HDFS otherwise.
> 
> 
> It certainly doesn't work for me running in standalone mode. I've tried
> about thirty different combinations of hive/hbase and can't get it going
> on any of them, so I switched to trying to get pseudo-distributed mode
> working in HBase, but haven't been able to find the magic combination of
> versions that will allow HBase to do anything in HDFS other than throw
> EOFExceptions.
> 
> In any case, according to the Hive documentation (see below), it doesn't
> work with any version of HBase other than 0.89, but there are three 0.89
> versions of HBase at archive.apache.org, and Hive's lib directories
> contain 0.89-SNAPSHOT.
> 
> 
> FYI: There's an official Hive/HBase integration page on the Confluence
> wiki, but following its instructions doesn't work either:
> 
> https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration
> 
> 
> It contains the instruction:
> 
> 
> "The handler requires Hadoop 0.20 or higher, and has only been tested
> with dependency versions hadoop-0.20.x, hbase-0.89.0 and
> zookeeper-3.3.1. If you are not using hbase-0.89.0, you will need to
> rebuild the handler with the HBase jar matching your version, and change
> the --auxpath above accordingly. Failure to use matching versions will
> lead to misleading connection failures such as 
> 
> MasterNotRunningException 
> 
> since the HBase RPC protocol changes often."
> 
> 
> But that really doesn't make sense. Hive uses Ivy, so you can't just
> swap in replacement jar files, and the instruction to rebuild with a
> different HBase jar contradicts the 0.89 restriction in the previous
> sentence, since the only official releases of HBase are 0.90 and
> later:
> 
> https://repository.apache.org/content/repositories/releases/org/apache/hbase/hbase/
> 
> I'm trying to get Hive to build against HBase 0.90, but Ivy wants to
> pull 0.90 from the snapshots repository, so fetching the jar throws
> 404s.
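One hedged possibility for the build problem above, assuming (property names unverified; check your checkout) that Hive's trunk build of that era read its dependency versions from ivy/libraries.properties: point the HBase version at a released artifact rather than a snapshot, then rebuild the handler with ant so Ivy resolves the released jar.

```properties
# ivy/libraries.properties -- property names assumed, not confirmed
# against this exact Hive revision; a released 0.90.x artifact must
# exist in the Apache releases repository.
hbase.version=0.90.4
zookeeper.version=3.3.1
```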
> 
> 
> As a side note: the --auxpath seems unnecessary. The jars are already in
> the lib directory so it seems like they ought to be on the classpath
> already. 
> 
> 
> 
> -------- Original Message --------
> Subject: Re: Hive-Hbase integration require Hbase in Pseudo
> distributed??
> From: Mohammad Tariq <donta...@gmail.com>
> Date: Fri, December 02, 2011 7:28 am
> To: user@hive.apache.org
> 
> Anyone there, could you please confirm whether hive-hbase integration
> works in standalone mode? Will it work, or should I go for
> pseudo-distributed mode?
> 
> Regards,
>    Mohammad Tariq
> 
> 
> 
> On Fri, Dec 2, 2011 at 5:54 PM, Alok Kumar <alok...@gmail.com> wrote:
>> hi,
>> 
>> Yes, I've used:
>> 
>> $HIVE_HOME/bin/hive --auxpath
>> $HIVE_HOME/lib/hive-hbase-handler-*.jar,$HIVE_HOME/lib/hbase-*.jar,$HIVE_HOME/lib/zookeeper-*.jar
>> -hiveconf hbase.master=localhost:60000
>> ------------------------------------------------
>> 
>> Hadoop version : hadoop-0.20.203.0
>> Hbase version : hbase-0.90.4
>> Hive version : hive-0.9.0 (built from trunk)
>> on
>> Ubuntu 11.10
>> -----------------------------------------------
>> 
>> Regards,
>> 
>> Alok
>> 
>> 
>> On Fri, Dec 2, 2011 at 5:49 PM, Ankit Jain <ankitjainc...@gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> Have you used the following command to start the Hive shell?
>>> 
>>> $HIVE_HOME/bin/hive --auxpath
>>> $HIVE_HOME/lib/hive-hbase-handler-*.jar,$HIVE_HOME/lib/hbase-*.jar,$HIVE_HOME/lib/zookeeper-*.jar
>>> -hiveconf hbase.master=127.0.0.1:60000
>>> 
>>> 
>>> If not, then use the above command.
>>> Regards,
>>> Ankit
>>> 
>>> 
>>> On Fri, Dec 2, 2011 at 5:34 PM, Alok Kumar <alok...@gmail.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> // Hadoop core-site.xml
>>>> <configuration>
>>>>    <property>
>>>>        <name>fs.default.name</name>
>>>>        <value>hdfs://localhost:9000</value>
>>>>    </property>
>>>>    <property>
>>>>        <name>hadoop.tmp.dir</name>
>>>>        <value>/home/alokkumar/hadoop/tmp</value>
>>>>    </property>
>>>> </configuration>
>>>> 
>>>> // hbase-site.xml
>>>> <configuration>
>>>>    <property>
>>>>            <name>hbase.rootdir</name>
>>>>       <!-- <value>hdfs://localhost:9000/hbase</value> -->
>>>>        <value>file:///home/alokkumar/hbase/</value>
>>>>    </property>
>>>> </configuration>
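Judging from the commented-out hbase.rootdir line above, a pseudo-distributed setup was already being attempted. For comparison, the usual pseudo-distributed sketch (a sketch only, not verified against this particular Hadoop/HBase version mix) points rootdir at HDFS and marks the cluster distributed; the port must match fs.default.name:

```xml
<!-- hbase-site.xml: pseudo-distributed sketch (unverified assumption
     for this version mix; hdfs port must match fs.default.name) -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
```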
>>>> 
>>>> With this configuration, HBase and Hive are running fine independently:
>>>> hbase(main):003:0> status
>>>> 1 servers, 0 dead, 4.0000 average load
>>>> 
>>>> But I'm still getting this:
>>>> hive> CREATE TABLE hbase_table_1(key int, value string)
>>>>> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>>>> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val");
>>>> FAILED: Error in metadata:
>>>> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException:
>>>> localhost:45966
>>>> 
>>>>    at
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:394)
>>>>    at
>>>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:83)
>>>> 
>>>> Regards,
>>>> Alok
>>>> 
>>>> 
>>>> On Fri, Dec 2, 2011 at 5:14 PM, Ankit Jain <ankitjainc...@gmail.com>
>>>> wrote:
>>>>> 
>>>>> HI,
>>>>> 
>>>>> Can you post the Hbase-site.xml and hadoop core-site.xml properties
>>>>> here.
>>>>> 
>>>>> Regards,
>>>>> Ankit
>>>>> 
>>>>> 
>>>>> On Fri, Dec 2, 2011 at 3:30 PM, Alok Kumar <alok...@gmail.com> wrote:
>>>>>> 
>>>>>> Hi Ankit,
>>>>>> 
>>>>>> You were right, my HBase shell/HMaster was not running even though
>>>>>> it was showing up in jps :)
>>>>>> 
>>>>>> Now I've started HMaster and the HBase shell is up, but I'm getting
>>>>>> the error below. Do I need ZooKeeper configured in standalone mode?
>>>>>> 
>>>>>> hive> CREATE TABLE hbase_table_1(key int, value string)
>>>>>>> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>>>>>>> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
>>>>>>> TBLPROPERTIES ("hbase.table.name" = "xyz");
>>>>>> 
>>>>>> FAILED: Error in metadata:
>>>>>> MetaException(message:org.apache.hadoop.hbase.ZooKeeperConnectionException:
>>>>>> org.apache.hadoop.hbase.ZooKeeperConnectionException:
>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>>>>> KeeperErrorCode = ConnectionLoss for /hbase
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:985)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:301)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:292)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:155)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:79)
>>>>>> 
>>>>>>    at
>>>>>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:74)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:158)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:375)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3473)
>>>>>>    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225)
>>>>>>    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>>>>    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
>>>>>>    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
>>>>>>    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
>>>>>>    at
>>>>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>>>>>>    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>>>>>    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
>>>>>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>    at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>>    at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>>> Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException:
>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>>>>> KeeperErrorCode = ConnectionLoss for /hbase
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:983)
>>>>>>    ... 25 more
>>>>>> Caused by:
>>>>>> org.apache.zookeeper.KeeperException$ConnectionLossException:
>>>>>> KeeperErrorCode = ConnectionLoss for /hbase
>>>>>>    at
>>>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
>>>>>>    at
>>>>>> org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
>>>>>>    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:637)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:886)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:133)
>>>>>>    ... 26 more
>>>>>> 
>>>>>> )
>>>>>> FAILED: Execution Error, return code 1 from
>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
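On the ZooKeeper question above: in standalone mode HBase starts an embedded ZooKeeper itself, listening on port 2181 by default, so nothing extra normally needs to be installed; but the Hive client still has to find it. A hedged sketch of the client-side properties (values assumed for a default localhost setup), passed via -hiveconf or a client-side hbase-site.xml on Hive's classpath:

```xml
<!-- Client-side settings so Hive's HBase handler can locate the
     embedded ZooKeeper; localhost/2181 assume a default standalone
     HBase on the same machine. -->
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>
```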
>>>>>> 
>>>>>> 
>>>>>> On Fri, Dec 2, 2011 at 2:03 PM, Ankit Jain <ankitjainc...@gmail.com>
>>>>>> wrote:
>>>>>>> 
>>>>>>> I think your HBase master is not running.
>>>>>>> 
>>>>>>> Open the HBase shell and run the command:
>>>>>>> hbase> status
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Fri, Dec 2, 2011 at 2:00 PM, Alok Kumar <alok...@gmail.com> wrote:
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> Does Hive-Hbase integration require Hbase running in
>>>>>>>> pseudo-distributed mode?
>>>>>>>> 
>>>>>>>> I've built my Hadoop following this article:
>>>>>>>> http://www.michael-noll.com/blog/2011/04/14/building-an-hadoop-0-20-x-version-for-hbase-0-90-2/
>>>>>>>> and have already replaced the HBase jar files accordingly.
>>>>>>>> 
>>>>>>>> I'm getting this error..
>>>>>>>> 
>>>>>>>> hive> !jps;
>>>>>>>> 5469 Jps
>>>>>>>> 4128 JobTracker
>>>>>>>> 3371 Main
>>>>>>>> 4346 TaskTracker
>>>>>>>> 5330 RunJar
>>>>>>>> 4059 SecondaryNameNode
>>>>>>>> 8350 NameNode
>>>>>>>> 3841 DataNode
>>>>>>>> 3244 HMaster
>>>>>>>> hive> create table hbase_table_1(key int, value string) stored by
>>>>>>>> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' with
>>>>>>>> serdeproperties("hbase.columns.mapping" = ":key,cf1:val") tblproperties
>>>>>>>> ("hbase.table.name" = "xyz");
>>>>>>>> 
>>>>>>>> FAILED: Error in metadata:
>>>>>>>> MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException:
>>>>>>>> localhost:56848
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:394)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:83)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:74)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:158)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:375)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3473)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225)
>>>>>>>>    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>>>>>>    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
>>>>>>>>    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
>>>>>>>>    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
>>>>>>>>    at
>>>>>>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>>>>>>>>    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>>>>>>>    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
>>>>>>>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>>>    at
>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>>>>    at
>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>>>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>>>>    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>>>>> )
>>>>>>>> FAILED: Execution Error, return code 1 from
>>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>>>>>> 
>>>>>>>> Where should I look to fix
>>>>>>>> "message:org.apache.hadoop.hbase.MasterNotRunningException: 
>>>>>>>> localhost:56848"?
>>>>>>>> 
>>>>>>>> --
>>>>>>>> regards
>>>>>>>> Alok Kumar
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> --
>>>>>> Alok
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 
>> 
>> 
>> --
>> Alok Kumar
>> 
>> 
> 
