You also need to create the table in order to see the relevant debug
information; Hive won't open the connection until it needs it.
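For instance, an HBase-backed table can be created from the Hive shell like
this (a sketch: the table name, column names, and column mapping here are
made up for illustration; the storage-handler class and property names are
the ones from Hive's HBase integration):

```sql
CREATE TABLE hbase_test(key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "hbase_test");
```

Running a statement like this forces Hive to talk to HBase (and hence
ZooKeeper), so the connection-related debug output will show up.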

J-D
On Jan 9, 2011 9:30 PM, "Adarsh Sharma" <adarsh.sha...@orkash.com> wrote:
> Jean-Daniel Cryans wrote:
>> Just figured that running the shell with this command will give all
>> the info you need:
>>
>> bin/hive -hiveconf hive.root.logger=INFO,console
>>
>
>
> Thanks JD, below is the output of this command :
>
> had...@s2-ratw-1:~/project/hive-0.6.0/build/dist$ bin/hive -hiveconf
> hive.root.logger=INFO,console
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> 11/01/10 10:24:47 INFO exec.HiveHistory: Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201101101024_1339616584.txt
> hive> show tables;
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parsing command: show tables
> 11/01/10 10:25:07 INFO parse.ParseDriver: Parse Completed
> 11/01/10 10:25:07 INFO ql.Driver: Semantic Analysis Completed
> 11/01/10 10:25:07 INFO ql.Driver: Returning Hive schema:
> Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string,
> comment:from deserializer)], properties:null)
> 11/01/10 10:25:07 INFO ql.Driver: Starting command: show tables
> 11/01/10 10:25:07 INFO metastore.HiveMetaStore: 0: Opening raw store
> with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 11/01/10 10:25:07 INFO metastore.ObjectStore: ObjectStore, initialize called
> *11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it
> cannot be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot
> be resolved.
> 11/01/10 10:25:08 ERROR DataNucleus.Plugin: Bundle
> "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be
> resolved.*
> 11/01/10 10:25:09 INFO metastore.ObjectStore: Initialized ObjectStore
> 11/01/10 10:25:10 INFO metastore.HiveMetaStore: 0: get_tables:
> db=default pat=.*
> OK
> 11/01/10 10:25:15 INFO ql.Driver: OK
> Time taken: 7.897 seconds
> 11/01/10 10:25:15 INFO CliDriver: Time taken: 7.897 seconds
> hive> exit;
>
> It seems that Hive itself is working, but I am facing issues while
> integrating it with HBase.
>
>
> Best Regards
>
> Adarsh Sharma
>
>
>> J-D
>>
>> On Fri, Jan 7, 2011 at 9:57 AM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
>>
>>> While testing other things yesterday on my local machine, I
>>> encountered the same stack traces. As I said the other day (which
>>> you seem to have discarded while debugging your issue), the problem
>>> is that it's not able to connect to ZooKeeper.
>>>
>>> Following the cue, I added these lines in HBaseStorageHandler.setConf():
>>>
>>> System.out.println(hbaseConf.get("hbase.zookeeper.quorum"));
>>> System.out.println(hbaseConf.get("hbase.zookeeper.property.clientPort"));
>>>
>>> It showed me this when trying to create a table (after recompiling):
>>>
>>> localhost
>>> 21810
>>>
>>> I was testing with 0.89 and the test jar includes a hbase-site.xml
>>> which has the port 21810 instead of the default 2181. I remembered
>>> that it's a known issue that has since been fixed for 0.90.0, so
>>> removing that jar fixed it for me.
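For reference, pinning these two values explicitly in hbase-site.xml (and
making sure no stray copy of the file on the classpath overrides them) looks
like this; "localhost" is an assumption for a single-node setup, and 2181 is
the stock ZooKeeper client port:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```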
>>>
>>> I'm not saying that in your case it's the same fix, but at least by
>>> debugging those configurations you'll know where it's trying to
>>> connect and then you'll be able to get to the bottom of your issue.
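Once you know which host and port Hive is actually using, a quick way to
confirm whether anything is listening there is a plain TCP connect. This is
my own illustration, not part of Hive or HBase; the "localhost"/2181 values
in main are assumptions you should replace with whatever the debug output
above printed:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Connection refused / timed out: nothing is listening there.
            return false;
        }
    }

    public static void main(String[] args) {
        // Replace with the quorum host and clientPort that Hive printed.
        String quorum = "localhost";
        int clientPort = 2181;
        System.out.println(quorum + ":" + clientPort + " reachable? "
                + canConnect(quorum, clientPort, 2000));
    }
}
```

If this prints false for the host/port Hive is using, the "Connection
refused" stack traces below are expected, and the fix is on the
configuration side rather than in Hive itself.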
>>>
>>> J-D
>>>
>>> On Fri, Jan 7, 2011 at 4:54 AM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:
>>>
>>>> John Sichi wrote:
>>>>
>>>> On Jan 6, 2011, at 9:53 PM, Adarsh Sharma wrote:
>>>>
>>>>
>>>> I want to know why it occurs in hive.log
>>>>
>>>> 2011-01-05 15:19:36,783 ERROR DataNucleus.Plugin
>>>> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
>>>> "org.eclipse.core.resources" but it cannot be resolved.
>>>>
>>>>
>>>>
>>>> That is a bogus error; it always shows up, so you can ignore it.
>>>>
>>>>
>>>>
>>>> And I used this new Hive build, but I am sorry to say the error remains
>>>> the same.
>>>>
>>>>
>>>> Then I don't know... probably still some remaining configuration error.
>>>> This guy seems to have gotten it working:
>>>>
>>>> http://mevivs.wordpress.com/2010/11/24/hivehbase-integration/
>>>>
>>>>
>>>> Thanks a lot John, I know this link, as I started working by following it
>>>> in the past.
>>>>
>>>> But I think I have to investigate the exception/warning below to solve
>>>> this issue.
>>>>
>>>> 2011-01-05 15:20:12,185 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.selectionkeyi...@561279c8
>>>> java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1001)) - Ignoring exception during shutdown input
>>>> java.nio.channels.ClosedChannelException
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
>>>> at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,188 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:cleanup(1006)) - Ignoring exception during shutdown output
>>>> java.nio.channels.ClosedChannelException
>>>> at
>>>> sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
>>>> at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
>>>> at
>>>> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
>>>> 2011-01-05 15:20:12,621 WARN zookeeper.ClientCnxn
>>>> (ClientCnxn.java:run(967)) - Exception closing session 0x0 to
>>>> sun.nio.ch.selectionkeyi...@799dbc3b
>>>>
>>>> Please help me, as I am not able to solve this problem.
>>>>
>>>> Also, I want to add that my Hadoop cluster has 9 nodes, of which 8 act as
>>>> DataNodes, TaskTrackers and RegionServers.
>>>>
>>>>
>>>>
>>>>
>>>> Best Regards
>>>>
>>>> Adarsh Sharma
>>>>
>>>> JVS
>>>>
>>>>
>>>>
>>>>
>
