RE: Drop table in hive throws metastore exception.
Hi Anandha,

Because you set datanucleus.autoCreateTables=false, Hive will not create metastore tables automatically. When Hive drops a table, its index tables are also dropped, which uses the INDEX_PARAMS table. I think you can try this:
1. Set datanucleus.autoCreateTables=true.
2. Create an index on one table, so Hive creates the INDEX_PARAMS table.
3. Try to drop the table again.

Regards,
Ransom

From: Anandha L Ranganathan [mailto:analog.s...@gmail.com]
Sent: Tuesday, December 11, 2012 11:29 AM
To: user@hive.apache.org
Subject: Drop table in hive throws metastore exception.

I am trying to drop a table in Hive and it throws a metastore exception.

2012-12-11 01:19:59,225 ERROR exec.DDLTask (SessionState.java:printError(365)) - FAILED: Error in metadata: javax.jdo.JDODataStoreException: Required table missing : INDEX_PARAMS in Catalog Schema. DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable datanucleus.autoCreateTables
NestedThrowables: org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : INDEX_PARAMS in Catalog Schema. DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable datanucleus.autoCreateTables
org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Required table missing : INDEX_PARAMS in Catalog Schema. DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable datanucleus.autoCreateTables
NestedThrowables: org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : INDEX_PARAMS in Catalog Schema. DataNucleus requires this table to perform its persistence operations.
Either your MetaData is incorrect, or you need to enable datanucleus.autoCreateTables
 at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:755)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:712)
 at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:2860)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:238)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:209)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:286)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:516)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
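[Editor's note] Step 1 of the suggested fix amounts to adding (or flipping) the following property in hive-site.xml; this is a minimal sketch of the relevant fragment only:

```xml
<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>
```

Once the missing INDEX_PARAMS table exists in the metastore database, the flag can be set back to false if automatic schema changes are not wanted in your deployment.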
RE: How to set an empty value to hive.querylog.location to disable the creation of hive history file
It’s not supported now. I think you can raise it in JIRA.

Regards,
Ransom

From: Bing Li [mailto:sarah.lib...@gmail.com]
Sent: Thursday, December 06, 2012 5:06 PM
To: user@hive.apache.org
Subject: Re: How to set an empty value to hive.querylog.location to disable the creation of hive history file

It will exit with an error like "FAILED: Failed to open Query Log: /dev/null/hive_job_log_xxx.txt" and point out that the path is not a directory.

2012/12/6 Jithendranath Joijoide <pixelma...@gmail.com>
How about setting it to /dev/null? Not sure if that would help in your case. Just a hack.
Regards.

On Thu, Dec 6, 2012 at 2:14 PM, Bing Li <sarah.lib...@gmail.com> wrote:
Hi, all

According to https://cwiki.apache.org/Hive/adminmanual-configuration.html, if I set hive.querylog.location to an empty string, it won't create a structured log. I filled in hive-site.xml in HIVE_HOME/conf and added the following setting:

<property>
  <name>hive.querylog.location</name>
  <value></value>
</property>

BUT it didn't work: when launching HIVE_HOME/bin/hive, it still created a history file in /tmp/<user.name>, which is the default directory for this property.

Do you know how to set an EMPTY value in hive-site.xml?

Thanks,
- Bing
RE: Compile hive on hadoop 1.0.4
Hi Barak,

You can use -Dhadoop.version=1.0.4. For example:
ant -Dhadoop.version=2.0.1 very-clean package tar test -Dinclude.postgres=true -Doverwrite=true -Dtest.silent=false -logfile ant.log

Regards,
Ransom

From: Barak Yaish [mailto:barak.ya...@gmail.com]
Sent: Thursday, November 29, 2012 6:53 AM
To: user@hive.apache.org
Subject: Compile hive on hadoop 1.0.4

Hi,

Is it possible to run hive against hadoop 1.0.4? I checked out the trunk from svn, and running "ant clean package" fails; it looks like it's looking for the 0.20.0 libraries. Is there some configuration I can modify?

Thanks.
RE: Does Hive support TTL (time to live) now?
Hi Nitin,

TTL for data stored in Hive. Can Hive automatically delete data when it is out of date?

Regards,
Ransom

From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, November 29, 2012 7:53 PM
To: user@hive.apache.org
Cc: Zhouxunmiao; Zhangjian (Jack); Zhaojun (Terry)
Subject: Re: Does Hive support TTL (time to live) now?

TTL for what? For hive queries, hive over thrift, or anything else you have in mind? Query execution runs by default for as long as the map-reduce job runs.

On Thu, Nov 29, 2012 at 5:09 PM, Hezhiqiang (Ransom) <ransom.hezhiqi...@huawei.com> wrote:
Hi all,

Does Hive support TTL (time to live) now? I searched in JIRA but couldn't find it. Does Hive support it?

Regards,
Ransom

--
Nitin Pawar
RE: HBASE and HIVE Integration
Hi Vijay,

You need to add zookeeper.jar to hive.aux.jars.path in hive-site.xml:

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u4.jar,file:///usr/lib/hive/lib/hbase-0.92.jar,file:///usr/lib/hive/lib/zookeeper-3.4.3.jar,file:///usr/lib/hive/lib/hive-contrib-0.7.1-cdh3u4.jar</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>

Regards,
Ransom

From: vijay shinde [mailto:vijaysanj...@gmail.com]
Sent: Thursday, July 26, 2012 8:59 AM
To: user@hive.apache.org; Bejoy Ks
Subject: Re: HBASE and HIVE Integration

Hi Bejoy,

I made some changes as per your suggestion. Here is the error from http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201207251858_0004

Job Error: java.lang.ClassNotFoundException: org.apache.zookeeper.KeeperException
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
 at sun.misc.Launcher$

I went ahead and updated the hadoop-env.sh file and set the classpath for hbase and zookeeper as follows:

# Extra Java CLASSPATH elements. Optional.
export HADOOP_CLASSPATH=/usr/lib/hive/lib/hbase-0.92.jar:/usr/lib/hive/lib/zookeeper-3.4.3.jar:$HADOOP_CLASSPATH

Here is a snippet of the hive-site.xml file:

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u4.jar,file:///usr/lib/hive/lib/hbase-0.92.jar,file:///usr/lib/hive/lib/zookeeper-3.4.3.jar,file:///usr/lib/hive/lib/hive-contrib-0.7.1-cdh3u4.jar</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>

Error message while executing the hive query:

[root@localhost hive]# ./bin/hive
Hive history file=/tmp/root/hive_job_log_root_201207252044_1993919630.txt
hive> INSERT OVERWRITE TABLE hive_hbasetable_k SELECT * FROM pokes where foo=98;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201207251858_0004, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201207251858_0004
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201207251858_0004
2012-07-25 20:46:38,207 Stage-0 map = 0%, reduce = 0%
2012-07-25 20:47:35,920 Stage-0 map = 100%, reduce = 100%
Ended Job = job_201207251858_0004 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

I am lost, need help badly!

Vijay

On Wed, Jul 25, 2012 at 9:47 AM, Bejoy Ks <bejoy...@yahoo.com> wrote:
Hi Vijay,

You have pointed at the hbase master directly (which is fine for a single-node hbase installation), but can you still try providing the zookeeper quorum instead? If that doesn't work as well, please post the error log from the mapreduce tasks: go to the jobtracker page and drill down on the corresponding job to get the failed tasks. From each failed task you can get the error logs.
http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201207250246_0005

Regards,
Bejoy KS

From: vijay shinde <vijaysanj...@gmail.com>
To: user@hive.apache.org; bejoy...@yahoo.com
Sent: Wednesday, July 25, 2012 6:58 PM
Subject: Re: HBASE and HIVE Integration

Hi Bejoy,

Thanks for the quick reply. Here are some additional details.

Cloudera Version - CDH3U4

hive-site.xml:
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u2.jar,file:///usr/lib/hive/lib/hbase-0.90.4-cdh3u2.jar,file:///usr/lib/hive/lib/zookeeper-3.3.1.jar,file:///usr/lib/hive/lib/hive-contrib-0.7.1-cdh3u2.jar</value>
</property>

Execution Log:
1. Start zookeeper:
[root@localhost zookeeper]# ./bin/zkServer.sh start
2. Start hbase.
3. Start hive (I am setting hive jars in hive-site.xml):
./bin/hive -hiveconf hbase.master=127.0.1.1:60010
4. Create a new HBase table which is to be managed by Hive:
CREATE TABLE hive_hbasetable_k (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "hivehbasek");
5. Create a logical table pokes in Hive:
CREATE TABLE pokes (foo INT, bar STRING);
6. Hive error while inserting the data from the Hive pokes table into the HBase table:
hive> INSERT OVERWRITE TABLE hive_hbasetable_k SELECT * FROM pokes WHERE foo=98;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201207250246_0005, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201207250246_0005
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -Dmapred.job.tracker=0.0.0.0:8021 -kill
RE: Thrift Server
Hi Van,

Hive doesn’t support multiple connections now; it's impossible for HiveServer to support concurrent connections using the current Thrift API. You can see:
https://issues.apache.org/jira/browse/HIVE-2935
https://cwiki.apache.org/confluence/download/attachments/27362054/HiveServer2HadoopSummit2012BoF.pdf?version=1&modificationDate=1339790767000

From: VanHuy Pham [mailto:huy.pham...@gmail.com]
Sent: Saturday, June 30, 2012 4:33 AM
To: user@hive.apache.org
Subject: Thrift Server

Hi hive folks,

Does the hive thrift server support multiple requests from clients at the same time? It looks like the server serves the requests sequentially, meaning it processes each request one by one. Am I wrong here? I made two clients, which make two requests (select data) to two different in hive; judging by the terminal screen of the hive server, it processes one request, finishes it, and then processes the other.

Van
RE: Thrift Server
Hi Van,

In my test, JDBC supported about 25 connections at the same time without delay. With more connections, they queue up and a timeout exception will happen.

Regards,
Ransom

-----Original Message-----
From: VanHuy Pham [mailto:huy.pham...@gmail.com]
Sent: Saturday, June 30, 2012 8:59 AM
To: user@hive.apache.org
Subject: Re: Thrift Server

Thanks for the response. I see. Would JDBC then be a better option for concurrent connections? I am not aware of the implementation of hive-JDBC, so I wonder if it supports multiple connections. Any idea?

On 6/29/12, Hezhiqiang (Ransom) <ransom.hezhiqi...@huawei.com> wrote:
Hi Van,

Hive doesn’t support multiple connections now; it's impossible for HiveServer to support concurrent connections using the current Thrift API. You can see:
https://issues.apache.org/jira/browse/HIVE-2935
https://cwiki.apache.org/confluence/download/attachments/27362054/HiveServer2HadoopSummit2012BoF.pdf?version=1&modificationDate=1339790767000

From: VanHuy Pham [mailto:huy.pham...@gmail.com]
Sent: Saturday, June 30, 2012 4:33 AM
To: user@hive.apache.org
Subject: Thrift Server

Hi hive folks,

Does the hive thrift server support multiple requests from clients at the same time? It looks like the server serves the requests sequentially, meaning it processes each request one by one. Am I wrong here? I made two clients, which make two requests (select data) to two different in hive; judging by the terminal screen of the hive server, it processes one request, finishes it, and then processes the other.

Van
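[Editor's note] For anyone trying the JDBC route discussed above, a minimal HiveServer1 client looks roughly like this. The host, port, table name, and query are illustrative, and the Hive JDBC and Thrift jars must be on the classpath for it to run:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer1 driver (hive-jdbc jar on the classpath).
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        // Default HiveServer port is 10000; user/password are ignored by HiveServer1.
        Connection con = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM pokes LIMIT 10");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}
```

Each connection like this occupies a server-side worker, which is why the connection count observed in the test above is limited.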
RE: Re: start hive cli error
Is it your linux console problem? Did you change the SecureCRT or PuTTY charset to "GB2312"?

Best regards,
Ransom.

From: dianbo.zhu [mailto:dianbo@gmail.com]
Sent: Tuesday, May 22, 2012 6:48 PM
To: user
Subject: Re: Re: start hive cli error

Hi Nitin,

I reinstalled and did not modify anything, but it still does not work. It worked well when I first ran it months ago. The output of the locale command is below:

LANG=zh_CN
LC_CTYPE=zh_CN
LC_NUMERIC=zh_CN
LC_TIME=zh_CN
LC_COLLATE=zh_CN
LC_MONETARY=zh_CN
LC_MESSAGES=zh_CN
LC_PAPER=zh_CN
LC_NAME=zh_CN
LC_ADDRESS=zh_CN
LC_TELEPHONE=zh_CN
LC_MEASUREMENT=zh_CN
LC_IDENTIFICATION=zh_CN
LC_ALL=

Thanks very much.
dianbo.zhu

From: Nitin Pawar <nitinpawar...@gmail.com>
Date: 2012-05-22 18:42
To: user@hive.apache.org
Subject: Re: start hive cli error

The error is due to the default encoding. Hive supports UTF-8 based encoding, but somehow your hive setup is picking up GB2312. Can you provide the output of the locale command?

Thanks,
Nitin

On Tue, May 22, 2012 at 3:17 PM, Dimboo Zhu <dianbo@gmail.com> wrote:
hi there,

I got the following stack trace when starting up the hive cli. It worked well last week when I had just installed it. Can anybody help?
thanks,
Dianbau

[dzhu@bbdw-194 bin]$ ./hive
Logging initialized using configuration in jar:file:/local/dzhu/hadoop/hive-0.8.1-bin/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history file=/tmp/dzhu/hive_job_log_dzhu_201205181300_1495802688.txt
Exception in thread "main" java.io.UnsupportedEncodingException: GB2312
 at sun.nio.cs.StreamEncoder.forOutputStreamWriter(StreamEncoder.java:42)
 at java.io.OutputStreamWriter.<init>(OutputStreamWriter.java:83)
 at jline.ConsoleReader.<init>(ConsoleReader.java:174)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:649)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

--
Nitin Pawar
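[Editor's note] If the terminal locale really is the cause, one workaround to try is starting the CLI under a UTF-8 locale (the locale name below is an example and is assumed to be installed on the box), so that jline gets an encoding the JVM supports:

```shell
export LANG=zh_CN.UTF-8
export LC_ALL=zh_CN.UTF-8
./hive
```

The terminal emulator (SecureCRT, PuTTY, etc.) should be set to the same character encoding, or output will render incorrectly even though the CLI starts.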
how to create index on hbase?
Hi all,

I want to create an index on an HBase table, but it failed. This is the SQL; how do I create an index on HBase?

create index i_hhive on hhive(c1,c2)
as compact with deferred rebuild
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val,cf1:val2,cf1:val3")
TBLPROPERTIES ("hbase.table.name" = "xyz");

When rebuilding the index, it throws an exception:

hive> ALTER INDEX i_txt_hhive ON hhive1 REBUILD;
FAILED: Error in semantic analysis: org.apache.hadoop.hive.ql.metadata.HiveException: must specify an InputFormat class

Best regards,
Ransom.
does hive 0.9 support hbase 0.90?
In version 0.8.1 it's ok, but in 0.9, when I run "select * from tbl", it failed.

Exception in thread main java.lang.NoSuchMethodError: org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
 at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:419)
 at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:281)
 at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:320)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:154)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1377)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

I see in the 0.9 code that this call was added in HiveHBaseTableInputFormat.getSplits:
TableMapReduceUtil.initCredentials(jobConf);
Should I change to another fileinputformat?

Best regards,
Ransom.
RE: does hive 0.9 support hbase 0.90?
Thank you Ashutosh.

Where is the JIRA address? I couldn't find it in JIRA or in the 0.90 release notes.

Best regards,
Ransom.

From: Ashutosh Chauhan [mailto:hashut...@apache.org]
Sent: Wednesday, May 09, 2012 4:29 PM
To: user@hive.apache.org
Cc: Wenzaohua
Subject: Re: does hive 0.9 support hbase 0.90?

Hi Ransom,

Hive 0.9 requires HBase 0.92 to work correctly.

Thanks,
Ashutosh

On Wed, May 9, 2012 at 12:47 AM, Hezhiqiang (Ransom) <ransom.hezhiqi...@huawei.com> wrote:
In version 0.8.1 it's ok, but in 0.9, when I run "select * from tbl", it failed.

Exception in thread main java.lang.NoSuchMethodError: org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
 at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:419)
 at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:281)
 at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:320)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:154)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1377)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

I see in the 0.9 code that this call was added in HiveHBaseTableInputFormat.getSplits:
TableMapReduceUtil.initCredentials(jobConf);
Should I change to another fileinputformat?

Best regards,
Ransom.
RE: Does Hive support the EXISTS keyword in a select query?
SEMI JOIN is only for EXISTS. Maybe you can try this:

Select a.* FROM tblA a
left outer JOIN tblB b ON a.field1 = b.field1
where a.field2 is null or b.field2 is null

Best regards,
Ransom.

-----Original Message-----
From: Philip Tromans [mailto:philip.j.trom...@gmail.com]
Sent: Wednesday, April 11, 2012 9:02 PM
To: user@hive.apache.org
Subject: Re: Does Hive support the EXISTS keyword in a select query?

Hi,

Hive supports EXISTS via SEMI JOIN. Have a look at:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins

Cheers,
Phil.

On 11 April 2012 13:59, Bhavesh Shah <bhavesh25s...@gmail.com> wrote:
Hello all,

I want to query like below in Hive:

Select a.* FROM tblA a JOIN tblB b ON a.field1 = b.field1
where (a.field2 is null or not exists (select field2 from tblB where field2 is not null))

But I think Hive doesn't support the EXISTS keyword, so how can I overcome this issue? Please suggest me some solution to this. I just got into this kind of situation where I need to implement something like EXISTS/NOT EXISTS.

--
Thanks and Regards,
Bhavesh Shah
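[Editor's note] To make the SEMI JOIN suggestion concrete, the EXISTS half that Hive does support can be written as below (table and column names follow the thread's example):

```sql
-- EXISTS equivalent: rows of tblA that have at least one match in tblB.
-- Note: with LEFT SEMI JOIN, only columns of a may appear in the select list.
SELECT a.*
FROM tblA a LEFT SEMI JOIN tblB b ON (a.field1 = b.field1);
```

The LEFT OUTER JOIN plus IS NULL pattern in the reply above is the complementary trick usually used for NOT EXISTS.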
RE: How to get a flat file out of a table in Hive
You can create a table as select first, using comma-separated storage, then export it.

Best regards,
Ransom.

From: Omer, Farah [mailto:fo...@microstrategy.com]
Sent: Wednesday, March 07, 2012 12:32 AM
To: user@hive.apache.org
Subject: How to get a flat file out of a table in Hive

What's the easiest way to get a flat file out from a table in Hive? I have a table in HIVE that has millions of rows. I want to get a dump of this table out in flat-file format, and it should be comma separated. Does anyone know the syntax to do it?

Thanks for the help!

Farah Omer
Senior DB Engineer, MicroStrategy, Inc.
T: 703 2702230
E: fo...@microstrategy.com
http://www.microstrategy.com
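[Editor's note] A sketch of the create-then-export approach suggested above (table and path names are illustrative, and the warehouse path depends on your configuration):

```sql
-- Materialize the rows with comma-delimited text storage.
CREATE TABLE dump_copy
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
AS SELECT * FROM big_table;
```

Afterwards the table's files can be pulled to local disk, e.g. with `hadoop fs -getmerge /user/hive/warehouse/dump_copy /tmp/big_table.csv`.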
RE: Can hive 0.8.1 work with hadoop 0.23.0?
Hi Xiaofeng,

Back up "hive_exec.jar" in every hadoop directory, then delete "hive_exec.jar" and try it. "select *" just uses HDFS, while "select col1" will use MapReduce.

Best regards,
Ransom.

From: Carl Steinbach [mailto:c...@cloudera.com]
Sent: Tuesday, February 21, 2012 4:45 PM
To: user@hive.apache.org
Subject: Re: Can hive 0.8.1 work with hadoop 0.23.0?

Hi Xiaofeng,

Which mode are you running Hadoop in, e.g. local, pseudo-distributed, or distributed?

Thanks.
Carl

2012/2/1 张晓峰 <zhangxiaofe...@q.com.cn>
Hi,

I installed hadoop 0.23.0, which works. The version of my hive is 0.8.1. A query like 'select * from tablename' works, but an exception is thrown when executing a query like 'select col1 from tablename'.

2012-02-01 16:32:20,296 WARN mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(139)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2012-02-01 16:32:20,389 INFO mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(388)) - Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/hadoop-469936305/.staging/job_local_0001
2012-02-01 16:32:20,392 ERROR exec.ExecDriver (SessionState.java:printError(380)) - Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /home/hadoop/hive-0.8.1/lib/hive-builtins-0.8.1.jar)'
java.io.FileNotFoundException: File does not exist: /home/hadoop/hive-0.8.1/lib/hive-builtins-0.8.1.jar
 at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:764)
 at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208)
 at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71)
 at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:246)
 at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:284)
 at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1159)
 at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1156)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1156)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:571)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:452)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:710)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:189)

Thanks,
xiaofeng
RE: Hive CLI (v0.8.1) -e option throwing parse error all the time
Try hive -e 'desc temp_logs' without the ";".

Best regards,
Ransom.

From: Abhishek Parolkar [mailto:abhis...@viki.com]
Sent: February 20, 2012 11:53
To: user@hive.apache.org
Subject: Hive CLI (v0.8.1) -e option throwing parse error all the time

Hi All:

I am trying to get the -e option in the hive CLI to work for me. Looks like it isn't straightforward.

hadoopnode1:~ hadoop$ hive
hive> desc temp_logs;
OK
log_time    string
namespace   string
json        string
ds          string
hour        string
Time taken: 2.832 seconds
hive> exit;
hadoopnode1:~ hadoop$ hive -e desc temp_logs;
FAILED: Parse Error: line 1:0 cannot recognize input near 'desc' '<EOF>' '<EOF>' in describe statement

What is -e expecting the query to be?

-v_abhi_v
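[Editor's note] The parse error shows that only the word desc reached -e; without quotes the shell splits the statement into separate arguments (and the trailing ; is interpreted by the shell, not by Hive). Quoting the whole statement is the usual fix:

```shell
hive -e 'desc temp_logs'    # the full statement arrives as a single -e argument
```

With quotes in place, keeping or dropping the trailing semicolon inside the string should not matter to the parser.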
hive install exception
Hi all,

Yesterday I installed hive 0.8.1 on hadoop 1.0.0 and it was ok. But today I installed hive 0.8.1 on HDFS 0.20.1, and it throws an exception when executing the hive shell:

Exception in thread main java.lang.NoSuchMethodError: org.apache.hadoop.hive.cli.CliSessionState.setIsVerbose(Z)V
 at org.apache.hadoop.hive.cli.OptionsProcessor.process_stage2(OptionsProcessor.java:160)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:585)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Any suggestions on how to approach this? Is my Hadoop version too out of date?

Thanks!
Best,
Ransom