Re: Hive 0.7.1 authorization woes

2011-08-25 Thread yongqiang he
what is your unix name on that machine? can u do a whoami?

On Thu, Aug 25, 2011 at 5:15 PM, Alex Holmes  wrote:
> Here's the hive-site.xml file (I use the same file for both the client
> and remote metastore).  We're using mysql as the metastore DB.
>
>
> <?xml version="1.0"?>
> <configuration>
> <property>
>  <name>hive.security.authorization.enabled</name>
>  <value>true</value>
> </property>
> <property>
>  <name>hive.metastore.local</name>
>  <value>false</value>
> </property>
> <property>
>  <name>hive.metastore.uris</name>
>  <value>thrift://localhost:9083</value>
> </property>
> <property>
>  <name>javax.jdo.option.ConnectionURL</name>
>  <value>jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true</value>
> </property>
> <property>
>  <name>javax.jdo.option.ConnectionDriverName</name>
>  <value>com.mysql.jdbc.Driver</value>
> </property>
> <property>
>  <name>javax.jdo.option.ConnectionUserName</name>
>  <value>hive</value>
> </property>
> <property>
>  <name>javax.jdo.option.ConnectionPassword</name>
>  <value>secret</value>
> </property>
> </configuration>
>
>
>
> On Wed, Aug 24, 2011 at 6:06 PM, yongqiang he  
> wrote:
>> this is what i have tried with a remote metastore:
>>
>>    > set hive.security.authorization.enabled=false;
>> hive>
>>    >
>>    >
>>    > drop table src2;
>> OK
>> Time taken: 1.002 seconds
>> hive> create table src2 (key int, value string);
>> OK
>> Time taken: 0.03 seconds
>> hive>
>>    >
>>    >
>>    > set hive.security.authorization.enabled=true;
>> hive> grant select on table src2 to user heyongqiang;
>> OK
>> Time taken: 0.113 seconds
>> hive> select * from src2;
>> OK
>> Time taken: 0.188 seconds
>> hive> show grant user heyongqiang on table src2;
>> OK
>>
>> database        default
>> table   src2
>> principalName   heyongqiang
>> principalType   USER
>> privilege       Select
>> grantTime       Wed Aug 24 15:03:51 PDT 2011
>> grantor heyongqiang
>>
>> can u do a show grant?
>>
>> (But with the remote metastore, I think Hive should return an empty list
>> instead of null for list_privileges etc.)
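The null-versus-empty-list distinction is the crux of the client-side crash. A hedged sketch of the defensive normalization being suggested (a hypothetical helper, not the actual Hive source):

```python
def normalize_privileges(raw_result):
    """Treat a None result from the remote metastore as 'no privileges
    granted' instead of letting it propagate as a protocol error.
    Purely illustrative -- the real fix belongs in the thrift layer."""
    return [] if raw_result is None else list(raw_result)

# An authorization check can then iterate safely:
assert normalize_privileges(None) == []
assert normalize_privileges(("Select",)) == ["Select"]
```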
>>
>>
>>
>> On Wed, Aug 24, 2011 at 2:34 PM, Alex Holmes  wrote:
>>> Authorization works for me with the local metastore.  The remote
>>> metastore works with authorization turned off, but as soon as I turn
>>> it on and issue any commands I get these exceptions on the hive
>>> client.
>>>
>>> Could you also try the remote metastore please?  I'm pretty sure that
>>> authorization does not work with it at all.
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Wed, Aug 24, 2011 at 5:20 PM, yongqiang he  
>>> wrote:
 I am using local metastore,  and can not reproduce the problem.

 what message did you get when running local metastore?

 On Wed, Aug 24, 2011 at 1:58 PM, Alex Holmes  wrote:
> Thanks for opening a ticket.
>
> Table-level grants aren't working for me either (HIVE-2405 suggests
> that the bug is only related to global grants).
>
> hive> set hive.security.authorization.enabled=false;
> hive> CREATE TABLE pokes (foo INT, bar STRING);
> OK
> Time taken: 1.245 seconds
> hive> LOAD DATA LOCAL INPATH 'hive1.in' OVERWRITE INTO TABLE pokes;
> FAILED: Error in semantic analysis: Line 1:23 Invalid path 'hive1.in':
> No files matching path file:/app/hadoop/hive-0.7.1/conf/hive1.in
> hive> LOAD DATA LOCAL INPATH '/app/hadoop/hive1.in' OVERWRITE INTO TABLE 
> pokes;
> Copying data from file:/app/hadoop/hive1.in
> Copying file: file:/app/hadoop/hive1.in
> Loading data to table default.pokes
> Moved to trash: hdfs://localhost:54310/user/hive/warehouse/pokes
> OK
> Time taken: 0.33 seconds
> hive> select * from pokes;
> OK
> 1       a
> 2       b
> 3       c
> Time taken: 0.095 seconds
> hive> grant select on table pokes to user hduser;
> OK
> Time taken: 0.251 seconds
> hive> set hive.security.authorization.enabled=true;
> hive> select * from pokes;
> FAILED: Hive Internal Error:
> org.apache.hadoop.hive.ql.metadata.HiveException(org.apache.thrift.TApplicationException:
> get_privilege_set failed: unknown result)
> org.apache.hadoop.hive.ql.metadata.HiveException:
> org.apache.thrift.TApplicationException: get_privilege_set failed:
> unknown result
>        at 
> org.apache.hadoop.hive.ql.metadata.Hive.get_privilege_set(Hive.java:1617)
>        at 
> org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserPriv(DefaultHiveAuthorizationProvider.java:201)
>        at 
> org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserAndDBPriv(DefaultHiveAuthorizationProvider.java:226)
> ...
>
> mysql> select * from TBL_PRIVS;
> +--------------+-------------+--------------+---------+--------------+----------------+----------------+----------+--------+
> | TBL_GRANT_ID | CREATE_TIME | GRANT_OPTION | GRANTOR | GRANTOR_TYPE | PRINCIPAL_NAME | PRINCIPAL_TYPE | TBL_PRIV | TBL_ID |
> +--------------+-------------+--------------+---------+--------------+----------------+----------------+----------+--------+
> |            1 |  1314219701 |            0 | hduser  | USER         | hduser         | USER           | Select   |      1 |
> +--------------+-------------+--------------+---------+--------------+----------------+----------------+----------+--------+

Re: Hive 0.7.1 authorization woes

2011-08-25 Thread Alex Holmes
Here's the hive-site.xml file (I use the same file for both the client
and remote metastore).  We're using mysql as the metastore DB.






<?xml version="1.0"?>
<configuration>
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>secret</value>
</property>
</configuration>

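For reference, the two metastore modes this file switches between can be sketched as follows (the selection logic is an illustrative reading of the properties above, not Hive's actual code):

```python
def metastore_mode(conf):
    """With hive.metastore.local=true the client embeds the metastore and
    talks JDBC directly; with false it must reach a running thrift service
    at hive.metastore.uris. Property names are from the config above."""
    if conf.get("hive.metastore.local", "true") == "true":
        return ("embedded", conf["javax.jdo.option.ConnectionURL"])
    return ("remote", conf["hive.metastore.uris"])

# With the file above, every metadata call should go over thrift:
mode = metastore_mode({
    "hive.metastore.local": "false",
    "hive.metastore.uris": "thrift://localhost:9083",
})
assert mode == ("remote", "thrift://localhost:9083")
```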


 Also, I noticed in HIVE-2405 that you get a meaningful error message:

  Authorization failed:No privilege 'Create' found for outputs {
 database:default}. Use show grant to get more details.

Re: Hive 0.7.1 authorization woes

2011-08-25 Thread Alex Holmes
Hi,

hive> CREATE TABLE pokes2 (foo INT, bar STRING);
OK
hive> LOAD DATA LOCAL INPATH '/app/hadoop/hive1.in' OVERWRITE INTO TABLE pokes2;
OK
hive> grant select on table pokes2 to user hduser;
OK
hive> set hive.security.authorization.enabled=true;
hive> show grant user hduser on table pokes2;
OK

database        default
table   pokes2  
principalName   hduser  
principalType   USER
privilege   Select  
grantTime   1314318185  
grantor hduser  
Time taken: 0.041 seconds

hive> select * from pokes2;
FAILED: Hive Internal Error:
org.apache.hadoop.hive.ql.metadata.HiveException(org.apache.thrift.TApplicationException:
get_privilege_set failed: unknown result)
org.apache.hadoop.hive.ql.metadata.HiveException:
org.apache.thrift.TApplicationException: get_privilege_set failed:
unknown result
at 
org.apache.hadoop.hive.ql.metadata.Hive.get_privilege_set(Hive.java:1617)
at 
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserPriv(DefaultHiveAuthorizationProvider.java:201)
at 
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserAndDBPriv(DefaultHiveAuthorizationProvider.java:226)
at 
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserDBAndTable(DefaultHiveAuthorizationProvider.java:259)
at 
org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorize(DefaultHiveAuthorizationProvider.java:159)
at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:531)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:393)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.TApplicationException: get_privilege_set
failed: unknown result
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_privilege_set(ThriftHiveMetastore.java:2414)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_privilege_set(ThriftHiveMetastore.java:2379)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.get_privilege_set(HiveMetaStoreClient.java:1042)
at 
org.apache.hadoop.hive.ql.metadata.Hive.get_privilege_set(Hive.java:1615)
... 15 more
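The "unknown result" wording comes from the pattern thrift-generated clients follow: if a server reply carries neither a success value nor a declared exception, the receive method gives up. A minimal sketch (names mirror the stack trace above; the decoding logic is simplified):

```python
class TApplicationException(Exception):
    """Stand-in for thrift's TApplicationException."""

def recv_get_privilege_set(reply):
    """reply is a decoded server response; a None 'success' field is
    what a server that serialized a null PrincipalPrivilegeSet produces.
    Illustrative only -- real generated code reads a wire protocol."""
    if reply.get("success") is not None:
        return reply["success"]
    raise TApplicationException("get_privilege_set failed: unknown result")
```

So a metastore that returns null for a struct-valued call yields exactly the failure above on the client, even though the grant itself is stored correctly in TBL_PRIVS.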





Re:Issues Integrating HBASE with HIVE

2011-08-25 Thread karthik kottapalli


I am able to create tables in HIVE. I have a problem with integrating
HIVE and HBASE.

I am following this doc.
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration

My versions are: Hadoop 0.20.2, Hive 0.7.1, HBase 0.20.6.

hive> CREATE TABLE hbase_table_1(key int, value string)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
> TBLPROPERTIES ("hbase.table.name" = "xyz");

console:

java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/conf/Configuration;)V
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:74)
        at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:158)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:344)
        at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:470)
        at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3146)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:213)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.DDLTask

Any idea on how to proceed further or thoughts about the cause of the issue?
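A `NoSuchMethodError` on a constructor at runtime almost always means the HBase jars on the classpath are a different version from the one Hive's storage handler was compiled against (HBase 0.20.6 is older than what Hive 0.7.1's handler expects). One way to fail fast is to probe the dependency's surface at startup; a hedged sketch of the idea, demonstrated on stdlib objects rather than the HBase API:

```python
import inspect

def exposes(obj, name, *required_params):
    """Return True if obj.name exists and accepts all required_params --
    the kind of startup check that turns a version mismatch into a clear
    message instead of a NoSuchMethodError deep inside a DDL task."""
    fn = getattr(obj, name, None)
    if fn is None:
        return False
    try:
        params = inspect.signature(fn).parameters
    except (TypeError, ValueError):
        return False
    return all(p in params for p in required_params)

assert exposes(str, "split", "sep")        # signature we coded against exists
assert not exposes(str, "split", "bogus")  # signature we coded against is gone
```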



Re: Understanding distributed Hive server and Hive Metastore setup

2011-08-25 Thread Ashutosh Chauhan
Christian,

Looks like it's not possible to do the setup you are looking for.
The problem arises because HiveServer extends HMSHandler directly instead of
accessing the metastore through HiveMetaStoreClient, so the metastore thrift
interface is bypassed entirely. HiveServer will contact MySQL directly and
won't go through the external metastore service as you have in your
diagram. If you consider this a blocker, please open a jira for more
discussion.

Hope it helps,
Ashutosh
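The structural point can be sketched in a few lines (class names mirror Hive's; everything else is an illustrative stand-in):

```python
class HMSHandler:
    """Embedded metastore logic: talks to the backing store directly."""
    def __init__(self, backing_db):
        self.db = backing_db
    def get_table(self, name):
        return self.db[name]

class HiveServerEmbedded(HMSHandler):
    """Hive 0.7 wiring: HiveServer *extends* HMSHandler, so it opens its
    own store and never consults hive.metastore.uris."""

class HiveServerViaClient:
    """The wiring Christian expected: forward every metadata call to the
    external metastore service through a client."""
    def __init__(self, remote):
        self.remote = remote
    def get_table(self, name):
        return self.remote.get_table(name)

warehouse = {"weblog": "weblog-schema"}
embedded = HiveServerEmbedded(warehouse)                  # direct store access
via_client = HiveServerViaClient(HMSHandler(warehouse))   # goes through service
assert embedded.get_table("weblog") == via_client.get_table("weblog")
```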

On Wed, Aug 24, 2011 at 23:21, Christian Kurz  wrote:

>
> Thanks, Edward and Ashutosh
>
> Ashutosh,
> yes, I do not understand why the service "hiveserver" still uses a Derby
> instance even though it should be talking to the service "metastore". Btw,
> if I run the hiveserver without having started the metastore service, the
> hiveserver complains when I try to let it execute a HiveQL command through
> JDBC:
>
> ...
> org.apache.hadoop.hive.ql.metadata.HiveException:
> MetaException(message:Could not connect to meta store using any of the URIs
> provided)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:919)
> ...
> (full stacktrace at the end of this post)
>
> which is exactly what I expect and which makes me somewhat confident that I
> have configured things correctly.
>
> The entire issue came up, because the hiveserver service did not work, when
> started from the same directory, from which the metastore service had been
> started. It turned out that this was because both services were trying to
> setup a Derby instance in the current dir and therefore ran into a file
> locking situation. I have worked around this by starting the two services
> from different directories, but I am worried that I'd be missing an
> important point in my setup.
>
> When I run "pfiles <pid>" it lists these files for the
> hiveserver service (which should not need a Derby instance, as far as I
> understood):
>   ...tons of jars...
>   /home/hadoop/hive_admin/derby.log
>   /home/hadoop/hive_admin/metastore_db/log/log1.dat
>   /home/hadoop/hive_admin/metastore_db/dbex.lck
>   /home/hadoop/hive_admin/metastore_db/seg0/c191.dat
>   /home/hadoop/hive_admin/metastore_db/seg0/c1a1.dat
>   ...
>   /home/hadoop/hive_admin/metastore_db/seg0/c431.dat
>   /home/hadoop/hive_admin/metastore_db/seg0/c451.dat
>
> Any pointers appreciated. If anybody thinks this is a bug, I can file one.
>
> Thanks,
> Christian
>
>
> full stacktrace:
>
> Hive history
> file=/tmp/hadoop/hive_job_log_hadoop_201108242305_155100916.txt
> FAILED: Error in semantic analysis: Table not found weblog
> org.apache.hadoop.hive.ql.metadata.HiveException:
> MetaException(message:Could not connect to meta store using any of the URIs
> provided)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:919)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:904)
> at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:7074)
> at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6573)
> at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:116)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive.java:699)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:677)
> at
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: MetaException(message:Could not connect to meta store using any
> of the URIs provided)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:183)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:151)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1855)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1865)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:917)
> ... 13 more
> FAILED: Error in metadata: MetaException(message:Could not connect to meta
> store using any of the URIs provided)
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
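The Derby collision Christian describes is ordinary exclusive file locking: each embedded Derby instance takes a lock file (dbex.lck) in its working directory, so two services started there contend for it. A POSIX-only miniature of the effect (Derby's actual mechanism differs in detail):

```python
import fcntl
import tempfile

def second_locker_succeeds(path):
    """Take an exclusive, non-blocking lock on the same file via two
    descriptors -- a miniature of two services sharing one metastore_db
    working directory, where dbex.lck plays the role of this file."""
    with open(path, "w") as a, open(path, "w") as b:
        fcntl.flock(a, fcntl.LOCK_EX | fcntl.LOCK_NB)
        try:
            fcntl.flock(b, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except BlockingIOError:
            # EWOULDBLOCK: the lock is already held
            return False

with tempfile.NamedTemporaryFile() as f:
    # The second "service" is refused, just like the second JVM
    # starting in the same directory.
    assert second_locker_succeeds(f.name) is False
```

Starting the two services from different directories, as Christian did, sidesteps exactly this contention.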
>
>
>
>
> On 25.08.2011 01:29, Ashutosh Chauhan wrote:
>
> Edward,
>
>  Apart from recommended best practices what Christian is asking fo

Re: hive-0.7.0 semantic analysis inside catalogs

2011-08-25 Thread Edward Capriolo
On Thu, Aug 25, 2011 at 12:11 PM, Ayon Sinha  wrote:

> You need
>  select one.a, two.b from one join two on one.a=two.a group by one.a, two.b;
>
> -Ayon
> See My Photos on Flickr 
> Also check out my Blog for answers to commonly asked 
> questions.
>
> --
> From: Edward Capriolo 
> To: user@hive.apache.org
> Sent: Thursday, August 25, 2011 8:16 AM
> Subject: hive-0.7.0 semantic analysis inside catalogs
>
> The parser cannot handle certain queries inside catalogs.
>
> [11:09:51]  hive> create table one (a int,b int) fields
> terminated by '\t';
> [11:09:51]  FAILED: Parse Error: line 1:31 mismatched input
> 'fields' expecting EOF
> [11:09:51] 
> [11:09:51]  hive> create table one (a int,b int) row format
> delimited fields terminated by '\t';
> [11:09:51]  OK
> [11:09:51]  Time taken: 0.459 seconds
> [11:09:51]  hive> create table two (a int,b int) row format
> delimited fields terminated by '\t';
> [11:09:51]  OK
> [11:09:51]  Time taken: 0.039 seconds
> [11:09:51]  hive> select one.a, two.b from one join two on
> one.a=two.a group by one.a;
> [11:09:51]  FAILED: Error in semantic analysis: line 1:14
> Expression Not In Group By Key two
>
> Is there a fix/workaround?
>
> Edward
>
>
>
Shame on me. Of course all select columns have to be in group by.

I was using many aliases in a query and sometimes the above error gets
masked by something like this:
FAILED: Error in semantic analysis: line 20:81 Invalid Column Reference
host_name

Thank you again,
Edward
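The rule behind both error messages can be seen with a few rows of toy data (not the thread's tables):

```python
from collections import defaultdict

# Why "select one.a, two.b ... group by one.a" is rejected: grouping by
# a alone can leave several distinct b values inside one group, so there
# is no single b value to emit for that output row.
rows = [(1, "x"), (1, "y"), (2, "z")]
groups = defaultdict(set)
for a, b in rows:
    groups[a].add(b)

assert groups[1] == {"x", "y"}   # ambiguous: which b for a=1?
assert groups[2] == {"z"}        # unambiguous only by luck
```

Adding `two.b` to the GROUP BY key, as Ayon suggests, makes each output row's `b` well-defined.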


Re: hive-0.7.0 semantic analysis inside catalogs

2011-08-25 Thread Ayon Sinha
You need 
 select one.a, two.b from one join two on one.a=two.a group by one.a,two.b;
 
-Ayon
See My Photos on Flickr
Also check out my Blog for answers to commonly asked questions.





hive-0.7.0 semantic analysis inside catalogs

2011-08-25 Thread Edward Capriolo
The parser cannot handle certain queries inside catalogs.

[11:09:51]  hive> create table one (a int,b int) fields
terminated by '\t';
[11:09:51]  FAILED: Parse Error: line 1:31 mismatched input
'fields' expecting EOF
[11:09:51] 
[11:09:51]  hive> create table one (a int,b int) row format
delimited fields terminated by '\t';
[11:09:51]  OK
[11:09:51]  Time taken: 0.459 seconds
[11:09:51]  hive> create table two (a int,b int) row format
delimited fields terminated by '\t';
[11:09:51]  OK
[11:09:51]  Time taken: 0.039 seconds
[11:09:51]  hive> select one.a, two.b from one join two on
one.a=two.a group by one.a;
[11:09:51]  FAILED: Error in semantic analysis: line 1:14
Expression Not In Group By Key two

Is there a fix/workaround?

Edward


Re: Re:Re: Re: RE: Why a sql only use one map task?

2011-08-25 Thread bejoy_ks
Hi Daniel
 In the Hadoop ecosystem, the number of map tasks is decided by the job, 
based on the number of input splits. Setting mapred.map.tasks doesn't guarantee 
that only that many map tasks are triggered. What worked for you here is that 
you specified the minimum data volume a map task should process, by setting a 
value for mapred.min.split.size.
 So in your case there were really 9 input splits, but once you imposed a 
constraint on the minimum data a map task should handle, the number of map tasks 
came down to 3.
Regards
Bejoy K S
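Bejoy's rule of thumb follows from the classic input-split arithmetic; a sketch with illustrative numbers (the 200 MB minimum is an assumption chosen to match the observed 3 maps on a ~500 MB file):

```python
import math

def split_size(block_size, min_size=1, max_size=2**63 - 1):
    """Old-API FileInputFormat rule: max(minSize, min(maxSize, blockSize))."""
    return max(min_size, min(max_size, block_size))

def num_map_tasks(file_size, block_size, min_size=1):
    """Roughly one map task per input split."""
    return math.ceil(file_size / split_size(block_size, min_size=min_size))

MB = 1024 * 1024
# ~500 MB file, 64 MB blocks: about 8 splits, hence 8-9 map tasks.
assert num_map_tasks(500 * MB, 64 * MB) == 8
# Force each map to take at least ~200 MB and only 3 maps remain.
assert num_map_tasks(500 * MB, 64 * MB, min_size=200 * MB) == 3
```

Note that mapred.map.tasks is only a hint to the framework, which is why setting it alone had no effect.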

-Original Message-
From: "Daniel,Wu" 
Date: Thu, 25 Aug 2011 20:02:43 
To: 
Reply-To: user@hive.apache.org
Subject: Re:Re:Re: Re: RE: Why a sql only use one map task?

after I set
set mapred.min.split.size=2;

Then it will kick off 3 map tasks (the file I have is 500M).  So looks like we 
need to set mapred.min.split.size instead of mapred.map.tasks to control how 
many maps to kick off.


At 2011-08-25 19:38:30,"Daniel,Wu"  wrote:

It works, after I set as you said, but looks like I can't control the map task, 
it always use 9 maps, even if I set
set mapred.map.tasks=2;


Kind    % Complete  Num Tasks  Pending  Running  Complete  Killed  Failed/Killed Task Attempts
map     100.00%     9          0        0        9         0       0 / 0
reduce  100.00%     1          0        0        1         0       0 / 0



At 2011-08-25 06:35:38,"Ashutosh Chauhan"  wrote:
This may be because CombineHiveInputFormat is combining your splits in one map 
task. If you don't want that to happen, do:
hive> set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat


2011/8/24 Daniel,Wu

I pasted the information below; the map capacity is 6. And no matter how I set 
mapred.map.tasks (such as 3), it doesn't work, as it always uses 1 map task 
(please see the completed job information).



Cluster Summary (Heap Size is 16.81 MB/966.69 MB)
Running Map Tasks: 0 | Running Reduce Tasks: 0 | Total Submissions: 6 | Nodes: 3 |
Map Task Capacity: 6 | Reduce Task Capacity: 6 | Avg. Tasks/Node: 4

Completed Jobs (all user oracle, priority NORMAL; Maps/Reduces shown as completed/total, all 100.00%):
Jobid                   Job Name                                                 Maps   Reduces
job_201108242119_0001   select count(*) from test (Stage-1)                      0/0    1/1
job_201108242119_0002   select count(*) from test (Stage-1)                      1/1    1/1
job_201108242119_0003   select count(*) from test (Stage-1)                      1/1    1/1
job_201108242119_0004   select period_key,count(*) from...period_key (Stage-1)   1/1    3/3
job_201108242119_0005   select period_key,count(*) from...period_key (Stage-1)   1/1    3/3
job_201108242119_0006   select period_key,count(*) from...period_key (Stage-1)   1/1    3/3



At 2011-08-24 18:19:38,wd  wrote:
>What about your total Map Task Capacity?
>you may check it from http://your_jobtracker:50030/jobtracker.jsp

>
>2011/8/24 Daniel,Wu :
>> I checked my setting, all are with the default value.So per the book of
>> "Hadoop the definitive guide", the split size should be 64M. And the file
>> size is about 500M, so that's about 8 splits. And from the map job
>> information (after the map job is done), I can see it gets 8 split from one
>> node. But anyhow it starts only one map task.
>>
>>
>>
>> At 2011-08-24 02:28:18,"Aggarwal, Vaibhav"  wrote:
>>
>> If you actually have splittable files you can set the following setting to
>> create more splits:
>>
>>
>>
>> mapred.max.split.size appropriately.
>>
>>
>>
>> Thanks
>>
>> Vaibhav
>>
>>
>>
>> From: Daniel,Wu [mailto:hadoop...@163.com]
>> Sent: Tuesday, August 23, 2011 6:51 AM
>> To: hive
>> Subject: Why a sql only use one map task?
>>
>>
>>
>>   I run the following simple sql
>> select count(*) from sales;
>> And the job information shows it only uses one map task.
>>
>> The underlying hadoop has 3 data/data nodes. So I expect hive should kick
>> off 3 map tasks, one on each task nodes. What can make hive only run one map
>> task? Do I need to set something to kick off multiple map task?  in my
>> config, I didn't change hive config.
>>
>>
>>
>>











Re: Problem in hive

2011-08-25 Thread Vikas Srivastava
Hey Ashutosh!

i have given full permission to the hadoop user on the new server (A), with user
name and password.

it can only read and describe the tables created on this server (A),

and from the other server (B) we can't read the tables created by this
server (A).

regards
Vikas Srivastava


On Thu, Aug 25, 2011 at 4:02 PM, Vikas Srivastava <
vikas.srivast...@one97.net> wrote:

> hey ashutosh,
>
> thanks for reply..
>
> the output of that is
>
> *Failed with exception null
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask*
>
> regards
> Vikas Srivastava
>
>
>
>
> On Thu, Aug 25, 2011 at 4:52 AM, Ashutosh Chauhan wrote:
>
>> Vikas,
>> Looks like your metadata is corrupted. Can you paste the output of
>> following:
>> hive> describe formatted aircel_obd;
>>
>> Ashutosh
>>
>>


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !


Re:Re:Re: Re: RE: Why a sql only use one map task?

2011-08-25 Thread Daniel,Wu
After I set

set mapred.min.split.size=2;

it kicks off 3 map tasks (the file I have is about 500 MB). So it looks like we
need to set mapred.min.split.size, rather than mapred.map.tasks, to control how
many map tasks are launched.
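For plain FileInputFormat-style splitting, the split size is computed as
max(minSize, min(maxSize, blockSize)), and the number of map tasks roughly
follows the number of splits. The sketch below is illustrative arithmetic only,
not Hive's actual code, and CombineHiveInputFormat applies its own per-node
minimums on top of this rule:

```python
def split_size(block_size, min_size=1, max_size=2**63 - 1):
    # FileInputFormat's rule: max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

def num_splits(file_size, block_size, min_size=1, max_size=2**63 - 1):
    size = split_size(block_size, min_size, max_size)
    # one split per full chunk, plus a remainder split if needed
    return file_size // size + (1 if file_size % size else 0)

# ~500 MB file with the default 64 MB block size -> about 8 splits
print(num_splits(500 * 2**20, 64 * 2**20))                        # 8
# lowering mapred.max.split.size creates more, smaller splits
print(num_splits(500 * 2**20, 64 * 2**20, max_size=32 * 2**20))   # 16
```

By this rule alone, a tiny mapred.min.split.size has no effect (the block size
already dominates), which suggests the 3-map behavior above comes from
CombineHiveInputFormat's combining logic rather than the plain formula.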



Re:Re: Re: RE: Why a sql only use one map task?

2011-08-25 Thread Daniel,Wu
It works after I set it as you said, but it looks like I can't control the
number of map tasks; it always uses 9 maps, even if I set
set mapred.map.tasks=2;


Kind   | % Complete | Num Tasks | Pending | Running | Complete | Killed | Failed/Killed Task Attempts
map    | 100.00%    | 9         | 0       | 0       | 9        | 0      | 0 / 0
reduce | 100.00%    | 1         | 0       | 0       | 1        | 0      | 0 / 0



At 2011-08-25 06:35:38,"Ashutosh Chauhan"  wrote:
This may be because CombineHiveInputFormat is combining your splits in one map 
task. If you don't want that to happen, do:
hive> set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat
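CombineHiveInputFormat can merge many file splits into a single combined split,
which is why one map task ends up covering the whole file. A toy sketch of that
greedy packing (purely illustrative; the function and its parameter names are
made up, not Hadoop's real API):

```python
def combine_splits(split_sizes, max_combined):
    """Greedily pack splits into combined splits of at most max_combined bytes."""
    combined, current, current_size = [], [], 0
    for s in split_sizes:
        # start a new combined split when adding s would exceed the cap
        if current and current_size + s > max_combined:
            combined.append(current)
            current, current_size = [], 0
        current.append(s)
        current_size += s
    if current:
        combined.append(current)
    return combined

nine = [64] * 9  # nine 64 MB splits
# no effective size cap -> everything packs into a single combined split (1 map task)
print(len(combine_splits(nine, max_combined=10**9)))  # 1
# capping the combined size at 192 MB yields 3 combined splits (3 map tasks)
print(len(combine_splits(nine, max_combined=192)))    # 3
```

Switching hive.input.format back to HiveInputFormat, as suggested above,
disables this combining so each underlying split gets its own map task.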


2011/8/24 Daniel,Wu

I pasted the inform I pasted blow, the map capacity is 6. And no matter how I 
set  mapred.map.tasks, such as 3,  it doesn't work, as it always use 1 map task 
(please see the completed job information).



Cluster Summary (Heap Size is 16.81 MB/966.69 MB)
Running Map Tasks: 0 | Running Reduce Tasks: 0 | Total Submissions: 6 | Nodes: 3 |
Map Task Capacity: 6 | Reduce Task Capacity: 6 | Avg. Tasks/Node: 4


Completed Jobs
Jobid                 | Priority | User   | Name                                                   | Map %   | Maps (total/done) | Reduce % | Reduces (total/done) | Scheduling Info | Diagnostic Info
job_201108242119_0001 | NORMAL   | oracle | select count(*) from test (Stage-1)                    | 100.00% | 0 / 0             | 100.00%  | 1 / 1                | NA              | NA
job_201108242119_0002 | NORMAL   | oracle | select count(*) from test (Stage-1)                    | 100.00% | 1 / 1             | 100.00%  | 1 / 1                | NA              | NA
job_201108242119_0003 | NORMAL   | oracle | select count(*) from test (Stage-1)                    | 100.00% | 1 / 1             | 100.00%  | 1 / 1                | NA              | NA
job_201108242119_0004 | NORMAL   | oracle | select period_key,count(*) from...period_key (Stage-1) | 100.00% | 1 / 1             | 100.00%  | 3 / 3                | NA              | NA
job_201108242119_0005 | NORMAL   | oracle | select period_key,count(*) from...period_key (Stage-1) | 100.00% | 1 / 1             | 100.00%  | 3 / 3                | NA              | NA
job_201108242119_0006 | NORMAL   | oracle | select period_key,count(*) from...period_key (Stage-1) | 100.00% | 1 / 1             | 100.00%  | 3 / 3                | NA              | NA



At 2011-08-24 18:19:38,wd  wrote:
>What about your total Map Task Capacity?
>you may check it from http://your_jobtracker:50030/jobtracker.jsp

>
>2011/8/24 Daniel,Wu :
>> I checked my setting, all are with the default value.So per the book of
>> "Hadoop the definitive guide", the split size should be 64M. And the file
>> size is about 500M, so that's about 8 splits. And from the map job
>> information (after the map job is done), I can see it gets 8 split from one
>> node. But anyhow it starts only one map task.
>>
>>
>>
>> At 2011-08-24 02:28:18,"Aggarwal, Vaibhav"  wrote:
>>
>> If you actually have splittable files you can set the following setting to
>> create more splits:
>>
>>
>>
>> mapred.max.split.size appropriately.
>>
>>
>>
>> Thanks
>>
>> Vaibhav
>>
>>
>>
>> From: Daniel,Wu [mailto:hadoop...@163.com]
>> Sent: Tuesday, August 23, 2011 6:51 AM
>> To: hive
>> Subject: Why a sql only use one map task?
>>
>>
>>
>>   I run the following simple sql
>> select count(*) from sales;
>> And the job information shows it only uses one map task.
>>
>> The underlying hadoop cluster has 3 data nodes, so I expect Hive to kick
>> off 3 map tasks, one on each node. What could make Hive run only one map
>> task? Do I need to set something to kick off multiple map tasks? I didn't
>> change the hive config.
>>
>>
>>
>>







Re: Problem in hive

2011-08-25 Thread Vikas Srivastava
Hey Ashutosh,

thanks for the reply.

The output of that is:

*Failed with exception null
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask*

regards
Vikas Srivastava



On Thu, Aug 25, 2011 at 4:52 AM, Ashutosh Chauhan wrote:

> Vikas,
> Looks like your metadata is corrupted. Can you paste the output of
> following:
> hive> describe formatted aircel_obd;
>
> Ashutosh
>
>
> On Wed, Aug 24, 2011 at 03:46, Vikas Srivastava <
> vikas.srivast...@one97.net> wrote:
>
>> hey, thanks for the reply,
>>
>> I am using hadoop 0.20.2
>> and hive 0.7.0.
>>
>> I have installed hive on a new server and am making it read-only.
>>
>>
>> On Wed, Aug 24, 2011 at 4:05 PM, Chinna  wrote:
>>
>>>  Hi,
>>>
>>> Can you post some more details, like which version you are using and what
>>> sequence of queries you have executed?
>>>
>>> When I checked the trunk code, this exception occurs when getCols()
>>> returns null. Check whether your metadata is in a good state.
>>>
>>> Thanks
>>>
>>> Chinna Rao Lalam
>>>
>>> -- Forwarded message --
>>> From: *Vikas Srivastava* 
>>> Date: Tue, Aug 23, 2011 at 7:26 PM
>>> Subject: Problem in hive
>>> To: user@hive.apache.org
>>>
>>>
>>> HI team,
>>>
>>>
>>> i m facing this problem.
>>>
>>> show tables is running fine but when i run below query.
>>>
>>> hive> select * from aircel_obd;
>>> FAILED: Hive Internal Error: java.lang.NullPointerException(null)
>>> java.lang.NullPointerException
>>> at
>>> org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
>>> at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:886)
>>> at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:787)
>>> at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:893)
>>> at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7203)
>>> at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:240)
>>> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:428)
>>> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
>>> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
>>> at
>>> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:253)
>>> at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:210)
>>> at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:401)
>>> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:660)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>
>>>
>>> --
>>> With Regards
>>> Vikas Srivastava
>>>
>>> DWH & Analytics Team
>>>
>>> Mob:+91 9560885900
>>> One97 | Let's get talking !
>>>
>>
>>
>>
>> --
>> With Regards
>> Vikas Srivastava
>>
>> DWH & Analytics Team
>> Mob:+91 9560885900
>> One97 | Let's get talking !
>>
>>
>


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !