Pradeep,

I ran the commands you provided and they succeeded with the expected behavior.

One possibility is that there are multiple versions of libthrift.jar on your 
CLASSPATH (from both Hadoop and Hive). Can you check the Hadoop and Hive 
CLASSPATHs to make sure no other libthrift.jar is present? What is in 
hive_conf_without_thrift?
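
One quick way to check (the install paths below are illustrative; adjust 
them to your layout):

```shell
# List every libthrift jar visible under the Hadoop and Hive installs.
# Seeing more than one version here would explain a client/server mismatch.
for d in "${HADOOP_HOME:-/usr/lib/hadoop}" "${HIVE_HOME:-/usr/lib/hive}"; do
    if [ -d "$d" ]; then
        find "$d" -name 'libthrift*.jar'
    fi
done
```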

Thanks,
Ning

On Jun 18, 2010, at 10:19 AM, Pradeep Kamath wrote:

I think there are two separate issues here. I want to open a JIRA for the 
first one, since I am now able to reproduce it even with text format and 
built-in SerDes. Essentially this is a bug in the Thrift code (not sure if it 
is in the client or the server), since the same ALTER TABLE statement works 
fine when the Hive client does not use Thrift. Here are the details:

cat create_dummy.sql
CREATE external TABLE if not exists dummy (

  partition_name string
  ,partition_id int
)
PARTITIONED BY ( datestamp string, srcid string, action string, testid string )
row format delimited
stored as textfile
location '/user/pradeepk/dummy';

hive -f create_dummy.sql
10/06/18 10:13:36 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Hive history file=/tmp/pradeepk/hive_job_log_pradeepk_201006181013_184583537.txt
OK
Time taken: 0.627 seconds

hive  -e "ALTER TABLE dummy add partition(datestamp = '20100602', srcid = 
'100',action='view',testid='10') location 
'/user/pradeepk/dummy/20100602/100/view/10';"
10/06/18 10:14:11 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Hive history file=/tmp/pradeepk/hive_job_log_pradeepk_201006181014_700722546.txt
FAILED: Error in metadata: org.apache.thrift.TApplicationException: 
get_partition failed: unknown result
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask

hive --config hive_conf_without_thrift -e "ALTER TABLE dummy add 
partition(datestamp = '20100602', srcid = '100',action='view',testid='10') 
location '/user/pradeepk/dummy/20100602/100/view/10';"
10/06/18 10:14:31 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Hive history file=/tmp/pradeepk/hive_job_log_pradeepk_201006181014_598649843.txt
OK
Time taken: 5.849 seconds

Is there some Thrift setting I am missing, or is this a bug? If it is the 
latter, I can open a JIRA with the above details.

Thanks,
Pradeep


________________________________
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Thursday, June 17, 2010 1:25 PM
To: hive-user@hadoop.apache.org
Subject: RE: alter table add partition error

Here are the create table and alter table statements:
CREATE external TABLE if not exists mytable (

  bc string
  ,src_spaceid string
  ,srcpvid string
  ,dstpvid string
  ,dst_spaceid string
  ,page_params map<string, string>
  ,clickinfo map<string, string>
  ,viewinfo array<map<string, string>>

)
PARTITIONED BY ( datestamp string, srcid string, action string, testid string )
row format serde 'com.yahoo.mySerde'
stored as inputformat 'org.apache.hadoop.mapred.SequenceFileInputFormat' 
outputformat 'org.apache.hadoop.mapred.SequenceFileOutputFormat'
location '/user/pradeepk/mytable';

hive --auxpath ult-serde.jar -e "ALTER TABLE mytable add partition(datestamp = 
'20091101', srcid = '19174',action='click',testid='NOTESTID') location 
'/user/pradeepk/mytable/20091101/19174/click/NOTESTID';"

I get the following error:
Hive history 
file=/tmp/pradeepk/hive_job_log_pradeepk_201006161709_1934304805.txt
FAILED: Error in metadata: org.apache.thrift.TApplicationException: 
get_partition failed: unknown result
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
If I don’t use thrift and use a hive-site.xml to directly talk to the db, the 
alter table seems to succeed:
hive --auxpath ult-serde.jar --config hive_conf_without_thrift -e "ALTER TABLE 
mytable add partition(datestamp = '20091101', srcid = 
'19174',action='click',testid='NOTESTID') location 
'/user/pradeepk/mytable/20091101/19174/click/NOTESTID';"
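
For reference, a config that bypasses Thrift amounts to pointing the client 
straight at the metastore database over JDBC instead of setting 
hive.metastore.uris. A minimal sketch of what such a hive-site.xml might 
contain (assuming a MySQL-backed metastore; the connection URL and driver 
values are placeholders, not my actual settings):

```shell
# Write a minimal "no thrift" client config: hive.metastore.local=true
# makes the client open a direct JDBC connection to the metastore DB.
mkdir -p hive_conf_without_thrift
cat > hive_conf_without_thrift/hive-site.xml <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://dbhost/metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
</configuration>
EOF
```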

However I get errors when I try to run a query:
[prade...@chargesize:~/dev]hive --auxpath ult-serde.jar --config 
hive_conf_without_thrift -e "select src_spaceid from  ult_search_austria_ult 
where datestamp='20091101';"
10/06/17 13:22:34 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Hive history 
file=/tmp/pradeepk/hive_job_log_pradeepk_201006171322_1913647383.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.lang.IllegalArgumentException: Can not create a Path from an empty string
        at org.apache.hadoop.fs.Path.checkPathArg(Path.java:82)
        at org.apache.hadoop.fs.Path.<init>(Path.java:90)
        at org.apache.hadoop.fs.Path.<init>(Path.java:50)
        at 
org.apache.hadoop.mapred.JobClient.copyRemoteFiles(JobClient.java:523)
        at 
org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:603)
        at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:761)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
        at 
org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:684)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
        at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:631)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:504)

Any help is much appreciated.

Pradeep

________________________________
From: Ashish Thusoo [mailto:athu...@facebook.com]
Sent: Thursday, June 17, 2010 11:15 AM
To: hive-user@hadoop.apache.org
Subject: RE: alter table add partition error

hmm... Can you send the exact command and also the create table command for 
this table.

Ashish

________________________________
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Thursday, June 17, 2010 9:09 AM
To: hive-user@hadoop.apache.org
Subject: RE: alter table add partition error

Sorry, that was a cut-and-paste error; I don't actually have the bare 
`action` part, so I am specifying key-value pairs. Since what I am trying to 
do seems like a basic operation, I am wondering if it's something to do with 
my SerDe. Unfortunately, the error I see gives me no clue about what could be 
wrong; any help would be greatly appreciated!

Thanks,
Pradeep

________________________________
From: yq he [mailto:hhh.h...@gmail.com]
Sent: Wednesday, June 16, 2010 5:54 PM
To: hive-user@hadoop.apache.org
Subject: Re: alter table add partition error

Hi Pradeep,

The partition definition needs to be key-value pairs; the partition key 
`action` seems to be missing its value.
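
For example, a spec that gives every partition key a value (the `action` and 
`testid` values below are just placeholders for illustration) would look 
like:

```sql
ALTER TABLE mytable ADD PARTITION
  (datestamp = '20091101', srcid = '10', action = 'click', testid = 'NOTESTID')
LOCATION '/user/pradeepk/mytable/20091101/10';
```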

Thanks
Yongqiang
On Wed, Jun 16, 2010 at 5:22 PM, Pradeep Kamath 
<prade...@yahoo-inc.com> wrote:
Hi,
    I am trying to create an external table over already-existing data in 
SequenceFile format, using a custom SerDe I have written to interpret the 
data. I am able to create the table fine, but I get the exception shown in 
the session output below when I try to add a partition. Any help would be 
greatly appreciated.

Thanks,
Pradeep

=== session output ===

[prade...@chargesize:~/dev/howl]hive --auxpath ult-serde.jar -e "ALTER TABLE 
mytable add partition(datestamp = '20091101', srcid = '10',action) location 
'/user/pradeepk/mytable/20091101/10';"
10/06/16 17:08:59 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in 
the classpath. Usage of hadoop-site.xml is deprecated. Instead use 
core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of 
core-default.xml, mapred-default.xml and hdfs-default.xml respectively
Hive history 
file=/tmp/pradeepk/hive_job_log_pradeepk_201006161709_1934304805.txt
FAILED: Error in metadata: org.apache.thrift.TApplicationException: 
get_partition failed: unknown result
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask
[prade...@chargesize:~/dev/howl]

=== end session output ===

/tmp/pradeepk/hive.log has:
2010-06-16 17:09:00,841 ERROR exec.DDLTask (SessionState.java:printError(269)) 
- FAILED: Error in metadata: org.apache.thrift.TApplicationException: 
get_partition failed: unknown result
org.apache.hadoop.hive.ql.metadata.HiveException: 
org.apache.thrift.TApplicationException: get_partition failed: unknown result
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:778)
    at org.apache.hadoop.hive.ql.exec.DDLTask.addPartition(DDLTask.java:231)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:150)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
    at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:631)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:504)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:382)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:268)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.TApplicationException: get_partition failed: 
unknown result
    at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partition(ThriftHiveMetastore.java:931)
    at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partition(ThriftHiveMetastore.java:899)
    at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:500)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:756)
    ... 15 more

The thrift server messages are:
10/06/16 17:09:00 INFO metastore.HiveMetaStore: 22: get_table : db=default 
tbl=mytable
10/06/16 17:09:00 INFO metastore.HiveMetaStore: 22: Opening raw store with 
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
10/06/16 17:09:00 INFO metastore.ObjectStore: ObjectStore, initialize called
10/06/16 17:09:00 INFO metastore.ObjectStore: Initialized ObjectStore
10/06/16 17:09:00 INFO metastore.HiveMetaStore: 22: get_partition : db=default 
tbl=mytable



