[jira] [Created] (HIVE-2956) [hive] Provide error message when using UDAF in the place of UDF instead of throwing NPE

2012-04-17 Thread Navis (Created) (JIRA)
[hive] Provide error message when using UDAF in the place of UDF instead of 
throwing NPE


 Key: HIVE-2956
 URL: https://issues.apache.org/jira/browse/HIVE-2956
 Project: Hive
  Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial


For example, 

{code}
hive> select distinct deptno, sum(deptno) from emp;
FAILED: Hive Internal Error: java.lang.NullPointerException(null)
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7713)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:2793)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggr1MR(SemanticAnalyzer.java:3651)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6125)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6762)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
{code}

Trivial, but people keep reporting this, confused by esoteric custom function names.
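
A rough sketch of the kind of guard the type-check path could add (hypothetical helper, not the attached patch; FunctionInfo and SemanticException are existing Hive classes, the method name is illustrative):

{code}
// Hypothetical sketch only, not the committed patch: detect that the name
// resolved to an aggregate before ExprNodeGenericFuncDesc.newInstance() is
// reached, and raise a clear SemanticException instead of an NPE.
static void failIfAggregate(String funcName, FunctionInfo fi) throws SemanticException {
  if (fi != null && fi.getGenericUDF() == null) {
    // No plain GenericUDF behind this name, i.e. it resolved to a UDAF/UDTF.
    throw new SemanticException("'" + funcName
        + "' is an aggregate function and cannot be used in this context");
  }
}
{code}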

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2956) [hive] Provide error message when using UDAF in the place of UDF instead of throwing NPE

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2956:
--

Attachment: HIVE-2956.D2823.1.patch

navis requested code review of HIVE-2956 [jira] [hive] Provide error message 
when using UDAF in the place of UDF instead of throwing NPE.
Reviewers: JIRA

  DPAL-1109 Provide error message when using UDAF in the place of UDF instead 
of throwing NPE

  For example,

  hive> select distinct deptno, sum(deptno) from emp;
  FAILED: Hive Internal Error: java.lang.NullPointerException(null)
  java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7713)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:2793)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggr1MR(SemanticAnalyzer.java:3651)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6125)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6762)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)

  Trivial, but people keep reporting this, confused by esoteric custom function names.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2823

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6417/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 [hive] Provide error message when using UDAF in the place of UDF instead of 
 throwing NPE
 

 Key: HIVE-2956
 URL: https://issues.apache.org/jira/browse/HIVE-2956
 Project: Hive
  Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-2956.D2823.1.patch


 For example, 
 {code}
 hive> select distinct deptno, sum(deptno) from emp;
 FAILED: Hive Internal Error: java.lang.NullPointerException(null)
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7713)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:2793)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggr1MR(SemanticAnalyzer.java:3651)
   at 
 

[jira] [Assigned] (HIVE-2723) should throw Ambiguous column reference key Exception in particular join condition

2012-04-17 Thread Navis (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-2723:
---

Assignee: Navis

 should throw "Ambiguous column reference key" Exception in particular join 
 condition
 --

 Key: HIVE-2723
 URL: https://issues.apache.org/jira/browse/HIVE-2723
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.8.0
 Environment: Linux zongren-VirtualBox 3.0.0-14-generic #23-Ubuntu SMP 
 Mon Nov 21 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.7.0-cdh3u0
Reporter: caofangkun
Assignee: Navis
Priority: Minor
  Labels: exception-handling, query, queryparser
 Fix For: 0.9.0

 Attachments: HIVE-2723.D1275.1.patch


 This bug can be reproduced as follows:
 create table test(key string, value string);
 create table test1(key string, value string);
 1: Correct!
 select t.key 
 from 
   (select a.key, b.key from (select * from src ) a right outer join (select * 
 from src1) b on (a.key = b.key)) t;
 FAILED: Error in semantic analysis: Ambiguous column reference key
 2: Incorrect!! Should throw an Exception as above too!
 select t.key --Is this a.key or b.key? It's ambiguous!
 from 
   (select a.\*, b.\* from (select * from src ) a right outer join (select * 
 from src1) b on (a.value = b.value)) t;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Defaulting to jobconf value of: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=number
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=number
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=number
 Starting Job = job_201201170959_0004, Tracking URL = 
 http://zongren-VirtualBox:50030/jobdetails.jsp?jobid=job_201201170959_0004
 Kill Command = /home/zongren/workspace/hadoop-adh/bin/hadoop job  
 -Dmapred.job.tracker=zongren-VirtualBox:9001 -kill job_201201170959_0004
 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 
 1
 2012-01-17 11:02:47,507 Stage-1 map = 0%,  reduce = 0%
 2012-01-17 11:02:55,002 Stage-1 map = 100%,  reduce = 0%
 2012-01-17 11:03:04,240 Stage-1 map = 100%,  reduce = 33%
 2012-01-17 11:03:05,258 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201201170959_0004
 MapReduce Jobs Launched: 
 Job 0: Map: 2  Reduce: 1   HDFS Read: 669 HDFS Write: 216 SUCESS
 Total MapReduce CPU Time Spent: 0 msec
 OK





[jira] [Commented] (HIVE-2528) use guava's LexicographicalComparator for Hive

2012-04-17 Thread alex gemini (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255471#comment-13255471
 ] 

alex gemini commented on HIVE-2528:
---

HADOOP-7761 has been done; maybe this can be closed. 

 use guava's LexicographicalComparator for Hive
 --

 Key: HIVE-2528
 URL: https://issues.apache.org/jira/browse/HIVE-2528
 Project: Hive
  Issue Type: Improvement
Reporter: He Yongqiang
Assignee: He Yongqiang

 ref:
 http://guava-libraries.googlecode.com/svn/trunk/guava/src/com/google/common/primitives/UnsignedBytes.java
 https://issues.apache.org/jira/browse/HBASE-4012
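
For reference, a minimal self-contained example of the Guava comparator the references point to (the array values are illustrative):

{code}
import java.util.Comparator;
import com.google.common.primitives.UnsignedBytes;

public class LexicographicCompareDemo {
  public static void main(String[] args) {
    // Guava's unsigned, lexicographical byte[] comparator (the class linked above);
    // HBASE-4012 adopted it for its Unsafe-accelerated implementation.
    Comparator<byte[]> cmp = UnsignedBytes.lexicographicalComparator();
    byte[] a = {0x01, (byte) 0xFF};   // 0xFF compares as 255 (unsigned)
    byte[] b = {0x01, 0x02};
    System.out.println(cmp.compare(a, b) > 0);   // prints "true": a sorts after b
  }
}
{code}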





How to call Shell Script from Hive JDBC

2012-04-17 Thread Bhavesh Shah
Hi all,

I have implemented a task in Hive.

But I need to call a Shell Script in which I have written SQOOP commands
for importing the tables in Hive from SQL Server.

I tried to call a shell script from one of the demo applications, but when the
program runs, no action takes place. I just see a blank console and a
message which I have printed.

Is it possible to call the shell script for importing and exporting the
tables in Hive?

Do I need to do something extra in the case of Hive? Please help me out with
this.
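
One common approach from plain Java (a sketch only; the script path is a placeholder and nothing here is Hive-specific) is to launch the script with ProcessBuilder, consume its output, and wait for it to finish before opening the Hive JDBC connection:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RunImportScript {
  public static void main(String[] args) throws Exception {
    // Placeholder path to the shell script that wraps the SQOOP import commands.
    ProcessBuilder pb = new ProcessBuilder("/bin/bash", "/path/to/sqoop_import.sh");
    pb.redirectErrorStream(true);                 // fold stderr into stdout
    Process p = pb.start();
    BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line;
    while ((line = r.readLine()) != null) {       // surface the script's output
      System.out.println(line);
    }
    int exit = p.waitFor();                       // block until the import finishes
    if (exit != 0) {
      throw new IllegalStateException("Import script exited with code " + exit);
    }
    // ...then open the Hive JDBC connection and query the imported tables.
  }
}
{code}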

Thanks.


-- 
Regards,
Bhavesh Shah


[jira] [Commented] (HIVE-2953) ReflectionStructObjectInspector support for getters and setters

2012-04-17 Thread Jaka Jancar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255515#comment-13255515
 ] 

Jaka Jancar commented on HIVE-2953:
---

I might have a go at this myself. Does anyone see any problems with doing this?

 ReflectionStructObjectInspector support for getters and setters
 ---

 Key: HIVE-2953
 URL: https://issues.apache.org/jira/browse/HIVE-2953
 Project: Hive
  Issue Type: Wish
Affects Versions: 0.8.1
Reporter: Jaka Jancar

 It would be great if ReflectionStructObjectInspector could also support using 
 getters, not just fields.
 Additionally, it would also be nice to be able to limit it to only use public 
 fields/getters.
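
As a concrete illustration (a hypothetical bean, not Hive code), this is the kind of class that exposes its state only through getters and therefore cannot be inspected via fields alone:

{code}
// A hypothetical bean: its state is reachable only through getters, so a
// purely field-based ReflectionStructObjectInspector cannot expose it.
public class Employee {
  private final String name;    // private fields, no public access
  private final int deptNo;

  public Employee(String name, int deptNo) {
    this.name = name;
    this.deptNo = deptNo;
  }

  // Getter-based access that the requested enhancement would inspect instead.
  public String getName() { return name; }
  public int getDeptNo()  { return deptNo; }
}
{code}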





[jira] [Work started] (HIVE-2957) JDBC getColumns() fails on a TIMESTAMP column

2012-04-17 Thread Bharath Ganesh (Work started) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-2957 started by Bharath Ganesh.

 JDBC getColumns() fails on a TIMESTAMP column
 -

 Key: HIVE-2957
 URL: https://issues.apache.org/jira/browse/HIVE-2957
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.8.1, 0.9.0
Reporter: Bharath Ganesh
Assignee: Bharath Ganesh
Priority: Minor

 Steps to replicate:
 1. Create a table with at least one column of type TIMESTAMP
 2. Do a DatabaseMetaData.getColumns() such that this TIMESTAMP column is 
 part of the result set.
 3. When you iterate over the TIMESTAMP column, it fails, throwing the 
 exception below:
 Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
 timestamp
   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
   at 
 org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)
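
A sketch of the kind of mapping fix the stack trace suggests (not the actual patch; the surrounding cases are abbreviated):

{code}
// Sketch only, not the attached patch: map Hive's "timestamp" type name to
// java.sql.Types.TIMESTAMP so getSqlType()/getColumns() no longer throws
// "Unrecognized column type: timestamp".
static int hiveTypeToSqlType(String type) throws java.sql.SQLException {
  if ("string".equalsIgnoreCase(type)) {
    return java.sql.Types.VARCHAR;
  } else if ("int".equalsIgnoreCase(type)) {
    return java.sql.Types.INTEGER;
  } else if ("timestamp".equalsIgnoreCase(type)) {
    return java.sql.Types.TIMESTAMP;             // the missing mapping
  }
  // ...double, boolean, etc. elided...
  throw new java.sql.SQLException("Unrecognized column type: " + type);
}
{code}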





[jira] [Updated] (HIVE-2703) ResultSetMetaData.getColumnType() always returns VARCHAR(string) for partition columns irrespective of partition column type

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2703:
--

Attachment: HIVE-2703.D2829.1.patch

tamtam180 requested code review of HIVE-2703 [jira] 
ResultSetMetaData.getColumnType() always returns VARCHAR(string) for partition 
columns irrespective of partition column type.
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HIVE-2703



  ResultSetMetaData.getColumnType() always returns VARCHAR(string) as column 
type, no matter what the column type is for the partition column.

  However, DatabaseMetaData.getColumnType() returns the correct type.


  Create a table with a partition column having a type other than string, and 
you will see that ResultSetMetaData.getColumnType() always returns string as 
the type for int or boolean or float columns...
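
A minimal JDBC reproduction sketch (the connection URL, table and column names are placeholders) contrasting the two metadata paths described above:

{code}
import java.sql.*;

public class PartitionColumnTypeRepro {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");   // HiveServer1-era driver
    // Placeholder URL, table and partition column names.
    Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");

    // DatabaseMetaData reports the declared partition-column type...
    ResultSet cols = con.getMetaData().getColumns(null, "default", "my_part_table", "part_int");
    while (cols.next()) {
      System.out.println("getColumns(): " + cols.getInt("DATA_TYPE"));       // e.g. Types.INTEGER
    }

    // ...but ResultSetMetaData over a query on the same column reports VARCHAR.
    ResultSet rs = con.createStatement()
        .executeQuery("SELECT part_int FROM my_part_table LIMIT 1");
    System.out.println("getColumnType(): " + rs.getMetaData().getColumnType(1));  // reported as VARCHAR
    con.close();
  }
}
{code}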

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2829

AFFECTED FILES
  jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6423/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 ResultSetMetaData.getColumnType() always returns VARCHAR(string) for 
 partition columns irrespective of partition column type
 

 Key: HIVE-2703
 URL: https://issues.apache.org/jira/browse/HIVE-2703
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.8.0
Reporter: Mythili Gopalakrishnan
Priority: Critical
 Attachments: HIVE-2703.D2829.1.patch


 ResultSetMetaData.getColumnType() always returns VARCHAR(string) as column 
 type, no matter what the column type is for the partition column.
 However, DatabaseMetaData.getColumnType() returns the correct type. 
 Create a table with a partition column having a type other than string, and 
 you will see that ResultSetMetaData.getColumnType() always returns string as 
 the type for int or boolean or float columns...





[jira] [Updated] (HIVE-2703) ResultSetMetaData.getColumnType() always returns VARCHAR(string) for partition columns irrespective of partition column type

2012-04-17 Thread tamtam180 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tamtam180 updated HIVE-2703:


Status: Patch Available  (was: Open)

I have attached a patch.
Could someone review it?

 ResultSetMetaData.getColumnType() always returns VARCHAR(string) for 
 partition columns irrespective of partition column type
 

 Key: HIVE-2703
 URL: https://issues.apache.org/jira/browse/HIVE-2703
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.8.0
Reporter: Mythili Gopalakrishnan
Priority: Critical
 Attachments: HIVE-2703.D2829.1.patch


 ResultSetMetaData.getColumnType() always returns VARCHAR(string) as column 
 type, no matter what the column type is for the partition column.
 However, DatabaseMetaData.getColumnType() returns the correct type. 
 Create a table with a partition column having a type other than string, and 
 you will see that ResultSetMetaData.getColumnType() always returns string as 
 the type for int or boolean or float columns...





[jira] [Created] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-17 Thread Tim Robertson (Created) (JIRA)
GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]
---

 Key: HIVE-2958
 URL: https://issues.apache.org/jira/browse/HIVE-2958
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.9.0
 Environment: HBase 0.90.4, Hive 0.90 snapshot (trunk) built today
Reporter: Tim Robertson
Priority: Blocker


This relates to HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence ( 
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences", 
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:
  SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY 
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Processing alias tim_hbase_occurrence for file 
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 
0 forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more





[jira] [Updated] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-17 Thread Tim Robertson (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Robertson updated HIVE-2958:


Description: 
This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

{code}
CREATE EXTERNAL TABLE tim_hbase_occurrence ( 
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences", 
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;
{code}

However, the following fails:
{code}
SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY 
data_resource_id;
{code}

The error given:
{code}
0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Processing alias tim_hbase_occurrence for file 
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 
0 forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more
{code}



  was:
This relates to 1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence ( 
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences", 
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence 

[jira] [Updated] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2930:
--

Attachment: HIVE-2930.D2835.1.patch

omalley requested code review of HIVE-2930 [jira] Add license to the Hive 
files.
Reviewers: JIRA

  Enter Revision Title

  Fixing rat warnings

  We need to clean up the RAT report to 0. Apache projects aren't supposed to 
release until they have shown that all of their files have proper headers.

  Note that although most of the files are just missing headers, some of them 
explicitly say copyright by facebook and released under the Thrift (not Apache) 
license. I'll generate a list of them, but I'd really appreciate it if someone 
from facebook could verify that they intend to license them to Apache.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2835

AFFECTED FILES
  .checkstyle
  build.properties
  build.xml
  common/src/java/conf/hive-log4j.properties
  conf/configuration.xsl
  conf/hive-env.sh.template
  data/conf/hive-log4j.properties
  data/scripts/cat.py
  data/scripts/cat_error.py
  data/scripts/dumpdata_script.py
  data/scripts/error_script
  data/scripts/input20_script
  docs/velocity.properties
  eclipse-templates/.classpath
  eclipse-templates/.classpath._hbase
  eclipse-templates/.externalToolBuilders/Hive_Ant_Builder.launch
  eclipse-templates/.project
  eclipse-templates/.settings/org.eclipse.jdt.core.prefs
  eclipse-templates/.settings/org.eclipse.jdt.ui.prefs
  eclipse-templates/HiveCLI.launchtemplate
  eclipse-templates/TestCliDriver.launchtemplate
  eclipse-templates/TestEmbeddedHiveMetaStore.launchtemplate
  eclipse-templates/TestHBaseCliDriver.launchtemplate
  eclipse-templates/TestHive.launchtemplate
  eclipse-templates/TestHiveMetaStoreChecker.launchtemplate
  eclipse-templates/TestJdbc.launchtemplate
  eclipse-templates/TestMTQueries.launchtemplate
  eclipse-templates/TestRemoteHiveMetaStore.launchtemplate
  eclipse-templates/TestTruncate.launchtemplate
  hbase-handler/src/test/templates/TestHBaseCliDriver.vm
  metastore/if/hive_metastore.thrift
  metastore/scripts/hive.metastore_ctrl
  metastore/scripts/hive.metastore_daemon
  metastore/scripts/upgrade/001-HIVE-2795.update_view_partitions.py
  metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
  metastore/src/model/package.jdo
  metastore/src/test/org/apache/hadoop/hive/metastore/DummyPreListener.java
  
metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteUGIHiveMetaStoreIpAddress.java
  ql/if/queryplan.thrift
  ql/src/java/conf/hive-exec-log4j.properties
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/mapredplan.jr
  ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
  ql/src/test/org/apache/hadoop/hive/ql/exec/sample_plan.xml
  ql/src/test/queries/clientnegative/dyn_part_empty.q.disabled
  ql/src/test/scripts/testgrep
  ql/src/test/templates/TestCliDriver.vm
  ql/src/test/templates/TestNegativeCliDriver.vm
  ql/src/test/templates/TestParse.vm
  ql/src/test/templates/TestParseNegative.vm
  serde/if/serde.thrift
  serde/if/test/complex.thrift
  serde/if/test/complexpb.proto
  serde/if/test/testthrift.thrift
  
serde/src/java/org/apache/hadoop/hive/serde2/columnar/LazyBinaryColumnarSerDe.java
  serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/thrift_grammar.jjt
  service/lib/php/ext/thrift_protocol/config.m4
  service/lib/php/ext/thrift_protocol/tags/1.0.0/config.m4
  service/lib/php/ext/thrift_protocol/tags/1.0.0/php_thrift_protocol.cpp
  service/lib/php/ext/thrift_protocol/tags/1.0.0/php_thrift_protocol.h
  service/lib/py/fb303/__init__.py
  service/lib/py/thrift/protocol/fastbinary.c
  service/lib/py/thrift/reflection/__init__.py
  service/lib/py/thrift/reflection/limited/__init__.py
  service/src/test/php/test_service.php
  testutils/compute_stats
  testutils/dump_schema
  testutils/run_tests

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6429/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright 

[jira] [Updated] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Owen O'Malley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-2930:


Status: Patch Available  (was: Open)

This doesn't fix the SQL files, but it fixes the rest of them. It passes unit 
tests.

 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright by facebook and released under the Thrift (not 
 Apache) license. I'll generate a list of them, but I'd really appreciate it 
 if someone from facebook could verify that they intend to license them to 
 Apache.





[jira] [Resolved] (HIVE-2530) Implement SHOW TBLPROPERTIES

2012-04-17 Thread Kevin Wilfong (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong resolved HIVE-2530.
-

Resolution: Fixed

Committed, thanks Lei.

 Implement SHOW TBLPROPERTIES
 

 Key: HIVE-2530
 URL: https://issues.apache.org/jira/browse/HIVE-2530
 Project: Hive
  Issue Type: New Feature
Reporter: Adam Kramer
Assignee: Lei Zhao
Priority: Minor
 Attachments: HIVE-2530.D2589.1.patch, HIVE-2530.D2589.2.patch, 
 HIVE-2530.D2589.3.patch


 Since table properties can be defined arbitrarily, they should be easy for a 
 user to query from the command-line.
 SHOW TBLPROPERTIES tblname;
 ...would show all of them, one per row, key \t value
 SHOW TBLPROPERTIES tblname (FOOBAR);
 ...would just show the value for the FOOBAR tblproperty.





[jira] [Commented] (HIVE-2883) Metastore client doesnt close connection properly

2012-04-17 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255747#comment-13255747
 ] 

Ashutosh Chauhan commented on HIVE-2883:


Hive Committers,

This has been independently verified by a couple of other folks. Can someone 
review it for me?

All tests passed with the patch.

 Metastore client doesnt close connection properly
 -

 Key: HIVE-2883
 URL: https://issues.apache.org/jira/browse/HIVE-2883
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.9.0

 Attachments: HIVE-2883.D2613.1.patch


 While closing the connection, it always fails with the following trace. 
 Seemingly, it doesn't have any harmful effects.
 {code}
 12/03/20 10:55:02 ERROR hive.metastore: Unable to shutdown local metastore 
 client
 org.apache.thrift.transport.TTransportException: Cannot write to null 
 outputStream
   at 
 org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
   at 
 com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:421)
   at 
 com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:415)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:310)
 {code}
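
One possible shape of a fix (a sketch only, not necessarily what the attached patch does; the transport, client, and LOG names follow the usual shape of HiveMetaStoreClient but are assumptions here): skip the fb303 shutdown call once the transport is no longer open, and log the failure quietly.

{code}
// Sketch only: avoid calling fb303 shutdown() on an already-closed transport
// (the "Cannot write to null outputStream" case) and downgrade the log level.
public void close() {
  try {
    if (transport != null && transport.isOpen()) {
      client.shutdown();                    // fb303 shutdown over an open transport
    }
  } catch (TException e) {
    LOG.debug("Unable to shutdown local metastore client", e);
  } finally {
    if (transport != null) {
      transport.close();
    }
  }
}
{code}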





[jira] [Commented] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Sohan Jain (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255759#comment-13255759
 ] 

Sohan Jain commented on HIVE-2930:
--

I intended to license my patch for HIVE-2246 to the Apache Software Foundation. 
Thanks!

 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright by facebook and released under the Thrift (not 
 Apache) license. I'll generate a list of them, but I'd really appreciate it 
 if someone from facebook could verify that they intend to license them to 
 Apache.





[jira] [Created] (HIVE-2959) TestRemoteHiveMetaStoreIpAddress always uses the same port

2012-04-17 Thread Kevin Wilfong (Created) (JIRA)
TestRemoteHiveMetaStoreIpAddress always uses the same port
--

 Key: HIVE-2959
 URL: https://issues.apache.org/jira/browse/HIVE-2959
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong


TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if 
another process happens to be using that port, the tests cannot succeed.

There seems to be a standard way of finding a free port using Java's 
ServerSocket class; this should be used instead.
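
For reference, a minimal sketch of the standard ServerSocket trick mentioned above:

{code}
import java.net.ServerSocket;

public class FreePortFinder {
  public static void main(String[] args) throws Exception {
    ServerSocket socket = new ServerSocket(0);   // port 0 = let the OS pick a free port
    int freePort = socket.getLocalPort();
    socket.close();                              // release it for the metastore to bind
    System.out.println("Start the test metastore on port " + freePort);
    // A small race remains between close() and the metastore binding the port,
    // which is the usual trade-off of this trick.
  }
}
{code}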





[jira] [Updated] (HIVE-2702) listPartitionsByFilter only supports string partitions

2012-04-17 Thread Thomas Weise (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2702:
---

Summary: listPartitionsByFilter only supports string partitions  (was: 
listPartitionsByFilter only supports non-string partitions)

 listPartitionsByFilter only supports string partitions
 --

 Key: HIVE-2702
 URL: https://issues.apache.org/jira/browse/HIVE-2702
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Aniket Mokashi
Assignee: Aniket Mokashi
 Attachments: HIVE-2702.1.patch, HIVE-2702.D2043.1.patch


 listPartitionsByFilter supports only string partitions. This is because it is 
 explicitly specified in generateJDOFilterOverPartitions in 
 ExpressionTree.java: 
 // Can only support partitions whose types are string
 if (!table.getPartitionKeys().get(partitionColumnIndex)
     .getType().equals(org.apache.hadoop.hive.serde.Constants.STRING_TYPE_NAME)) {
   throw new MetaException(
       "Filtering is supported only on partition keys of type string");
 }





[jira] [Updated] (HIVE-2959) TestRemoteHiveMetaStoreIpAddress always uses the same port

2012-04-17 Thread Kevin Wilfong (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2959:


Status: Patch Available  (was: Open)

 TestRemoteHiveMetaStoreIpAddress always uses the same port
 --

 Key: HIVE-2959
 URL: https://issues.apache.org/jira/browse/HIVE-2959
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-2959.D2841.1.patch


 TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if 
 another process happens to be using that port, the tests cannot succeed.
 There seems to be a standard way of finding a free port using Java's 
 ServerSocket class; this should be used instead.





[jira] [Commented] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255776#comment-13255776
 ] 

Phabricator commented on HIVE-2930:
---

ashutoshc has accepted the revision HIVE-2930 [jira] Add license to the Hive 
files.

  +1 will commit soon.

REVISION DETAIL
  https://reviews.facebook.net/D2835

BRANCH
  h-2930


 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright by facebook and released under the Thrift (not 
 Apache) license. I'll generate a list of them, but I'd really appreciate it 
 if someone from facebook could verify that they intend to license them to 
 Apache.





[jira] [Commented] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255783#comment-13255783
 ] 

Phabricator commented on HIVE-2930:
---

omalley has committed the revision HIVE-2930 [jira] Add license to the Hive 
files.

  Change committed by hashutosh.

REVISION DETAIL
  https://reviews.facebook.net/D2835

COMMIT
  https://reviews.facebook.net/rHIVE1327205


 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright by facebook and released under the Thrift (not 
 Apache) license. I'll generate a list of them, but I'd really appreciate it 
 if someone from facebook could verify that they intend to license them to 
 Apache.





[jira] [Updated] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2930:
---

   Resolution: Fixed
Fix Version/s: 0.9.0
   Status: Resolved  (was: Patch Available)

Thanks Owen for fixing this. Committed to 0.9 and trunk.

 Add license to the Hive files
 -

 Key: HIVE-2930
 URL: https://issues.apache.org/jira/browse/HIVE-2930
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.9.0

 Attachments: HIVE-2930.D2835.1.patch


 We need to clean up the RAT report to 0. Apache projects aren't supposed to 
 release until they have shown that all of their files have proper headers.
 Note that although most of the files are just missing headers, some of them 
 explicitly say copyright by facebook and released under the Thrift (not 
 Apache) license. I'll generate a list of them, but I'd really appreciate it 
 if someone from facebook could verify that they intend to license them to 
 Apache.





[jira] [Commented] (HIVE-538) make hive_jdbc.jar self-containing

2012-04-17 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255822#comment-13255822
 ] 

Phabricator commented on HIVE-538:
--

ashutoshc has commented on the revision HIVE-538 [jira] make hive_jdbc.jar 
self-containing.

INLINE COMMENTS
  build.xml:1262 I don't see any advantage of it. But since it won't make a 
difference, I will do it in any case.

REVISION DETAIL
  https://reviews.facebook.net/D2553


 make hive_jdbc.jar self-containing
 --

 Key: HIVE-538
 URL: https://issues.apache.org/jira/browse/HIVE-538
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.3.0, 0.4.0, 0.6.0
Reporter: Raghotham Murthy
Assignee: Ashutosh Chauhan
 Attachments: HIVE-538.D2553.1.patch, HIVE-538.D2553.2.patch


 Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are 
 required in the classpath to run JDBC applications on Hive. We need to do 
 at least the following to get rid of most unnecessary dependencies:
 1. get rid of dynamic serde and use a standard serialization format, maybe 
 tab separated, JSON or Avro
 2. don't use Hadoop configuration parameters
 3. repackage thrift and fb303 classes into hive_jdbc.jar
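
For context, a minimal client of the kind this issue wants to support with hive_jdbc.jar alone on the classpath (a sketch; the driver class and URL are the HiveServer1-era ones, and the host and query are placeholders):

{code}
import java.sql.*;

public class MinimalHiveJdbcClient {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");   // HiveServer1-era driver
    Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SHOW TABLES");           // placeholder query
    while (rs.next()) {
      System.out.println(rs.getString(1));
    }
    con.close();
  }
}
{code}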





[jira] [Updated] (HIVE-538) make hive_jdbc.jar self-containing

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-538:
-

Attachment: HIVE-538.D2553.2.patch

ashutoshc updated the revision HIVE-538 [jira] make hive_jdbc.jar 
self-containing.
Reviewers: JIRA

  Addressing Namit's comments.
  Rebased to trunk.

REVISION DETAIL
  https://reviews.facebook.net/D2553

AFFECTED FILES
  build.xml
  ivy/libraries.properties
  ivy.xml


 make hive_jdbc.jar self-containing
 --

 Key: HIVE-538
 URL: https://issues.apache.org/jira/browse/HIVE-538
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.3.0, 0.4.0, 0.6.0
Reporter: Raghotham Murthy
Assignee: Ashutosh Chauhan
 Attachments: HIVE-538.D2553.1.patch, HIVE-538.D2553.2.patch


 Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are 
 required in the classpath to run JDBC applications on Hive. We need to do 
 at least the following to get rid of most unnecessary dependencies:
 1. get rid of dynamic serde and use a standard serialization format, maybe 
 tab separated, JSON or Avro
 2. don't use Hadoop configuration parameters
 3. repackage thrift and fb303 classes into hive_jdbc.jar





[jira] [Commented] (HIVE-2946) Hive metastore does not have any log messages while shutting itself down.

2012-04-17 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256054#comment-13256054
 ] 

Phabricator commented on HIVE-2946:
---

ashutoshc has accepted the revision HIVE-2946 [jira] Hive metastore does not 
have any log messages while shutting itself down..

  +1 will commit soon.

REVISION DETAIL
  https://reviews.facebook.net/D2787

BRANCH
  svn


 Hive metastore does not have any log messages while shutting itself down. 
 --

 Key: HIVE-2946
 URL: https://issues.apache.org/jira/browse/HIVE-2946
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: metastore
 Attachments: HIVE-2946.D2745.1.patch, HIVE-2946.D2787.1.patch


 When the Hive metastore is started, the event is logged with a good amount of 
 information. But when it is stopped (using a kill command), no information 
 is written to the logs. It would be good if we could add a shutdown event 
 to the HiveMetastore class. 
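
A minimal sketch of one way to log such an event (a JVM shutdown hook; the logger is illustrative, and a plain kill/SIGTERM runs shutdown hooks while kill -9 does not):

{code}
// Sketch only: register a JVM shutdown hook so a plain kill (SIGTERM) still
// produces a log line when the metastore process is stopped.
Runtime.getRuntime().addShutdownHook(new Thread() {
  @Override
  public void run() {
    LOG.info("Shutting down hive metastore.");   // illustrative logger
  }
});
{code}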





[jira] [Resolved] (HIVE-2946) Hive metastore does not have any log messages while shutting itself down.

2012-04-17 Thread Ashutosh Chauhan (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-2946.


   Resolution: Fixed
Fix Version/s: 0.9.0

Committed to trunk and 0.9. Thanks, Vandana!

 Hive metastore does not have any log messages while shutting itself down. 
 --

 Key: HIVE-2946
 URL: https://issues.apache.org/jira/browse/HIVE-2946
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: metastore
 Fix For: 0.9.0

 Attachments: HIVE-2946.D2745.1.patch, HIVE-2946.D2787.1.patch


 When the Hive metastore is started, the event is logged with a good amount of 
 information. But when it is stopped (using a kill command), no information 
 is written to the logs. It would be good if we could add a shutdown event 
 to the HiveMetastore class. 





[jira] [Commented] (HIVE-2946) Hive metastore does not have any log messages while shutting itself down.

2012-04-17 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256060#comment-13256060
 ] 

Phabricator commented on HIVE-2946:
---

avandana has committed the revision HIVE-2946 [jira] Hive metastore does not 
have any log messages while shutting itself down..

  Change committed by hashutosh.

REVISION DETAIL
  https://reviews.facebook.net/D2787

COMMIT
  https://reviews.facebook.net/rHIVE1327323


 Hive metastore does not have any log messages while shutting itself down. 
 --

 Key: HIVE-2946
 URL: https://issues.apache.org/jira/browse/HIVE-2946
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
  Labels: metastore
 Fix For: 0.9.0

 Attachments: HIVE-2946.D2745.1.patch, HIVE-2946.D2787.1.patch


 When the Hive metastore is started, the event is logged with a good amount of 
 information. But when it is stopped (using a kill command), no information 
 is written to the logs. It would be good if we could add a shutdown event 
 to the HiveMetastore class. 





[jira] [Updated] (HIVE-2924) Clean up warnings in RCFile

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2924:
--

Attachment: HIVE-2924.D2859.1.patch

omalley requested code review of HIVE-2924 [jira] Clean up warnings in RCFile.
Reviewers: JIRA

  Enter Revision Title

  Cleanup compilation warnings in rcfile.

  Currently we get a couple of warnings in compiling RCFile and I want to clean 
them up.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2859

AFFECTED FILES
  cli/src/java/org/apache/hadoop/hive/cli/RCFileCat.java
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java
  
ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/RCFileValueBufferWrapper.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6495/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Clean up warnings in RCFile
 ---

 Key: HIVE-2924
 URL: https://issues.apache.org/jira/browse/HIVE-2924
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-2924.D2859.1.patch


 Currently we get a couple of warnings in compiling RCFile and I want to clean 
 them up.





[jira] [Created] (HIVE-2960) Stop testing concat of partitions containing control characters.

2012-04-17 Thread Kevin Wilfong (Created) (JIRA)
Stop testing concat of partitions containing control characters.


 Key: HIVE-2960
 URL: https://issues.apache.org/jira/browse/HIVE-2960
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong


We have been, for a short while, testing to make sure that concatenation 
commands work with partitions that contain ASCII control characters.  This 
happened to work up until recently due to a happy coincidence in the way the 
Hive object's HiveConf was updated.  Namely, it was updated often enough that 
it got configs set by the user, but not so often that it got the value for 
hive.query.string.  With some recent changes, it now needs to be updated more 
often, see https://issues.apache.org/jira/browse/HIVE-2918

This breaks the process of launching a job to merge partitions that contain 
ASCII control characters.  The job conf is constructed using the updated Hive 
conf containing the value of hive.query.string which contains ASCII control 
characters.  When the job conf is converted to XML it fails because these 
characters are illegal.  Given that any query has, even prior to this change, 
failed when that query contained ASCII control characters, and hence these 
partitions cannot be queried directly, it seems reasonable to no longer support 
concatenating them either (which this change will allow for).
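
To make the failure mode concrete, here is a small illustrative check (not code 
from Hive) of which characters XML 1.0 permits; a hive.query.string containing 
an ASCII control character such as \u0001 fails it, which is why the merge job 
conf cannot be written as XML:
{code}
public class XmlCharCheck {

  // XML 1.0 allows only #x9, #xA and #xD below #x20; all other ASCII control
  // characters cannot appear anywhere in a well-formed XML document.
  static boolean isLegalXmlChar(char c) {
    return c == 0x9 || c == 0xA || c == 0xD
        || (c >= 0x20 && c <= 0xD7FF)
        || (c >= 0xE000 && c <= 0xFFFD);
  }

  public static void main(String[] args) {
    String queryWithControlChar = "SELECT * FROM t WHERE k = '\u0001'";
    for (char c : queryWithControlChar.toCharArray()) {
      if (!isLegalXmlChar(c)) {
        System.out.println("Illegal XML character: 0x" + Integer.toHexString(c));
      }
    }
  }
}
{code}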

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2960) Stop testing concat of partitions containing control characters.

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2960:
--

Attachment: HIVE-2960.D2865.1.patch

kevinwilfong requested code review of HIVE-2960 [jira] Stop testing concat of 
partitions containing control characters..
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HIVE-2960

  Remove the testcases for concatenating partitions containing control 
characters.

  We have been, for a short while, testing to make sure that concatenation 
commands work with partitions that contain ASCII control characters.  This 
happened to work up until recently due to a happy coincidence in the way the 
Hive object's HiveConf was updated.  Namely, it was updated often enough that 
it got configs set by the user, but not so often that it got the value for 
hive.query.string.  With some recent changes, it now needs to be updated more 
often, see https://issues.apache.org/jira/browse/HIVE-2918

  This breaks the process of launching a job to merge partitions that contain 
ASCII control characters.  The job conf is constructed using the updated Hive 
conf containing the value of hive.query.string which contains ASCII control 
characters.  When the job conf is converted to XML it fails because these 
characters are illegal.  Given that any query has, even prior to this change, 
failed when that query contained ASCII control characters, and hence these 
partitions cannot be queried directly, it seems reasonable to no longer support 
concatenating them either (which this change will allow for).

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2865

AFFECTED FILES
  ql/src/test/results/clientpositive/escape2.q.out
  ql/src/test/queries/clientpositive/escape2.q

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6501/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Stop testing concat of partitions containing control characters.
 

 Key: HIVE-2960
 URL: https://issues.apache.org/jira/browse/HIVE-2960
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-2960.D2865.1.patch


 We have been, for a short while, testing to make sure that concatenation 
 commands work with partitions that contain ASCII control characters.  This 
 happened to work up until recently due to a happy coincidence in the way the 
 Hive object's HiveConf was updated.  Namely, it was updated often enough that 
 it got configs set by the user, but not so often that it got the value for 
 hive.query.string.  With some recent changes, it now needs to be updated more 
 often, see https://issues.apache.org/jira/browse/HIVE-2918
 This breaks the process of launching a job to merge partitions that contain 
 ASCII control characters.  The job conf is constructed using the updated Hive 
 conf containing the value of hive.query.string which contains ASCII control 
 characters.  When the job conf is converted to XML it fails because these 
 characters are illegal.  Given that any query has, even prior to this change, 
 failed when that query contained ASCII control characters, and hence these 
 partitions cannot be queried directly, it seems reasonable to no longer 
 support concatenating them either (which this change will allow for).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2960) Stop testing concat of partitions containing control characters.

2012-04-17 Thread Kevin Wilfong (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2960:


Attachment: escape2.q.out

 Stop testing concat of partitions containing control characters.
 

 Key: HIVE-2960
 URL: https://issues.apache.org/jira/browse/HIVE-2960
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-2960.D2865.1.patch, escape2.q.out


 We have been, for a short while, testing to make sure that concatenation 
 commands work with partitions that contain ASCII control characters.  This 
 happened to work up until recently due to a happy coincidence in the way the 
 Hive object's HiveConf was updated.  Namely, it was updated often enough that 
 it got configs set by the user, but not so often that it got the value for 
 hive.query.string.  With some recent changes, it now needs to be updated more 
 often, see https://issues.apache.org/jira/browse/HIVE-2918
 This breaks the process of launching a job to merge partitions that contain 
 ASCII control characters.  The job conf is constructed using the updated Hive 
 conf containing the value of hive.query.string which contains ASCII control 
 characters.  When the job conf is converted to XML it fails because these 
 characters are illegal.  Given that any query has, even prior to this change, 
 failed when that query contained ASCII control characters, and hence these 
 partitions cannot be queried directly, it seems reasonable to no longer 
 support concatenating them either (which this change will allow for).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2960) Stop testing concat of partitions containing control characters.

2012-04-17 Thread Kevin Wilfong (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256078#comment-13256078
 ] 

Kevin Wilfong commented on HIVE-2960:
-

Attached escape2.q.out because the diff for some reason thought it was a binary 
file.

 Stop testing concat of partitions containing control characters.
 

 Key: HIVE-2960
 URL: https://issues.apache.org/jira/browse/HIVE-2960
 Project: Hive
  Issue Type: Test
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-2960.D2865.1.patch, escape2.q.out


 We have been, for a short while, testing to make sure that concatenation 
 commands work with partitions that contain ASCII control characters.  This 
 happened to work up until recently due to a happy coincidence in the way the 
 Hive object's HiveConf was updated.  Namely, it was updated often enough that 
 it got configs set by the user, but not so often that it got the value for 
 hive.query.string.  With some recent changes, it now needs to be updated more 
 often, see https://issues.apache.org/jira/browse/HIVE-2918
 This breaks the process of launching a job to merge partitions that contain 
 ASCII control characters.  The job conf is constructed using the updated Hive 
 conf containing the value of hive.query.string which contains ASCII control 
 characters.  When the job conf is converted to XML it fails because these 
 characters are illegal.  Given that any query has, even prior to this change, 
 failed when that query contained ASCII control characters, and hence these 
 partitions cannot be queried directly, it seems reasonable to no longer 
 support concatenating them either (which this change will allow for).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-17 Thread Navis (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-2958:
---

Assignee: Navis

 GROUP BY causing ClassCastException [LazyDioInteger cannot be cast 
 LazyInteger]
 ---

 Key: HIVE-2958
 URL: https://issues.apache.org/jira/browse/HIVE-2958
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.9.0
 Environment: HBase 0.90.4, Hive 0.90 snapshot (trunk) built today
Reporter: Tim Robertson
Assignee: Navis
Priority: Blocker

 This relates to https://issues.apache.org/jira/browse/HIVE-1634.
 The following work fine:
 {code}
 CREATE EXTERNAL TABLE tim_hbase_occurrence ( 
   id int,
   scientific_name string,
   data_resource_id int
 ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
 SERDEPROPERTIES (
   "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
 ) TBLPROPERTIES(
   "hbase.table.name" = "mini_occurrences", 
   "hbase.table.default.storage.type" = "binary"
 );
 SELECT * FROM tim_hbase_occurrence LIMIT 3;
 SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;
 {code}
 However, the following fails:
 {code}
 SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY 
 data_resource_id;
 {code}
 The error given:
 {code}
 0 TS
 2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Initialization Done 7 MAP
 2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Processing alias tim_hbase_occurrence for file 
 hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
 2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 
 forwarding 1 rows
 2012-04-17 16:58:45,714 INFO 
 org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 forwarding 1 rows
 2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 
 forwarding 1 rows
 2012-04-17 16:58:45,723 FATAL ExecMapper: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing row {id:1444,scientific_name:null,data_resource_id:1081}
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
   at org.apache.hadoop.mapred.Child.main(Child.java:264)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
 org.apache.hadoop.hive.serde2.lazy.LazyInteger
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
   ... 9 more
 Caused by: java.lang.ClassCastException: 
 org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
 org.apache.hadoop.hive.serde2.lazy.LazyInteger
   at 
 org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
   at 
 org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
   

[jira] [Updated] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2958:
--

Attachment: HIVE-2958.D2871.1.patch

navis requested code review of HIVE-2958 [jira] GROUP BY causing 
ClassCastException [LazyDioInteger cannot be cast LazyInteger].
Reviewers: JIRA

  DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast 
LazyInteger]

  This relates to https://issues.apache.org/jira/browse/HIVE-1634.

  The following work fine:

  CREATE EXTERNAL TABLE tim_hbase_occurrence (
id int,
scientific_name string,
data_resource_id int
  ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
SERDEPROPERTIES (
    "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
  ) TBLPROPERTIES(
    "hbase.table.name" = "mini_occurrences",
    "hbase.table.default.storage.type" = "binary"
  );
  SELECT * FROM tim_hbase_occurrence LIMIT 3;
  SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

  However, the following fails:

  SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY 
data_resource_id;

  The error given:

  0 TS
  2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Initialization Done 7 MAP
  2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
Processing alias tim_hbase_occurrence for file 
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
  2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 
forwarding 1 rows
  2012-04-17 16:58:45,714 INFO 
org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 forwarding 1 rows
  2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 
forwarding 1 rows
  2012-04-17 16:58:45,723 FATAL ExecMapper: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
  Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
  Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at 
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at 
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more
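
As an aside on the trace above, an illustrative sketch (a general pattern, not 
the committed fix) of copying an int field through the PrimitiveObjectInspector 
interface instead of casting the raw object to a concrete lazy class, which is 
where LazyDioInteger vs LazyInteger breaks:
{code}
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;

public class CopyViaInspector {

  // Copy a primitive value via the inspector interface; this works for any
  // lazy/binary representation that implements it, so no cast to LazyInteger
  // (or any other concrete class) is needed.
  public static Object copyPrimitive(Object o, PrimitiveObjectInspector oi) {
    if (o == null) {
      return null;
    }
    // Returns a standard Java object (e.g. Integer) regardless of the backing
    // lazy implementation.
    return oi.getPrimitiveJavaObject(o);
  }
}
{code}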

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2871

AFFECTED FILES
  hbase-handler/src/test/queries/hbase_binary_external_table_queries.q
  

[jira] [Updated] (HIVE-2956) [hive] Provide error message when using UDAF in the place of UDF instead of throwing NPE

2012-04-17 Thread Navis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2956:


Status: Patch Available  (was: Open)

Passed all tests

 [hive] Provide error message when using UDAF in the place of UDF instead of 
 throwing NPE
 

 Key: HIVE-2956
 URL: https://issues.apache.org/jira/browse/HIVE-2956
 Project: Hive
  Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-2956.D2823.1.patch


 For example, 
 {code}
 hive> select distinct deptno, sum(deptno) from emp;
 FAILED: Hive Internal Error: java.lang.NullPointerException(null)
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7713)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:2793)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggr1MR(SemanticAnalyzer.java:3651)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6125)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6762)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
 {code}
 Trivial, but people always report this, confused by esoteric custom function 
 names.
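 
 For illustration, a sketch of the kind of guard being proposed (hypothetical 
 helper and message text, not the attached patch): check what the function name 
 resolved to before building the expression node, and raise a SemanticException 
 instead of letting a null GenericUDF reach newInstance:
 {code}
 import org.apache.hadoop.hive.ql.parse.SemanticException;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;

 public class UdfResolutionGuard {

   // Hypothetical guard: if the name resolved to an aggregate (so no GenericUDF
   // is available), fail with a readable message instead of passing null along
   // and hitting an NPE in ExprNodeGenericFuncDesc.newInstance later.
   public static GenericUDF requireUdf(String functionName, GenericUDF resolved)
       throws SemanticException {
     if (resolved == null) {
       throw new SemanticException(
           "'" + functionName + "' is an aggregate function (UDAF) and cannot be "
           + "used where a regular function (UDF) is expected.");
     }
     return resolved;
   }
 }
 {code}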

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2955) Queries consisting of metadata-only queries always return empty values

2012-04-17 Thread Navis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2955:


Status: Patch Available  (was: Open)

passed all tests

 Queries consisting of metadata-only queries always return empty values
 --

 Key: HIVE-2955
 URL: https://issues.apache.org/jira/browse/HIVE-2955
 Project: Hive
  Issue Type: Bug
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-2955.D2817.1.patch


 For a partitioned table, a simple query on a partition column always returns 
 null or an empty value, for example,
 {code}
 create table emppart(empno int, ename string) partitioned by (deptno int);
 .. load partitions..
 select distinct deptno from emppart; // empty
 select min(deptno), max(deptno) from emppart;  // NULL and NULL
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2954) The statement fails when a column part of an ORDER BY is not specified in the SELECT.

2012-04-17 Thread Navis (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-2954:
---

Assignee: Navis

 The statement fails when a column part of an ORDER BY is not specified in the 
 SELECT.
 -

 Key: HIVE-2954
 URL: https://issues.apache.org/jira/browse/HIVE-2954
 Project: Hive
  Issue Type: Improvement
  Components: SQL
Affects Versions: 0.8.1
Reporter: Mauro Cazzari
Assignee: Navis

 Given the following table:
 CREATE TABLE `DBCSTB32` (`aaa` DOUBLE,`bbb` STRING,`ccc` STRING,`ddd` DOUBLE) 
 ROW FORMAT
 DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
 The following statement fails:
  select TXT_1.`aaa`, TXT_1.`bbb` 
from `DBCSTB32` TXT_1 
   order by TXT_1.`bbb` asc, TXT_1.`aaa` asc, TXT_1.`ccc` asc
 ERROR: java.sql.SQLException: Query returned non-zero code: 10, cause: 
 FAILED: Error in
semantic analysis: Line 1:104 Invalid column reference '`ccc`'
 Adding `ccc` to the selected list of columns fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2530) Implement SHOW TBLPROPERTIES

2012-04-17 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256132#comment-13256132
 ] 

Hudson commented on HIVE-2530:
--

Integrated in Hive-trunk-h0.21 #1379 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1379/])
HIVE-2530. Implement SHOW TBLPROPERTIES. (leizhao via kevinwilfong) 
(Revision 1327189)

 Result = SUCCESS
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327189
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DDLWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ShowTblPropertiesDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/show_tblproperties.q
* /hive/trunk/ql/src/test/results/clientpositive/show_tblproperties.q.out


 Implement SHOW TBLPROPERTIES
 

 Key: HIVE-2530
 URL: https://issues.apache.org/jira/browse/HIVE-2530
 Project: Hive
  Issue Type: New Feature
Reporter: Adam Kramer
Assignee: Lei Zhao
Priority: Minor
 Attachments: HIVE-2530.D2589.1.patch, HIVE-2530.D2589.2.patch, 
 HIVE-2530.D2589.3.patch


 Since table properties can be defined arbitrarily, they should be easy for a 
 user to query from the command-line.
 SHOW TBLPROPERTIES tblname;
 ...would show all of them, one per row, key \t value
 SHOW TBLPROPERTIES tblname (FOOBAR);
 ...would just show the value for the FOOBAR tblproperty.
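 
 A small usage sketch over JDBC (assuming the HiveServer1-era driver and a 
 placeholder table name sample_table; adjust the URL and table to your setup), 
 printing the one-property-per-row output described above:
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.ResultSetMetaData;
 import java.sql.Statement;

 public class ShowTblPropertiesExample {
   public static void main(String[] args) throws Exception {
     Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
     Connection con =
         DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
     Statement stmt = con.createStatement();
     ResultSet rs = stmt.executeQuery("SHOW TBLPROPERTIES sample_table");
     ResultSetMetaData md = rs.getMetaData();
     while (rs.next()) {
       // Print whatever columns come back, tab-separated (key \t value).
       StringBuilder row = new StringBuilder();
       for (int i = 1; i <= md.getColumnCount(); i++) {
         if (i > 1) {
           row.append('\t');
         }
         row.append(rs.getString(i));
       }
       System.out.println(row);
     }
     rs.close();
     stmt.close();
     con.close();
   }
 }
 {code}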

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2930) Add license to the Hive files

2012-04-17 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256131#comment-13256131
 ] 

Hudson commented on HIVE-2930:
--

Integrated in Hive-trunk-h0.21 #1379 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1379/])
HIVE-2930 [jira] Add license to the Hive files
(Owen O'Malley via Ashutosh Chauhan)

Summary:
Enter Revision Title

Fixing rat warnings

We need to clean up the RAT report to 0. Apache projects aren't supposed to
release until they have shown that all of their files have proper headers.

Note that although most of the files are just missing headers, some of them
explicitly say copyright by facebook and released under the Thrift (not Apache)
license. I'll generate a list of them, but I'd really appreciate it if someone
from facebook could verify that they intend to license them to Apache.

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2835 (Revision 1327205)

 Result = SUCCESS
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327205
Files : 
* /hive/trunk/.checkstyle
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/common/src/java/conf/hive-log4j.properties
* /hive/trunk/conf/configuration.xsl
* /hive/trunk/conf/hive-env.sh.template
* /hive/trunk/data/conf/hive-log4j.properties
* /hive/trunk/data/scripts/cat.py
* /hive/trunk/data/scripts/cat_error.py
* /hive/trunk/data/scripts/dumpdata_script.py
* /hive/trunk/data/scripts/error_script
* /hive/trunk/data/scripts/input20_script
* /hive/trunk/docs/velocity.properties
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/eclipse-templates/.classpath._hbase
* /hive/trunk/eclipse-templates/.externalToolBuilders/Hive_Ant_Builder.launch
* /hive/trunk/eclipse-templates/.project
* /hive/trunk/eclipse-templates/.settings/org.eclipse.jdt.core.prefs
* /hive/trunk/eclipse-templates/.settings/org.eclipse.jdt.ui.prefs
* /hive/trunk/eclipse-templates/HiveCLI.launchtemplate
* /hive/trunk/eclipse-templates/TestCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestEmbeddedHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestHBaseCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestHive.launchtemplate
* /hive/trunk/eclipse-templates/TestHiveMetaStoreChecker.launchtemplate
* /hive/trunk/eclipse-templates/TestJdbc.launchtemplate
* /hive/trunk/eclipse-templates/TestMTQueries.launchtemplate
* /hive/trunk/eclipse-templates/TestRemoteHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestTruncate.launchtemplate
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseCliDriver.vm
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/scripts/hive.metastore_ctrl
* /hive/trunk/metastore/scripts/hive.metastore_daemon
* /hive/trunk/metastore/scripts/upgrade/001-HIVE-2795.update_view_partitions.py
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
* /hive/trunk/metastore/src/model/package.jdo
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyPreListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteUGIHiveMetaStoreIpAddress.java
* /hive/trunk/ql/if/queryplan.thrift
* /hive/trunk/ql/src/java/conf/hive-exec-log4j.properties
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/mapredplan.jr
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/sample_plan.xml
* /hive/trunk/ql/src/test/queries/clientnegative/dyn_part_empty.q.disabled
* /hive/trunk/ql/src/test/scripts/testgrep
* /hive/trunk/ql/src/test/templates/TestCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestNegativeCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestParse.vm
* /hive/trunk/ql/src/test/templates/TestParseNegative.vm
* /hive/trunk/serde/if/serde.thrift
* /hive/trunk/serde/if/test/complex.thrift
* /hive/trunk/serde/if/test/complexpb.proto
* /hive/trunk/serde/if/test/testthrift.thrift
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/columnar/LazyBinaryColumnarSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/thrift_grammar.jjt
* /hive/trunk/service/lib/php/ext/thrift_protocol/config.m4
* /hive/trunk/service/lib/php/ext/thrift_protocol/tags/1.0.0/config.m4
* 
/hive/trunk/service/lib/php/ext/thrift_protocol/tags/1.0.0/php_thrift_protocol.cpp
* 
/hive/trunk/service/lib/php/ext/thrift_protocol/tags/1.0.0/php_thrift_protocol.h
* /hive/trunk/service/lib/py/fb303/__init__.py
* /hive/trunk/service/lib/py/thrift/protocol/fastbinary.c
* /hive/trunk/service/lib/py/thrift/reflection/__init__.py
* 

[jira] [Updated] (HIVE-2723) should throw Ambiguous column reference key Exception in particular join condition

2012-04-17 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2723:
--

Attachment: HIVE-2723.D1275.2.patch

navis updated the revision HIVE-2723 [jira] should throw "Ambiguous column 
reference key" Exception in particular join condition.
Reviewers: JIRA

  1. Added test cases
  2. Avoid throwing exception if it's possible

REVISION DETAIL
  https://reviews.facebook.net/D1275

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientnegative/ambiguous_col0.q
  ql/src/test/queries/clientnegative/ambiguous_col1.q
  ql/src/test/queries/clientnegative/ambiguous_col2.q
  ql/src/test/queries/clientpositive/ambiguous_col.q
  ql/src/test/results/clientnegative/ambiguous_col0.q.out
  ql/src/test/results/clientnegative/ambiguous_col1.q.out
  ql/src/test/results/clientnegative/ambiguous_col2.q.out
  ql/src/test/results/clientnegative/ambiguous_col_patterned.q.out
  ql/src/test/results/clientpositive/ambiguous_col.q.out


 should throw "Ambiguous column reference key" Exception in particular join 
 condition
 --

 Key: HIVE-2723
 URL: https://issues.apache.org/jira/browse/HIVE-2723
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.8.0
 Environment: Linux zongren-VirtualBox 3.0.0-14-generic #23-Ubuntu SMP 
 Mon Nov 21 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.7.0-cdh3u0
Reporter: caofangkun
Assignee: Navis
Priority: Minor
  Labels: exception-handling, query, queryparser
 Fix For: 0.9.0

 Attachments: HIVE-2723.D1275.1.patch, HIVE-2723.D1275.2.patch


 This Bug can be Repeated as following :
 create table test(key string, value string);
 create table test1(key string, value string);
 1: Correct!
 select t.key 
 from 
   (select a.key, b.key from (select * from src ) a right outer join (select * 
 from src1) b on (a.key = b.key)) t;
 FAILED: Error in semantic analysis: Ambiguous column reference key
 2: Incorrect!! Should throw an Exception as above too!
 select t.key -- Is this a.key or b.key? It's ambiguous!
 from 
   (select a.*, b.* from (select * from src ) a right outer join (select * 
 from src1) b on (a.value = b.value)) t;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Defaulting to jobconf value of: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 Starting Job = job_201201170959_0004, Tracking URL = 
 http://zongren-VirtualBox:50030/jobdetails.jsp?jobid=job_201201170959_0004
 Kill Command = /home/zongren/workspace/hadoop-adh/bin/hadoop job  
 -Dmapred.job.tracker=zongren-VirtualBox:9001 -kill job_201201170959_0004
 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 
 1
 2012-01-17 11:02:47,507 Stage-1 map = 0%,  reduce = 0%
 2012-01-17 11:02:55,002 Stage-1 map = 100%,  reduce = 0%
 2012-01-17 11:03:04,240 Stage-1 map = 100%,  reduce = 33%
 2012-01-17 11:03:05,258 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201201170959_0004
 MapReduce Jobs Launched: 
 Job 0: Map: 2  Reduce: 1   HDFS Read: 669 HDFS Write: 216 SUCESS
 Total MapReduce CPU Time Spent: 0 msec
 OK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira