[jira] [Updated] (HIVE-6827) Disable insecure commands with std sql auth

2014-07-09 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-6827:
-

Labels:   (was: TODOC13)

 Disable insecure commands with std sql auth
 ---

 Key: HIVE-6827
 URL: https://issues.apache.org/jira/browse/HIVE-6827
 Project: Hive
  Issue Type: Task
  Components: Authorization, Security
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6827.2.patch, HIVE-6827.patch


 Disable insecure command on auth V2



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6827) Disable insecure commands with std sql auth

2014-07-09 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055910#comment-14055910
 ] 

Lefty Leverenz commented on HIVE-6827:
--

Thejas documented this in the wiki here:

* [SQL Standard Based Hive Authorization -- Restrictions on Hive Commands and 
Statements | 
https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization#SQLStandardBasedHiveAuthorization-RestrictionsonHiveCommandsandStatements]
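For context, the restriction means that commands able to bypass authorization checks are rejected once SQL standard based authorization is enabled. A sketch of the expected behavior (the error text here is illustrative, not verbatim):

{code}
-- With hive.security.authorization.enabled=true and the SQL standard
-- authorizer configured, commands that bypass authorization are
-- rejected, for example:
dfs -ls /tmp;        -- rejected as an insecure command
add jar /tmp/my.jar; -- rejected as an insecure command
{code}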

 Disable insecure commands with std sql auth
 ---

 Key: HIVE-6827
 URL: https://issues.apache.org/jira/browse/HIVE-6827
 Project: Hive
  Issue Type: Task
  Components: Authorization, Security
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6827.2.patch, HIVE-6827.patch


 Disable insecure command on auth V2



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7025) Support retention on hive tables

2014-07-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-7025:


Attachment: HIVE-7025.4.patch.txt

 Support retention on hive tables
 

 Key: HIVE-7025
 URL: https://issues.apache.org/jira/browse/HIVE-7025
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-7025.1.patch.txt, HIVE-7025.2.patch.txt, 
 HIVE-7025.3.patch.txt, HIVE-7025.4.patch.txt


 Add self destruction properties for temporary tables.
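A hypothetical sketch of what such a property could look like (the syntax, property name, and units below are illustrative assumptions, not the patch's actual interface):

{code}
-- Hypothetical: mark a temporary table for automatic cleanup after a
-- retention period (property name and value format are assumptions)
CREATE TEMPORARY TABLE staging_events (id string, payload string)
TBLPROPERTIES ('retention'='7');
{code}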



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7317) authorization_explain.q fails when run in sequence

2014-07-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-7317:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the review, Thejas.

 authorization_explain.q fails when run in sequence
 --

 Key: HIVE-7317
 URL: https://issues.apache.org/jira/browse/HIVE-7317
 Project: Hive
  Issue Type: Bug
  Components: Authorization, Tests
Reporter: Thejas M Nair
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-7317.1.patch.txt


 NO PRECOMMIT TESTS
 The test seems to be picking up some state changed by another test, as it 
 fails only when run in sequence. (cd itests; mvn install test 
 -Dtest=TestCliDriver -Dqfile_regex=.*authorization.*  -Phadoop-2 
 -Dtest.output.overwrite=true)
 The diff -
 {code}
 --- a/ql/src/test/results/clientpositive/authorization_explain.q.out
 +++ b/ql/src/test/results/clientpositive/authorization_explain.q.out
 @@ -16,9 +16,6 @@ CURRENT_USER:
hive_test_user
  OPERATION: 
QUERY
 -AUTHORIZATION_FAILURES: 
 -  No privilege 'Select' found for inputs { database:default, table:srcpart, 
 columnName:key}
 -  No privilege 'Select' found for inputs { database:default, table:src, 
 columnName:key}
  Warning: Shuffle Join JOIN[4][tables = [src, srcpart]] in Stage 
 'Stage-1:MAPRED' is a cross product
  PREHOOK: query: explain formatted authorization select * from src join 
 srcpart
  PREHOOK: type: QUERY
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7220) Empty dir in external table causes issue (root_dir_external_table.q failure)

2014-07-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-7220:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Szehon!

 Empty dir in external table causes issue (root_dir_external_table.q failure)
 

 Key: HIVE-7220
 URL: https://issues.apache.org/jira/browse/HIVE-7220
 Project: Hive
  Issue Type: Bug
Reporter: Szehon Ho
Assignee: Szehon Ho
 Fix For: 0.14.0

 Attachments: HIVE-7220.2.patch, HIVE-7220.3.patch, HIVE-7220.4.patch, 
 HIVE-7220.5.patch, HIVE-7220.5.patch, HIVE-7220.patch


 While looking at the root_dir_external_table.q failure, which runs a query on 
 an external table located at root ('/'), I noticed that the latest Hadoop 2 
 CombineFileInputFormat returns splits representing empty directories (like 
 '/Users'), which leads to a failure in Hive's CombineFileRecordReader as it 
 tries to open the directory for processing.
 Tried with an external table in a normal HDFS directory, and it returns the 
 same error. Looks like a real bug.
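A minimal reproduction sketch of the setup described above (paths illustrative):

{code}
-- An external table whose location contains an empty subdirectory
CREATE EXTERNAL TABLE ext_t (line string) LOCATION '/data/ext_t';
-- With an empty directory such as /data/ext_t/empty/ present,
-- queries over the table failed in CombineFileRecordReader under Hadoop 2:
SELECT count(*) FROM ext_t;
{code}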



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 17887: Support subquery for single sourced multi query

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17887/
---

(Updated July 9, 2014, 6:52 a.m.)


Review request for hive.


Changes
---

Rebased to trunk.


Bugs: HIVE-5690
https://issues.apache.org/jira/browse/HIVE-5690


Repository: hive-git


Description
---

A single-sourced multi-insert query is very useful for various ETL processes, 
but it does not allow subqueries. For example: 
{noformat}
explain from src 
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;
{noformat}
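Before this change, such a multi-insert had to be split into separate statements, at the cost of scanning src once per branch; a sketch of the workaround:

{code}
-- Equivalent pre-patch rewrite: two single-insert queries,
-- each scanning src independently
INSERT OVERWRITE TABLE x1
  SELECT * FROM (SELECT DISTINCT key, value FROM src) b ORDER BY key;
INSERT OVERWRITE TABLE x2
  SELECT * FROM (SELECT DISTINCT key, value FROM src) c ORDER BY value;
{code}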


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java ea3ac70 
  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g f448b16 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java 908db1e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java 911ac8a 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java d398c88 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java b91b9a2 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryDiagnostic.java 57f9432 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 089ad78 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java e44f5ae 
  ql/src/test/org/apache/hadoop/hive/ql/parse/TestQBSubQuery.java 8b36f21 
  ql/src/test/queries/clientpositive/multi_insert_subquery.q PRE-CREATION 
  ql/src/test/results/clientnegative/create_view_failure3.q.out 5ddbdb6 
  ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out 4830c00 
  ql/src/test/results/clientnegative/subquery_in_groupby.q.out 809bb0a 
  ql/src/test/results/clientnegative/subquery_in_select.q.out 3d74132 
  ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out 
7a16bae 
  ql/src/test/results/clientnegative/subquery_nested_subquery.q.out 4950ec9 
  ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out 
74422af 
  ql/src/test/results/clientnegative/subquery_subquery_chain.q.out 448bfb2 
  ql/src/test/results/clientnegative/subquery_windowing_corr.q.out d45f8f1 
  ql/src/test/results/clientnegative/uniquejoin3.q.out e10a47b 
  ql/src/test/results/clientpositive/alter_partition_coltype.q.out e86cc06 
  ql/src/test/results/clientpositive/analyze_table_null_partition.q.out a811f81 
  ql/src/test/results/clientpositive/annotate_stats_filter.q.out c7d58f6 
  ql/src/test/results/clientpositive/annotate_stats_groupby.q.out 6f72964 
  ql/src/test/results/clientpositive/annotate_stats_join.q.out cc816c8 
  ql/src/test/results/clientpositive/annotate_stats_limit.q.out 5c150f4 
  ql/src/test/results/clientpositive/annotate_stats_part.q.out a0b4602 
  ql/src/test/results/clientpositive/annotate_stats_select.q.out 97e9473 
  ql/src/test/results/clientpositive/annotate_stats_table.q.out bb2d18c 
  ql/src/test/results/clientpositive/annotate_stats_union.q.out 6d179b6 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 3f4f902 
  ql/src/test/results/clientpositive/auto_sortmerge_join_1.q.out 72640df 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out c660cd0 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out 4abda32 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 52a3194 
  ql/src/test/results/clientpositive/auto_sortmerge_join_3.q.out d807791 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 35e0a30 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out af3d9d6 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out 05ef5d8 
  ql/src/test/results/clientpositive/auto_sortmerge_join_8.q.out e423d14 
  ql/src/test/results/clientpositive/binary_output_format.q.out 294aabb 
  ql/src/test/results/clientpositive/bucket1.q.out f3eb15c 
  ql/src/test/results/clientpositive/bucket2.q.out 9a22160 
  ql/src/test/results/clientpositive/bucket3.q.out 8fa9c7b 
  ql/src/test/results/clientpositive/bucket4.q.out 032272b 
  ql/src/test/results/clientpositive/bucket5.q.out d19fbe5 
  ql/src/test/results/clientpositive/bucket_map_join_1.q.out 8674a6c 
  ql/src/test/results/clientpositive/bucket_map_join_2.q.out 8a5984d 
  ql/src/test/results/clientpositive/bucketcontext_1.q.out 1513515 
  ql/src/test/results/clientpositive/bucketcontext_2.q.out d18a9be 
  ql/src/test/results/clientpositive/bucketcontext_3.q.out e12c155 
  ql/src/test/results/clientpositive/bucketcontext_4.q.out 77b4882 
  ql/src/test/results/clientpositive/bucketcontext_5.q.out fa1cfc5 
  ql/src/test/results/clientpositive/bucketcontext_6.q.out aac66f8 
  ql/src/test/results/clientpositive/bucketcontext_7.q.out 78c4f94 
  ql/src/test/results/clientpositive/bucketcontext_8.q.out 


Review Request 23350: Queries without tables fail under Tez

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23350/
---

Review request for hive.


Bugs: HIVE-7352
https://issues.apache.org/jira/browse/HIVE-7352


Repository: hive-git


Description
---

Hive 0.13.0 added support for queries that do not reference tables (such as 
'SELECT 1'). These queries fail under Tez:

{noformat}
Vertex failed as one or more tasks failed. failedTasks:1]
14/07/07 09:54:42 ERROR tez.TezJobMonitor: Vertex failed, vertexName=Map 1, 
vertexId=vertex_1404652697071_4487_1_00, diagnostics=[Task failed, 
taskId=task_1404652697071_4487_1_00_00, 
diagnostics=[AttemptID:attempt_1404652697071_4487_1_00_00_0 Info:Error: 
java.lang.RuntimeException: java.lang.IllegalArgumentException: Can not create 
a Path from an empty string
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:174)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.init(TezGroupedSplitsInputFormat.java:113)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:79)
at 
org.apache.tez.mapreduce.input.MRInput.setupOldRecordReader(MRInput.java:205)
at 
org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:362)
at 
org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:341)
at 
org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:99)
at 
org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:68)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:141)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:307)
at 
org.apache.hadoop.mapred.YarnTezDagChild$5.run(YarnTezDagChild.java:562)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at 
org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:551)
Caused by: java.lang.IllegalArgumentException: Can not create a Path from an 
empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:135)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:110)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:228)
at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:171)
... 14 more
{noformat}
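The failing class of queries is easy to reproduce; any statement with no table reference takes this code path (examples below):

{code}
-- Queries with no FROM clause are served from a dummy one-row source;
-- under Tez these failed with the stack trace above
SELECT 1;
SELECT 1 + 1;
{code}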


Diffs
-

  itests/qtest/testconfiguration.properties 1462ecd 
  ql/src/java/org/apache/hadoop/hive/ql/io/NullRowsInputFormat.java fd60fed 
  ql/src/test/results/clientpositive/tez/select_dummy_source.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/23350/diff/


Testing
---


Thanks,

Navis Ryu




Review Request 23351: Support direct fetch for lateral views, sub queries, etc.

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23351/
---

Review request for hive.


Bugs: HIVE-5718
https://issues.apache.org/jira/browse/HIVE-5718


Repository: hive-git


Description
---

Extend HIVE-2925 with LV and SubQ.
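HIVE-2925 introduced hive.fetch.task.conversion, which lets simple selects be answered by a direct fetch instead of a launched job; this patch extends that optimization to lateral views (LV) and subqueries (SubQ). Illustrative queries that become eligible (assuming hive.fetch.task.conversion=more):

{code}
-- Subquery source: previously forced a job, now fetchable
SELECT * FROM (SELECT key FROM src) s;
-- Lateral view: likewise
SELECT key, c FROM src LATERAL VIEW explode(array(1,2)) t AS c;
{code}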


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 5d41fa1 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java 
7413d2b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java 908db1e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java 911ac8a 
  ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java 32d84ea 
  ql/src/test/queries/clientpositive/nonmr_fetch.q 2a92d17 
  ql/src/test/queries/clientpositive/nonmr_fetch_threshold.q e6343e2 
  ql/src/test/results/clientpositive/explain_logical.q.out bb26e8c 
  ql/src/test/results/clientpositive/lateral_view_noalias.q.out d51b2de 
  ql/src/test/results/clientpositive/nonmr_fetch.q.out 5a13e84 
  ql/src/test/results/clientpositive/nonmr_fetch_threshold.q.out 39cdfa6 
  ql/src/test/results/clientpositive/select_dummy_source.q.out 2742d56 
  ql/src/test/results/clientpositive/subquery_alias.q.out 37bc3a4 
  ql/src/test/results/clientpositive/udf_explode.q.out 4eeedeb 
  ql/src/test/results/clientpositive/udf_inline.q.out e065bed 
  ql/src/test/results/clientpositive/udf_reflect2.q.out 6b19277 
  ql/src/test/results/clientpositive/udf_to_unix_timestamp.q.out 447ef87 
  ql/src/test/results/clientpositive/udtf_explode.q.out ae95907 

Diff: https://reviews.apache.org/r/23351/diff/


Testing
---


Thanks,

Navis Ryu



Review Request 23352: Support non-constant expressions for MAP type indices.

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23352/
---

Review request for hive.


Bugs: HIVE-7325
https://issues.apache.org/jira/browse/HIVE-7325


Repository: hive-git


Description
---

Here is my sample:
{code}
CREATE TABLE RECORD(RecordID string, BatchDate string, Country string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,D:BatchDate,D:Country") 
TBLPROPERTIES ("hbase.table.name" = "RECORD"); 


CREATE TABLE KEY_RECORD(KeyValue String, RecordId map<string,string>) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,K:") 
TBLPROPERTIES ("hbase.table.name" = "KEY_RECORD"); 
{code}
The following join statement doesn't work. 
{code}
SELECT a.*, b.* from KEY_RECORD a join RECORD b 
WHERE a.RecordId[b.RecordID] is not null;
{code}
FAILED: SemanticException 2:16 Non-constant expression for map indexes not 
supported. Error encountered near token 'RecordID' 
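The distinction the error message draws can be sketched as follows (table and column names illustrative):

{code}
-- Constant map index: supported before this change
SELECT m['fixed_key'] FROM t;
-- Non-constant index (another column as the key): rejected with the
-- SemanticException above until this change
SELECT m[k] FROM t;
{code}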


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 9889cfe 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java e44f5ae 
  ql/src/test/queries/clientpositive/array_map_access_nonconstant.q 
PRE-CREATION 
  ql/src/test/queries/negative/invalid_list_index.q c40f079 
  ql/src/test/queries/negative/invalid_list_index2.q 99d0b3d 
  ql/src/test/queries/negative/invalid_map_index2.q 5828f07 
  ql/src/test/results/clientpositive/array_map_access_nonconstant.q.out 
PRE-CREATION 
  ql/src/test/results/compiler/errors/invalid_list_index.q.out a4179cd 
  ql/src/test/results/compiler/errors/invalid_list_index2.q.out aaa9455 
  ql/src/test/results/compiler/errors/invalid_map_index2.q.out edc9bda 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
 5ccacf1 

Diff: https://reviews.apache.org/r/23352/diff/


Testing
---


Thanks,

Navis Ryu



Review Request 23353: Explain authorize for auth2 throws exception

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23353/
---

Review request for hive.


Bugs: HIVE-7365
https://issues.apache.org/jira/browse/HIVE-7365


Repository: hive-git


Description
---

An explain authorization statement throws an NPE in auth v2.
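The failing statement class, based on the issue title (query illustrative):

{code}
-- With SQL standard authorization (auth v2) configured, this threw
-- a NullPointerException before the fix
EXPLAIN AUTHORIZATION SELECT * FROM src;
{code}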


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 92545d8 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java
 47c57db 
  ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 2de476e 
  ql/src/test/queries/clientpositive/authorization_view_sqlstd.q 3418e47 
  ql/src/test/results/clientpositive/authorization_view_sqlstd.q.out cf3925b 

Diff: https://reviews.apache.org/r/23353/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Commented] (HIVE-6586) Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)

2014-07-09 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055929#comment-14055929
 ] 

Lefty Leverenz commented on HIVE-6586:
--

HIVE-6846 added hive.security.authorization.sqlstd.confwhitelist in 0.13.0, 
with a description in the HiveConf.java comment that isn't in 
hive-default.xml.template (nor in the HIVE-6037-0.13.0 patch).
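For readers unfamiliar with the parameter: hive.security.authorization.sqlstd.confwhitelist lists (as a regex) the configuration properties users may still SET when SQL standard authorization is enabled. A sketch of the effect (the exact default pattern is version-dependent):

{code}
-- Allowed only if the property name matches the whitelist:
SET hive.exec.reducers.max=10;
-- Security-sensitive names outside the whitelist are rejected:
SET hive.security.authorization.enabled=false;  -- rejected
{code}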

 Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)
 ---

 Key: HIVE-6586
 URL: https://issues.apache.org/jira/browse/HIVE-6586
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Lefty Leverenz
  Labels: TODOC14

 HIVE-6037 puts the definitions of configuration parameters into the 
 HiveConf.java file, but several recent jiras for release 0.13.0 introduce new 
 parameters that aren't in HiveConf.java yet and some parameter definitions 
 need to be altered for 0.13.0.  This jira will patch HiveConf.java after 
 HIVE-6037 gets committed.
 Also, four typos patched in HIVE-6582 need to be fixed in the new 
 HiveConf.java.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 23354: Simple reconnection support for jdbc2

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23354/
---

Review request for hive.


Bugs: HIVE-4616
https://issues.apache.org/jira/browse/HIVE-4616


Repository: hive-git


Description
---

jdbc:hive2://localhost:1/db2;autoReconnect=true

Simple reconnection on TransportException. If HiveServer2 has not been 
shut down, the session can be reused.


Diffs
-

  jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java cbcfec7 

Diff: https://reviews.apache.org/r/23354/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Commented] (HIVE-7345) Beeline changes its prompt to reflect successful database connection even after failing to connect

2014-07-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055961#comment-14055961
 ] 

Navis commented on HIVE-7345:
-

Yes, even if the connection is not established, metadata for the connection is 
registered in Beeline. The user may close it or try to reconnect.

 Beeline changes its prompt to reflect successful database connection even 
 after failing to connect
 --

 Key: HIVE-7345
 URL: https://issues.apache.org/jira/browse/HIVE-7345
 Project: Hive
  Issue Type: Bug
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: HIVE-7345.patch


 Beeline changes its prompt to reflect successful database connection even 
 after failing to connect, which is misleading.
 {code}
 [asingh@e1118 tpcds]$ beeline -u jdbc:hive2://abclocalhost:1 hive
 scan complete in 5ms
 Connecting to jdbc:hive2://abclocalhost:1
 Error: Invalid URL: jdbc:hive2://abclocalhost:1 (state=08S01,code=0)
 Beeline version 0.12.0-cdh5.1.0-SNAPSHOT by Apache Hive
 0: jdbc:hive2://abclocalhost:1> show tables;
 No current connection
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7025) Support retention on hive tables

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055968#comment-14055968
 ] 

Hive QA commented on HIVE-7025:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654758/HIVE-7025.4.patch.txt

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5704 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/719/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/719/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-719/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654758

 Support retention on hive tables
 

 Key: HIVE-7025
 URL: https://issues.apache.org/jira/browse/HIVE-7025
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-7025.1.patch.txt, HIVE-7025.2.patch.txt, 
 HIVE-7025.3.patch.txt, HIVE-7025.4.patch.txt


 Add self destruction properties for temporary tables.
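The idea can be sketched as a periodic check (illustrative helper only; the actual patch adds a RetentionProcessor in the metastore, per the review-board diff): a table whose age exceeds its retention period becomes a drop candidate.

```python
import time

def expired_tables(tables, now=None):
    """Return names of tables whose retention period has elapsed.

    `tables` maps name -> (create_time_epoch_secs, retention_secs).
    Here retention_secs == 0 is treated as "keep forever", matching the
    assumption that 0 is the unset/default retention value.
    """
    now = time.time() if now is None else now
    return [name for name, (created, retention) in tables.items()
            if retention > 0 and now - created > retention]

tables = {"tmp_a": (1000, 60), "keep_b": (1000, 0), "tmp_c": (1000, 10000)}
print(expired_tables(tables, now=2000))  # ['tmp_a']
```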





[jira] [Commented] (HIVE-494) Select columns by index instead of name

2014-07-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055969#comment-14055969
 ] 

Navis commented on HIVE-494:


The test failures seem unrelated to this patch.

 Select columns by index instead of name
 ---

 Key: HIVE-494
 URL: https://issues.apache.org/jira/browse/HIVE-494
 Project: Hive
  Issue Type: Wish
  Components: Clients, Query Processor
Reporter: Adam Kramer
Assignee: Navis
Priority: Minor
  Labels: SQL
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-494.D1641.1.patch, 
 HIVE-494.2.patch.txt, HIVE-494.3.patch.txt, HIVE-494.D12153.1.patch


 SELECT mytable[0], mytable[2] FROM some_table_name mytable;
 ...should return the first and third columns, respectively, from mytable 
 regardless of their column names.
 The need for names specifically is kind of silly when they just get 
 translated into numbers anyway.
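The translation alluded to here can be illustrated with a toy resolver (the names and bracket syntax are the wish's, not an existing Hive API): positional references are rewritten to column names using the table schema before planning.

```python
import re

def resolve_positional(select_items, schema):
    """Map items like 'mytable[0]' to column names via the table schema.

    `schema` is the ordered list of column names; anything that is not
    a positional reference is passed through unchanged.
    """
    out = []
    for item in select_items:
        m = re.fullmatch(r"(\w+)\[(\d+)\]", item)
        out.append(schema[int(m.group(2))] if m else item)
    return out

schema = ["id", "name", "score"]
print(resolve_positional(["mytable[0]", "mytable[2]"], schema))  # ['id', 'score']
```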





Review Request 23355: Hive unnecessarily validates table SerDes when dropping a table

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23355/
---

Review request for hive.


Bugs: HIVE-3392
https://issues.apache.org/jira/browse/HIVE-3392


Repository: hive-git


Description
---

natty@hadoop1:~$ hive
hive> add jar 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
Added 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 to class path
Added resource: 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';

OK
Time taken: 2.399 seconds


natty@hadoop1:~$ hive
hive> drop table test;
 
FAILED: Hive Internal Error: 
java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
 SerDe hive.serde.JSONSerDe does not exist))
java.lang.RuntimeException: 
MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
hive.serde.JSONSerDe does not exist)
at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
at 
org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
at 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
... 20 more

hive> add jar 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
Added 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 to class path
Added resource: 
/home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
hive> drop table test;
OK
Time taken: 0.658 seconds
hive> 
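The direction of this fix can be sketched as follows (schematic only, not the actual Table.checkValidity signature): resolve the SerDe only when the operation actually needs column information, so that a DROP TABLE no longer fails on a missing SerDe class.

```python
class SerDeNotFound(Exception):
    pass

def check_validity(table, registry, need_columns):
    """Validate a table; resolve the SerDe only when columns are needed.

    `registry` stands in for the classpath: the set of SerDe class
    names that can actually be instantiated right now.
    """
    if not table["name"]:
        raise ValueError("table must have a name")
    if need_columns and table["serde"] not in registry:
        raise SerDeNotFound(table["serde"])
    return True

t = {"name": "test", "serde": "hive.serde.JSONSerDe"}
# Drop path: the SerDe jar is not on the classpath, but no columns are
# needed, so validation succeeds and the table can be dropped.
print(check_validity(t, registry=set(), need_columns=False))  # True
```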


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 250756c 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java 3a1e7fd 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 3df2690 
  ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java 2537b75 

Diff: https://reviews.apache.org/r/23355/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Updated] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2014-07-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-3392:


Attachment: HIVE-3392.4.patch.txt

 Hive unnecessarily validates table SerDes when dropping a table
 ---

 Key: HIVE-3392
 URL: https://issues.apache.org/jira/browse/HIVE-3392
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
  Labels: patch
 Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, 
 HIVE-3392.4.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt


 natty@hadoop1:~$ hive
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';
 OK
 Time taken: 2.399 seconds
 natty@hadoop1:~$ hive
 hive> drop table test;

 FAILED: Hive Internal Error: 
 java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
  SerDe hive.serde.JSONSerDe does not exist))
 java.lang.RuntimeException: 
 MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
 hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
   at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
 SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
   ... 20 more
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> drop table test;
 OK
 Time taken: 0.658 seconds
 hive> 





Review Request 23356: Add equals method to ObjectInspectorUtils

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23356/
---

Review request for hive.


Bugs: HIVE-5343
https://issues.apache.org/jira/browse/HIVE-5343


Repository: hive-git


Description
---

Might provide a shortcut for some use cases.
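As a rough illustration of the kind of shortcut meant here (not the actual ObjectInspectorUtils API), an equals helper can short-circuit on identity and on container lengths before recursing into the structure:

```python
def deep_equals(a, b):
    """Structural equality with cheap short-circuits, mirroring the idea
    of an equals() next to a full compare()."""
    if a is b:                        # identity shortcut
        return True
    if type(a) is not type(b):        # mismatched categories: done early
        return False
    if isinstance(a, (list, tuple)):
        return len(a) == len(b) and all(deep_equals(x, y)
                                        for x, y in zip(a, b))
    if isinstance(a, dict):
        return (len(a) == len(b) and
                all(k in b and deep_equals(v, b[k]) for k, v in a.items()))
    return a == b

print(deep_equals({"m": [1, 2]}, {"m": [1, 2]}))  # True
```

Compared with a three-way compare(), this avoids computing an ordering when only equality is wanted, which is the "shortcut" for callers like GenericUDFOPEqual.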


Diffs
-

  common/src/java/org/apache/hive/common/util/HiveStringUtils.java c21c937 
  ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java 792d87f 
  
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArrayContains.java 
510f367 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFField.java 
d7e65fa 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java 8990e1d 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFOPEqual.java 
cf104d3 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFOPNotEqual.java 
d604cd5 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFReflect.java 
89496ea 
  serde/src/java/org/apache/hadoop/hive/serde2/io/HiveBaseCharWritable.java 
8c37a9b 
  serde/src/java/org/apache/hadoop/hive/serde2/io/HiveCharWritable.java 2aaa90c 
  serde/src/java/org/apache/hadoop/hive/serde2/io/HiveVarcharWritable.java 
a165b84 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ListObjectsEqualComparer.java
 ed4979e 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/MapEqualComparer.java
 adde408 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorUtils.java
 1baf359 
  
serde/src/test/org/apache/hadoop/hive/serde2/binarysortable/TestBinarySortableSerDe.java
 cefb72e 
  
serde/src/test/org/apache/hadoop/hive/serde2/lazybinary/TestLazyBinarySerDe.java
 02ae6f8 

Diff: https://reviews.apache.org/r/23356/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Comment Edited] (HIVE-6586) Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)

2014-07-09 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055929#comment-14055929
 ] 

Lefty Leverenz edited comment on HIVE-6586 at 7/9/14 8:06 AM:
--

HIVE-6846 added hive.security.authorization.sqlstd.confwhitelist in 0.13.0, 
with a description in the HiveConf.java comment which isn't in 
hive-default.xml.template (nor in patch HIVE-6037-0.13.0).

The wiki has a revised description -- I recommend using it without the full 
parameter list; just refer to HIVE-6846 for the list:

* [Configuration Properties -- hive.security.authorization.sqlstd.confwhitelist 
| 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.security.authorization.sqlstd.confwhitelist]


was (Author: le...@hortonworks.com):
HIVE-6846 added hive.security.authorization.sqlstd.confwhitelist in 0.13.0, 
with a description in the HiveConf.java comment which isn't in 
hive-default.xml.template (nor in patch HIVE-6037-0.13.0).

 Add new parameters to HiveConf.java after commit HIVE-6037 (also fix typos)
 ---

 Key: HIVE-6586
 URL: https://issues.apache.org/jira/browse/HIVE-6586
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Lefty Leverenz
  Labels: TODOC14

 HIVE-6037 puts the definitions of configuration parameters into the 
 HiveConf.java file, but several recent jiras for release 0.13.0 introduce new 
 parameters that aren't in HiveConf.java yet and some parameter definitions 
 need to be altered for 0.13.0.  This jira will patch HiveConf.java after 
 HIVE-6037 gets committed.
 Also, four typos patched in HIVE-6582 need to be fixed in the new 
 HiveConf.java.





Review Request 23357: Support retention on hive tables

2014-07-09 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23357/
---

Review request for hive.


Bugs: HIVE-7025
https://issues.apache.org/jira/browse/HIVE-7025


Repository: hive-git


Description
---

Add self destruction properties for temporary tables.


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8bff2a9 
  conf/hive-default.xml.template 4944dfc 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
 130fd67 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestRetention.java
 PRE-CREATION 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
acef599 
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreThread.java 
6e18a5b 
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 911c997 
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 
  metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java 9e8d912 
  
metastore/src/java/org/apache/hadoop/hive/metastore/retention/RetentionProcessor.java
 PRE-CREATION 
  
metastore/src/java/org/apache/hadoop/hive/metastore/retention/RetentionTarget.java
 PRE-CREATION 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
 5c00aa1 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
 5025b83 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 26836b6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 640b6b3 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 412a046 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java 
b6f3748 
  ql/src/java/org/apache/hadoop/hive/ql/plan/AlterTableDesc.java 20d863b 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java 18bb2c0 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorThread.java 
715f9c0 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java 3211759 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java f464df8 
  ql/src/test/org/apache/hadoop/hive/ql/txn/compactor/CompactorTest.java 
7f5134e 
  ql/src/test/queries/clientpositive/alter_table_retention.q PRE-CREATION 
  ql/src/test/results/clientpositive/alter_table_retention.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/23357/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Commented] (HIVE-6846) allow safe set commands with sql standard authorization

2014-07-09 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055975#comment-14055975
 ] 

Lefty Leverenz commented on HIVE-6846:
--

This adds *hive.security.authorization.sqlstd.confwhitelist* with a description 
in the HiveConf.java comment but not in hive-default.xml.template.

It's documented in the wiki here (please review, I made the assumption that 
it's a comma-separated list):

* [Configuration Properties -- hive.security.authorization.sqlstd.confwhitelist 
| 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.security.authorization.sqlstd.confwhitelist]

with references from two places:

* [SQL Standard Based Hive Authorization -- Restrictions on Hive Commands and 
Statements | 
https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization#SQLStandardBasedHiveAuthorization-RestrictionsonHiveCommandsandStatements]
* [Configuration Properties -- Restricted List and Whitelist | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-RestrictedListandWhitelist]

I added a comment to HIVE-6586 so the parameter description won't get lost in 
the shuffle when HIVE-6037 changes HiveConf.java.

 allow safe set commands with sql standard authorization
 ---

 Key: HIVE-6846
 URL: https://issues.apache.org/jira/browse/HIVE-6846
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6846.1.patch, HIVE-6846.2.patch, HIVE-6846.3.patch


 HIVE-6827 disables all set commands when SQL standard authorization is turned 
 on, but not all set commands are unsafe. We should allow safe set commands.
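Assuming the whitelist is a comma-separated list of regex patterns (the same assumption made in the wiki description of hive.security.authorization.sqlstd.confwhitelist), the check for a safe {{set}} command can be sketched as:

```python
import re

def is_safe_set(param, whitelist_csv):
    """Return True if `set param=value` should be allowed.

    Assumes the whitelist is a comma-separated list of regex patterns,
    each of which must match the whole parameter name.
    """
    patterns = [p.strip() for p in whitelist_csv.split(",") if p.strip()]
    return any(re.fullmatch(p, param) for p in patterns)

whitelist = r"hive\.exec\..*,mapred\.reduce\.tasks"
print(is_safe_set("hive.exec.parallel", whitelist))                   # True
print(is_safe_set("hive.security.authorization.enabled", whitelist))  # False
```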





[jira] [Commented] (HIVE-7213) COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO

2014-07-09 Thread Moustafa Aboul Atta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056013#comment-14056013
 ] 

Moustafa Aboul Atta commented on HIVE-7213:
---

[~ashutoshc] Sorry, I just saw your earlier comment. I ran the query on a 
simple table with 2 records; here's the result:

{code}
hive> truncate table test;
OK
Time taken: 0.371 seconds
hive> select * from test;
OK
Time taken: 0.187 seconds
hive> select count(*) from test;
OK
1
Time taken: 0.192 seconds, Fetched: 1 row(s)
hive> explain select count(*) from test;
OK
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: 1

Time taken: 0.086 seconds, Fetched: 8 row(s)
{code}

 COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO
 

 Key: HIVE-7213
 URL: https://issues.apache.org/jira/browse/HIVE-7213
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Statistics
Affects Versions: 0.13.0
 Environment: HDP 2.1
 Windows Server 2012 64-bit
Reporter: Moustafa Aboul Atta

 Running a query to count the number of rows in a table through
 {{SELECT COUNT( * ) FROM t}}
 always returns only the number of rows added by the most recent statement of 
 the form
 {{INSERT INTO TABLE t SELECT r FROM t2}}
 However, running
 {{SELECT * FROM t}}
 returns the expected results, i.e. both the old and the newly added rows.
 Also, after running
 {{TRUNCATE TABLE t;}}
 COUNT( * ) still returns the original count of rows in the table, while 
 {{SELECT * FROM t;}}
 returns nothing, as expected.
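The symptom matches COUNT( * ) being answered from table statistics rather than a scan (the explain plan shown in a comment on this issue contains only a Fetch Operator and launches no job). A toy model of that answer path follows; the real toggle in Hive is the hive.compute.query.using.stats setting, and the reported values suggest the statistics record only the last write rather than the running total.

```python
class Table:
    def __init__(self):
        self.rows = []
        self.stats_rowcount = 0   # metadata, refreshed only on some writes

    def insert(self, new_rows):
        self.rows.extend(new_rows)
        # Buggy bookkeeping modeled here: stats record only the rows
        # written by the *last* statement, not the running total.
        self.stats_rowcount = len(new_rows)

def count_star(table, use_stats):
    # use_stats=True serves the count from metadata (fast but possibly
    # stale); use_stats=False scans the actual data.
    return table.stats_rowcount if use_stats else len(table.rows)

t = Table()
t.insert([1, 2, 3])
t.insert([4])
print(count_star(t, use_stats=True))   # 1  (stale: last insert only)
print(count_star(t, use_stats=False))  # 4  (full scan)
```

Under that model, disabling stats-based answering (or refreshing the statistics) would make COUNT( * ) agree with SELECT *.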





[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056017#comment-14056017
 ] 

Hive QA commented on HIVE-3392:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654765/HIVE-3392.4.patch.txt

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5701 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_altern1
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/720/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/720/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-720/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654765

 Hive unnecessarily validates table SerDes when dropping a table
 ---

 Key: HIVE-3392
 URL: https://issues.apache.org/jira/browse/HIVE-3392
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
  Labels: patch
 Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, 
 HIVE-3392.4.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt


 natty@hadoop1:~$ hive
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';
 OK
 Time taken: 2.399 seconds
 natty@hadoop1:~$ hive
 hive> drop table test;

 FAILED: Hive Internal Error: 
 java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
  SerDe hive.serde.JSONSerDe does not exist))
 java.lang.RuntimeException: 
 MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
 hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
   at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
 SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
   ... 20 more
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 

[jira] [Updated] (HIVE-6198) ORC file and struct column names are case sensitive

2014-07-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6198:


Attachment: HIVE-6198.2.patch.txt

 ORC file and struct column names are case sensitive
 ---

 Key: HIVE-6198
 URL: https://issues.apache.org/jira/browse/HIVE-6198
 Project: Hive
  Issue Type: Bug
  Components: CLI, File Formats
Affects Versions: 0.11.0, 0.12.0
Reporter: Viraj Bhat
Assignee: Navis
 Attachments: HIVE-6198.1.patch.txt, HIVE-6198.2.patch.txt


 The HiveQL documentation states that table names and column names are case 
 insensitive, but the struct field behavior for ORC files is different.
 Consider a sample text file:
 {code}
 $ cat data.txt
 line1|key11:value11,key12:value12,key13:value13|a,b,c|one,two
 line2|key21:value21,key22:value22,key23:value23|d,e,f|three,four
 line3|key31:value31,key32:value32,key33:value33|g,h,i|five,six
 {code}
 Creating a table stored as txt and then using this to create a table stored 
 as orc 
 {code}
 CREATE TABLE orig (
   str STRING,
   mp  MAP<STRING,STRING>,
   lst ARRAY<STRING>,
   strct STRUCT<A:STRING,B:STRING>
 ) ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '|'
 COLLECTION ITEMS TERMINATED BY ','
 MAP KEYS TERMINATED BY ':';
 LOAD DATA LOCAL INPATH 'test.txt' INTO TABLE orig;
 CREATE TABLE tableorc (
   str STRING,
   mp  MAP<STRING,STRING>,
   lst ARRAY<STRING>,
   strct STRUCT<A:STRING,B:STRING>
 ) STORED AS ORC;
 INSERT OVERWRITE TABLE tableorc SELECT * FROM orig;
 {code}
 When we project columns or read the *strct* column for both table types, here 
 are the results. I have also tested the same with *RC*; its behavior is 
 similar to that of *txt* files.
 {code}
 hive> SELECT * FROM orig;
 line1   {key11:value11,key12:value12,key13:value13}   [a,b,c]   {a:one,b:two}
 line2   {key21:value21,key22:value22,key23:value23}   [d,e,f]   {a:three,b:four}
 line3   {key31:value31,key32:value32,key33:value33}   [g,h,i]   {a:five,b:six}
 Time taken: 0.126 seconds, Fetched: 3 row(s)
 hive> SELECT * FROM tableorc;
 line1   {key12:value12,key11:value11,key13:value13}   [a,b,c]   {A:one,B:two}
 line2   {key21:value21,key23:value23,key22:value22}   [d,e,f]   {A:three,B:four}
 line3   {key33:value33,key31:value31,key32:value32}   [g,h,i]   {A:five,B:six}
 Time taken: 0.178 seconds, Fetched: 3 row(s)
 hive> SELECT strct FROM tableorc;
 {a:one,b:two}
 {a:three,b:four}
 {a:five,b:six}
 hive> SELECT strct.A FROM orig;
 one
 three
 five
 hive> SELECT strct.a FROM orig;
 one
 three
 five
 hive> SELECT strct.A FROM tableorc;
 one
 three
 five
 hive> SELECT strct.a FROM tableorc;
 FAILED: Execution Error, return code 2 from
 org.apache.hadoop.hive.ql.exec.mr.MapRedTask
 MapReduce Jobs Launched: 
 Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
 {code}
 So it seems that ORC behaves differently for struct columns. Also, why are we 
 storing the struct column names for the other formats as CASE SENSITIVE? What 
 is the standard for HiveQL with respect to structs?
 Regards
 Viraj
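The documented intent (case-insensitive names) can be sketched as a field lookup that normalizes case; the ORC reader apparently matched the declared field names exactly instead:

```python
def get_struct_field(struct, name):
    """Case-insensitive field lookup, the behavior HiveQL documents for
    column and struct field names."""
    wanted = name.lower()
    for field, value in struct.items():
        if field.lower() == wanted:
            return value
    raise KeyError(name)

row = {"A": "one", "B": "two"}     # ORC kept the declared upper-case names
print(get_struct_field(row, "a"))  # 'one': lookup succeeds regardless of case
```

Lowercasing field names at read time (or at table-creation time, as non-ORC formats effectively do) would make `strct.a` and `strct.A` equivalent for ORC as well.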





[jira] [Commented] (HIVE-6198) ORC file and struct column names are case sensitive

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056060#comment-14056060
 ] 

Hive QA commented on HIVE-6198:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654773/HIVE-6198.2.patch.txt

{color:red}ERROR:{color} -1 due to 98 failed/errored test(s), 5701 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_decimal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_nullable_fields
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_error_message
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avro_schema_literal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_case_sensitivity
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_char_udf1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnarserde_create_shortcut
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnstats_partlvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnstats_partlvl_dp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnstats_tbllvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_binary
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_boolean
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_decimal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_double
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_empty_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_long
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_compute_stats_string
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constant_prop
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_convert_enum_to_string
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_describe_xpath
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_display_colstats_tbllvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_distinct_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_columnarserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_dynamicserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_lazyserde
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_thrift
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadata_only_queries_with_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_create
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_create
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_serde_reported_schema
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_invalidation
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_noscan_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_only_null
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_statsfs

[jira] [Updated] (HIVE-6914) parquet-hive cannot write nested map (map value is map)

2014-07-09 Thread Adrian Lange (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Lange updated HIVE-6914:
---

Affects Version/s: 0.12.0

 parquet-hive cannot write nested map (map value is map)
 ---

 Key: HIVE-6914
 URL: https://issues.apache.org/jira/browse/HIVE-6914
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.12.0, 0.13.0
Reporter: Tongjie Chen

 // table schema (identical for both the plain text and parquet versions)
 hive> desc text_mmap;
 m map<string,map<string,string>>
 // sample nested map entry
 {"level1":{"level2_key1":"value1","level2_key2":"value2"}}
 The following query will fail:
 insert overwrite table parquet_mmap select * from text_mmap;
 Caused by: parquet.io.ParquetEncodingException: This should be an 
 ArrayWritable or MapWritable: 
 org.apache.hadoop.hive.ql.io.parquet.writable.BinaryWritable@f2f8106
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:85)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:118)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:80)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:82)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:55)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
 at 
 parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:115)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
 at 
 org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:622)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
 ... 9 more
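The failing dispatch above can be pictured with a self-contained recursive writer over plain Java maps: a map value must itself be recursed into, not treated as a leaf. This mirrors the bug only in spirit; Hive's DataWritableWriter works on Writable wrappers, and the class and method names below are ours, not Hive's.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NestedMapDemo {
    // Serialize a possibly-nested map into flat key=value lines.
    // A writer that assumes every map value is a leaf would hit the
    // nested map here and fail, analogous to the reported exception.
    static void write(String prefix, Object value, StringBuilder out) {
        if (value instanceof Map) {
            for (Map.Entry<?, ?> e : ((Map<?, ?>) value).entrySet()) {
                // Recurse: the value of a map entry may itself be a map.
                write(prefix + e.getKey() + ".", e.getValue(), out);
            }
        } else {
            out.append(prefix).append('=').append(value).append('\n');
        }
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("level2_key1", "value1");
        inner.put("level2_key2", "value2");
        Map<String, Object> outer = new LinkedHashMap<>();
        outer.put("level1", inner);   // map value is a map

        StringBuilder out = new StringBuilder();
        write("", outer, out);
        System.out.print(out);
        // level1.level2_key1=value1
        // level1.level2_key2=value2
    }
}
```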



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7292) Hive on Spark

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7292:
--

Component/s: Spark

 Hive on Spark
 -

 Key: HIVE-7292
 URL: https://issues.apache.org/jira/browse/HIVE-7292
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: Hive-on-Spark.pdf


 Spark as an open-source data analytics cluster computing framework has gained 
 significant momentum recently. Many Hive users already have Spark installed 
 as their computing backbone. To take advantage of Hive, they still need to 
 have either MapReduce or Tez on their cluster. This initiative will provide 
 users a new alternative so that they can consolidate their backends. 
 Secondly, providing such an alternative further increases Hive's adoption, as 
 it exposes Spark users to a viable, feature-rich, de facto standard SQL tool 
 on Hadoop.
 Finally, allowing Hive to run on Spark also has performance benefits. Hive 
 queries, especially those involving multiple reducer stages, will run faster, 
 thus improving the user experience, as Tez does.
 This is an umbrella JIRA which will cover many coming subtasks. The design doc 
 will be attached here shortly, and will be on the wiki as well. Feedback from 
 the community is greatly appreciated!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7329) Create SparkWork

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7329:
--

Component/s: Spark

 Create SparkWork
 

 Key: HIVE-7329
 URL: https://issues.apache.org/jira/browse/HIVE-7329
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang

 This class encapsulates all the work objects that can be executed in a single 
 Spark job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7330) Create SparkTask

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7330:
--

Component/s: Spark

 Create SparkTask
 

 Key: HIVE-7330
 URL: https://issues.apache.org/jira/browse/HIVE-7330
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang

 SparkTask handles the execution of SparkWork. It will execute a graph of map 
 and reduce work using a SparkClient instance.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7370) Initial ground work for Hive on Spark [Spark branch]

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7370:
--

Component/s: Spark

 Initial ground work for Hive on Spark [Spark branch]
 

 Key: HIVE-7370
 URL: https://issues.apache.org/jira/browse/HIVE-7370
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-7370.patch, spark_1.0.0.patch


 Contribute PoC code to Hive on Spark as the ground work for subsequent tasks. 
 While it has hacks and badly organized code, it will change, and more 
 importantly it allows multiple people to work on different components 
 concurrently.
 With this, simple queries such as "select col from tab where ..." and "select 
 grp, avg(val) from tab group by grp where ..." can be executed on Spark.
 Contents of the patch:
 1. code path for an additional execution engine
 2. essential classes such as SparkWork, SparkTask, SparkCompiler, 
 HiveMapFunction, HiveReduceFunction, SparkClient, etc.
 3. some code changes to existing classes
 4. build infrastructure
 5. utility classes
 To try running Hive on Spark, for now you need to:
 1. build Spark 1.0.0 yourself with the patch attached.
 2. invoke the Hive client with the environment variable MASTER pointing to 
 the master URL of Spark.
 3. set hive.execution.engine=spark
 4. execute supported queries.
 NO PRECOMMIT TESTS. This is for the spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7370) Initial ground work for Hive on Spark [Spark branch]

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7370:
--

Attachment: (was: spark_1.0.0.patch)

 Initial ground work for Hive on Spark [Spark branch]
 

 Key: HIVE-7370
 URL: https://issues.apache.org/jira/browse/HIVE-7370
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-7370.patch, spark_1.0.0.patch


 Contribute PoC code to Hive on Spark as the ground work for subsequent tasks. 
 While it has hacks and badly organized code, it will change, and more 
 importantly it allows multiple people to work on different components 
 concurrently.
 With this, simple queries such as "select col from tab where ..." and "select 
 grp, avg(val) from tab group by grp where ..." can be executed on Spark.
 Contents of the patch:
 1. code path for an additional execution engine
 2. essential classes such as SparkWork, SparkTask, SparkCompiler, 
 HiveMapFunction, HiveReduceFunction, SparkClient, etc.
 3. some code changes to existing classes
 4. build infrastructure
 5. utility classes
 To try running Hive on Spark, for now you need to:
 1. build Spark 1.0.0 yourself with the patch attached.
 2. invoke the Hive client with the environment variable MASTER pointing to 
 the master URL of Spark.
 3. set hive.execution.engine=spark
 4. execute supported queries.
 NO PRECOMMIT TESTS. This is for the spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7370) Initial ground work for Hive on Spark [Spark branch]

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7370:
--

Attachment: spark_1.0.0.patch

 Initial ground work for Hive on Spark [Spark branch]
 

 Key: HIVE-7370
 URL: https://issues.apache.org/jira/browse/HIVE-7370
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-7370.patch, spark_1.0.0.patch


 Contribute PoC code to Hive on Spark as the ground work for subsequent tasks. 
 While it has hacks and badly organized code, it will change, and more 
 importantly it allows multiple people to work on different components 
 concurrently.
 With this, simple queries such as "select col from tab where ..." and "select 
 grp, avg(val) from tab group by grp where ..." can be executed on Spark.
 Contents of the patch:
 1. code path for an additional execution engine
 2. essential classes such as SparkWork, SparkTask, SparkCompiler, 
 HiveMapFunction, HiveReduceFunction, SparkClient, etc.
 3. some code changes to existing classes
 4. build infrastructure
 5. utility classes
 To try running Hive on Spark, for now you need to:
 1. build Spark 1.0.0 yourself with the patch attached.
 2. invoke the Hive client with the environment variable MASTER pointing to 
 the master URL of Spark.
 3. set hive.execution.engine=spark
 4. execute supported queries.
 NO PRECOMMIT TESTS. This is for the spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-7370) Initial ground work for Hive on Spark [Spark branch]

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang resolved HIVE-7370.
---

   Resolution: Fixed
Fix Version/s: spark-branch

Patch committed to spark-branch.

 Initial ground work for Hive on Spark [Spark branch]
 

 Key: HIVE-7370
 URL: https://issues.apache.org/jira/browse/HIVE-7370
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: spark-branch

 Attachments: HIVE-7370.patch, spark_1.0.0.patch


 Contribute PoC code to Hive on Spark as the ground work for subsequent tasks. 
 While it has hacks and badly organized code, it will change, and more 
 importantly it allows multiple people to work on different components 
 concurrently.
 With this, simple queries such as "select col from tab where ..." and "select 
 grp, avg(val) from tab group by grp where ..." can be executed on Spark.
 Contents of the patch:
 1. code path for an additional execution engine
 2. essential classes such as SparkWork, SparkTask, SparkCompiler, 
 HiveMapFunction, HiveReduceFunction, SparkClient, etc.
 3. some code changes to existing classes
 4. build infrastructure
 5. utility classes
 To try running Hive on Spark, for now you need to:
 1. build Spark 1.0.0 yourself with the patch attached.
 2. invoke the Hive client with the environment variable MASTER pointing to 
 the master URL of Spark.
 3. set hive.execution.engine=spark
 4. execute supported queries.
 NO PRECOMMIT TESTS. This is for the spark branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-09 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7371:
-

 Summary: Identify a minimum set of JARs needed to ship to Spark 
cluster [Spark Branch]
 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang


Currently, the Spark client ships all Hive JARs, including those that Hive depends 
on, to the Spark cluster when a query is executed by Spark. This is not efficient 
and can cause library conflicts. Ideally, only a minimum set of JARs needs 
to be shipped. This task is to identify such a set.

We should learn from the current MR setup, for which I assume only the hive-exec 
JAR is shipped to the MR cluster.

We also need to ensure that user-supplied JARs are shipped to the Spark 
cluster as well, in a similar fashion to MR.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7372) Select query gives unpredictable incorrect result when parallelism is greater than 1 [Spark Branch]

2014-07-09 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7372:
-

 Summary: Select query gives unpredictable incorrect result when 
parallelism is greater than 1 [Spark Branch]
 Key: HIVE-7372
 URL: https://issues.apache.org/jira/browse/HIVE-7372
 Project: Hive
  Issue Type: Bug
  Components: Spark
Reporter: Xuefu Zhang


In SparkClient.java, if the following property is set, unpredictable, incorrect 
results may be observed.
{code}
sparkConf.set("spark.default.parallelism", "1");
{code}

It's suspected that there are some concurrency issues, as Spark may process 
multiple datasets in a single JVM when parallelism is greater than 1 in order 
to use multiple cores.
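The suspected failure mode, parallel tasks in one JVM sharing mutable state, can be reproduced in isolation with plain Java threads. This is an illustration of the hazard only, not Hive's or Spark's actual code path; all names below are ours.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedStateDemo {
    // A counter that is NOT thread-safe, standing in for any
    // per-process mutable state shared by concurrently running tasks.
    static int counter = 0;

    static int countTo(int perThread, int threads) throws InterruptedException {
        counter = 0;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    counter++;   // unsynchronized read-modify-write
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // With one thread the result is deterministic.
        System.out.println(countTo(100_000, 1));   // 100000
        // With parallelism > 1, increments race and updates are lost,
        // so the result is typically below 400000 and varies per run.
        System.out.println(countTo(100_000, 4));
    }
}
```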



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-09 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7373:
-

 Summary: Hive should not remove trailing zeros for decimal numbers
 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.1, 0.13.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


Currently Hive blindly removes trailing zeros from a decimal input number, as a 
sort of standardization. This is questionable in theory and problematic in practice.

1. In a decimal context, the number 3.140 has a different semantic meaning from 
the number 3.14. Removing trailing zeros loses this meaning.

2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing zero, 
and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a decimal 
column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL because the 
column doesn't allow a decimal number with an integer part.

Therefore, I propose Hive preserve the trailing zeros. With this, in the above 
example, 0.0, 0.00, and so on will be represented as 0.0 (precision=1, 
scale=1) internally.
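The precision/scale loss described above can be demonstrated with plain `java.math.BigDecimal`, whose `stripTrailingZeros()` does essentially what the report objects to (this is an illustration using the JDK, not Hive's own decimal code):

```java
import java.math.BigDecimal;

public class TrailingZerosDemo {
    // Returns {precision, scale} of a decimal literal.
    static int[] shape(String literal) {
        BigDecimal d = new BigDecimal(literal);
        return new int[] { d.precision(), d.scale() };
    }

    public static void main(String[] args) {
        BigDecimal pi = new BigDecimal("3.140");        // precision 4, scale 3
        BigDecimal stripped = pi.stripTrailingZeros();  // 3.14: precision 3, scale 2
        System.out.println(pi + " -> " + stripped);

        // The extreme case from the report: a DECIMAL(1,1) value like 0.0.
        // On Java 8 and later, stripping its zeros yields plain 0 with
        // scale 0, which no longer fits a (1,1) column that allows no
        // integer digits -- hence the NULLs described above.
        BigDecimal zero = new BigDecimal("0.0");
        System.out.println(zero.stripTrailingZeros().scale());
    }
}
```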



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7373) Hive should not remove trailing zeros for decimal numbers

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7373:
--

Description: 
Currently Hive blindly removes trailing zeros from a decimal input number, as a 
sort of standardization. This is questionable in theory and problematic in practice.

1. In a decimal context, the number 3.140 has a different semantic meaning from 
the number 3.14. Removing trailing zeros loses this meaning.

2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing zero, 
and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a decimal 
column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL because the 
column doesn't allow a decimal number with an integer part.

Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
allows). With this, in the above example, 0.0, 0.00, and so on will be represented 
as 0.0 (precision=1, scale=1) internally.

  was:
Currently Hive blindly removes trailing zeros from a decimal input number, as a 
sort of standardization. This is questionable in theory and problematic in practice.

1. In a decimal context, the number 3.140 has a different semantic meaning from 
the number 3.14. Removing trailing zeros loses this meaning.

2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing zero, 
and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a decimal 
column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL because the 
column doesn't allow a decimal number with an integer part.

Therefore, I propose Hive preserve the trailing zeros. With this, in the above 
example, 0.0, 0.00, and so on will be represented as 0.0 (precision=1, 
scale=1) internally.


 Hive should not remove trailing zeros for decimal numbers
 -

 Key: HIVE-7373
 URL: https://issues.apache.org/jira/browse/HIVE-7373
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.13.0, 0.13.1
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 Currently Hive blindly removes trailing zeros from a decimal input number, as 
 a sort of standardization. This is questionable in theory and problematic in 
 practice.
 1. In a decimal context, the number 3.140 has a different semantic meaning from 
 the number 3.14. Removing trailing zeros loses this meaning.
 2. In an extreme case, 0.0 has (p, s) of (1, 1). Hive removes the trailing zero, 
 and then the number becomes 0, which has (p, s) of (1, 0). Thus, for a 
 decimal column of (1, 1), input such as 0.0, 0.00, and so on becomes NULL 
 because the column doesn't allow a decimal number with an integer part.
 Therefore, I propose Hive preserve the trailing zeros (up to what the scale 
 allows). With this, in the above example, 0.0, 0.00, and so on will be 
 represented as 0.0 (precision=1, scale=1) internally.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7364) Trunk cannot be built on -Phadoop1 after HIVE-7144

2014-07-09 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056365#comment-14056365
 ] 

Gopal V commented on HIVE-7364:
---

Thanks [~navis] for fixing this.

Is there some change happening to the build infra that helps me catch such 
issues pre-commit?

 Trunk cannot be built on -Phadoop1 after HIVE-7144
 --

 Key: HIVE-7364
 URL: https://issues.apache.org/jira/browse/HIVE-7364
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Navis
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-7364.1.patch.txt


 Text.copyBytes() is introduced in hadoop-2



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)
Damien Carol created HIVE-7374:
--

 Summary: SHOW COMPACTIONS fail on trunk
 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol


In the CLI on trunk, after running:
{code}
show compactions;
{code}
it returns this error:
{code}
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. 
org.apache.thrift.transport.TTransportException
{code}

In the metastore log:
{noformat}
2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
(TThreadPoolServer.java:run(213)) - Thrift error occurred during processing of 
message.
org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
unset! Struct:ShowCompactResponse(compacts:null)
at 
org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7374:
---

Status: Patch Available  (was: Open)

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
  Labels: cli, compaction, metastore

 In the CLI on trunk, after running:
 {code}
 show compactions;
 {code}
 it returns this error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {code}
 In the metastore log:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7374:
---

Attachment: HIVE-7374.1.patch

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In the CLI on trunk, after running:
 {code}
 show compactions;
 {code}
 it returns this error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {code}
 In the metastore log:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056423#comment-14056423
 ] 

Damien Carol commented on HIVE-7374:


The bug is simple: a ShowCompactResponse object is built without a list of 
ShowCompactResponseElement objects.
This makes the Thrift layer throw an error, because 'compacts' is a required field.
The patch adds an empty list when the ShowCompactResponse object is 
instantiated.
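The pattern behind the fix can be sketched with a plain-Java stand-in for the Thrift-generated class. The names mirror the real ones, but this is not Hive code: Thrift generates the actual validation, and the stand-in only models required-field checking.

```java
import java.util.ArrayList;
import java.util.List;

public class ShowCompactDemo {
    // Minimal stand-in for the generated ShowCompactResponse:
    // 'compacts' is a required field, so validation rejects null.
    static class Response {
        List<String> compacts;   // null until explicitly set

        void validate() {
            if (compacts == null) {
                throw new IllegalStateException(
                    "Required field 'compacts' is unset!");
            }
        }
    }

    // Before the patch: when no compactions existed, the response was
    // returned with the list never set, so serialization-time validation
    // failed.  The fix: always initialize the field, with an empty list
    // if there is nothing to report.
    static Response buildResponse(List<String> found) {
        Response r = new Response();
        r.compacts = (found != null) ? found : new ArrayList<>();
        return r;
    }

    public static void main(String[] args) {
        buildResponse(null).validate();   // passes: empty list, not null
        System.out.println("validate() passed");
    }
}
```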

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In the CLI on trunk, after running:
 {code}
 show compactions;
 {code}
 it returns this error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {code}
 In the metastore log:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7213) COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO

2014-07-09 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056443#comment-14056443
 ] 

Ashutosh Chauhan commented on HIVE-7213:


As a workaround, run:
{code}
set hive.compute.query.using.stats=false;
{code}

 COUNT(*) returns out-dated count value after TRUNCATE or INSERT INTO
 

 Key: HIVE-7213
 URL: https://issues.apache.org/jira/browse/HIVE-7213
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Statistics
Affects Versions: 0.13.0
 Environment: HDP 2.1
 Windows Server 2012 64-bit
Reporter: Moustafa Aboul Atta

 Running a query to count the number of rows in a table through
 {{SELECT COUNT( * ) FROM t}}
 always returns only the number of rows last added through the following statement:
 {{INSERT INTO TABLE t SELECT r FROM t2}}
 However, running
 {{SELECT * FROM t}}
 returns the expected results, i.e. the old and newly added rows.
 Also, after running
 {{TRUNCATE TABLE t;}}
 the count query still returns the original number of rows in the table, while
 {{SELECT * FROM t;}}
 returns nothing, as expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7356) Table level stats collection fail for partitioned tables

2014-07-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-7356:
---

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 Table level stats collection fail for partitioned tables
 

 Key: HIVE-7356
 URL: https://issues.apache.org/jira/browse/HIVE-7356
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.14.0

 Attachments: HIVE-7356.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7288) Enable support for -libjars and -archives in WebHcat for Streaming MapReduce jobs

2014-07-09 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056487#comment-14056487
 ] 

Eugene Koifman commented on HIVE-7288:
--

[~shanyu] I left some comments on RB.

 Enable support for -libjars and -archives in WebHcat for Streaming MapReduce 
 jobs
 -

 Key: HIVE-7288
 URL: https://issues.apache.org/jira/browse/HIVE-7288
 Project: Hive
  Issue Type: New Feature
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1
 Environment: HDInsight deploying HDP 2.1;  Also HDP 2.1 on Windows 
Reporter: Azim Uddin
Assignee: shanyu zhao
 Attachments: HIVE-7288.1.patch, hive-7288.patch


 Issue:
 ==
 Due to lack of parameters (or support for) equivalent of '-libjars' and 
 '-archives' in WebHcat REST API, we cannot use an external Java Jars or 
 Archive files with a Streaming MapReduce job, when the job is submitted via 
 WebHcat/templeton. 
 I am citing a few use cases here, but there can be plenty of scenarios like 
 this-
 #1 
 (for -archives):In order to use R with a hadoop distribution like HDInsight 
 or HDP on Windows, we could package the R directory up in a zip file and 
 rename it to r.jar and put it into HDFS or WASB. We can then do 
 something like this from hadoop command line (ignore the wasb syntax, same 
 command can be run with hdfs) - 
 hadoop jar %HADOOP_HOME%\lib\hadoop-streaming.jar -archives 
 wasb:///example/jars/r.jar -files 
 wasb:///example/apps/mapper.r,wasb:///example/apps/reducer.r -mapper 
 ./r.jar/bin/Rscript.exe mapper.r -reducer ./r.jar/bin/Rscript.exe 
 reducer.r -input /example/data/gutenberg -output /probe/r/wordcount
 This works from hadoop command line, but due to lack of support for 
 '-archives' parameter in WebHcat, we can't submit the same Streaming MR job 
 via WebHcat.
 #2 (for -libjars):
 Consider a scenario where a user would like to use a custom inputFormat with 
 a Streaming MapReduce job and wrote his own custom InputFormat JAR. From a 
 hadoop command line we can do something like this - 
 hadoop jar /path/to/hadoop-streaming.jar \
 -libjars /path/to/custom-formats.jar \
 -D map.output.key.field.separator=, \
 -D mapred.text.key.partitioner.options=-k1,1 \
 -input my_data/ \
 -output my_output/ \
 -outputformat test.example.outputformat.DateFieldMultipleOutputFormat 
 \
 -mapper my_mapper.py \
 -reducer my_reducer.py \
 But due to lack of support for '-libjars' parameter for streaming MapReduce 
 job in WebHcat, we can't submit the above streaming MR job (that uses a 
 custom Java JAR) via WebHcat.
 Impact:
 
 We think, being able to submit jobs remotely is a vital feature for hadoop to 
 be enterprise-ready and WebHcat plays an important role there. Streaming 
 MapReduce job is also very important for interoperability. So, it would be 
 very useful to keep WebHcat on par with hadoop command line in terms of 
 streaming MR job submission capability.
 Ask:
 
 Enable parameter support for 'libjars' and 'archives' in WebHcat for Hadoop 
 streaming jobs in WebHcat.
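The ask above can be sketched as a payload builder for WebHCat's streaming-job endpoint. This is an illustrative sketch, not the actual Templeton code: {{input}}, {{output}}, {{mapper}}, {{reducer}}, and {{files}} are existing parameters, while {{libjars}} and {{archives}} are the additions this ticket requests and do not exist yet.

```python
# Hypothetical payload builder for a WebHCat streaming job submission.
# 'libjars' and 'archives' are the PROPOSED parameters, not existing ones.
def streaming_job_payload(input_path, output_path, mapper, reducer,
                          files=None, libjars=None, archives=None):
    payload = {
        "input": input_path,
        "output": output_path,
        "mapper": mapper,
        "reducer": reducer,
    }
    if files:
        payload["files"] = ",".join(files)
    if libjars:                       # proposed: equivalent of -libjars
        payload["libjars"] = ",".join(libjars)
    if archives:                      # proposed: equivalent of -archives
        payload["archives"] = ",".join(archives)
    return payload

# Mirrors the R use case (#1) from the description:
payload = streaming_job_payload(
    "wasb:///example/data/gutenberg", "/probe/r/wordcount",
    "./r.jar/bin/Rscript.exe mapper.r", "./r.jar/bin/Rscript.exe reducer.r",
    archives=["wasb:///example/jars/r.jar"])
```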





[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056553#comment-14056553
 ] 

Damien Carol commented on HIVE-7374:


Ready for review.
Can anyone assign this ticket to me?

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In CLI in trunk after doing this :
 {code}
 show compactions;
 {code}
 Return error :
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {code}
 In metastore:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
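The stack trace points at the generated Thrift struct failing validation because its required {{compacts}} field is still null when there are no compactions to report. A minimal Python sketch, not the actual Hive code, of that failure mode and the usual defensive fix of initializing the required collection to an empty list:

```python
# Stand-in for the generated Thrift struct; 'compacts' is a required field.
class ShowCompactResponseSketch:
    def __init__(self, compacts=None):
        self.compacts = compacts

    def validate(self):
        # Thrift's generated validate() rejects unset required fields.
        if self.compacts is None:
            raise ValueError("Required field 'compacts' is unset!")

def build_response(compact_rows):
    # Defensive fix: start from an empty list, so the response validates
    # even when the metastore has no compactions to report.
    resp = ShowCompactResponseSketch(compacts=[])
    resp.compacts.extend(compact_rows)
    return resp

build_response([]).validate()  # passes: an empty list is still "set"
```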





[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Damien Carol (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Carol updated HIVE-7374:
---

Description: 
In CLI in trunk after doing this :
{{show compactions;}}
Return error :
{noformat}
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. 
org.apache.thrift.transport.TTransportException
{noformat}

In metastore:
{noformat}
2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
(TThreadPoolServer.java:run(213)) - Thrift error occurred during processing of 
message.
org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
unset! Struct:ShowCompactResponse(compacts:null)
at 
org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}

  was:
In CLI in trunk after doing this :
{code}
show compactions;
{code}
Return error :
{code}
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. 
org.apache.thrift.transport.TTransportException
{code}

In metastore:
{noformat}
2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
(TThreadPoolServer.java:run(213)) - Thrift error occurred during processing of 
message.
org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
unset! Struct:ShowCompactResponse(compacts:null)
at 
org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}


 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In CLI in trunk after doing this :
 {{show compactions;}}
 Return error :
 {noformat}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {noformat}
 In metastore:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 

[jira] [Commented] (HIVE-7364) Trunk cannot be built on -Phadoop1 after HIVE-7144

2014-07-09 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056588#comment-14056588
 ] 

Szehon Ho commented on HIVE-7364:
-

There's no way the build infra can handle running tests in both profiles, but I 
think we can probably add a pre-test step to compile with optional profiles, 
though the complexity would be in how generic it has to be.

 Trunk cannot be built on -Phadoop1 after HIVE-7144
 --

 Key: HIVE-7364
 URL: https://issues.apache.org/jira/browse/HIVE-7364
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Navis
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-7364.1.patch.txt


 Text.copyBytes() is introduced in hadoop-2





[jira] [Commented] (HIVE-7364) Trunk cannot be built on -Phadoop1 after HIVE-7144

2014-07-09 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056592#comment-14056592
 ] 

Szehon Ho commented on HIVE-7364:
-

HIVE-7375

 Trunk cannot be built on -Phadoop1 after HIVE-7144
 --

 Key: HIVE-7364
 URL: https://issues.apache.org/jira/browse/HIVE-7364
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Navis
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-7364.1.patch.txt


 Text.copyBytes() is introduced in hadoop-2





[jira] [Created] (HIVE-7375) Add option in test infra to compile in other profiles (like hadoop-1)

2014-07-09 Thread Szehon Ho (JIRA)
Szehon Ho created HIVE-7375:
---

 Summary: Add option in test infra to compile in other profiles 
(like hadoop-1)
 Key: HIVE-7375
 URL: https://issues.apache.org/jira/browse/HIVE-7375
 Project: Hive
  Issue Type: Test
Reporter: Szehon Ho
Assignee: Szehon Ho


As a lot of changes are breaking hadoop-1 compilation, it might be nice to add 
an option in the test infra to compile on optional profiles as a pre-step 
before testing on the main profile.
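The pre-step described above can be sketched as a small plan builder; the profile names ({{hadoop-1}}, {{hadoop-2}}) follow Hive's Maven profiles but are assumptions of this sketch, not part of the actual test infra.

```python
# Sketch: build the command sequence for a compile-only pre-step on
# optional profiles, followed by the real test run on the main profile.
def compile_cmd(profile):
    return ["mvn", "clean", "install", "-DskipTests", "-P" + profile]

def pre_test_plan(optional_profiles, main_profile):
    plan = [compile_cmd(p) for p in optional_profiles]  # cheap compile checks
    plan.append(["mvn", "test", "-P" + main_profile])   # full test run last
    return plan

plan = pre_test_plan(["hadoop-1"], "hadoop-2")
```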





[jira] [Updated] (HIVE-7375) Add option in test infra to compile in other profiles (like hadoop-1)

2014-07-09 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-7375:


Description: As we are seeing some commits breaking hadoop-1 compilation 
due to lack of pre-commit coverage, it might be nice to add an option in the 
test infra to compile on optional profiles as a pre-step before testing on the 
main profile.  (was: As a lot of changes are breaking hadoop-1 compilation, it 
might be nice to add an option in the test infra to compile on optional 
profiles as a pre-step before testing on the main profile.)

 Add option in test infra to compile in other profiles (like hadoop-1)
 -

 Key: HIVE-7375
 URL: https://issues.apache.org/jira/browse/HIVE-7375
 Project: Hive
  Issue Type: Test
Reporter: Szehon Ho
Assignee: Szehon Ho

 As we are seeing some commits breaking hadoop-1 compilation due to lack of 
 pre-commit coverage, it might be nice to add an option in the test infra to 
 compile on optional profiles as a pre-step before testing on the main profile.





[jira] [Updated] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6252:
-

Attachment: HIVE-6252.1.patch

Attaching patch.  Since an extra grantOption argument was required for 
revoke_role, this requires Metastore API changes.  I ended up creating a new 
thrift metastore call to handle both grant and revoke role requests.
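The single thrift call described above can be sketched as one request object carrying the operation type; all field and method names here are illustrative, not the actual Metastore API.

```python
from enum import Enum

class GrantRevokeType(Enum):
    GRANT = 1
    REVOKE = 2

class GrantRevokeRoleRequest:
    """Hypothetical request object; field names are illustrative only."""
    def __init__(self, request_type, role_name, principal_name,
                 grant_option=False):
        self.request_type = request_type
        self.role_name = role_name
        self.principal_name = principal_name
        # Carrying grantOption on the request is what lets REVOKE support
        # 'with admin option' without adding a second metastore method.
        self.grant_option = grant_option

def grant_revoke_role(request, roles):
    # One entry point dispatches on the request type, mirroring a single
    # grant_revoke_role() thrift call handling both operations.
    members = roles.setdefault(request.role_name, set())
    if request.request_type is GrantRevokeType.GRANT:
        members.add(request.principal_name)
    else:
        members.discard(request.principal_name)
    return roles
```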

 sql std auth - support 'with admin option' in revoke role metastore api
 ---

 Key: HIVE-6252
 URL: https://issues.apache.org/jira/browse/HIVE-6252
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
 Attachments: HIVE-6252.1.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The metastore api for revoking role privileges does not accept 'with admin 
 option' , though the syntax supports it. SQL syntax also supports grantor 
 specification in revoke role statement.
 It should be similar to the grant_role api.





[jira] [Updated] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6252:
-

Status: Patch Available  (was: Open)

 sql std auth - support 'with admin option' in revoke role metastore api
 ---

 Key: HIVE-6252
 URL: https://issues.apache.org/jira/browse/HIVE-6252
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Jason Dere
 Attachments: HIVE-6252.1.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The metastore api for revoking role privileges does not accept 'with admin 
 option' , though the syntax supports it. SQL syntax also supports grantor 
 specification in revoke role statement.
 It should be similar to the grant_role api.





[jira] [Assigned] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-6252:


Assignee: Jason Dere

 sql std auth - support 'with admin option' in revoke role metastore api
 ---

 Key: HIVE-6252
 URL: https://issues.apache.org/jira/browse/HIVE-6252
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Jason Dere
 Attachments: HIVE-6252.1.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The metastore api for revoking role privileges does not accept 'with admin 
 option' , though the syntax supports it. SQL syntax also supports grantor 
 specification in revoke role statement.
 It should be similar to the grant_role api.





Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23373/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6252
https://issues.apache.org/jira/browse/HIVE-6252


Repository: hive-git


Description
---

Parser changes - support REVOKE ADMIN ROLE FOR
New grant_revoke_role() thrift metastore method


Diffs
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
 6b2f28e 
  metastore/if/hive_metastore.thrift cc802c6 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
acef599 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
664dccd 
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
0c2209b 
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 911c997 
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
 5c00aa1 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
 5025b83 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 250756c 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
 419117c 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
 6ede03c 
  ql/src/test/queries/clientnegative/authorization_role_grant2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/authorization_role_grant1.q 051bdee 
  ql/src/test/results/clientnegative/authorization_role_grant2.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/authorization_role_grant1.q.out cdbcb26 

Diff: https://reviews.apache.org/r/23373/diff/


Testing
---

unit tests added


Thanks,

Jason Dere



[jira] [Updated] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7374:


Assignee: Damien Carol

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In CLI in trunk after doing this :
 {{show compactions;}}
 Return error :
 {noformat}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {noformat}
 In metastore:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}





[jira] [Commented] (HIVE-7374) SHOW COMPACTIONS fail on trunk

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056648#comment-14056648
 ] 

Hive QA commented on HIVE-7374:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654828/HIVE-7374.1.patch

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 5686 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testMarkCleaned
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompaction
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorPartitionCompactionNoBase
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMajorTableCompaction
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorPartitionCompaction
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.cleanupAfterMinorTableCompaction
org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactOnManyDifferentPartitionAborts
org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactTableDeltaPctNotHighEnough
org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactTableNotEnoughDeltas
org.apache.hadoop.hive.ql.txn.compactor.TestInitiator.noCompactWhenNoCompactSet
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/722/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/722/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-722/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654828

 SHOW COMPACTIONS fail on trunk
 --

 Key: HIVE-7374
 URL: https://issues.apache.org/jira/browse/HIVE-7374
 Project: Hive
  Issue Type: Bug
  Components: CLI, Metastore
Affects Versions: 0.14.0
Reporter: Damien Carol
Assignee: Damien Carol
  Labels: cli, compaction, metastore
 Attachments: HIVE-7374.1.patch


 In CLI in trunk after doing this :
 {{show compactions;}}
 Return error :
 {noformat}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 org.apache.thrift.transport.TTransportException
 {noformat}
 In metastore:
 {noformat}
 2014-07-09 17:54:10,537 ERROR [pool-3-thread-20]: server.TThreadPoolServer 
 (TThreadPoolServer.java:run(213)) - Thrift error occurred during processing 
 of message.
 org.apache.thrift.protocol.TProtocolException: Required field 'compacts' is 
 unset! Struct:ShowCompactResponse(compacts:null)
 at 
 org.apache.hadoop.hive.metastore.api.ShowCompactResponse.validate(ShowCompactResponse.java:310)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.validate(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result$show_compact_resultStandardScheme.write(ThriftHiveMetastore.java)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$show_compact_result.write(ThriftHiveMetastore.java)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
 at 
 org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:103)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}





Re: Review Request 23153: HIVE-5976: Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23153/
---

(Updated July 9, 2014, 8:16 p.m.)


Review request for hive.


Bugs: HIVE-5976
https://issues.apache.org/jira/browse/HIVE-5976


Repository: hive-git


Description
---

Apply patch


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
 ec24531117203a5c75c62d0e5b54d5a43d37fa79 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java
 PRE-CREATION 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java
 PRE-CREATION 
  
itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
41310661ced0616f6bee27af2b1195127e5230e8 
  ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
60d54b6a04e1a9601342b0159387114f7b666338 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 
412a046488eaea42a6416c7cbd514715d37e249f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
f934ac4e3b736eed1b3060fa516124c67f9a2f87 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
9c001c1495b423c19f3fa710c74f1bb1e24a08f4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 
0af25360ee6f3088c764f0c4d812f30d1eeb91d6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
b91b9a26dff6cc6af235cd09b56c47536f5b43ef 
  ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION 
  ql/src/test/results/clientnegative/fileformat_bad_class.q.out 
ab1e9357c0a7d4e21816290fbf7ed99396932b92 
  ql/src/test/results/clientnegative/genericFileFormat.q.out 
9613df95c8fc977c0ad1f717afa2db3870dfd904 
  ql/src/test/results/clientpositive/ctas.q.out 
5af90d03b72d42c30c4d31ce6b28bfd5493470ac 
  ql/src/test/results/clientpositive/storage_format_descriptor.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/ctas.q.out 
6b4c69019d45976bd0ccb705331948cd240e5750 

Diff: https://reviews.apache.org/r/23153/diff/


Testing
---


Thanks,

David Chen



Re: Review Request 23153: HIVE-5976: Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23153/
---

(Updated July 9, 2014, 8:16 p.m.)


Review request for hive.


Changes
---

Fix alter_file_format test failures.


Summary (updated)
-

HIVE-5976: Decouple input formats from STORED as keywords


Bugs: HIVE-5976
https://issues.apache.org/jira/browse/HIVE-5976


Repository: hive-git


Description (updated)
---

Apply patch


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
 ec24531117203a5c75c62d0e5b54d5a43d37fa79 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java
 PRE-CREATION 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java
 PRE-CREATION 
  
itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
41310661ced0616f6bee27af2b1195127e5230e8 
  ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
60d54b6a04e1a9601342b0159387114f7b666338 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 
412a046488eaea42a6416c7cbd514715d37e249f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
f934ac4e3b736eed1b3060fa516124c67f9a2f87 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
9c001c1495b423c19f3fa710c74f1bb1e24a08f4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 
0af25360ee6f3088c764f0c4d812f30d1eeb91d6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
b91b9a26dff6cc6af235cd09b56c47536f5b43ef 
  ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION 
  ql/src/test/results/clientnegative/fileformat_bad_class.q.out 
ab1e9357c0a7d4e21816290fbf7ed99396932b92 
  ql/src/test/results/clientnegative/genericFileFormat.q.out 
9613df95c8fc977c0ad1f717afa2db3870dfd904 
  ql/src/test/results/clientpositive/ctas.q.out 
5af90d03b72d42c30c4d31ce6b28bfd5493470ac 
  ql/src/test/results/clientpositive/storage_format_descriptor.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/ctas.q.out 
6b4c69019d45976bd0ccb705331948cd240e5750 

Diff: https://reviews.apache.org/r/23153/diff/


Testing
---


Thanks,

David Chen



[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: HIVE-5976.5.patch

I have fixed the test failure for alter_file_format. The failure occurred 
because the table's SerDe was not set when the StorageFormatDescriptor does 
not specify a SerDe, as is the case for text and sequencefile. To fix this, I 
added a new HiveConf variable, HIVEDEFAULTSERDE, which is set to 
LazySimpleSerDe by default.

Another source of failure: if INPUTFORMAT, OUTPUTFORMAT, and SERDE are set 
using SET FILEFORMAT, the SerDe is not applied.

[~brocknoland] - I noticed that AvroSerDe is not included in this patch. Was 
this because we only want to port the current native storage formats for this 
patch and make Avro a native storage format in a separate patch? I am thinking 
that AvroSerDe's requirement for a schema URL or schema literal to be set might 
cause some complications.
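For context, the registration mechanism this patch introduces is driven by Java's ServiceLoader: each jar lists its StorageFormatDescriptor implementations in a META-INF/services file, and Hive discovers them at startup. The sketch below is a simplified illustration of that idea, not the patch's actual API; the interface methods and the manual descriptor list stand in for what ServiceLoader.load() would provide.

```java
import java.util.*;

// Simplified stand-in for the patch's StorageFormatDescriptor interface;
// method names here are illustrative assumptions.
interface StorageFormatDescriptor {
    Set<String> getNames();        // keywords usable after STORED AS, e.g. "ORC"
    String getInputFormat();
    String getOutputFormat();
}

class OrcDescriptor implements StorageFormatDescriptor {
    public Set<String> getNames() { return Collections.singleton("ORC"); }
    public String getInputFormat()  { return "org.apache.hadoop.hive.ql.io.orc.OrcInputFormat"; }
    public String getOutputFormat() { return "org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat"; }
}

public class StorageFormatFactoryDemo {
    private final Map<String, StorageFormatDescriptor> byName = new HashMap<>();

    StorageFormatFactoryDemo() {
        // In the real patch this loop would be fed by
        // ServiceLoader.load(StorageFormatDescriptor.class), which reads the
        // META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
        // entries contributed by each jar on the classpath.
        for (StorageFormatDescriptor d : Arrays.<StorageFormatDescriptor>asList(new OrcDescriptor())) {
            for (String name : d.getNames()) {
                byName.put(name.toUpperCase(Locale.ROOT), d);
            }
        }
    }

    StorageFormatDescriptor get(String name) {
        // case-insensitive lookup of the keyword used in STORED AS <name>
        return byName.get(name.toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        StorageFormatFactoryDemo factory = new StorageFormatFactoryDemo();
        // prints org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
        System.out.println(factory.get("orc").getInputFormat());
    }
}
```

The point of the indirection is exactly what the JIRA asks for: a custom SerDe jar (like the CustomTextStorageFormatDescriptor in itests) can register a new STORED AS keyword without touching the parser.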

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23153: HIVE-5976: Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23153/
---

(Updated July 9, 2014, 8:16 p.m.)


Review request for hive.


Bugs: HIVE-5976
https://issues.apache.org/jira/browse/HIVE-5976


Repository: hive-git


Description (updated)
---

HIVE-5976: Decouple input formats from STORED as keywords


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
 ec24531117203a5c75c62d0e5b54d5a43d37fa79 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java
 PRE-CREATION 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java
 PRE-CREATION 
  
itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
41310661ced0616f6bee27af2b1195127e5230e8 
  ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
60d54b6a04e1a9601342b0159387114f7b666338 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 
412a046488eaea42a6416c7cbd514715d37e249f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
f934ac4e3b736eed1b3060fa516124c67f9a2f87 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
9c001c1495b423c19f3fa710c74f1bb1e24a08f4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 
0af25360ee6f3088c764f0c4d812f30d1eeb91d6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
b91b9a26dff6cc6af235cd09b56c47536f5b43ef 
  ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION 
  ql/src/test/results/clientnegative/fileformat_bad_class.q.out 
ab1e9357c0a7d4e21816290fbf7ed99396932b92 
  ql/src/test/results/clientnegative/genericFileFormat.q.out 
9613df95c8fc977c0ad1f717afa2db3870dfd904 
  ql/src/test/results/clientpositive/ctas.q.out 
5af90d03b72d42c30c4d31ce6b28bfd5493470ac 
  ql/src/test/results/clientpositive/storage_format_descriptor.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/ctas.q.out 
6b4c69019d45976bd0ccb705331948cd240e5750 

Diff: https://reviews.apache.org/r/23153/diff/


Testing
---


Thanks,

David Chen



[jira] [Updated] (HIVE-7090) Support session-level temporary tables in Hive

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-7090:
-

Attachment: HIVE-7090.9.patch

patch v9 - rebase with trunk (testconfiguration.properties)

 Support session-level temporary tables in Hive
 --

 Key: HIVE-7090
 URL: https://issues.apache.org/jira/browse/HIVE-7090
 Project: Hive
  Issue Type: Bug
  Components: SQL
Reporter: Gunther Hagleitner
Assignee: Jason Dere
 Attachments: HIVE-7090.1.patch, HIVE-7090.2.patch, HIVE-7090.3.patch, 
 HIVE-7090.4.patch, HIVE-7090.5.patch, HIVE-7090.6.patch, HIVE-7090.7.patch, 
 HIVE-7090.8.patch, HIVE-7090.9.patch


 It's common to see sql scripts that create some temporary table as an 
 intermediate result, run some additional queries against it and then clean up 
 at the end.
 We should support temporary tables properly, meaning automatically manage the 
 life cycle and make sure the visibility is restricted to the creating 
 connection/session. Without these it's common to see left over tables in 
 meta-store or weird errors with clashing tmp table names.
 Proposed syntax:
 CREATE TEMPORARY TABLE 
 CTAS, CTL, and INSERT INTO should all be supported as usual.
 Knowing that a user wants a temp table can enable us to further optimize 
 access to it. E.g.: temp tables should be kept in memory where possible, 
 compactions and merging table files aren't required, ...
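As a hedged illustration of the proposed feature (the table and column names below are made up for this sketch, not taken from the JIRA):

```sql
-- visible only to the creating session; cleaned up automatically on close
CREATE TEMPORARY TABLE tmp_result AS
SELECT user_id, COUNT(*) AS cnt
FROM clicks
GROUP BY user_id;

SELECT * FROM tmp_result WHERE cnt > 10;
-- no trailing DROP TABLE needed, and no tmp-table name clashes
-- with other sessions
```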



--


[jira] [Updated] (HIVE-7090) Support session-level temporary tables in Hive

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-7090:
-

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Brock for reviewing.

 Support session-level temporary tables in Hive
 --

 Key: HIVE-7090
 URL: https://issues.apache.org/jira/browse/HIVE-7090
 Project: Hive
  Issue Type: Bug
  Components: SQL
Reporter: Gunther Hagleitner
Assignee: Jason Dere
 Fix For: 0.14.0

 Attachments: HIVE-7090.1.patch, HIVE-7090.2.patch, HIVE-7090.3.patch, 
 HIVE-7090.4.patch, HIVE-7090.5.patch, HIVE-7090.6.patch, HIVE-7090.7.patch, 
 HIVE-7090.8.patch, HIVE-7090.9.patch


 It's common to see sql scripts that create some temporary table as an 
 intermediate result, run some additional queries against it and then clean up 
 at the end.
 We should support temporary tables properly, meaning automatically manage the 
 life cycle and make sure the visibility is restricted to the creating 
 connection/session. Without these it's common to see left over tables in 
 meta-store or weird errors with clashing tmp table names.
 Proposed syntax:
 CREATE TEMPORARY TABLE 
 CTAS, CTL, and INSERT INTO should all be supported as usual.
 Knowing that a user wants a temp table can enable us to further optimize 
 access to it. E.g.: temp tables should be kept in memory where possible, 
 compactions and merging table files aren't required, ...



--


[jira] [Created] (HIVE-7376) add minimizeJar to jdbc/pom.xml

2014-07-09 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-7376:


 Summary: add minimizeJar to jdbc/pom.xml
 Key: HIVE-7376
 URL: https://issues.apache.org/jira/browse/HIVE-7376
 Project: Hive
  Issue Type: Bug
Reporter: Eugene Koifman


adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin 
reduces the uber jar from 51MB to 27MB.  Is there any reason not to add it?
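For reference, the flag sits inside maven-shade-plugin's configuration in jdbc/pom.xml. A sketch of the relevant fragment (execution bindings and versions elided; this is not the actual pom):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- strip classes that are not statically reachable from the jar's own code -->
    <minimizeJar>true</minimizeJar>
  </configuration>
</plugin>
```

One caveat worth checking before adding it: minimizeJar drops classes that are reached only reflectively, so anything loaded via Class.forName (JDBC drivers, SPI implementations) may need explicit filter/include entries to survive the shrink.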



--


[jira] [Updated] (HIVE-7376) add minimizeJar to jdbc/pom.xml

2014-07-09 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-7376:
-

Description: adding {code}<minimizeJar>true</minimizeJar>{code} to 
maven-shade-plugin reduces the uber jar 
(hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 27MB.  Is there any 
reason not to add it?  (was: adding 
{code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin reduces the 
uber jar from 51MB to 27MB.  Is there any reason not to add it?)

 add minimizeJar to jdbc/pom.xml
 ---

 Key: HIVE-7376
 URL: https://issues.apache.org/jira/browse/HIVE-7376
 Project: Hive
  Issue Type: Bug
Reporter: Eugene Koifman

 adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin 
 reduces the uber jar (hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 
 27MB.  Is there any reason not to add it?



--


[jira] [Commented] (HIVE-7364) Trunk cannot be built on -Phadoop1 after HIVE-7144

2014-07-09 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056754#comment-14056754
 ] 

Gopal V commented on HIVE-7364:
---

Thanks, [~szehon].  Will follow that JIRA.


 Trunk cannot be built on -Phadoop1 after HIVE-7144
 --

 Key: HIVE-7364
 URL: https://issues.apache.org/jira/browse/HIVE-7364
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Navis
Assignee: Navis
 Fix For: 0.14.0

 Attachments: HIVE-7364.1.patch.txt


 Text.copyBytes() is introduced in hadoop-2



--


[jira] [Updated] (HIVE-7376) add minimizeJar to jdbc/pom.xml

2014-07-09 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-7376:
-

Description: 
adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin 
reduces the uber jar (hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 
27MB.  Is there any reason not to add it?

https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#minimizeJar

  was:adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin 
reduces the uber jar (hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 
27MB.  Is there any reason not to add it?


 add minimizeJar to jdbc/pom.xml
 ---

 Key: HIVE-7376
 URL: https://issues.apache.org/jira/browse/HIVE-7376
 Project: Hive
  Issue Type: Bug
Reporter: Eugene Koifman

 adding {code}<minimizeJar>true</minimizeJar>{code} to maven-shade-plugin 
 reduces the uber jar (hive-jdbc-0.14.0-SNAPSHOT-standalone.jar) from 51MB to 
 27MB.  Is there any reason not to add it?
 https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#minimizeJar



--


[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056764#comment-14056764
 ] 

Hive QA commented on HIVE-5976:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654861/HIVE-5976.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/724/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/724/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-724/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-724/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java'
Reverted 
'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java'
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java'
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java'
Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py'
Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py'
Reverted 
'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote'
Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp'
Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp'
Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h'
Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h'
Reverted 
'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp'
Reverted 'metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb'
Reverted 'metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrivilegeBag.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponse.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Type.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowLocksResponse.java'
Reverted 

[jira] [Updated] (HIVE-7090) Support session-level temporary tables in Hive

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-7090:
-

Issue Type: New Feature  (was: Bug)

 Support session-level temporary tables in Hive
 --

 Key: HIVE-7090
 URL: https://issues.apache.org/jira/browse/HIVE-7090
 Project: Hive
  Issue Type: New Feature
  Components: SQL
Reporter: Gunther Hagleitner
Assignee: Jason Dere
 Fix For: 0.14.0

 Attachments: HIVE-7090.1.patch, HIVE-7090.2.patch, HIVE-7090.3.patch, 
 HIVE-7090.4.patch, HIVE-7090.5.patch, HIVE-7090.6.patch, HIVE-7090.7.patch, 
 HIVE-7090.8.patch, HIVE-7090.9.patch


 It's common to see sql scripts that create some temporary table as an 
 intermediate result, run some additional queries against it and then clean up 
 at the end.
 We should support temporary tables properly, meaning automatically manage the 
 life cycle and make sure the visibility is restricted to the creating 
 connection/session. Without these it's common to see left over tables in 
 meta-store or weird errors with clashing tmp table names.
 Proposed syntax:
 CREATE TEMPORARY TABLE 
 CTAS, CTL, and INSERT INTO should all be supported as usual.
 Knowing that a user wants a temp table can enable us to further optimize 
 access to it. E.g.: temp tables should be kept in memory where possible, 
 compactions and merging table files aren't required, ...



--


[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: HIVE-5976.5.patch

Previous patch was not created correctly. Attaching a new one.

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--


[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: (was: HIVE-5976.5.patch)

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--


Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23373/#review47527
---



ql/src/test/queries/clientnegative/authorization_role_grant2.q
https://reviews.apache.org/r/23373/#comment83439

set role admin; is needed here
The revoke is not succeeding (in q.out) because of that.
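A hypothetical sketch of the ordering being suggested for the q-file (role and user names here are placeholders, not the actual test contents):

```sql
-- without this, the session does not hold admin privileges and the
-- revoke below fails for the wrong reason, so the test passes vacuously
set role admin;
revoke test_role from user test_user;
```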



- Thejas Nair


On July 9, 2014, 7:02 p.m., Jason Dere wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23373/
 ---
 
 (Updated July 9, 2014, 7:02 p.m.)
 
 
 Review request for hive and Thejas Nair.
 
 
 Bugs: HIVE-6252
 https://issues.apache.org/jira/browse/HIVE-6252
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Parser changes - support REVOKE ADMIN ROLE FOR
 New grant_revoke_role() thrift metastore method
 
 
 Diffs
 -
 
   
 itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
  6b2f28e 
   metastore/if/hive_metastore.thrift cc802c6 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 acef599 
   
 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
 664dccd 
   metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
 0c2209b 
   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
 911c997 
   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
  5c00aa1 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
  5025b83 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 250756c 
   ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
  419117c 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
  6ede03c 
   ql/src/test/queries/clientnegative/authorization_role_grant2.q PRE-CREATION 
   ql/src/test/queries/clientpositive/authorization_role_grant1.q 051bdee 
   ql/src/test/results/clientnegative/authorization_role_grant2.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/authorization_role_grant1.q.out cdbcb26 
 
 Diff: https://reviews.apache.org/r/23373/diff/
 
 
 Testing
 ---
 
 unit tests added
 
 
 Thanks,
 
 Jason Dere
 




[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056853#comment-14056853
 ] 

Hive QA commented on HIVE-5976:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654880/HIVE-5976.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/726/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/726/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-726/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-726/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1609331.

At revision 1609331.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654880

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--


[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: (was: HIVE-5976.5.patch)

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--


[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: HIVE-5976.5.patch

Let's try not using --no-prefix.

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--


[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056870#comment-14056870
 ] 

Hive QA commented on HIVE-5976:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654885/HIVE-5976.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/727/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/727/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-727/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-727/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1609333.

At revision 1609333.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654885

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.
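The registration system the description asks for can be sketched with Java's 
ServiceLoader, which matches the META-INF/services entries for 
org.apache.hadoop.hive.ql.io.StorageFormatDescriptor in the patch under 
review. The Descriptor interface and class names below are simplified, 
hypothetical stand-ins, and one TEXTFILE entry is registered by hand so the 
sketch runs standalone without provider-configuration files:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Sketch of a keyword -> storage format registry (hypothetical names).
// Real descriptors would be discovered from META-INF/services; a built-in
// TEXTFILE entry is registered manually so the example is self-contained.
public class StorageFormatRegistry {
    public interface Descriptor {
        String name();          // keyword used in STORED AS <name>
        String inputFormat();
        String outputFormat();
    }

    private final Map<String, Descriptor> byName = new HashMap<>();

    public StorageFormatRegistry() {
        // Pick up any descriptors registered on the classpath.
        for (Descriptor d : ServiceLoader.load(Descriptor.class)) {
            register(d);
        }
        // Hand-registered built-in, standing in for a service entry.
        register(new Descriptor() {
            public String name() { return "TEXTFILE"; }
            public String inputFormat() {
                return "org.apache.hadoop.mapred.TextInputFormat";
            }
            public String outputFormat() {
                return "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
            }
        });
    }

    public void register(Descriptor d) {
        byName.put(d.name().toUpperCase(), d);
    }

    public Descriptor get(String keyword) {
        return byName.get(keyword.toUpperCase());
    }

    public static void main(String[] args) {
        StorageFormatRegistry r = new StorageFormatRegistry();
        System.out.println(r.get("textfile").inputFormat());
    }
}
```

With this pattern, adding a new STORED AS keyword means shipping a descriptor 
implementation plus a provider-configuration file, rather than editing a 
hard-coded keyword map in the parser.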



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056873#comment-14056873
 ] 

David Chen commented on HIVE-5976:
--

Ok, I tried uploading a patch using:

 * git format-patch trunk
 * git diff trunk --no-prefix
 * git diff trunk

I am getting the {{The patch does not appear to apply with p0, p1, or p2}} 
error for each of them.

Is there something wrong with the Hive pre-commit test job?

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056876#comment-14056876
 ] 

Brock Noland commented on HIVE-5976:


Can you verify your patch applies on top of trunk HEAD?

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





[jira] [Commented] (HIVE-7286) Parameterize HCatMapReduceTest for testing against all Hive storage formats

2014-07-09 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056894#comment-14056894
 ] 

David Chen commented on HIVE-7286:
--

Hi [~szehon], thanks for taking the time to review this patch and for your 
feedback and advice.

I have made some progress finishing HIVE-5976 and fixing the remaining test 
failures. While working on that patch, however, I realized that it only covers 
the current set of native SerDes, i.e. SequenceFile, text, Parquet, ORC, and 
RCFile, but not Avro or any of the other SerDes found throughout the Hive 
codebase. I do not think this test should be limited to covering only those 
storage formats, or only the ones in SERDESUSINGMETASTOREFORSCHEMA. It should 
cover all SerDes in the Hive codebase, especially since the other SerDes are 
very likely in active use; we use Avro almost exclusively here at LinkedIn.

After further thought, Avro is a special case because it requires an Avro 
schema to be set in the SerDe or table properties, and as a result the test 
code must provide the TypeInfo to Avro Schema converter. This is a requirement 
that other SerDes do not have. At the same time, the TypeInfo to Avro Schema 
converter has good test coverage and will become useful when we make the 
AvroSerDe a native Hive storage format and remove the requirement for 
specifying an Avro schema, which should definitely be done in the future.

SerDe devs would only be required to add an entry to the table in the test with 
the SerDe class and nulls in the other fields. This would indicate that 
HCatalog is not being tested against the new storage format.

I am currently blocked on HIVE-5976 because there seem to be some issues with 
the pre-commit tests; even so, I think I will need to spend some more time to 
finish that patch. On further thought, I think that after HIVE-5976 is 
committed we will still want to keep most of the code in this patch and just 
modify the test to make exceptions using the enumeration of 
StorageFormatDescriptor in place of the TestStorageFormat classes (which are 
nearly identical to StorageFormatDescriptor).

Since this patch is ready and expands the coverage of the HCatMapReduceTest 
tests to run against RCFile, ORC, and SequenceFile, and since HIVE-5976 will 
take more time to complete, I think we should go ahead and commit this patch 
and open a new ticket to make the necessary changes to these tests once 
HIVE-5976 is done. I am also working on adding a similar fixture to the 
HCatalog Pig Adapter tests, which also requires this patch.
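The fixture table described above can be sketched as follows (the class and 
field names are hypothetical stand-ins for illustration, not 
HCatMapReduceTest's actual code; a row whose formats are null is enumerated 
but deliberately left untested against HCatalog):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a SerDe -> input/output format fixture table. Names are
// hypothetical; null formats mark SerDes that are enumerated but not yet
// tested against HCatalog, as described in the comment above.
public class StorageFormatTable {
    static final class Entry {
        final String serde, inputFormat, outputFormat;
        Entry(String serde, String in, String out) {
            this.serde = serde;
            this.inputFormat = in;
            this.outputFormat = out;
        }
        // An entry is "tested" only when both formats are supplied.
        boolean tested() { return inputFormat != null && outputFormat != null; }
    }

    static final List<Entry> ENTRIES = Arrays.asList(
        new Entry("org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe",
                  "org.apache.hadoop.hive.ql.io.RCFileInputFormat",
                  "org.apache.hadoop.hive.ql.io.RCFileOutputFormat"),
        // Null formats: enumerated, but HCatalog coverage is still missing.
        new Entry("org.apache.hadoop.hive.serde2.avro.AvroSerDe", null, null)
    );

    public static void main(String[] args) {
        long tested = ENTRIES.stream().filter(Entry::tested).count();
        System.out.println("tested=" + tested + " total=" + ENTRIES.size());
    }
}
```

A parameterized test runner could then iterate over ENTRIES, skipping (or 
flagging) rows whose formats are null, which keeps the untested storage 
formats visible instead of silently absent.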

 Parameterize HCatMapReduceTest for testing against all Hive storage formats
 ---

 Key: HIVE-7286
 URL: https://issues.apache.org/jira/browse/HIVE-7286
 Project: Hive
  Issue Type: Test
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7286.1.patch


 Currently, HCatMapReduceTest, which is extended by the following test suites:
  * TestHCatDynamicPartitioned
  * TestHCatNonPartitioned
  * TestHCatPartitioned
  * TestHCatExternalDynamicPartitioned
  * TestHCatExternalNonPartitioned
  * TestHCatExternalPartitioned
  * TestHCatMutableDynamicPartitioned
  * TestHCatMutableNonPartitioned
  * TestHCatMutablePartitioned
 These tests run against RCFile. Currently, only TestHCatDynamicPartitioned is 
 run against any other storage format (ORC).
 Ideally, HCatalog should be tested against all storage formats supported by 
 Hive. The easiest way to accomplish this is to turn HCatMapReduceTest into a 
 parameterized test fixture that enumerates all Hive storage formats. Until 
 HIVE-5976 is implemented, we would need to manually create the mapping of 
 SerDe to InputFormat and OutputFormat. This way, we can explicitly keep track 
 of which storage formats currently work with HCatalog or which ones are 
 untested or have test failures. The test fixture should also use Reflection 
 to find all classes in the classpath that implement the SerDe interface and 
 raise a failure if any of them are not enumerated.





[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: (was: HIVE-5976.6.patch)

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: HIVE-5976.6.patch

Ah. I rebased before I submitted the first of those patches and thought the 
error message implied something wrong with the format of the patch itself.

I have rebased on trunk again and resolved the merge conflict. Uploading a new 
patch.

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





[jira] [Updated] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-5976:
-

Attachment: HIVE-5976.6.patch

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





Re: Review Request 23153: Apply patch

2014-07-09 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23153/
---

(Updated July 9, 2014, 11:30 p.m.)


Review request for hive.


Changes
---

Rebase on trunk.


Summary (updated)
-

Apply patch


Bugs: HIVE-5976
https://issues.apache.org/jira/browse/HIVE-5976


Repository: hive-git


Description (updated)
---

Apply patch


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
 ec24531117203a5c75c62d0e5b54d5a43d37fa79 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java
 PRE-CREATION 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java
 PRE-CREATION 
  
itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
41310661ced0616f6bee27af2b1195127e5230e8 
  ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
60d54b6a04e1a9601342b0159387114f7b666338 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 
412a046488eaea42a6416c7cbd514715d37e249f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
5ac64527497d3d047d6c7bffd64c4201a66a2a04 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
9c001c1495b423c19f3fa710c74f1bb1e24a08f4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 
0af25360ee6f3088c764f0c4d812f30d1eeb91d6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
c42923f716afb89ac6c60fb386fb91c1c94413dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION 
  ql/src/test/results/clientnegative/fileformat_bad_class.q.out 
ab1e9357c0a7d4e21816290fbf7ed99396932b92 
  ql/src/test/results/clientnegative/genericFileFormat.q.out 
9613df95c8fc977c0ad1f717afa2db3870dfd904 
  ql/src/test/results/clientpositive/ctas.q.out 
5af90d03b72d42c30c4d31ce6b28bfd5493470ac 
  ql/src/test/results/clientpositive/storage_format_descriptor.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/ctas.q.out 
6b4c69019d45976bd0ccb705331948cd240e5750 

Diff: https://reviews.apache.org/r/23153/diff/


Testing
---


Thanks,

David Chen



Re: Review Request 23153: HIVE-5976: Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23153/
---

(Updated July 9, 2014, 11:50 p.m.)


Review request for hive.


Summary (updated)
-

HIVE-5976: Decouple input formats from STORED as keywords


Bugs: HIVE-5976
https://issues.apache.org/jira/browse/HIVE-5976


Repository: hive-git


Description
---

Apply patch


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
8bff2a96fbfc572d86e6a6cdbc2a74ff4f5b0609 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java
 ec24531117203a5c75c62d0e5b54d5a43d37fa79 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextSerDe.java
 PRE-CREATION 
  
itests/custom-serde/src/main/java/org/apache/hadoop/hive/serde2/CustomTextStorageFormatDescriptor.java
 PRE-CREATION 
  
itests/custom-serde/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/AbstractStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
41310661ced0616f6bee27af2b1195127e5230e8 
  ql/src/java/org/apache/hadoop/hive/ql/io/ORCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/ParquetFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileStorageFormatDescriptor.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/io/SequenceFileStorageFormatDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/StorageFormatFactory.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/TextFileStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
60d54b6a04e1a9601342b0159387114f7b666338 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
640b6b319ce84a875cc78cb8b29fa6bbc1067fc5 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 
412a046488eaea42a6416c7cbd514715d37e249f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 
5ac64527497d3d047d6c7bffd64c4201a66a2a04 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
9c001c1495b423c19f3fa710c74f1bb1e24a08f4 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java 
0af25360ee6f3088c764f0c4d812f30d1eeb91d6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
c42923f716afb89ac6c60fb386fb91c1c94413dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java PRE-CREATION 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/test/queries/clientpositive/storage_format_descriptor.q PRE-CREATION 
  ql/src/test/results/clientnegative/fileformat_bad_class.q.out 
ab1e9357c0a7d4e21816290fbf7ed99396932b92 
  ql/src/test/results/clientnegative/genericFileFormat.q.out 
9613df95c8fc977c0ad1f717afa2db3870dfd904 
  ql/src/test/results/clientpositive/ctas.q.out 
5af90d03b72d42c30c4d31ce6b28bfd5493470ac 
  ql/src/test/results/clientpositive/storage_format_descriptor.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/ctas.q.out 
6b4c69019d45976bd0ccb705331948cd240e5750 

Diff: https://reviews.apache.org/r/23153/diff/


Testing
---


Thanks,

David Chen



[jira] [Assigned] (HIVE-7331) Create SparkCompiler

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-7331:
-

Assignee: Xuefu Zhang

 Create SparkCompiler
 

 Key: HIVE-7331
 URL: https://issues.apache.org/jira/browse/HIVE-7331
 Project: Hive
  Issue Type: Sub-task
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 SparkCompiler translates the operator plan into SparkWorks. It behaves in a 
 similar way to MapReduceCompiler for MR and TezCompiler for Tez.





[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056964#comment-14056964
 ] 

Hive QA commented on HIVE-5976:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12654895/HIVE-5976.6.patch

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 5720 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_union_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_colname
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_uses_database_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_duplicate_key
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nonmr_fetch
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nullformat
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nullformatCTAS
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_storage_format_descriptor
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union25
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_top_level
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_ctas
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dml
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/728/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/728/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-728/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12654895

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





[jira] [Commented] (HIVE-5976) Decouple input formats from STORED as keywords

2014-07-09 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056969#comment-14056969
 ] 

David Chen commented on HIVE-5976:
--

I am looking into the test failures. It looks like my patch caused additional 
tests to fail due to an extra line in the output:

{code}
serde name: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
{code}

There are three test failures that appear to be caused by other problems.

Finally, it seems that the test failures in the previous runs where the data 
size was different are no longer appearing.

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5976.2.patch, HIVE-5976.3.patch, HIVE-5976.3.patch, 
 HIVE-5976.4.patch, HIVE-5976.5.patch, HIVE-5976.6.patch, HIVE-5976.patch, 
 HIVE-5976.patch, HIVE-5976.patch, HIVE-5976.patch


 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.





Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23373/#review47543
---


- Jason Dere


On July 9, 2014, 7:02 p.m., Jason Dere wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23373/
 ---
 
 (Updated July 9, 2014, 7:02 p.m.)
 
 
 Review request for hive and Thejas Nair.
 
 
 Bugs: HIVE-6252
 https://issues.apache.org/jira/browse/HIVE-6252
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Parser changes - support REVOKE ADMIN ROLE FOR
 New grant_revoke_role() thrift metastore method
 
 
 Diffs
 -
 
   
 itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
  6b2f28e 
   metastore/if/hive_metastore.thrift cc802c6 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 acef599 
   
 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
 664dccd 
   metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
 0c2209b 
   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
 911c997 
   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
  5c00aa1 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
  5025b83 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 4d8e10c 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 250756c 
   ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f934ac4 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
  419117c 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
  6ede03c 
   ql/src/test/queries/clientnegative/authorization_role_grant2.q PRE-CREATION 
   ql/src/test/queries/clientpositive/authorization_role_grant1.q 051bdee 
   ql/src/test/results/clientnegative/authorization_role_grant2.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/authorization_role_grant1.q.out cdbcb26 
 
 Diff: https://reviews.apache.org/r/23373/diff/
 
 
 Testing
 ---
 
 unit tests added
 
 
 Thanks,
 
 Jason Dere
 




Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23373/
---

(Updated July 10, 2014, 2:53 a.m.)


Review request for hive and Thejas Nair.


Changes
---

- Fix unit test failures. Argument checking for grant_revoke_roles() was 
apparently too strict for the existing unit tests, so the fields in 
GrantRevokeRequest were made optional to allow nulls.
- Review changes per Thejas


Bugs: HIVE-6252
https://issues.apache.org/jira/browse/HIVE-6252


Repository: hive-git


Description
---

Parser changes - support REVOKE ADMIN ROLE FOR
New grant_revoke_role() thrift metastore method


Diffs (updated)
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
 6b2f28e 
  metastore/if/hive_metastore.thrift d425d2b 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 2a1b4d7 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 9567874 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp 
b18009c 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h a0f208a 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp a6cd09a 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java
 791c46b 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsResult.java
 2471690 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java
 aa647d4 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java
 b8d5a56 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Function.java
 4a24bbf 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsInfoResponse.java
 427204e 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsResponse.java
 eda18ad 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java
 083699b 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetRoleGrantsForPrincipalResponse.java
 f745c08 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java
 0fc4310 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java
 997060f 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LockRequest.java
 c35aadd 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/OpenTxnsResponse.java
 3d47286 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 312807e 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java
 ea8f0bb 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java
 a46bdc8 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsResult.java
 27f654d 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 eea86e5 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrivilegeBag.java
 a4687ad 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java
 5119b83 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 d91ca2d 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponse.java
 a9f9f7c 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowLocksResponse.java
 d2657e0 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 83438c7 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 d0b9843 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 229a819 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsRequest.java
 48d16b7 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsResult.java
 b25c6c2 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 4f051af 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Type.java
 bb81e3c 
  metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php c79624f 
  metastore/src/gen/thrift/gen-php/metastore/Types.php 3db3ded 
  metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote 
fdedb57 
  metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py 23679be 
  metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py 43a498a 
  

[jira] [Updated] (HIVE-6252) sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6252:
-

Attachment: HIVE-6252.2.patch

Patch v2:
- Fix unit test failures. Argument checking for grant_revoke_roles() was 
apparently too strict for the existing unit tests, so the fields in 
GrantRevokeRequest were made optional to allow nulls.
- Review changes per Thejas

 sql std auth - support 'with admin option' in revoke role metastore api
 ---

 Key: HIVE-6252
 URL: https://issues.apache.org/jira/browse/HIVE-6252
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Jason Dere
 Attachments: HIVE-6252.1.patch, HIVE-6252.2.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The metastore API for revoking role privileges does not accept 'with admin 
 option', though the syntax supports it. SQL syntax also supports grantor 
 specification in the revoke role statement.
 It should be similar to the grant_role API.





Re: Review Request 23373: HIVE-6252: sql std auth - support 'with admin option' in revoke role metastore api

2014-07-09 Thread Jason Dere


 On July 9, 2014, 10:13 p.m., Thejas Nair wrote:
  ql/src/test/queries/clientnegative/authorization_role_grant2.q, line 24
  https://reviews.apache.org/r/23373/diff/1/?file=627091#file627091line24
 
  set role admin; is needed here
  The revoke is not succeeding (in q.out) because of that.
 

You're right, that was a mistake and it was the final grant statement that was 
supposed to fail.  I'll fix that.


- Jason


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23373/#review47527
---


On July 10, 2014, 2:53 a.m., Jason Dere wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23373/
 ---
 
 (Updated July 10, 2014, 2:53 a.m.)
 
 
 Review request for hive and Thejas Nair.
 
 
 Bugs: HIVE-6252
 https://issues.apache.org/jira/browse/HIVE-6252
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Parser changes - support REVOKE ADMIN OPTION FOR
 New grant_revoke_role() thrift metastore method
 
 
 Diffs
 -
 
   
 itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
  6b2f28e 
   metastore/if/hive_metastore.thrift d425d2b 
   metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 2a1b4d7 
   metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 9567874 
   metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp 
 b18009c 
   metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h a0f208a 
   metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp a6cd09a 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsRequest.java
  791c46b 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AddPartitionsResult.java
  2471690 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java
  aa647d4 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java
  b8d5a56 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Function.java
  4a24bbf 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsInfoResponse.java
  427204e 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetOpenTxnsResponse.java
  eda18ad 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPrincipalsInRoleResponse.java
  083699b 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetRoleGrantsForPrincipalResponse.java
  f745c08 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HeartbeatTxnRangeResponse.java
  0fc4310 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/HiveObjectRef.java
  997060f 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LockRequest.java
  c35aadd 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/OpenTxnsResponse.java
  3d47286 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
  312807e 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java
  ea8f0bb 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsRequest.java
  a46bdc8 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsStatsResult.java
  27f654d 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
  eea86e5 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrivilegeBag.java
  a4687ad 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java
  5119b83 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
  d91ca2d 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowCompactResponse.java
  a9f9f7c 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ShowLocksResponse.java
  d2657e0 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
  83438c7 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
  d0b9843 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
  229a819 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsRequest.java
  48d16b7 
   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/TableStatsResult.java
  b25c6c2 
   
 

[jira] [Commented] (HIVE-7262) Partitioned Table Function (PTF) query fails on ORC table when attempting to vectorize

2014-07-09 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057078#comment-14057078
 ] 

Eric Hanson commented on HIVE-7262:
---

[~mmccline] put a code review at: https://reviews.apache.org/r/23186/. Matt, if 
you could attach this to your JIRAs in the future, that'd be great.

 Partitioned Table Function (PTF) query fails on ORC table when attempting to 
 vectorize
 --

 Key: HIVE-7262
 URL: https://issues.apache.org/jira/browse/HIVE-7262
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-7262.1.patch, HIVE-7262.2.patch


 In ptf.q, create the part table with STORED AS ORC and SET 
 hive.vectorized.execution.enabled=true;
 Queries fail to find the BLOCK__OFFSET__INSIDE__FILE virtual column during 
 vectorization and suffer an exception:
 ERROR vector.VectorizationContext 
 (VectorizationContext.java:getInputColumnIndex(186)) - The column 
 BLOCK__OFFSET__INSIDE__FILE is not in the vectorization context column map.
 Jitendra pointed to the routine that returns the VectorizationContext in 
 Vectorize.java needing to add virtual columns to the map, too.
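The fix Jitendra points to can be sketched as follows. The class and method names here are illustrative assumptions, not Hive's actual VectorizationContext/Vectorizer code: the idea is simply that the name-to-index map consulted by the column lookup must cover virtual columns as well as table columns.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified sketch of the column map a vectorization context
// consults when resolving a column name to a vector index.
public class VectorColumnMapSketch {
    public static Map<String, Integer> buildColumnMap(List<String> tableColumns,
                                                      List<String> virtualColumns) {
        Map<String, Integer> map = new HashMap<>();
        int idx = 0;
        for (String c : tableColumns) {
            map.put(c, idx++);
        }
        // The reported bug: virtual columns were omitted from the map, so the
        // lookup for BLOCK__OFFSET__INSIDE__FILE failed. Registering them here
        // lets the lookup succeed instead of raising the error above.
        for (String v : virtualColumns) {
            map.put(v, idx++);
        }
        return map;
    }
}
```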



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2014-07-09 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057083#comment-14057083
 ] 

Jason Dere commented on HIVE-3392:
--

So this moves the validity check from getTable() over to 
alterTable/alterPartition.
What kind of error will we get now if we try to do a SELECT on this table when 
the SerDe cannot be resolved? Do we need to add the validity check somewhere in 
that code path, or is the current error sufficient?
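One way to picture the deferred check (purely illustrative names, not Hive's actual classes): metadata-only operations like DROP TABLE skip SerDe resolution entirely, while operations that need a deserializer attempt to load the SerDe class and fail only then.

```java
// Hypothetical sketch of lazy SerDe resolution: resolve the SerDe class only
// when an operation actually needs column deserialization, so DROP TABLE
// succeeds even when the SerDe jar is missing from the classpath.
public class LazySerDeSketch {
    public static boolean serDeResolvable(String serDeClass) {
        try {
            Class.forName(serDeClass);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    // DROP only needs metastore metadata, so it never touches the SerDe.
    public static String dropTable(String tableName, String serDeClass) {
        return "OK: dropped " + tableName;
    }

    // SELECT needs a deserializer; a missing SerDe should surface here.
    public static String selectFrom(String tableName, String serDeClass) {
        if (!serDeResolvable(serDeClass)) {
            return "FAILED: SerDe " + serDeClass + " does not exist";
        }
        return "OK";
    }
}
```

Under this split, the open question above amounts to deciding whether the error raised at SELECT time is clear enough, or whether an explicit validity check belongs in that path.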

 Hive unnecessarily validates table SerDes when dropping a table
 ---

 Key: HIVE-3392
 URL: https://issues.apache.org/jira/browse/HIVE-3392
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Navis
  Labels: patch
 Attachments: HIVE-3392.2.patch.txt, HIVE-3392.3.patch.txt, 
 HIVE-3392.4.patch.txt, HIVE-3392.Test Case - with_trunk_version.txt


 natty@hadoop1:~$ hive
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';  
   
 OK
 Time taken: 2.399 seconds
 natty@hadoop1:~$ hive
 hive> drop table test;

 FAILED: Hive Internal Error: 
 java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
  SerDe hive.serde.JSONSerDe does not exist))
 java.lang.RuntimeException: 
 MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
 hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
   at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
 SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
   ... 20 more
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> drop table test;
 OK
 Time taken: 0.658 seconds
 hive> 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7262) Partitioned Table Function (PTF) query fails on ORC table when attempting to vectorize

2014-07-09 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057085#comment-14057085
 ] 

Eric Hanson commented on HIVE-7262:
---

Matt, can you upload your patch to your ReviewBoard page? I didn't see a View 
Diff button. I see you did include a link above -- sorry I missed that.

 Partitioned Table Function (PTF) query fails on ORC table when attempting to 
 vectorize
 --

 Key: HIVE-7262
 URL: https://issues.apache.org/jira/browse/HIVE-7262
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-7262.1.patch, HIVE-7262.2.patch


 In ptf.q, create the part table with STORED AS ORC and SET 
 hive.vectorized.execution.enabled=true;
 Queries fail to find the BLOCK__OFFSET__INSIDE__FILE virtual column during 
 vectorization and suffer an exception:
 ERROR vector.VectorizationContext 
 (VectorizationContext.java:getInputColumnIndex(186)) - The column 
 BLOCK__OFFSET__INSIDE__FILE is not in the vectorization context column map.
 Jitendra pointed to the routine that returns the VectorizationContext in 
 Vectorize.java needing to add virtual columns to the map, too.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-7333) Create RDD translator, translating Hive Tables into Spark RDDs

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-7333:
-

Assignee: Rui Li

 Create RDD translator, translating Hive Tables into Spark RDDs
 --

 Key: HIVE-7333
 URL: https://issues.apache.org/jira/browse/HIVE-7333
 Project: Hive
  Issue Type: Sub-task
Reporter: Xuefu Zhang
Assignee: Rui Li

 Please refer to the design specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-7332) Create SparkClient, interface to Spark cluster

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-7332:
-

Assignee: Chengxiang Li

 Create SparkClient, interface to Spark cluster
 --

 Key: HIVE-7332
 URL: https://issues.apache.org/jira/browse/HIVE-7332
 Project: Hive
  Issue Type: Sub-task
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

 SparkClient is responsible for Spark job submission, monitoring, progress and 
 error reporting, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-7371:
-

Assignee: Chengxiang Li

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

 Currently, the Spark client ships all Hive JARs, including those that Hive 
 depends on, to the Spark cluster when a query is executed by Spark. This is 
 inefficient and can cause library conflicts. Ideally, only a minimal set 
 of JARs should be shipped. This task is to identify such a set.
 We should learn from the current MR setup, for which I assume only the 
 hive-exec JAR is shipped to the MR cluster.
 We also need to ensure that user-supplied JARs are shipped to the Spark 
 cluster as well, in a similar fashion as MR does.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

