[jira] [Commented] (HIVE-7233) File hive-hwi-0.13.1 not found on lib folder

2014-07-17 Thread John (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064608#comment-14064608
 ] 

John commented on HIVE-7233:


1. hwi/pom.xml
  change the packaging type to war
2. packaging/src/main/assembly/bin.xml
  add hive-hwi-0.13.1.war to the bin tarball
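A minimal sketch of those two changes, assuming the 0.13.1 source layout; the assembly fileSet below is illustrative, not the committed patch:

{code}
<!-- hwi/pom.xml: build the HWI module as a war instead of a jar -->
<packaging>war</packaging>

<!-- packaging/src/main/assembly/bin.xml: ship the war into lib/ of the bin
     tarball (hypothetical fileSet; element names follow the Maven assembly
     descriptor format) -->
<fileSet>
  <directory>${project.parent.basedir}/hwi/target</directory>
  <includes>
    <include>hive-hwi-*.war</include>
  </includes>
  <outputDirectory>lib</outputDirectory>
</fileSet>
{code}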

 File hive-hwi-0.13.1 not found on lib folder
 

 Key: HIVE-7233
 URL: https://issues.apache.org/jira/browse/HIVE-7233
 Project: Hive
  Issue Type: New Feature
  Components: Web UI
Affects Versions: 0.13.1
Reporter: Dinh Hoang Luong

 I found that 
 line 27 of file 
 .../apache-hive-0.13.1-sr/hwi/pom.xml has 
 <packaging>jar</packaging> instead of <packaging>war</packaging>.
 Sorry, my English is bad. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6912) HWI not working - HTTP ERROR 500

2014-07-17 Thread John (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064612#comment-14064612
 ] 

John commented on HIVE-6912:


1. hwi/pom.xml
  add dependency whose artifact ID is jasper-compiler-jdt
2. packaging/src/main/assembly/bin.xml
  include jasper-compiler-jdt*.jar in bin tar ball
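A sketch of the dependency being suggested; the groupId and version below are assumptions (the old Tomcat 5.5 coordinates), not taken from the actual patch:

{code}
<!-- hwi/pom.xml: JDT-based JSP compiler so HWI can compile JSPs even when
     javac is not on the classpath -->
<dependency>
  <groupId>tomcat</groupId>
  <artifactId>jasper-compiler-jdt</artifactId>
  <version>5.5.23</version>
</dependency>
{code}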

 HWI not working - HTTP ERROR 500
 

 Key: HIVE-6912
 URL: https://issues.apache.org/jira/browse/HIVE-6912
 Project: Hive
  Issue Type: Bug
Reporter: sunil ranjan khuntia
Priority: Critical
 Fix For: 0.13.0


 I tried to use Hive HWI to write Hive queries in a UI.
 As per the steps mentioned here 
 https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface
 I set up Ant and ran the Hive HWI service,
 but in the browser when I hit http://localhost:/hwi I got the below error:
 HTTP ERROR 500
 Problem accessing /hwi/. Reason:
 Unable to find a javac compiler;
 com.sun.tools.javac.Main is not on the classpath.
 Perhaps JAVA_HOME does not point to the JDK.
 It is currently set to /usr/java/jdk1.6.0_32/jre
 Caused by:
 Unable to find a javac compiler;
 com.sun.tools.javac.Main is not on the classpath.
 Perhaps JAVA_HOME does not point to the JDK.
 It is currently set to /usr/java/jdk1.6.0_32/jre
   at 
 org.apache.tools.ant.taskdefs.compilers.CompilerAdapterFactory.getCompiler(CompilerAdapterFactory.java:129)
 I have checked and changed JAVA_HOME, but it's still the same.
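The error suggests the JVM running HWI is a plain JRE rather than a JDK (JAVA_HOME ends in /jre). A small standalone check, not part of Hive, that shows whether a javac compiler is visible to the running JVM:

{code}
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CheckCompiler {
  public static void main(String[] args) {
    // Under a plain JRE this prints null; under a JDK it prints a compiler instance.
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    System.out.println("java.home = " + System.getProperty("java.home"));
    System.out.println("system compiler = " + compiler);
  }
}
{code}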



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064611#comment-14064611
 ] 

Hive QA commented on HIVE-6928:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656188/HIVE-6928.3.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5740 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchAbortAndCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/825/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/825/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-825/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656188

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +------------------------------+------------------------------------------------------------------------------+
 |           col_name           |                                                                              |
 +------------------------------+------------------------------------------------------------------------------+
 | pat_id                       | string                                                                       |
 | score                        | float                                                                        |
 | acutes                       | float                                                                        |
 |                              |                                                                              |
 | Detailed Table Information   | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +------------------------------+------------------------------------------------------------------------------+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should give a better 
 experience to the first-time beeline user.
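As a workaround today, switching the output format before the describe avoids the truncation; a hypothetical beeline session (prompt and table name are illustrative):

{noformat}
0: jdbc:hive2://localhost:10000> !outputformat vertical
0: jdbc:hive2://localhost:10000> describe extended refills;
{noformat}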



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 23602: add hiverc support for hive server2

2014-07-17 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23602/
---

Review request for hive.


Bugs: hive-5160
https://issues.apache.org/jira/browse/hive-5160


Repository: hive-git


Description
---

see jira hive-5160


Diffs
-

  common/src/java/org/apache/hadoop/hive/common/cli/FileAbstractProcessor.java 
PRE-CREATION 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
6a7ee7a 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 e79b129 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
6650c05 

Diff: https://reviews.apache.org/r/23602/diff/


Testing
---

test locally


Thanks,

cheng xu



Re: Review Request 23602: add hiverc support for hive server2

2014-07-17 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23602/
---

(Updated July 17, 2014, 6:22 a.m.)


Review request for hive.


Changes
---

remove blanks


Bugs: hive-5160
https://issues.apache.org/jira/browse/hive-5160


Repository: hive-git


Description
---

see jira hive-5160


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/cli/FileAbstractProcessor.java 
PRE-CREATION 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
6a7ee7a 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 e79b129 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
6650c05 

Diff: https://reviews.apache.org/r/23602/diff/


Testing
---

test locally


Thanks,

cheng xu



Re: Review Request 23602: add hiverc support for hive server2

2014-07-17 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23602/
---

(Updated July 17, 2014, 6:25 a.m.)


Review request for hive.


Bugs: hive-5160
https://issues.apache.org/jira/browse/hive-5160


Repository: hive-git


Description
---

see jira hive-5160


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/cli/FileAbstractProcessor.java 
PRE-CREATION 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
6a7ee7a 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 e79b129 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
6650c05 

Diff: https://reviews.apache.org/r/23602/diff/


Testing
---

test locally


Thanks,

cheng xu



Re: Review Request 23602: add hiverc support for hive server2

2014-07-17 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23602/
---

(Updated July 17, 2014, 6:30 a.m.)


Review request for hive.


Bugs: hive-5160
https://issues.apache.org/jira/browse/hive-5160


Repository: hive-git


Description
---

see jira hive-5160


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/cli/FileAbstractProcessor.java 
PRE-CREATION 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
6a7ee7a 
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
 e79b129 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
6650c05 

Diff: https://reviews.apache.org/r/23602/diff/


Testing
---

test locally


Thanks,

cheng xu



[jira] [Updated] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7371:


Attachment: HIVE-7371-Spark.1.patch

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
 Attachments: HIVE-7371-Spark.1.patch


 Currently, Spark client ships all Hive JARs, including those that Hive 
 depends on, to Spark cluster when a query is executed by Spark. This is not 
 efficient, causing potential library conflicts. Ideally, only a minimum set 
 of JARs needs to be shipped. This task is to identify such a set.
 We should learn from current MR cluster, for which I assume only hive-exec 
 JAR is shipped to MR cluster.
 We also need to ensure that user-supplied JARs are also shipped to Spark 
 cluster, in a similar fashion as MR does.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7371:


Status: Patch Available  (was: In Progress)

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
 Attachments: HIVE-7371-Spark.1.patch


 Currently, Spark client ships all Hive JARs, including those that Hive 
 depends on, to Spark cluster when a query is executed by Spark. This is not 
 efficient, causing potential library conflicts. Ideally, only a minimum set 
 of JARs needs to be shipped. This task is to identify such a set.
 We should learn from current MR cluster, for which I assume only hive-exec 
 JAR is shipped to MR cluster.
 We also need to ensure that user-supplied JARs are also shipped to Spark 
 cluster, in a similar fashion as MR does.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()

2014-07-17 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064638#comment-14064638
 ] 

Tsuyoshi OZAWA commented on HIVE-7409:
--

[~xuefuz] and [~navis], 
[JDK-7171421|https://bugs.openjdk.java.net/browse/JDK-7171421] describes the correct 
way to reproduce the deadlock:

1. Thread 1: {{Class.getAnnotations}} or {{Class.getAnnotation}} -> 
{{initAnnotationsIfNecessary(synchronized)}} -> 
{{AnnotationType.getInstance(synchronized)}}
2. Thread 2: {{AnnotationType.getInstance(synchronized)}} -> 
{{Class.getAnnotation}} -> {{initAnnotationsIfNecessary(synchronized)}}

This is a deadlock between {{Class.getAnnotations}} and 
{{AnnotationType.getInstance}}. The JDK-side fix will be merged in a future 
release - [JDK-8047613|https://bugs.openjdk.java.net/browse/JDK-8047613]. Do 
you have additional questions?
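For reference, a minimal sketch of the kind of workaround being discussed: funnel {{Class.getAnnotation}} calls through one shared lock so the two threads above can never interleave annotation initialization. Names are illustrative; this is not the actual patch, which may synchronize on specific classes instead.

{code}
import java.lang.annotation.Annotation;

public final class AnnotationUtils {
  // Single global lock guarding all annotation lookups.
  private static final Object LOCK = new Object();

  private AnnotationUtils() {}

  public static <T extends Annotation> T getAnnotation(Class<?> clazz,
                                                       Class<T> annotationType) {
    synchronized (LOCK) {
      return clazz.getAnnotation(annotationType);
    }
  }
}
{code}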

 Add workaround for a deadlock issue of Class.getAnnotation() 
 -

 Key: HIVE-7409
 URL: https://issues.apache.org/jira/browse/HIVE-7409
 Project: Hive
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt


 [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that 
 there is a race condition in getAnnotations. This problem can lead to deadlock. 
 The fix in the JDK will be merged in JDK 8, but Hive currently supports JDK 6/7. 
 Therefore, we should add a workaround to avoid the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7434) beeline should not always enclose the output by default in CSV/TSV mode

2014-07-17 Thread ferdinand (JIRA)
ferdinand created HIVE-7434:
---

 Summary: beeline should not always enclose the output by default 
in CSV/TSV mode
 Key: HIVE-7434
 URL: https://issues.apache.org/jira/browse/HIVE-7434
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: ferdinand


When using beeline in CSV/TSV mode (via the command !outputformat csv), the output 
is always enclosed in single quotes. This is, however, not the case for the Hive CLI, 
so we need to make this enclosing optional.
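A hypothetical illustration of the difference (query, column names and values are made up):

{noformat}
# beeline with !outputformat csv: every value wrapped in single quotes
'id','name'
'1','alice'

# Hive CLI output of the same query: no quoting
1	alice
{noformat}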



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7404) Revoke privilege should support revoking of grant option

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064654#comment-14064654
 ] 

Hive QA commented on HIVE-7404:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656189/HIVE-7404.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5741 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/828/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/828/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-828/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656189

 Revoke privilege should support revoking of grant option
 

 Key: HIVE-7404
 URL: https://issues.apache.org/jira/browse/HIVE-7404
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-7404.1.patch, HIVE-7404.2.patch


 Similar to HIVE-6252, but for grant option on privileges:
 {noformat}
 REVOKE GRANT OPTION FOR privilege ON object FROM USER user
 {noformat}
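 For example (hypothetical table and user names), the new syntax would allow:
 {noformat}
 REVOKE GRANT OPTION FOR SELECT ON TABLE customers FROM USER user1;
 {noformat}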



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6885) Address style and docs feedback in HIVE-5687

2014-07-17 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064659#comment-14064659
 ] 

Lars Francke commented on HIVE-6885:


I see why this was pushed through so fast but it'd be nice if the promised 
cleanup were to actually happen. The Hive code base is littered with 
inconsistencies and hard to understand as it is. I don't think we need to add 
to that by rushing out features.

 Address style and docs feedback in HIVE-5687
 

 Key: HIVE-6885
 URL: https://issues.apache.org/jira/browse/HIVE-6885
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Roshan Naik

 There were a number of style and docs feedback given in HIVE-5687 that were 
 not addressed before it was committed.  These need to be addressed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)
Elan Hershcovitz created HIVE-7435:
--

 Summary: java.sql.SQLException: For input string: 5000L
 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.0
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz


Running Java 1.8.0_05 with Hive 0.13, I could establish a connection 
successfully but could not run any SQL queries (show tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting this issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)
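The change described above, written out as the XML property block it refers to (illustrative snippet):

{code}
<property>
  <name>hive.server2.long.polling.timeout</name>
  <!-- default was 5000L; dropping the trailing L lets the value parse as a number -->
  <value>5000</value>
</property>
{code}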



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

  Description: 
Running Java 1.8.0_05 with Hive 0.13.1, I could establish a connection 
successfully but could not run any SQL queries (show tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting this issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)

  was:
Running Java 1.8.0_05 with Hive 0.13, I could establish a connection 
successfully but could not run any SQL queries (show tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting this issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)

Affects Version/s: (was: 0.13.0)
   0.13.1

 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz

 Running Java 1.8.0_05 with Hive 0.13.1, I could establish a connection 
 successfully but could not run any SQL queries (show tables, drop, and so on).
 The message I got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now I'm getting this issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Priority: Blocker  (was: Major)

 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz
Priority: Blocker

 Running Java 1.8.0_05 with Hive 0.13.1, I could establish a connection 
 successfully but could not run any SQL queries (show tables, drop, and so on).
 The message I got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now I'm getting this issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7371:


Attachment: HIVE-7371-Spark.2.patch

code refactor.

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
 Attachments: HIVE-7371-Spark.1.patch, HIVE-7371-Spark.2.patch


 Currently, Spark client ships all Hive JARs, including those that Hive 
 depends on, to Spark cluster when a query is executed by Spark. This is not 
 efficient, causing potential library conflicts. Ideally, only a minimum set 
 of JARs needs to be shipped. This task is to identify such a set.
 We should learn from current MR cluster, for which I assume only hive-exec 
 JAR is shipped to MR cluster.
 We also need to ensure that user-supplied JARs are also shipped to Spark 
 cluster, in a similar fashion as MR does.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5275) HiveServer2 should respect hive.aux.jars.path property and add aux jars to distributed cache

2014-07-17 Thread Jens Schmitt (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064726#comment-14064726
 ] 

Jens Schmitt commented on HIVE-5275:


update:
my problem seems to be solved... and was probably a different one. Some classes 
were missing. This worked for the Hive console, but apparently not for any 
other way (somebody told me that if you work via the Hive console, there are some 
background services or something started which are not there for any other way...).

cheers

 HiveServer2 should respect hive.aux.jars.path property and add aux jars to 
 distributed cache
 

 Key: HIVE-5275
 URL: https://issues.apache.org/jira/browse/HIVE-5275
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Alex Favaro

 HiveServer2 currently ignores the hive.aux.jars.path property in 
 hive-site.xml. That means that the only way to use a custom SerDe is to add 
 it to AUX_CLASSPATH on the server and manually distribute the jar to the 
 cluster nodes. Hive CLI does this automatically when hive.aux.jars.path is 
 set. It would be nice if HiveServer2 did the same.
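For reference, the property in question as it would appear in hive-site.xml (the jar path is a placeholder):

{code}
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/hive/auxlib/my-custom-serde.jar</value>
</property>
{code}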



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7415) Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing

2014-07-17 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-7415:


Attachment: HIVE-7415.1.patch.txt

 Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing
 -

 Key: HIVE-7415
 URL: https://issues.apache.org/jira/browse/HIVE-7415
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Jason Dere
Assignee: Navis
 Attachments: HIVE-7415.1.patch.txt


 This test has been failing in recent runs.
 itests/qtest/target/tmp/logs/hive.log shows the following stack trace:
 {noformat}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.deriveStatType(StatsUtils.java:357)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:151)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:104)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:54)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:75)
 at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:146)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9494)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:411)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:960)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1025)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:897)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:887)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:265)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:217)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:427)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:363)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:921)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:133)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx(TestMinimrCliDriver.java:117)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064753#comment-14064753
 ] 

Hive QA commented on HIVE-7409:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656205/HIVE-7409.2.patch.txt

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5725 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/831/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/831/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-831/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656205

 Add workaround for a deadlock issue of Class.getAnnotation() 
 -

 Key: HIVE-7409
 URL: https://issues.apache.org/jira/browse/HIVE-7409
 Project: Hive
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt


 [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that 
 there is a race condition in getAnnotations. This problem can lead to deadlock. 
 The fix in the JDK will be merged in JDK 8, but Hive currently supports JDK 6/7. 
 Therefore, we should add a workaround to avoid the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Description: 
Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
establish a connection successfully but could not run any SQL queries (show 
tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)

** same msg for java jdbc app and beeline
**So :
 
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting a new issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)

  was:
Running Java 1.8.0_05 with Hive 0.13.1, I could establish a connection 
successfully but could not run any SQL queries (show tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting this issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)


 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz
Priority: Blocker

 Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
 establish a connection successfully but could not run any SQL queries (show 
 tables, drop, and so on).
 The message I got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 ** same msg for java jdbc app and beeline
 **So :
  
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now I'm getting a new issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Description: 
Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
establish a connection successfully but could not run any SQL queries (show 
tables, drop, and so on).
The message I've got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)

** same msg for java jdbc app and beeline
** bin/hive works fine
**So :
 
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting a new issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)

  was:
Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
establish a connection successfully but could not run any SQL queries (show 
tables, drop, and so on).
The message I got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)

** same msg for java jdbc app and beeline
**So :
 
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting a new issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)


 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz
Priority: Blocker

 Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
 establish a connection successfully but could not run any SQL queries (show 
 tables, drop, and so on).
 The message I've got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 ** same msg for java jdbc app and beeline
 ** bin/hive works fine
 **So :
  
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now I'm getting a new issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Description: 
btw - using hiveserver1 it all works fine...

Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1 and 
hiveserver2, I could establish a connection successfully but could not run any 
SQL queries (show tables, drop, and so on).
The message I've got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)

** same msg for java jdbc app and beeline
** bin/hive works fine
**So :
 
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now:
** beeline works fine now
** Java JDBC:
I'm getting a new issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)
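A client-side sketch related to the remaining Java JDBC error: HiveStatement.executeQuery() throws "The query did not generate a result set!" when no result set comes back, so running the statement with execute() and reading the ResultSet only if one exists avoids that path. This is an illustrative workaround, not the reporter's HiveDataFetcher code; the connection URL is a placeholder.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "", "");
         Statement stmt = conn.createStatement()) {
      // execute() returns true only when the statement produced a result set
      boolean hasResultSet = stmt.execute("show tables");
      if (hasResultSet) {
        try (ResultSet rs = stmt.getResultSet()) {
          while (rs.next()) {
            System.out.println(rs.getString(1));
          }
        }
      }
    }
  }
}
{code}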

  was:
Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1, I could 
establish a connection successfully but could not run any SQL queries (show 
tables, drop, and so on).
The message I've got:
java.sql.SQLException: For input string: 5000L
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)

** same msg for java jdbc app and beeline
** bin/hive works fine
**So :
 
I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
  <value>5000L</value> to 5000, and now I'm getting a new issue:
java.sql.SQLException: The query did not generate a result set!
at 
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)


 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2
Reporter: Elan Hershcovitz
Priority: Blocker

 btw - using hiveserver1 it all works fine...
 Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1 and 
 hiveserver2, I could establish a connection successfully but could not run any 
 SQL queries (show tables, drop, and so on).
 The message I've got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 ** same msg for java jdbc app and beeline
 ** bin/hive works fine
 **So :
  
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now:
 ** beeline works fine now
 ** Java JDBC:
 I'm getting a new issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2, mysql  (was: 
ubuntu 12.04 , java 1.8.0_05 , hiveserver2)

 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2, mysql
Reporter: Elan Hershcovitz
Priority: Blocker

 btw - using hiveserver1 it all works fine...
 Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1 and 
 hiveserver2, I could establish a connection successfully but could not run any 
 SQL queries (show tables, drop, and so on).
 The message I've got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 ** same msg for java jdbc app and beeline
 ** bin/hive works fine
 **So :
  
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now:
 ** beeline works fine now
 ** Java JDBC:
 I'm getting a new issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7435) java.sql.SQLException: For input string: 5000L

2014-07-17 Thread Elan Hershcovitz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elan Hershcovitz updated HIVE-7435:
---

Attachment: hive-site.xml

config after changing 5000L to 5000.

 java.sql.SQLException: For input string: 5000L
 

 Key: HIVE-7435
 URL: https://issues.apache.org/jira/browse/HIVE-7435
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1
 Environment: ubuntu 12.04 , java 1.8.0_05 , hiveserver2, mysql
Reporter: Elan Hershcovitz
Priority: Blocker
 Attachments: hive-site.xml


 btw - using hiveserver1 it all works fine...
 Running a Java app and from beeline, JDK 1.8.0_05 with Hive 0.13.1 and 
 hiveserver2, I could establish a connection successfully but could not run any 
 SQL queries (show tables, drop, and so on).
 The message I've got:
 java.sql.SQLException: For input string: 5000L
   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:121)
   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:109)
   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:263)
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
 ** same msg for java jdbc app and beeline
 ** bin/hive works fine
 **So :
  
 I've changed hive.server2.long.polling.timeout in hive-core.xml from the default
   <value>5000L</value> to 5000, and now:
 ** beeline works fine now
 ** Java JDBC:
 I'm getting a new issue:
 java.sql.SQLException: The query did not generate a result set!
   at 
 org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:356)
   at HiveDataFetcher.LoadTable(HiveDataFetcher.java:64)
   at HiveDataFetcher.runQueryAndGetResult(HiveDataFetcher.java:45)
   at HiveDataFetcher.getDataFromHive(HiveDataFetcher.java:19)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7415) Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064863#comment-14064863
 ] 

Hive QA commented on HIVE-7415:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656248/HIVE-7415.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5725 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/834/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/834/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-834/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656248

 Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing
 -

 Key: HIVE-7415
 URL: https://issues.apache.org/jira/browse/HIVE-7415
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Jason Dere
Assignee: Navis
 Attachments: HIVE-7415.1.patch.txt


 This test has been failing in recent runs.
 itests/qtest/target/tmp/logs/hive.log shows the following stack trace:
 {noformat}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.deriveStatType(StatsUtils.java:357)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:151)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:104)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:54)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:75)
 at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:146)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9494)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:411)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:960)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1025)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:897)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:887)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:265)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:217)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:427)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:363)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:921)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:133)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx(TestMinimrCliDriver.java:117)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at 

[jira] [Created] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-17 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-7436:
---

 Summary: Load Spark configuration into Hive driver
 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


load Spark configuration into Hive driver:
# load Spark configuration through spark configuration file.
# load Spark configuration through java property and override.
# ship Spark configuration and Hive configuration to spark cluster.
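An illustrative sketch of that loading order (file first, then Java system properties overriding it); class and file names are assumptions, not the eventual patch:

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SparkConfLoader {
  public static Properties load(String confFile) throws IOException {
    Properties conf = new Properties();
    // 1. load Spark configuration from a file, e.g. spark-defaults.conf
    try (FileInputStream in = new FileInputStream(confFile)) {
      conf.load(in);
    }
    // 2. let -Dspark.* java system properties override the file values
    for (String name : System.getProperties().stringPropertyNames()) {
      if (name.startsWith("spark.")) {
        conf.setProperty(name, System.getProperty(name));
      }
    }
    return conf; // 3. this bundle would then be shipped to the Spark cluster
  }
}
{code}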




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-7436 started by Chengxiang Li.

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li

 load Spark configuration into Hive driver:
 # load Spark configuration through spark configuration file.
 # load Spark configuration through java property and override.
 # ship Spark configuration and Hive configuration to spark cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7436) Load Spark configuration into Hive driver

2014-07-17 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-7436:


Issue Type: Sub-task  (was: Task)
Parent: HIVE-7292

 Load Spark configuration into Hive driver
 -

 Key: HIVE-7436
 URL: https://issues.apache.org/jira/browse/HIVE-7436
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li

 load Spark configuration into Hive driver:
 # load Spark configuration through spark configuration file.
 # load Spark configuration through java property and override.
 # ship Spark configuration and Hive configuration to spark cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread Tom White

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review48004
---


Ashish, thanks for addressing my feedback. Here's a bit more.


serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java
https://reviews.apache.org/r/23387/#comment84252

Still need to pass the Hive column definition here as the field comment.



serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java
https://reviews.apache.org/r/23387/#comment84253

It would be simpler to make sure that NULL is included (and is the first 
branch in the union) in the createAvroUnion() method, and just fall through 
here.
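A minimal sketch of that suggestion, assuming Avro's Schema API; createAvroUnion here is illustrative, not the code under review:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.avro.Schema;

public class UnionSketch {
  // Always put NULL as the first branch and never include it twice,
  // so callers converting nullable Hive types can simply fall through.
  static Schema createAvroUnion(List<Schema> branches) {
    List<Schema> withNullFirst = new ArrayList<Schema>();
    withNullFirst.add(Schema.create(Schema.Type.NULL));
    for (Schema s : branches) {
      if (s.getType() != Schema.Type.NULL) {
        withNullFirst.add(s);
      }
    }
    return Schema.createUnion(withNullFirst);
  }
}
{code}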



serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java
https://reviews.apache.org/r/23387/#comment84256

If you made all records have names then this case statement wouldn't be 
needed as the default case would be used.

Also, having non-deterministic schemas is something we should avoid, since 
otherwise files in different partitions or written at different times would 
have schemas that differed only in the record names. Instead, use a counter for 
gensym - this will work since one instance of TypeInfoToSchema is only used to 
convert one schema (although it might be a good idea to enforce that).



serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java
https://reviews.apache.org/r/23387/#comment84254

Also need to test the case when the union includes NULL, to check it's not 
included twice. Also, when it's included but not in the first branch of the 
union.


- Tom White


On July 17, 2014, 2:50 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 17, 2014, 2:50 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Commented] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()

2014-07-17 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064934#comment-14064934
 ] 

Xuefu Zhang commented on HIVE-7409:
---

[~ozawa] Thanks for sharing your research results. I compared [~navis]'s changes 
with yours: 1. he identified more places that need changes; 2. your synchronization is 
always on UDFType.class, whereas Navis' synchronization happens on different classes. 
What are your thoughts on this?

 Add workaround for a deadlock issue of Class.getAnnotation() 
 -

 Key: HIVE-7409
 URL: https://issues.apache.org/jira/browse/HIVE-7409
 Project: Hive
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt


 [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that 
 there is a race condition in getAnnotations(). This problem can lead to deadlock. 
 The fix in the JDK will be merged in JDK 8, but Hive currently supports JDK 6/JDK 7. 
 Therefore, we should add a workaround to avoid the issue.
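 A minimal sketch of such a workaround, routing all lookups through one helper and
 synchronizing on the annotation class, as discussed in the comments; the class name is
 illustrative.
 {code}
 import java.lang.annotation.Annotation;

 public final class AnnotationUtils {
   private AnnotationUtils() {}

   // Serialize Class.getAnnotation() calls on one lock per annotation type so the
   // JDK race described in JDK-7122142 cannot deadlock concurrent threads.
   public static <A extends Annotation> A getAnnotation(Class<?> clazz, Class<A> annotationClass) {
     synchronized (annotationClass) {
       return clazz.getAnnotation(annotationClass);
     }
   }
 }
 {code}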



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7415) Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing

2014-07-17 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065202#comment-14065202
 ] 

Jason Dere commented on HIVE-7415:
--

RB might be useful for this one

 Test TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx failing
 -

 Key: HIVE-7415
 URL: https://issues.apache.org/jira/browse/HIVE-7415
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Jason Dere
Assignee: Navis
 Attachments: HIVE-7415.1.patch.txt


 This test has been failing in recent runs.
 itests/qtest/target/tmp/logs/hive.log shows the following stack trace:
 {noformat}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.deriveStatType(StatsUtils.java:357)
 at 
 org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:151)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:104)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:54)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:75)
 at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:146)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9494)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:207)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:411)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:960)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1025)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:897)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:887)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:265)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:217)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:427)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:363)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:921)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:133)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx(TestMinimrCliDriver.java:117)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Updated] (HIVE-7433) ColumnMappins.ColumnMapping should expose public accessors for its fields

2014-07-17 Thread Andrew Mains (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Mains updated HIVE-7433:
---

Fix Version/s: 0.14.0
Affects Version/s: 0.14.0
   Status: Patch Available  (was: Open)

 ColumnMappins.ColumnMapping should expose public accessors for its fields
 -

 Key: HIVE-7433
 URL: https://issues.apache.org/jira/browse/HIVE-7433
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Andrew Mains
Priority: Trivial
 Fix For: 0.14.0

 Attachments: HIVE-7433.patch


 The changes from  https://issues.apache.org/jira/browse/HIVE-6411 allow users 
 to write their own HBaseKeyFactory implementations in order to customize the 
 serialization and predicate pushdown for composite HBase row keys.  
 AbstractHBaseKeyFactory allows users to use the hive-hbase column mapping 
 information through a protected ColumnMappings.ColumnMapping keyMapping 
 member. 
 However, ColumnMappings.ColumnMapping exposes no public members (everything 
 is package-private to org.apache.hadoop.hive.hbase), meaning that custom 
 HBaseKeyFactory implementations created outside of the package can't access 
 any attributes of the class. 
 ColumnMappings.ColumnMapping should expose public getter methods for its 
 attributes. 
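 A hedged sketch of the proposed accessors; the field names below are illustrative
 guesses, not the actual ColumnMapping internals.
 {code}
 // Read-only accessors let HBaseKeyFactory implementations outside
 // org.apache.hadoop.hive.hbase inspect the mapping. Field names are illustrative.
 public class ColumnMapping {
   private final String familyName;
   private final String qualifierName;
   private final boolean hbaseRowKey;

   ColumnMapping(String familyName, String qualifierName, boolean hbaseRowKey) {
     this.familyName = familyName;
     this.qualifierName = qualifierName;
     this.hbaseRowKey = hbaseRowKey;
   }

   public String getFamilyName()    { return familyName; }
   public String getQualifierName() { return qualifierName; }
   public boolean isHbaseRowKey()   { return hbaseRowKey; }
 }
 {code}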



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-07-17 Thread Gautam Kowshik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065302#comment-14065302
 ] 

Gautam Kowshik commented on HIVE-6411:
--

Have we tried to backport this to Hive 0.13? This is a very useful feature to 
have for Hive over HBase in the current stable version as well. If not, I can 
try looking into this in a separate JIRA.

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.10.patch.txt, 
 HIVE-6411.11.patch.txt, HIVE-6411.2.patch.txt, HIVE-6411.3.patch.txt, 
 HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, HIVE-6411.6.patch.txt, 
 HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt, HIVE-6411.9.patch.txt


 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and keyOI 
 with those. 
 Initial implementation is based on factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}
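 A skeleton of a custom factory against this interface, showing where each piece plugs
 in; the class name is illustrative, the bodies are stubs, and imports for the serde2
 types are omitted.
 {code}
 public class CompositeKeyFactory implements HBaseKeyFactory {

   @Override
   public void init(SerDeParameters parameters, Properties properties) throws SerDeException {
     // read key-layout hints (delimiters, field widths, ...) from the table properties
   }

   @Override
   public ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException {
     // build an inspector matching the struct type declared for the row key
     return TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(type);
   }

   @Override
   public LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException {
     // a real implementation returns a LazyObjectBase that parses the raw key bytes
     throw new SerDeException("composite key parsing not implemented in this sketch");
   }
 }
 {code}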



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23470: HIVE-7404 Revoke privilege should support revoking of grant option

2014-07-17 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23470/#review48034
---

Ship it!


Ship It!

- Thejas Nair


On July 17, 2014, 12:29 a.m., Jason Dere wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23470/
 ---
 
 (Updated July 17, 2014, 12:29 a.m.)
 
 
 Review request for hive and Thejas Nair.
 
 
 Bugs: HIVE-7404
 https://issues.apache.org/jira/browse/HIVE-7404
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Generated Thrift files removed from diff.
 New grant_revoke_privilege() method in Thrift Hive metastore interface
 Existing grant/revoke privilege methods (non-thrift) have additional 
 grantOption arg.
 
 
 Diffs
 -
 
   
 itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java
  d2b6355 
   metastore/if/hive_metastore.thrift 2df4876 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 bace609 
   
 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
 32da869 
   metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
 9ce717a 
   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
 5e2cad7 
   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java c9c3037 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
  5f9ab4d 
   
 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
  b7997c0 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java ee074ea 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java a891838 
   ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f5d0602 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
  c32d81e 
   ql/src/java/org/apache/hadoop/hive/ql/plan/RevokeDesc.java eaef34c 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
  f2a4004 
   ql/src/test/queries/clientnegative/authorization_fail_8.q PRE-CREATION 
   ql/src/test/queries/clientpositive/authorization_revoke_table_priv.q 
 c8f4bc8 
   ql/src/test/results/clientnegative/authorization_fail_8.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/authorization_revoke_table_priv.q.out 
 907c889 
 
 Diff: https://reviews.apache.org/r/23470/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jason Dere
 




[jira] [Commented] (HIVE-7341) Support for Table replication across HCatalog instances

2014-07-17 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065324#comment-14065324
 ] 

Mithun Radhakrishnan commented on HIVE-7341:


I'd appreciate any comments on the interfaces. For instance, I've deprecated 
the HCatCreateTableDesc's old constructor, because it doesn't specify an 
HCatTable. I did this to remove redundancy between HCatCreateTableDesc, its 
Builder and HCatTable itself. For the moment, I'm throwing an unsupported 
exception because there's no clean way of supporting that constructor, after 
the cleanup.

 Support for Table replication across HCatalog instances
 ---

 Key: HIVE-7341
 URL: https://issues.apache.org/jira/browse/HIVE-7341
 Project: Hive
  Issue Type: New Feature
  Components: HCatalog
Affects Versions: 0.13.1
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Fix For: 0.14.0

 Attachments: HIVE-7341.1.patch


 The HCatClient currently doesn't provide very much support for replicating 
 HCatTable definitions between 2 HCatalog Server (i.e. Hive metastore) 
 instances. 
 Systems similar to Apache Falcon might find the need to replicate partition 
 data between 2 clusters, and keep the HCatalog metadata in sync between the 
 two. This poses a couple of problems:
 # The definition of the source table might change (in column schema, I/O 
 formats, record-formats, serde-parameters, etc.) The system will need a way 
 to diff 2 tables and update the target-metastore with the changes. E.g. 
 {code}
 targetTable.resolve( sourceTable, targetTable.diff(sourceTable) );
 hcatClient.updateTableSchema(dbName, tableName, targetTable);
 {code}
 # The current {{HCatClient.addPartitions()}} API requires that the 
 partition's schema be derived from the table's schema, thereby requiring that 
 the table-schema be resolved *before* partitions with the new schema are 
 added to the table. This is problematic, because it introduces race 
 conditions when 2 partitions with differing column-schemas (e.g. right after 
 a schema change) are copied in parallel. This can be avoided if each 
 HCatAddPartitionDesc kept track of the partition's schema, in flight.
 # The source and target metastores might be running different/incompatible 
 versions of Hive. 
 The impending patch attempts to address these concerns (with some caveats).
 # {{HCatTable}} now has 
 ## a {{diff()}} method, to compare against another HCatTable instance
 ## a {{resolve(diff)}} method to copy over specified table-attributes from 
 another HCatTable
 ## a serialize/deserialize mechanism (via {{HCatClient.serializeTable()}} and 
 {{HCatClient.deserializeTable()}}), so that HCatTable instances constructed 
 in other class-loaders may be used for comparison
 # {{HCatPartition}} now provides finer-grained control over a Partition's 
 column-schema, StorageDescriptor settings, etc. This allows partitions to be 
 copied completely from source, with the ability to override specific 
 properties if required (e.g. location).
 # {{HCatClient.updateTableSchema()}} can now update the entire 
 table-definition, not just the column schema.
 # I've cleaned up and removed most of the redundancy between the HCatTable, 
 HCatCreateTableDesc and HCatCreateTableDesc.Builder. The prior API failed to 
 separate the table-attributes from the add-table-operation's attributes. By 
 providing fluent-interfaces in HCatTable, and composing an HCatTable instance 
 in HCatCreateTableDesc, the interfaces are cleaner(ish). The old setters are 
 deprecated, in favour of those in HCatTable. Likewise, HCatPartition and 
 HCatAddPartitionDesc.
 I'll post a patch for trunk shortly.
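 A hedged sketch of the replication flow using only the calls named above (diff, resolve,
 updateTableSchema); the exact signatures, including the diff() return type, are assumptions.
 {code}
 // dbName/tableName and the two HCatClient instances (one per metastore) are assumed to exist.
 void replicateTableDefinition(HCatClient sourceClient, HCatClient targetClient,
                               String dbName, String tableName) throws HCatException {
   HCatTable source = sourceClient.getTable(dbName, tableName);
   HCatTable target = targetClient.getTable(dbName, tableName);

   EnumSet<HCatTable.TableAttribute> changed = target.diff(source);  // attributes changed at the source
   if (!changed.isEmpty()) {
     target.resolve(source, changed);                                // copy only the changed attributes
     targetClient.updateTableSchema(dbName, tableName, target);      // push the full definition
   }
 }
 {code}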



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7404) Revoke privilege should support revoking of grant option

2014-07-17 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065326#comment-14065326
 ] 

Thejas M Nair commented on HIVE-7404:
-

+1

 Revoke privilege should support revoking of grant option
 

 Key: HIVE-7404
 URL: https://issues.apache.org/jira/browse/HIVE-7404
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-7404.1.patch, HIVE-7404.2.patch


 Similar to HIVE-6252, but for grant option on privileges:
 {noformat}
 REVOKE GRANT OPTION FOR privilege ON object FROM USER user
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6305) test use of quoted identifiers in user/role names

2014-07-17 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065332#comment-14065332
 ] 

Thejas M Nair commented on HIVE-6305:
-

The quoted identifier support was added as part of HIVE-6013 by [~rhbutani]. This 
is just testing that it works with role names as well. If I remember right, 
HIVE-6013 talks only about column names because only those were tested as 
part of that patch.

Yes, we should document that role names can also be quoted identifiers.

 test use of quoted identifiers in user/role names
 -

 Key: HIVE-6305
 URL: https://issues.apache.org/jira/browse/HIVE-6305
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Jason Dere
 Fix For: 0.14.0

 Attachments: HIVE-6305.1.patch


 Tests need to be added to verify that quoted identifiers can be used with 
 user and role names.
 For example - 
 {code}
  grant all on x to user `user-qa`; 
 show grant user `user-qa` on table x; 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7416) provide context information to authorization checkPrivileges api call

2014-07-17 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065335#comment-14065335
 ] 

Thejas M Nair commented on HIVE-7416:
-

The test failures are unrelated.


 provide context information to authorization checkPrivileges api call
 -

 Key: HIVE-7416
 URL: https://issues.apache.org/jira/browse/HIVE-7416
 Project: Hive
  Issue Type: New Feature
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch, HIVE-7416.2.patch


 Context information such as the request IP address, a unique session string, 
 and the original SQL command string is useful for audit logging from 
 authorization implementations. 
 Authorization implementations can also choose to log authorization success 
 along with information about which policies matched and the context 
 information.
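 A hedged sketch of how an authorization plugin could use such context for audit logging;
 the context type and getters below are illustrative stand-ins, not the committed Hive API.
 {code}
 // Illustrative context carrier mirroring the fields named above.
 class AuthzContext {
   private final String ipAddress;
   private final String sessionString;
   private final String commandString;

   AuthzContext(String ipAddress, String sessionString, String commandString) {
     this.ipAddress = ipAddress;
     this.sessionString = sessionString;
     this.commandString = commandString;
   }

   String getIpAddress()     { return ipAddress; }
   String getSessionString() { return sessionString; }
   String getCommandString() { return commandString; }
 }

 // An implementation can emit an audit line on success as well as failure.
 class AuditLogger {
   void logDecision(boolean allowed, String user, AuthzContext ctx) {
     System.out.printf("authz %s: user=%s ip=%s session=%s cmd=%s%n",
         allowed ? "ALLOWED" : "DENIED", user,
         ctx.getIpAddress(), ctx.getSessionString(), ctx.getCommandString());
   }
 }
 {code}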



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7411) Exclude hadoop 1 from spark dep

2014-07-17 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065346#comment-14065346
 ] 

Xuefu Zhang commented on HIVE-7411:
---

Patch looks good. Will commit shortly.

 Exclude hadoop 1 from spark dep
 ---

 Key: HIVE-7411
 URL: https://issues.apache.org/jira/browse/HIVE-7411
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-7411.patch


 The branch does not compile on my machine. Attached patch fixes this.
 NO PRECOMMIT TESTS (I am working on this)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065349#comment-14065349
 ] 

Xuefu Zhang commented on HIVE-6928:
---

The test failures don't seem related to the patch. I have seen them in other 
test runs also. Will commit the patch shortly.

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +-----------------------------+----------+
 |  col_name                   |          |
 +-----------------------------+----------+
 | pat_id                      | string   |
 | score                       | float    |
 | acutes                      | float    |
 |                             |          |
 | Detailed Table Information  | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +-----------------------------+----------+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should give a better 
 experience to the first-time Beeline user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6928:
--

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks to Chinna and Ferdinand for the contribution.

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
 Fix For: 0.14.0

 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +-----------------------------+----------+
 |  col_name                   |          |
 +-----------------------------+----------+
 | pat_id                      | string   |
 | score                       | float    |
 | acutes                      | float    |
 |                             |          |
 | Detailed Table Information  | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +-----------------------------+----------+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should give a better 
 experience to the first-time Beeline user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23425: HIVE-7361: using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Thejas Nair


 On July 17, 2014, 1:16 a.m., Jason Dere wrote:
  ql/src/test/queries/clientnegative/authorization_dfs.q, line 4
  https://reviews.apache.org/r/23425/diff/3/?file=633727#file633727line4
 
  Looks like authorization_dfs.q no longer requires an initial query to 
  initialize auth, whereas authorization_reset.q, 
  authorization_admin_almighty2.q still have one.  Should it be removed from 
  those q files?

Removing it from authorization_reset.q.
In authorization_admin_almighty2.q it is there as a way to enable me to add a 
comment (not for any auth init): Hive throws a syntax error if a comment appears 
before any of the command processor commands.


- Thejas


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23425/#review47978
---


On July 16, 2014, 10:10 p.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23425/
 ---
 
 (Updated July 16, 2014, 10:10 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7361
 https://issues.apache.org/jira/browse/HIVE-7361
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 See jira HIVE-7361.
 
 
 Diffs
 -
 
   conf/hive-default.xml.template ba5b8a9 
   
 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
  abe5ffa 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessControllerForTest.java
  4474ce5 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidatorForTest.java
  PRE-CREATION 
   
 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizerFactoryForTest.java
  89e18b3 
   ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java 
 0532666 
   
 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorResponse.java
  f29a409 
   ql/src/java/org/apache/hadoop/hive/ql/processors/CommandUtil.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java 
 8b8475b 
   
 ql/src/java/org/apache/hadoop/hive/ql/processors/DeleteResourceProcessor.java 
 bfac5f8 
   ql/src/java/org/apache/hadoop/hive/ql/processors/DfsProcessor.java d343a3c 
   ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java 
 b8ecfad 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java
  0537b92 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
  db57cb6 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
  f99109b 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
  151df6a 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
  beb45f5 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
  f2a4004 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
  8937cfa 
   
 ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java
  b990cb2 
   
 ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/TestSQLStdHiveAccessController.java
  06f9258 
   ql/src/test/queries/clientnegative/authorization_addjar.q a1709da 
   ql/src/test/queries/clientnegative/authorization_compile.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_deletejar.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_dfs.q 7d47a7b 
   ql/src/test/queries/clientpositive/authorization_admin_almighty2.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/authorization_reset.q PRE-CREATION 
   ql/src/test/results/clientnegative/authorization_addjar.q.out d206dca 
   ql/src/test/results/clientnegative/authorization_addpartition.q.out 6331ae2 
   ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 
 550cbcc 
   
 ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 
 4df868e 
   ql/src/test/results/clientnegative/authorization_compile.q.out PRE-CREATION 
   ql/src/test/results/clientnegative/authorization_create_func1.q.out 7c72092 
   ql/src/test/results/clientnegative/authorization_create_func2.q.out 7c72092 
   ql/src/test/results/clientnegative/authorization_create_macro1.q.out 
 7c72092 
   ql/src/test/results/clientnegative/authorization_createview.q.out c86bdfa 
   ql/src/test/results/clientnegative/authorization_ctas.q.out f8395b7 
   

[jira] [Updated] (HIVE-7411) Exclude hadoop 1 from spark dep

2014-07-17 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7411:
--

   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

Patch committed to spark branch. Thanks, Brock.

 Exclude hadoop 1 from spark dep
 ---

 Key: HIVE-7411
 URL: https://issues.apache.org/jira/browse/HIVE-7411
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: spark-branch

 Attachments: HIVE-7411.patch


 The branch does not compile on my machine. Attached patch fixes this.
 NO PRECOMMIT TESTS (I am working on this)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-17 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HIVE-6584:
---

Attachment: HIVE-6584.9.patch

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, HIVE-6584.2.patch, 
 HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, HIVE-6584.6.patch, 
 HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided MapReduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7433) ColumnMappins.ColumnMapping should expose public accessors for its fields

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065403#comment-14065403
 ] 

Hive QA commented on HIVE-7433:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656217/HIVE-7433.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5740 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_join_hash
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/835/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/835/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-835/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656217

 ColumnMappins.ColumnMapping should expose public accessors for its fields
 -

 Key: HIVE-7433
 URL: https://issues.apache.org/jira/browse/HIVE-7433
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.14.0
Reporter: Andrew Mains
Priority: Trivial
 Fix For: 0.14.0

 Attachments: HIVE-7433.patch


 The changes from  https://issues.apache.org/jira/browse/HIVE-6411 allow users 
 to write their own HBaseKeyFactory implementations in order to customize the 
 serialization and predicate pushdown for composite HBase row keys.  
 AbstractHBaseKeyFactory allows users to use the hive-hbase column mapping 
 information through a protected ColumnMappings.ColumnMapping keyMapping 
 member. 
 However, ColumnMappings.ColumnMapping exposes no public members (everything 
 is package-private to org.apache.hadoop.hive.hbase), meaning that custom 
 HBaseKeyFactory implementations created outside of the package can't access 
 any attributes of the class. 
 ColumnMappings.ColumnMapping should expose public getter methods for its 
 attributes. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23425: HIVE-7361: using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23425/
---

(Updated July 17, 2014, 7:23 p.m.)


Review request for hive.


Changes
---

 HIVE-7361.4.patch - fixes TestJdbcWithSQLAuthorization and updates 
authorization_reset.q


Bugs: HIVE-7361
https://issues.apache.org/jira/browse/HIVE-7361


Repository: hive-git


Description
---

See jira HIVE-7361.


Diffs (updated)
-

  conf/hive-default.xml.template ba5b8a9 
  
itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
 abe5ffa 
  
itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessControllerForTest.java
 4474ce5 
  
itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidatorForTest.java
 PRE-CREATION 
  
itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizerFactoryForTest.java
 89e18b3 
  ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java 
0532666 
  
ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorResponse.java 
f29a409 
  ql/src/java/org/apache/hadoop/hive/ql/processors/CommandUtil.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java 
8b8475b 
  ql/src/java/org/apache/hadoop/hive/ql/processors/DeleteResourceProcessor.java 
bfac5f8 
  ql/src/java/org/apache/hadoop/hive/ql/processors/DfsProcessor.java d343a3c 
  ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java b8ecfad 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java
 0537b92 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java
 db57cb6 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
 f99109b 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
 151df6a 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
 beb45f5 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
 f2a4004 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
 8937cfa 
  
ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java
 b990cb2 
  
ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/TestSQLStdHiveAccessController.java
 06f9258 
  ql/src/test/queries/clientnegative/authorization_addjar.q a1709da 
  ql/src/test/queries/clientnegative/authorization_compile.q PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_deletejar.q PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_dfs.q 7d47a7b 
  ql/src/test/queries/clientpositive/authorization_admin_almighty2.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/authorization_reset.q PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_addjar.q.out d206dca 
  ql/src/test/results/clientnegative/authorization_addpartition.q.out 6331ae2 
  ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 550cbcc 
  ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 
4df868e 
  ql/src/test/results/clientnegative/authorization_compile.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_create_func1.q.out 7c72092 
  ql/src/test/results/clientnegative/authorization_create_func2.q.out 7c72092 
  ql/src/test/results/clientnegative/authorization_create_macro1.q.out 7c72092 
  ql/src/test/results/clientnegative/authorization_createview.q.out c86bdfa 
  ql/src/test/results/clientnegative/authorization_ctas.q.out f8395b7 
  ql/src/test/results/clientnegative/authorization_deletejar.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out 
be56d34 
  ql/src/test/results/clientnegative/authorization_dfs.q.out d685e78 
  ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 
74ab4c8 
  ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out bd7447f 
  ql/src/test/results/clientnegative/authorization_droppartition.q.out 1da250a 
  ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 
4aa7058 
  ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out 
f042c1e 
  
ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out 
a906a70 
  ql/src/test/results/clientnegative/authorization_insert_noinspriv.q.out 
8de1104 
  ql/src/test/results/clientnegative/authorization_insert_noselectpriv.q.out 
46ada3b 
  ql/src/test/results/clientnegative/authorization_insertoverwrite_nodel.q.out 
fa0f7f7 
  

[jira] [Updated] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7361:


Attachment: HIVE-7361.4.patch

 using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
 -

 Key: HIVE-7361
 URL: https://issues.apache.org/jira/browse/HIVE-7361
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
  Labels: TODOC14
 Attachments: HIVE-7361.1.patch, HIVE-7361.2.patch, HIVE-7361.3.patch, 
 HIVE-7361.4.patch


 Currently, the only way to disable the SET, RESET, DFS, ADD, DELETE and COMPILE 
 commands is the hive.security.command.whitelist 
 parameter.
 Some of these commands are disabled through this configuration parameter for 
 security reasons when SQL standard authorization is enabled. However, that 
 disables them in all cases, regardless of the user.
 If the authorization api is used to authorize the use of these commands, it will 
 give authorization implementations the flexibility to allow/disallow these 
 commands based on user privileges.
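 A hedged sketch of the idea: ask the configured authorizer per command and per user
 instead of relying on a global whitelist; the interface and names are illustrative, not
 the API added by this patch.
 {code}
 // Illustrative hook; a real implementation would delegate to the authorization plugin.
 interface CommandAuthorizer {
   boolean mayRun(String user, String commandType);
 }

 class CommandGate {
   private final CommandAuthorizer authorizer;

   CommandGate(CommandAuthorizer authorizer) {
     this.authorizer = authorizer;
   }

   // Called before dispatching DFS/ADD/DELETE/COMPILE/RESET to their processors.
   void checkAllowed(String user, String commandType) {
     if (!authorizer.mayRun(user, commandType)) {
       throw new SecurityException(user + " is not allowed to run " + commandType);
     }
   }
 }
 {code}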



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-17 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065440#comment-14065440
 ] 

Xuefu Zhang commented on HIVE-7371:
---

Patch #3 is based on #2 but with a fix for a NullPointerException. [~chengxiang 
li] Could you check if it makes sense? 

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
 Attachments: HIVE-7371-Spark.1.patch, HIVE-7371-Spark.2.patch, 
 HIVE-7371-Spark.3.patch


 Currently, the Spark client ships all Hive JARs, including those that Hive 
 depends on, to the Spark cluster when a query is executed by Spark. This is not 
 efficient and causes potential library conflicts. Ideally, only a minimum set 
 of JARs needs to be shipped. This task is to identify such a set.
 We should learn from the current MR setup, for which I assume only the hive-exec 
 JAR is shipped to the MR cluster.
 We also need to ensure that user-supplied JARs are shipped to the Spark 
 cluster, in a similar fashion to MR.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

2014-07-17 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7371:
--

Attachment: HIVE-7371-Spark.3.patch

 Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
 -

 Key: HIVE-7371
 URL: https://issues.apache.org/jira/browse/HIVE-7371
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li
 Attachments: HIVE-7371-Spark.1.patch, HIVE-7371-Spark.2.patch, 
 HIVE-7371-Spark.3.patch


 Currently, the Spark client ships all Hive JARs, including those that Hive 
 depends on, to the Spark cluster when a query is executed by Spark. This is not 
 efficient and causes potential library conflicts. Ideally, only a minimum set 
 of JARs needs to be shipped. This task is to identify such a set.
 We should learn from the current MR setup, for which I assume only the hive-exec 
 JAR is shipped to the MR cluster.
 We also need to ensure that user-supplied JARs are shipped to the Spark 
 cluster, in a similar fashion to MR.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23527: HIVE-7416 - provide context information to authorization checkPrivileges api call

2014-07-17 Thread Thejas Nair


 On July 16, 2014, 11:26 p.m., Jason Dere wrote:
  ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java,
   line 67
  https://reviews.apache.org/r/23527/diff/1/?file=632950#file632950line67
 
  Is this the only current usage of the context info? Should it be logged 
  for failed auth checks?

This patch is only enabling additional audit information to be logged (with the 
API changes). Note that this log message is also logged at DEBUG level, while 
most installations are likely to use INFO or WARN level logging.
I didn't want to club the auditing enhancements into this patch. However, with 
this change any other implementations of this API can make use of it.


- Thejas


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23527/#review47953
---


On July 15, 2014, 10:48 p.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23527/
 ---
 
 (Updated July 15, 2014, 10:48 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7416
 https://issues.apache.org/jira/browse/HIVE-7416
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 See jira
 
 
 Diffs
 -
 
   
 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestHS2AuthzContext.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java ac76214 
   ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 92545d8 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizationValidator.java
  7ffbc44 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java
  dbef61a 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java
  558d4ff 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthzContext.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
  8937cfa 
   ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 6686bc6 
   service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
 6a7ee7a 
 
 Diff: https://reviews.apache.org/r/23527/diff/
 
 
 Testing
 ---
 
 New tests included.
 
 
 Thanks,
 
 Thejas Nair
 




[jira] [Commented] (HIVE-7414) Update golden file for MiniTez temp_table.q

2014-07-17 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065465#comment-14065465
 ] 

Thejas M Nair commented on HIVE-7414:
-

+1

 Update golden file for MiniTez temp_table.q
 ---

 Key: HIVE-7414
 URL: https://issues.apache.org/jira/browse/HIVE-7414
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-7414.1.patch


 Looks like the golden file is out of date and the explain output now includes 
 the serde.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7426) ClassCastException: ...IntWritable cannot be cast to ...Text involving ql.udf.generic.GenericUDFBasePad.evaluate

2014-07-17 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-7426:
---

Assignee: (was: Matt McCline)

 ClassCastException: ...IntWritable cannot be cast to ...Text involving 
 ql.udf.generic.GenericUDFBasePad.evaluate
 

 Key: HIVE-7426
 URL: https://issues.apache.org/jira/browse/HIVE-7426
 Project: Hive
  Issue Type: Bug
Reporter: Matt McCline
 Attachments: TestWithORC.zip, fail_366.sql, fail_750.sql, fail_856.sql


 One of several found by Raj Bains.
 M/R or Tez.
 {code}
 set hive.vectorized.execution.enabled=true;
 {code}
 Query:
 {code}
 SELECT `Calcs`.`datetime0` AS `none_datetime0_ok`,   `Calcs`.`int1` AS 
 `none_int1_ok`,   `Calcs`.`key` AS `none_key_nk`,   CASE WHEN 
 (`Calcs`.`datetime0` IS NOT NULL AND `Calcs`.`int1` IS NOT NULL) THEN 
 FROM_UNIXTIME(UNIX_TIMESTAMP(CONCAT((YEAR(`Calcs`.`datetime0`)+FLOOR((MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`)/12)),
  CONCAT('-', CONCAT(LPAD(PMOD(MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`, 12), 
 2, '0'), SUBSTR(`Calcs`.`datetime0`, 8, SUBSTR('-MM-dd 
 HH:mm:ss',0,LENGTH(`Calcs`.`datetime0`))), '-MM-dd HH:mm:ss') END AS 
 `none_z_dateadd_month_ok` FROM `default`.`testv1_Calcs` `Calcs` GROUP BY 
 `Calcs`.`datetime0`,   `Calcs`.`int1`,   `Calcs`.`key`,   CASE WHEN 
 (`Calcs`.`datetime0` IS NOT NULL AND `Calcs`.`int1` IS NOT NULL) THEN 
 FROM_UNIXTIME(UNIX_TIMESTAMP(CONCAT((YEAR(`Calcs`.`datetime0`)+FLOOR((MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`)/12)),
  CONCAT('-', CONCAT(LPAD(PMOD(MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`, 12), 
 2, '0'), SUBSTR(`Calcs`.`datetime0`, 8, SUBSTR('-MM-dd 
 HH:mm:ss',0,LENGTH(`Calcs`.`datetime0`))), '-MM-dd HH:mm:ss') END ;
 {code}
 Stack Trace:
 {code}
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
 cannot be cast to org.apache.hadoop.io.Text
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFBasePad.evaluate(GenericUDFBasePad.java:65)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFToUnixTimeStamp.evaluate(GenericUDFToUnixTimeStamp.java:121)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFUnixTimeStamp.evaluate(GenericUDFUnixTimeStamp.java:52)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.evaluate(GenericUDFBridge.java:177)
   at 
 

[jira] [Commented] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065472#comment-14065472
 ] 

Jason Dere commented on HIVE-7361:
--

+1 if TestJdbcWithSQLAuthorization passes now.

 using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
 -

 Key: HIVE-7361
 URL: https://issues.apache.org/jira/browse/HIVE-7361
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
  Labels: TODOC14
 Attachments: HIVE-7361.1.patch, HIVE-7361.2.patch, HIVE-7361.3.patch, 
 HIVE-7361.4.patch


 Currently, the only way to disable the SET, RESET, DFS, ADD, DELETE and COMPILE 
 commands is the hive.security.command.whitelist 
 parameter.
 Some of these commands are disabled through this configuration parameter for 
 security reasons when SQL standard authorization is enabled. However, that 
 disables them in all cases, regardless of the user.
 If the authorization api is used to authorize the use of these commands, it will 
 give authorization implementations the flexibility to allow/disallow these 
 commands based on user privileges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7437) Check if servlet-api and jetty module in Spark library are an issue for hive-spark integration [Spark Branch]

2014-07-17 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7437:
-

 Summary: Check if servlet-api and jetty module in Spark library 
are an issue for hive-spark integration [Spark Branch]
 Key: HIVE-7437
 URL: https://issues.apache.org/jira/browse/HIVE-7437
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li


Currently we use a customized Spark 1.0.0 build for the Hive on Spark project 
because of library conflicts. One of the conflicts found during the POC is about 
servlet-api and jetty, where Spark is on version 3.0 while the rest of the 
Hadoop components, including Hive, are still on 2.5. As a follow-up to 
HIVE-7371, it would be good to figure out if this continues to be an issue.

The corresponding Spark JIRA is SPARK-2420.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7437) Check if servlet-api and jetty module in Spark library are an issue for hive-spark integration [Spark Branch]

2014-07-17 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-7437:
--

Description: 
Currently we use a customized Spark 1.0.0 build for the Hive on Spark project 
because of library conflicts. One of the conflicts found during the POC is about 
servlet-api and jetty, where Spark is on version 3.0 while the rest of the 
Hadoop components, including Hive, are still on 2.5. As a follow-up to 
HIVE-7371, it would be good to figure out if this continues to be an issue.

The corresponding Spark JIRA is SPARK-2420.

NO PRECOMMIT TESTS. This is for spark-branch only.

  was:
Currently we use a customized Spark 1.0.0 build for the Hive on Spark project 
because of library conflicts. One of the conflicts found during the POC is about 
servlet-api and jetty, where Spark is on version 3.0 while the rest of the 
Hadoop components, including Hive, are still on 2.5. As a follow-up to 
HIVE-7371, it would be good to figure out if this continues to be an issue.

The corresponding Spark JIRA is SPARK-2420.



 Check if servlet-api and jetty module in Spark library are an issue for 
 hive-spark integration [Spark Branch]
 -

 Key: HIVE-7437
 URL: https://issues.apache.org/jira/browse/HIVE-7437
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Chengxiang Li

 Currently we use a customized Spark 1.0.0 build for the Hive on Spark project 
 because of library conflicts. One of the conflicts found during the POC is about 
 servlet-api and jetty, where Spark is on version 3.0 while the rest of the 
 Hadoop components, including Hive, are still on 2.5. As a follow-up to 
 HIVE-7371, it would be good to figure out if this continues to be an issue.
 The corresponding Spark JIRA is SPARK-2420.
 NO PRECOMMIT TESTS. This is for spark-branch only.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7416) provide context information to authorization checkPrivileges api call

2014-07-17 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065478#comment-14065478
 ] 

Jason Dere commented on HIVE-7416:
--

+1

 provide context information to authorization checkPrivileges api call
 -

 Key: HIVE-7416
 URL: https://issues.apache.org/jira/browse/HIVE-7416
 Project: Hive
  Issue Type: New Feature
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch, HIVE-7416.2.patch


 Context information such as the request IP address, a unique session string, 
 and the original SQL command string is useful for audit logging from 
 authorization implementations. 
 Authorization implementations can also choose to log authorization success 
 along with information about which policies matched and the context 
 information.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7414) Update golden file for MiniTez temp_table.q

2014-07-17 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-7414:
-

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for reviewing, Thejas.

 Update golden file for MiniTez temp_table.q
 ---

 Key: HIVE-7414
 URL: https://issues.apache.org/jira/browse/HIVE-7414
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.14.0

 Attachments: HIVE-7414.1.patch


 Looks like the golden file is out of date and the explain output now includes 
 the serde.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6305) test use of quoted identifiers in user/role names

2014-07-17 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-6305:
-

Labels: TODOC13  (was: )

 test use of quoted identifiers in user/role names
 -

 Key: HIVE-6305
 URL: https://issues.apache.org/jira/browse/HIVE-6305
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Jason Dere
  Labels: TODOC13
 Fix For: 0.14.0

 Attachments: HIVE-6305.1.patch


 Tests need to be added to verify that quoted identifiers can be used with 
 user and role names.
 For example - 
 {code}
  grant all on x to user `user-qa`; 
 show grant user `user-qa` on table x; 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6305) test use of quoted identifiers in user/role names

2014-07-17 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065502#comment-14065502
 ] 

Lefty Leverenz commented on HIVE-6305:
--

Good, HIVE-6013 is already documented, so it's easy to add this to the wiki.  
Thanks, Thejas.

One question: why don't these tests specify hive.support.quoted.identifiers?  
Oh, never mind, its default setting is column.

 test use of quoted identifiers in user/role names
 -

 Key: HIVE-6305
 URL: https://issues.apache.org/jira/browse/HIVE-6305
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Jason Dere
  Labels: TODOC13
 Fix For: 0.14.0

 Attachments: HIVE-6305.1.patch


 Tests need to be added to verify that quoted identifiers can be used with 
 user and role names.
 For example - 
 {code}
  grant all on x to user `user-qa`; 
 show grant user `user-qa` on table x; 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7438) Counters and metrics

2014-07-17 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7438:
-

 Summary: Counters and metrics
 Key: HIVE-7438
 URL: https://issues.apache.org/jira/browse/HIVE-7438
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang


Hive makes use of MapReduce counters for statistics and possibly for other 
purposes. For Hive on Spark, we should achieve the same functionality using 
Spark's accumulators.

Hive also traditionally collects metrics from MapReduce jobs. A Spark job very 
likely publishes a different set of metrics, which, if made available, would 
help users gain insight into their Spark jobs. Thus, we should obtain these 
metrics and make them available as we do for MapReduce.

This task therefore includes: 1. identifying Hive's existing functionality w.r.t. 
counters and metrics; 2. designing and implementing the same functionality on Spark.

Please refer to the design document for more information. 
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark#HiveonSpark-CountersandMetrics
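
As a rough sketch of the accumulator side of this (illustrative only, not the 
eventual design; it assumes an existing JavaSparkContext named sc and a 
JavaRDD<String> named rows):

{code}
import org.apache.spark.Accumulator;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.VoidFunction;

// Count processed rows the way a MapReduce counter would.
final Accumulator<Integer> rowsProcessed = sc.accumulator(0);
rows.foreach(new VoidFunction<String>() {
  @Override
  public void call(String row) {
    rowsProcessed.add(1);
  }
});
// The driver reads the aggregated value after the action completes.
System.out.println("RECORDS_PROCESSED = " + rowsProcessed.value());
{code}

How Hive's existing counter names map onto such accumulators would be part of 
the design work described above.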



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7439) Spark job monitoring and error reporting

2014-07-17 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-7439:
-

 Summary: Spark job monitoring and error reporting
 Key: HIVE-7439
 URL: https://issues.apache.org/jira/browse/HIVE-7439
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang


After Hive submits a job to the Spark cluster, we need to report the job's 
progress, such as the percentage done, to the user. This is especially 
important for long-running queries. Moreover, if there is an error during job 
submission or execution, it is also crucial for Hive to fetch the error log 
and/or stack trace and feed it back to the user.

Please refer to the design doc on the wiki for more information.
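
A minimal sketch of the monitoring loop this implies is below; SparkJobStatus 
and its methods are hypothetical placeholders for whatever handle the 
submission API ends up exposing, not an existing Spark or Hive class:

{code}
// Hypothetical sketch only: SparkJobStatus is a placeholder, not an existing API.
interface SparkJobStatus {
  double progress();          // fraction of work completed, 0.0 to 1.0
  boolean isDone();
  Throwable failureCause();   // null if the job has not failed
}

class SparkJobMonitor {
  void monitor(SparkJobStatus status) throws InterruptedException {
    while (!status.isDone()) {
      System.out.printf("Spark job progress: %.0f%%%n", status.progress() * 100);
      Thread.sleep(1000);
    }
    if (status.failureCause() != null) {
      // Feed the error log / stack trace back to the Hive client session.
      status.failureCause().printStackTrace();
    }
  }
}
{code}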



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/
---

(Updated July 17, 2014, 9:10 p.m.)


Review request for hive.


Changes
---

Addressed more review comments.


Bugs: HIVE-6806
https://issues.apache.org/jira/browse/HIVE-6806


Repository: hive-git


Description
---

HIVE-6806: Native avro support


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
1bae0a8fee04049f90b16d813ff4c96707b349c8 
  
ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
 a23ff115512da5fe3167835a88d582c427585b8e 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
d53ebc65174d66bfeee25fd2891c69c78f9137ee 
  ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
  ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
0db12437406170686a21b6055d83156fe5d6a55f 
  serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
1fe31e0034f8988d03a0c51a90904bb93e7cb157 
  serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
PRE-CREATION 
  serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/23387/diff/


Testing
---

Added qTests and unit tests


Thanks,

Ashish Singh



Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread Ashish Singh


 On July 17, 2014, 1:49 p.m., Tom White wrote:
  Ashish, thanks for addressing my feedback. Here's a bit more.

Thanks again for the review.


 On July 17, 2014, 1:49 p.m., Tom White wrote:
  serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java, 
  line 229
  https://reviews.apache.org/r/23387/diff/8/?file=634160#file634160line229
 
  It would be simpler to make sure that NULL is included (and is the 
  first branch in the union) in the createAvroUnion() method, and just fall 
  through here.

I do not think this would be good or feasible without redesigning many parts, 
and there is no obvious gain. createAvroUnion() only creates a schema for a 
union, based on the union typeinfo passed to it. If I hack it to add null to 
all unions, I will still need to handle the union here differently, since a 
union of unions is not possible.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review48004
---


On July 17, 2014, 2:50 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 17, 2014, 2:50 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread Ashish Singh


 On July 17, 2014, 5:33 a.m., David Chen wrote:
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java, line 37
  https://reviews.apache.org/r/23387/diff/8/?file=634143#file634143line37
 
  Is using AVROFILE rather than AVRO a common use case? If not, should we 
  be allowing both?

David, I initially did not add AVROFILE. Brock suggested that, to be consistent 
with the other storage formats, it's good to have it. I do not see any harm in 
having it.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review47984
---


On July 17, 2014, 9:10 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 17, 2014, 9:10 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread David Chen


 On July 17, 2014, 5:33 a.m., David Chen wrote:
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java, line 37
  https://reviews.apache.org/r/23387/diff/8/?file=634143#file634143line37
 
  Is using AVROFILE rather than AVRO a common use case? If not, should we 
  be allowing both?
 
 Ashish Singh wrote:
 David, I initially did not add AVROFILE. Brock suggested to be consistent 
 with other storage formats, its good to have it. I do not see any harm in 
 having it.

I see. That's fine with me. I was just wondering.


- David


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review47984
---


On July 17, 2014, 9:10 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 17, 2014, 9:10 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Created] (HIVE-7440) Remove custom code for Avro in HCatMapReduceTest

2014-07-17 Thread David Chen (JIRA)
David Chen created HIVE-7440:


 Summary: Remove custom code for Avro in HCatMapReduceTest
 Key: HIVE-7440
 URL: https://issues.apache.org/jira/browse/HIVE-7440
 Project: Hive
  Issue Type: Bug
Reporter: David Chen
Priority: Minor


Once both HIVE-7286 and HIVE-6806 have been committed, remove the 
AvroStorageCustomHandler from the HCatalog Core tests because it will no longer 
be needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7286) Parameterize HCatMapReduceTest for testing against all Hive storage formats

2014-07-17 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065569#comment-14065569
 ] 

David Chen commented on HIVE-7286:
--

[~szehon] - By the way, I have opened HIVE-7440 to remove StorageCustomHandler 
once both this patch and HIVE-6806 are committed.

 Parameterize HCatMapReduceTest for testing against all Hive storage formats
 ---

 Key: HIVE-7286
 URL: https://issues.apache.org/jira/browse/HIVE-7286
 Project: Hive
  Issue Type: Test
  Components: HCatalog
Reporter: David Chen
Assignee: David Chen
 Attachments: HIVE-7286.1.patch, HIVE-7286.2.patch, HIVE-7286.3.patch, 
 HIVE-7286.4.patch


 Currently, HCatMapReduceTest, which is extended by the following test suites:
  * TestHCatDynamicPartitioned
  * TestHCatNonPartitioned
  * TestHCatPartitioned
  * TestHCatExternalDynamicPartitioned
  * TestHCatExternalNonPartitioned
  * TestHCatExternalPartitioned
  * TestHCatMutableDynamicPartitioned
  * TestHCatMutableNonPartitioned
  * TestHCatMutablePartitioned
 These tests run against RCFile. Currently, only TestHCatDynamicPartitioned is 
 run against any other storage format (ORC).
 Ideally, HCatalog should be tested against all storage formats supported by 
 Hive. The easiest way to accomplish this is to turn HCatMapReduceTest into a 
 parameterized test fixture that enumerates all Hive storage formats. Until 
 HIVE-5976 is implemented, we would need to manually create the mapping of 
 SerDe to InputFormat and OutputFormat. This way, we can explicitly keep track 
 of which storage formats currently work with HCatalog or which ones are 
 untested or have test failures. The test fixture should also use Reflection 
 to find all classes in the classpath that implements the SerDe interface and 
 raise a failure if any of them are not enumerated.
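
A bare-bones JUnit sketch of the parameterized-fixture idea (the class name and 
the format list below are illustrative, not the actual patch):

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class HCatStorageFormatTest {
  private final String storageFormat;

  public HCatStorageFormatTest(String storageFormat) {
    this.storageFormat = storageFormat;
  }

  @Parameters
  public static Collection<Object[]> storageFormats() {
    // Maintained by hand until HIVE-5976 provides a proper registry of formats.
    return Arrays.asList(new Object[][] {
        {"RCFILE"}, {"ORC"}, {"SEQUENCEFILE"}, {"TEXTFILE"}
    });
  }

  @Test
  public void readWriteRoundTrip() {
    // Placeholder for the existing HCatMapReduceTest read/write assertions,
    // executed once per enumerated storage format.
  }
}
{code}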



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7440) Remove custom code for Avro in HCatMapReduceTest

2014-07-17 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-7440:
-

Issue Type: Test  (was: Bug)

 Remove custom code for Avro in HCatMapReduceTest
 

 Key: HIVE-7440
 URL: https://issues.apache.org/jira/browse/HIVE-7440
 Project: Hive
  Issue Type: Test
Reporter: David Chen
Priority: Minor

 Once both HIVE-7286 and HIVE-6806 have been committed, remove the 
 AvroStorageCustomHandler from the HCatalog Core tests because it will no 
 longer be needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 23387: HIVE-6806: Native avro support

2014-07-17 Thread Ashish Singh


 On July 17, 2014, 5:33 a.m., David Chen wrote:
  ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java, line 37
  https://reviews.apache.org/r/23387/diff/8/?file=634143#file634143line37
 
  Is using AVROFILE rather than AVRO a common use case? If not, should we 
  be allowing both?
 
 Ashish Singh wrote:
 David, I initially did not add AVROFILE. Brock suggested to be consistent 
 with other storage formats, its good to have it. I do not see any harm in 
 having it.
 
 David Chen wrote:
 I see. That's fine with me. I was just wondering.

Ok, then I will leave it there. Thanks for the review.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23387/#review47984
---


On July 17, 2014, 9:10 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23387/
 ---
 
 (Updated July 17, 2014, 9:10 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6806
 https://issues.apache.org/jira/browse/HIVE-6806
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-6806: Native avro support
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 
 1bae0a8fee04049f90b16d813ff4c96707b349c8 
   
 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor
  a23ff115512da5fe3167835a88d582c427585b8e 
   ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java 
 d53ebc65174d66bfeee25fd2891c69c78f9137ee 
   ql/src/test/queries/clientpositive/avro_compression_enabled_native.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION 
   ql/src/test/queries/clientpositive/avro_schema_evolution_native.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION 
   ql/src/test/results/clientpositive/avro_partitioned_native.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out 
 PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java 
 0db12437406170686a21b6055d83156fe5d6a55f 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
 1fe31e0034f8988d03a0c51a90904bb93e7cb157 
   serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java 
 PRE-CREATION 
   serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/23387/diff/
 
 
 Testing
 ---
 
 Added qTests and unit tests
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Commented] (HIVE-5317) Implement insert, update, and delete in Hive with full ACID support

2014-07-17 Thread Venkat Ankam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065627#comment-14065627
 ] 

Venkat Ankam commented on HIVE-5317:


Any update on the next release of Hive with this feature?

 Implement insert, update, and delete in Hive with full ACID support
 ---

 Key: HIVE-5317
 URL: https://issues.apache.org/jira/browse/HIVE-5317
 Project: Hive
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: InsertUpdatesinHive.pdf


 Many customers want to be able to insert, update and delete rows from Hive 
 tables with full ACID support. The use cases are varied, but the form of the 
 queries that should be supported are:
 * INSERT INTO tbl SELECT …
 * INSERT INTO tbl VALUES ...
 * UPDATE tbl SET … WHERE …
 * DELETE FROM tbl WHERE …
 * MERGE INTO tbl USING src ON … WHEN MATCHED THEN ... WHEN NOT MATCHED THEN 
 ...
 * SET TRANSACTION LEVEL …
 * BEGIN/END TRANSACTION
 Use Cases
 * Once an hour, a set of inserts and updates (up to 500k rows) for various 
 dimension tables (eg. customer, inventory, stores) needs to be processed. The 
 dimension tables have primary keys and are typically bucketed and sorted on 
 those keys.
 * Once a day a small set (up to 100k rows) of records need to be deleted for 
 regulatory compliance.
 * Once an hour a log of transactions is exported from a RDBS and the fact 
 tables need to be updated (up to 1m rows)  to reflect the new data. The 
 transactions are a combination of inserts, updates, and deletes. The table is 
 partitioned and bucketed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7441) Custom partition scheme gets rewritten with hive scheme upon concatenate

2014-07-17 Thread Johndee Burks (JIRA)
Johndee Burks created HIVE-7441:
---

 Summary: Custom partition scheme gets rewritten with hive scheme 
upon concatenate
 Key: HIVE-7441
 URL: https://issues.apache.org/jira/browse/HIVE-7441
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.12.0, 0.11.0, 0.10.0
 Environment: CDH4.5 and CDH5.0
Reporter: Johndee Burks
Priority: Minor


Take the following data directories. Each directory contains a data file in 
RCFile format whose only content is the single character 1.

{code}
/j1/part1
/j1/part2
{code}

Create the table over the directories using the following command:

{code}
create table j1 (a int) partitioned by (b string) stored as rcfile location 
'/j1' ;
{code}

I then add these directories as partitions of the table (j1 in this example) using the following commands:

{code}
alter table j1 add partition (b = 'part1') location '/j1/part1';
alter table j1 add partition (b = 'part2') location '/j1/part2';
{code}

I then run the following command on the first partition: 

{code}
alter table j1 partition (b = 'part1') concatenate;
{code}

Hive changes the partition location on HDFS from

{code}
/j1/part1
{code}

to 

{code}
/j1/b=part1
{code}

However, it does not update the partition location in the metastore, so the 
partition is effectively lost to the table. This is hard to discover until you 
start querying your data and notice rows are missing; the table even still 
shows the partition when you run show partitions.






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065667#comment-14065667
 ] 

Hive QA commented on HIVE-6584:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656322/HIVE-6584.9.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5726 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestHBaseMinimrCliDriver.testCliDriver_hbase_bulk
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/836/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/836/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-836/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656322

 Add HiveHBaseTableSnapshotInputFormat
 -

 Key: HIVE-6584
 URL: https://issues.apache.org/jira/browse/HIVE-6584
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.14.0

 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, HIVE-6584.2.patch, 
 HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, HIVE-6584.6.patch, 
 HIVE-6584.7.patch, HIVE-6584.8.patch, HIVE-6584.9.patch


 HBASE-8369 provided mapreduce support for reading from HBase table snapshots. 
 This allows a MR job to consume a stable, read-only view of an HBase table 
 directly off of HDFS. Bypassing the online region server API provides a nice 
 performance boost for the full scan. HBASE-10642 is backporting that feature 
 to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's 
 available, we should add an input format. A follow-on patch could work out 
 how to integrate this functionality into the StorageHandler, similar to how 
 HIVE-6473 integrates the HFileOutputFormat into existing table definitions.
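
For reference, a minimal sketch of how a plain MR job consumes a snapshot 
through the underlying HBase API that this JIRA wraps (the snapshot name and 
restore path below are illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat;
import org.apache.hadoop.mapreduce.Job;

public class SnapshotScanJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-over-snapshot");
    job.setJarByClass(SnapshotScanJob.class);
    // Read the snapshot's files directly from HDFS, bypassing the online region servers.
    TableSnapshotInputFormat.setInput(job, "my_snapshot", new Path("/tmp/snapshot_restore"));
    job.setInputFormatClass(TableSnapshotInputFormat.class);
    // ... configure a TableMapper, reducer, and output as for any HBase MR job ...
    job.waitForCompletion(true);
  }
}
{code}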



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065672#comment-14065672
 ] 

Hive QA commented on HIVE-7361:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656325/HIVE-7361.4.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/837/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/837/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-837/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-837/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'conf/hive-default.xml.template'
Reverted 'pom.xml'
Reverted 'hbase-handler/src/test/results/positive/external_table_ppd.q.out'
Reverted 
'hbase-handler/src/test/results/positive/hbase_binary_storage_queries.q.out'
Reverted 'hbase-handler/src/test/templates/TestHBaseCliDriver.vm'
Reverted 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java'
Reverted 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java'
Reverted 
'itests/util/src/main/java/org/apache/hadoop/hive/hbase/HBaseQTestUtil.java'
Reverted 
'itests/util/src/main/java/org/apache/hadoop/hive/hbase/HBaseTestSetup.java'
Reverted 'itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java'
Reverted 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
conf/hive-default.xml.template.orig hbase-handler/target 
hbase-handler/src/test/results/positive/hbase_handler_snapshot.q.out 
hbase-handler/src/test/queries/positive/hbase_handler_snapshot.q 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableSnapshotInputFormat.java
 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseInputFormatUtil.java
 testutils/target jdbc/target metastore/target itests/target 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
hcatalog/target hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
hwi/target common/target common/src/gen contrib/target service/target 
serde/target beeline/target odbc/target cli/target 
ql/dependency-reduced-pom.xml ql/target
+ svn update
Uql/src/test/results/clientpositive/tez/temp_table.q.out

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1611492.

Updated to revision 1611492.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The 

[jira] [Commented] (HIVE-7424) HiveException: Error evaluating concat(concat(' ', str2), ' ') in ql.exec.vector.VectorSelectOperator.processOp

2014-07-17 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065690#comment-14065690
 ] 

Gopal V commented on HIVE-7424:
---

Possibly related bug

 HiveException: Error evaluating concat(concat('  ', str2), '  ') in 
 ql.exec.vector.VectorSelectOperator.processOp
 -

 Key: HIVE-7424
 URL: https://issues.apache.org/jira/browse/HIVE-7424
 Project: Hive
  Issue Type: Bug
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: TestWithORC.zip, fail_401.sql


 One of several found by Raj Bains.
 M/R or Tez.
 {code}
 set hive.vectorized.execution.enabled=true;
 {code}
 Query:
 {code}
 SELECT `testv1_Calcs`.`key` AS `none_key_nk`,   CONCAT(CONCAT('  
 ',`testv1_Calcs`.`str2`),'  ') AS `none_padded_str2_nk`,   
 CONCAT(CONCAT('|',RTRIM(CONCAT(CONCAT('  ',`testv1_Calcs`.`str2`),'  
 '))),'|') AS `none_z_rtrim_str_nk` FROM `default`.`testv1_Calcs` 
 `testv1_Calcs` ;
 {code}
 Stack trace:
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 
 concat(concat('  ', str2), '  ')
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:127)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7380) HWI war is not packaged in tar.gz

2014-07-17 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065701#comment-14065701
 ] 

Lefty Leverenz commented on HIVE-7380:
--

Is this a duplicate of HIVE-7233?

 HWI war is not packaged in tar.gz
 -

 Key: HIVE-7380
 URL: https://issues.apache.org/jira/browse/HIVE-7380
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland

 packaging pom or assembly needs to be modified to include the HWI interface



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-6928:
-

Labels: TODOC14  (was: )

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +-+--+
 |  col_name   |   
|
 +-+--+
 | pat_id  | string
|
 | score   | float 
|
 | acutes  | float 
|
 | |   
|
 | Detailed Table Information  | Table(tableName:refills, dbName:default, 
 owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +-+--+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should behave better to 
 give the first-time Beeline user a better experience.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility

2014-07-17 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-6988:
-

Attachment: HIVE-6988.5.patch

Re-uploading for pre-commit.

 Hive changes for tez-0.5.x compatibility
 

 Key: HIVE-6988
 URL: https://issues.apache.org/jira/browse/HIVE-6988
 Project: Hive
  Issue Type: Task
Reporter: Gopal V
 Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, 
 HIVE-6988.4.patch, HIVE-6988.5.patch, HIVE-6988.5.patch


 Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility.
 tez-0.4.x to tez-0.5.x is going to break backwards compat.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility

2014-07-17 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-6988:
-

Status: Open  (was: Patch Available)

 Hive changes for tez-0.5.x compatibility
 

 Key: HIVE-6988
 URL: https://issues.apache.org/jira/browse/HIVE-6988
 Project: Hive
  Issue Type: Task
Reporter: Gopal V
 Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, 
 HIVE-6988.4.patch, HIVE-6988.5.patch, HIVE-6988.5.patch


 Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility.
 tez-0.4.x to tez-0.5.x is going to break backwards compat.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility

2014-07-17 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-6988:
-

Status: Patch Available  (was: Open)

 Hive changes for tez-0.5.x compatibility
 

 Key: HIVE-6988
 URL: https://issues.apache.org/jira/browse/HIVE-6988
 Project: Hive
  Issue Type: Task
Reporter: Gopal V
 Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, 
 HIVE-6988.4.patch, HIVE-6988.5.patch, HIVE-6988.5.patch


 Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility.
 tez-0.4.x to tez-0.5.x is going to break backwards compat.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065719#comment-14065719
 ] 

Lefty Leverenz commented on HIVE-6928:
--

The --truncateTable option needs to be documented in the Beeline section of 
HiveServer2 Clients (with a version note and link to this jira).

* [HiveServer2 Clients -- Beeline Command Options | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions]

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +-+--+
 |  col_name   |   
|
 +-+--+
 | pat_id  | string
|
 | score   | float 
|
 | acutes  | float 
|
 | |   
|
 | Detailed Table Information  | Table(tableName:refills, dbName:default, 
 owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +-+--+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should behave better to 
 give the first-time Beeline user a better experience.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-7380) HWI war is not packaged in tar.gz

2014-07-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-7380.


Resolution: Duplicate

 HWI war is not packaged in tar.gz
 -

 Key: HIVE-7380
 URL: https://issues.apache.org/jira/browse/HIVE-7380
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland

 packaging pom or assembly needs to be modified to include the HWI interface



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6928) Beeline should not chop off describe extended results by default

2014-07-17 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065726#comment-14065726
 ] 

Xuefu Zhang commented on HIVE-6928:
---

Thanks, Lefty.

 Beeline should not chop off describe extended results by default
 --

 Key: HIVE-6928
 URL: https://issues.apache.org/jira/browse/HIVE-6928
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Szehon Ho
Assignee: Chinna Rao Lalam
  Labels: TODOC14
 Fix For: 0.14.0

 Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
 HIVE-6928.patch


 By default, beeline truncates long results based on the console width like:
 {code}
 +-+--+
 |  col_name   |   
|
 +-+--+
 | pat_id  | string
|
 | score   | float 
|
 | acutes  | float 
|
 | |   
|
 | Detailed Table Information  | Table(tableName:refills, dbName:default, 
 owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
 +-+--+
 5 rows selected (0.4 seconds)
 {code}
 This can be changed with !outputformat, but the default should behave better to 
 give the first-time Beeline user a better experience.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7442) ql.exec.vector.expressions.gen.DecimalColAddDecimalScalar.evaluate throws ClassCastException: ...LongColumnVector cannot be cast to ...DecimalColumnVector

2014-07-17 Thread Matt McCline (JIRA)
Matt McCline created HIVE-7442:
--

 Summary: 
ql.exec.vector.expressions.gen.DecimalColAddDecimalScalar.evaluate throws 
ClassCastException: ...LongColumnVector cannot be cast to ...DecimalColumnVector
 Key: HIVE-7442
 URL: https://issues.apache.org/jira/browse/HIVE-7442
 Project: Hive
  Issue Type: Bug
Reporter: Matt McCline
Assignee: Matt McCline



Took decimal_join.q and converted it to read from ORC and turned on 
vectorization:

vector_decimal_join.q
{code}
SET hive.vectorized.execution.enabled=true;

-- HIVE-5292 Join on decimal columns fails

create table src_dec_staging (key decimal(3,0), value string);
load data local inpath '../../data/files/kv1.txt' into table src_dec_staging;

create table src_dec (key decimal(3,0), value string) stored as orc;
insert overwrite table src_dec select * from src_dec_staging;

explain select * from src_dec a join src_dec b on a.key=b.key+450;

select * from src_dec a join src_dec b on a.key=b.key+450;
{code}

Stack trace:
{code}
java.lang.Exception: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row 
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row 
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:195)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:695)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
at 
org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 10 more
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to 
org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.gen.DecimalColAddDecimalScalar.evaluate(DecimalColAddDecimalScalar.java:60)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:112)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.FuncDecimalToLong.evaluate(FuncDecimalToLong.java:51)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:112)
at 
org.apache.hadoop.hive.ql.exec.vector.expressions.SelectColumnIsNotNull.evaluate(SelectColumnIsNotNull.java:45)
at 
org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:91)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
at 
org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:43)
... 11 more
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6885) Address style and docs feedback in HIVE-5687

2014-07-17 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065756#comment-14065756
 ] 

Lefty Leverenz commented on HIVE-6885:
--

Speaking of doc issues, a diagram on the HCat overview page shows streaming 
grayed out.  Can it be updated now?  (I only noticed it because Confluence told 
me Andrew Lee likes my page -- kudos to Alan, not to me.)

* [Using HCatalog -- Overview | 
https://cwiki.apache.org/confluence/display/Hive/HCatalog+UsingHCat#HCatalogUsingHCat-Overview]

 Address style and docs feedback in HIVE-5687
 

 Key: HIVE-6885
 URL: https://issues.apache.org/jira/browse/HIVE-6885
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Roshan Naik

 There were a number of style and docs feedback given in HIVE-5687 that were 
 not addressed before it was committed.  These need to be addressed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)
Yu Gao created HIVE-7443:


 Summary: Fix HiveConnection to communicate with Kerberized Hive 
JDBC server and alternative JDKs
 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.1, 0.12.0
 Environment: Kerberos
Run Hive server and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao


Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
initialize the current login user's ticket cache successfully, and then tried 
to use beeline to connect to Hive Server2, but failed. After I manually added 
some logging to capture the exception, this is what caused the failure:
beeline  !connect 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 org.apache.hive.jdbc.HiveDriver
scan complete in 2ms
Connecting to 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
Enter password for 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
javax.security.sasl.SaslException: Failed to open client transport [Caused by 
java.io.IOException: Could not instantiate SASL transport]
at 
org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
at 
org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:198)
at 
org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
at 
org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
at org.apache.hive.beeline.Commands.connect(Commands.java:959)
at org.apache.hive.beeline.Commands.connect(Commands.java:880)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Could not instantiate SASL transport
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
at 
org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
... 24 more
Caused by: javax.security.sasl.SaslException: Failure to initialize security 
context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject]
at 
com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
at 
com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
at 
org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
... 25 more
Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject
at 
com.ibm.security.jgss.i18n.I18NException.throwGSSException(I18NException.java:83)
at 
com.ibm.security.jgss.mech.krb5.Krb5Credential$SubjectCredFinder.run(Krb5Credential.java:1126)
at 
java.security.AccessController.doPrivileged(AccessController.java:330)
at 

[jira] [Updated] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HIVE-7443:
-

Environment: 
Kerberos
Run Hive server2 and client with IBM JDK7.1

  was:
Kerberos
Run Hive server and client with IBM JDK7.1


 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0, 0.13.1
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao

 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
 at 
 com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
 at 
 com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
 at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
 at 
 org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
 ... 25 more
 Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major 

[jira] [Updated] (HIVE-7426) ClassCastException: ...IntWritable cannot be cast to ...Text involving ql.udf.generic.GenericUDFBasePad.evaluate

2014-07-17 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-7426:
---

Description: 
One of several found by Raj Bains.

M/R or Tez.
Query does not vectorize, so this is not vector related.

Query:
{code}
SELECT `Calcs`.`datetime0` AS `none_datetime0_ok`,   `Calcs`.`int1` AS 
`none_int1_ok`,   `Calcs`.`key` AS `none_key_nk`,   CASE WHEN 
(`Calcs`.`datetime0` IS NOT NULL AND `Calcs`.`int1` IS NOT NULL) THEN 
FROM_UNIXTIME(UNIX_TIMESTAMP(CONCAT((YEAR(`Calcs`.`datetime0`)+FLOOR((MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`)/12)),
 CONCAT('-', CONCAT(LPAD(PMOD(MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`, 12), 
2, '0'), SUBSTR(`Calcs`.`datetime0`, 8, SUBSTR('-MM-dd 
HH:mm:ss',0,LENGTH(`Calcs`.`datetime0`))), '-MM-dd HH:mm:ss') END AS 
`none_z_dateadd_month_ok` FROM `default`.`testv1_Calcs` `Calcs` GROUP BY 
`Calcs`.`datetime0`,   `Calcs`.`int1`,   `Calcs`.`key`,   CASE WHEN 
(`Calcs`.`datetime0` IS NOT NULL AND `Calcs`.`int1` IS NOT NULL) THEN 
FROM_UNIXTIME(UNIX_TIMESTAMP(CONCAT((YEAR(`Calcs`.`datetime0`)+FLOOR((MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`)/12)),
 CONCAT('-', CONCAT(LPAD(PMOD(MONTH(`Calcs`.`datetime0`)+`Calcs`.`int1`, 12), 
2, '0'), SUBSTR(`Calcs`.`datetime0`, 8, SUBSTR('-MM-dd 
HH:mm:ss',0,LENGTH(`Calcs`.`datetime0`))), '-MM-dd HH:mm:ss') END ;
{code}

Stack Trace:
{code}
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to org.apache.hadoop.io.Text
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFBasePad.evaluate(GenericUDFBasePad.java:65)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.stringEvaluate(GenericUDFConcat.java:189)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat.evaluate(GenericUDFConcat.java:159)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFToUnixTimeStamp.evaluate(GenericUDFToUnixTimeStamp.java:121)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFUnixTimeStamp.evaluate(GenericUDFUnixTimeStamp.java:52)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.evaluate(GenericUDFBridge.java:177)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:77)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFWhen.evaluate(GenericUDFWhen.java:78)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:166)
at 

[jira] [Updated] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HIVE-7443:
-

Description: 
Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
initialize the current login user's ticket cache successfully, and then tried 
to use beeline to connect to Hive Server2, but failed. After I manually added 
some logging to catch the failure exception, this is what I got that caused the 
failure:

beeline  !connect 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 org.apache.hive.jdbc.HiveDriver
scan complete in 2ms
Connecting to 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
Enter password for 
jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
javax.security.sasl.SaslException: Failed to open client transport [Caused by 
java.io.IOException: Could not instantiate SASL transport]
at 
org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
at 
org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:198)
at 
org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
at 
org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
at org.apache.hive.beeline.Commands.connect(Commands.java:959)
at org.apache.hive.beeline.Commands.connect(Commands.java:880)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: Could not instantiate SASL transport
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
at 
org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
... 24 more
Caused by: javax.security.sasl.SaslException: Failure to initialize security 
context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject]
at 
com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
at 
com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
at 
org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
... 25 more
Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
major string: Invalid credentials
minor string: SubjectCredFinder: no JAAS Subject
at 
com.ibm.security.jgss.i18n.I18NException.throwGSSException(I18NException.java:83)
at 
com.ibm.security.jgss.mech.krb5.Krb5Credential$SubjectCredFinder.run(Krb5Credential.java:1126)
at 
java.security.AccessController.doPrivileged(AccessController.java:330)
at 
com.ibm.security.jgss.mech.krb5.Krb5Credential.getClientCredsFromSubject(Krb5Credential.java:816)
at 
com.ibm.security.jgss.mech.krb5.Krb5Credential.getCredentials(Krb5Credential.java:388)
at 
com.ibm.security.jgss.mech.krb5.Krb5Credential.init(Krb5Credential.java:196)
at 

[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq

2014-07-17 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7284:
-

Status: Open  (was: Patch Available)

 CBO: create Partition Pruning rules in Optiq
 

 Key: HIVE-7284
 URL: https://issues.apache.org/jira/browse/HIVE-7284
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch


 NO PRECOMMIT TESTS
 Create rules in Optiq that do the job of the PartitionPruner.
 For now we will reuse the logic that evaluates the Partition list from 
 prunedExpr. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq

2014-07-17 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7284:
-

Attachment: HIVE-7284.2.patch

 CBO: create Partition Pruning rules in Optiq
 

 Key: HIVE-7284
 URL: https://issues.apache.org/jira/browse/HIVE-7284
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch, HIVE-7284.2.patch


 NO PRECOMMIT TESTS
 Create rules in Optiq that do the job of the PartitionPruner.
 For now we will reuse the logic that evaluates the Partition list from 
 prunedExpr. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065770#comment-14065770
 ] 

Yu Gao commented on HIVE-7443:
--

Also tried with a Java client that performs a keytab login - 
UserGroupInformation.loginUserFromKeytab(client_principal, client_keytab) - 
before calling DriverManager.getConnection to make the connection. It failed with 
the same exception as when using beeline. (The environment was set up correctly: 
jars, configuration files, Kerberos, keytabs, etc.)
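
For illustration, a minimal sketch of the client described above, with placeholder 
principal, keytab and host/port values (not code from the attached patch):
{code}
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

public class KeytabJdbcClient {
  public static void main(String[] args) throws Exception {
    // Keytab login for the client principal (placeholder values).
    UserGroupInformation.loginUserFromKeytab(
        "client@REALM.COM", "/etc/security/keytabs/client.keytab");

    // Load the Hive JDBC driver and open the Kerberized HiveServer2 connection;
    // the server principal is carried in the JDBC URL, as in the beeline example.
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://hiveserver.host:10000/default;"
            + "principal=hive/hiveserver.host@REALM.COM");
    conn.close();
  }
}
{code}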




 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0, 0.13.1
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao

 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
 at 
 com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
 at 
 com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
 at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
 at 
 

[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq

2014-07-17 Thread Laljo John Pullokkaran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laljo John Pullokkaran updated HIVE-7284:
-

Status: Patch Available  (was: Open)

Updated patch to do stats fetching on demand.

We need to add rules to push filters through Project, Set operators, and GROUP BY, 
and rules to combine filters. Constant folding may also improve partition pruning.

 CBO: create Partition Pruning rules in Optiq
 

 Key: HIVE-7284
 URL: https://issues.apache.org/jira/browse/HIVE-7284
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch, HIVE-7284.2.patch


 NO PRECOMMIT TESTS
 Create rules in Optiq that do the job of the PartitionPruner.
 For now we will reuse the logic that evaluates the Partition list from 
 prunedExpr. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065797#comment-14065797
 ] 

Yu Gao commented on HIVE-7443:
--

This is caused by the HiveConnection class performing no Kerberos login when 
opening the transport to a Kerberized HiveServer2: the IBM JDK requires valid 
Kerberos credentials to be in place when the SASL client is created. The fix adds 
a UserGroupInformation.getCurrentUser() call there, which in turn invokes 
UserGroupInformation.getLoginUser(). The login user is the one who holds the 
Kerberos credentials, obtained either from the ticket cache or from a keytab login.

After this change, a beeline client only needs to run kinit before accessing 
HiveServer2, while a Java client using keytab login needs to call the Hadoop UGI 
API (UserGroupInformation.loginUserFromKeytab()) before making the JDBC connection.
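
For reference, a minimal sketch of the login-user resolution described above, 
assuming hadoop-common on the classpath (illustrative only, not the actual 
HIVE-7443 patch):
{code}
import org.apache.hadoop.security.UserGroupInformation;

public class LoginUserProbe {
  public static void main(String[] args) throws Exception {
    // getCurrentUser() falls back to getLoginUser(), which picks up Kerberos
    // credentials from the ticket cache (kinit) or from a prior keytab login.
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println("Login user: " + ugi.getUserName()
        + ", from keytab: " + ugi.isFromKeytab());
  }
}
{code}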

 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0, 0.13.1
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao

 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
  

[jira] [Updated] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HIVE-7443:
-

Attachment: HIVE-7443.patch

 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0, 0.13.1
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HIVE-7443.patch


 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
 at 
 com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
 at 
 com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
 at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
 at 
 org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
 ... 25 more
 Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: 

[jira] [Updated] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HIVE-7443:
-

Component/s: Security

 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Security
Affects Versions: 0.12.0, 0.13.1
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HIVE-7443.patch


 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
 at 
 com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
 at 
 com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
 at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
 at 
 org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
 ... 25 more
 Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: 

[jira] [Updated] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2014-07-17 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HIVE-7443:
-

Status: Patch Available  (was: Open)

 Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
 alternative JDKs
 ---

 Key: HIVE-7443
 URL: https://issues.apache.org/jira/browse/HIVE-7443
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Security
Affects Versions: 0.13.1, 0.12.0
 Environment: Kerberos
 Run Hive server2 and client with IBM JDK7.1
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HIVE-7443.patch


 Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
 initialize the current login user's ticket cache successfully, and then tried 
 to use beeline to connect to Hive Server2, but failed. After I manually added 
 some logging to catch the failure exception, this is what I got that caused 
 the failure:
 beeline  !connect 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
  org.apache.hive.jdbc.HiveDriver
 scan complete in 2ms
 Connecting to 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM
 Enter password for 
 jdbc:hive2://hiveserver.host:1/default;principal=hive/hiveserver.host@REALM.COM:
 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
 javax.security.sasl.SaslException: Failed to open client transport [Caused by 
 java.io.IOException: Could not instantiate SASL transport]
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
 at 
 org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
 at 
 org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
 at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:178)
 at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
 at java.sql.DriverManager.getConnection(DriverManager.java:582)
 at java.sql.DriverManager.getConnection(DriverManager.java:198)
 at 
 org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
 at 
 org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
 at org.apache.hive.beeline.Commands.connect(Commands.java:959)
 at org.apache.hive.beeline.Commands.connect(Commands.java:880)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
 at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.IOException: Could not instantiate SASL transport
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
 at 
 org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
 ... 24 more
 Caused by: javax.security.sasl.SaslException: Failure to initialize security 
 context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor string: SubjectCredFinder: no JAAS Subject]
 at 
 com.ibm.security.sasl.gsskerb.GssKrb5Client.init(GssKrb5Client.java:131)
 at 
 com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
 at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
 at 
 org.apache.thrift.transport.TSaslClientTransport.init(TSaslClientTransport.java:72)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
 ... 25 more
 Caused by: org.ietf.jgss.GSSException, major code: 13, minor code: 0
 major string: Invalid credentials
 minor 

[jira] [Updated] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands

2014-07-17 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-7361:


Attachment: HIVE-7361.5.patch

A change in the autogenerated hive-default.xml caused a conflict.
HIVE-7361.5.patch - rebased against trunk.



 using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
 -

 Key: HIVE-7361
 URL: https://issues.apache.org/jira/browse/HIVE-7361
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
  Labels: TODOC14
 Attachments: HIVE-7361.1.patch, HIVE-7361.2.patch, HIVE-7361.3.patch, 
 HIVE-7361.4.patch, HIVE-7361.5.patch


 The only way currently available to disable the SET, RESET, DFS, ADD, DELETE and 
 COMPILE commands is the hive.security.command.whitelist parameter.
 Some of these commands are disabled via this configuration parameter for security 
 reasons when SQL standard authorization is enabled; however, that disables them 
 for all users in all cases.
 If the authorization API is used to authorize these commands, authorization 
 implementations gain the flexibility to allow or disallow them based on user 
 privileges.
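
 For illustration, the whitelist approach described above amounts to a 
 hive-site.xml entry like the following (the value shown is only an example; any 
 command omitted from the list is rejected for every user):
{code}
<!-- Example only: restrict inline commands by listing the allowed ones.
     Commands left out of the list (e.g. dfs, compile) are disabled for all users. -->
<property>
  <name>hive.security.command.whitelist</name>
  <value>set,reset,add,delete</value>
</property>
{code}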



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-7357) Add vectorized support for BINARY data type

2014-07-17 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065831#comment-14065831
 ] 

Eric Hanson commented on HIVE-7357:
---

Hi Matt. This looks good overall. Please see my comments on ReviewBoard.

 Add vectorized support for BINARY data type
 ---

 Key: HIVE-7357
 URL: https://issues.apache.org/jira/browse/HIVE-7357
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-7357.1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility

2014-07-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14065843#comment-14065843
 ] 

Hive QA commented on HIVE-6988:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12656379/HIVE-6988.5.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5740 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/840/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/840/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-840/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12656379

 Hive changes for tez-0.5.x compatibility
 

 Key: HIVE-6988
 URL: https://issues.apache.org/jira/browse/HIVE-6988
 Project: Hive
  Issue Type: Task
Reporter: Gopal V
 Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, 
 HIVE-6988.4.patch, HIVE-6988.5.patch, HIVE-6988.5.patch


 Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility.
 tez-0.4.x - tez-0.5.x is going to break backwards compat.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

