[jira] [Created] (HIVE-8835) identify dependency scope for Remote Spark Context.[Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-8835:
---

 Summary: identify dependency scope for Remote Spark Context.[Spark 
Branch]
 Key: HIVE-8835
 URL: https://issues.apache.org/jira/browse/HIVE-8835
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li


When submitting a job through the Remote Spark Context, Spark RDD graph 
generation and job submission are executed on the remote side, so we have to 
add the Hive-related dependencies to its classpath via 
spark.driver.extraClassPath. Instead of adding all Hive/Hadoop dependencies, we 
should narrow the scope and identify which dependencies the Remote Spark 
Context actually requires. 
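As an illustrative sketch only (the jar names and paths below are hypothetical placeholders, not the identified dependency set), the narrowed scope might eventually be expressed in spark-defaults.conf roughly like:

```properties
# Hypothetical example: list only the Hive jars the Remote Spark Context
# actually needs, rather than the whole hive/lib and hadoop directories.
spark.driver.extraClassPath /opt/hive/lib/hive-exec.jar:/opt/hive/lib/hive-common.jar:/opt/hive/lib/hive-serde.jar
```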



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7497) Fix some default values in HiveConf

2014-11-11 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-7497:

Labels:   (was: TODOC14)

Covered by HIVE-5160

> Fix some default values in HiveConf
> ---
>
> Key: HIVE-7497
> URL: https://issues.apache.org/jira/browse/HIVE-7497
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Dong Chen
> Fix For: 0.14.0
>
> Attachments: HIVE-7497.1.patch, HIVE-7497.patch
>
>
> HIVE-5160 resolves an env variable at runtime via a call to System.getenv(). 
> As long as the variable is not defined when you run the build, null is 
> returned and the path is not placed in hive-default.template. However, if it 
> is defined, it will populate hive-default.template with a path that will 
> differ based on the user running the build. We should use 
> $\{system:HIVE_CONF_DIR\} instead.





[jira] [Updated] (HIVE-5160) HS2 should support .hiverc

2014-11-11 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-5160:

Labels:   (was: TODOC14)

[~leftylev] Added a section:

[https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-OptionalGlobalInitFile(.hiverc)|https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-OptionalGlobalInitFile(.hiverc)]
and
[https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties|https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties]
under hive.server2.global.init.file.location.

Wiki beginner question: I wonder how you generate a link to a specific 
configuration property on the second page without typing it out?

> HS2 should support .hiverc
> --
>
> Key: HIVE-5160
> URL: https://issues.apache.org/jira/browse/HIVE-5160
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Dong Chen
> Fix For: 0.14.0
>
> Attachments: HIVE-5160.1.patch, HIVE-5160.patch
>
>
> It would be useful to support the .hiverc functionality with hive server2 as 
> well.
> .hiverc is processed by CliDriver, so it works only with hive cli. It would 
> be useful to be able to do things like register a standard set of jars and 
> functions for all users. 
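For illustration only (the jar path, class name, and setting below are hypothetical), a global init file for HiveServer2 could register a standard set of jars and functions like this:

```sql
-- Hypothetical global init file contents: run for every HS2 session,
-- analogous to what .hiverc does for the CLI.
ADD JAR /usr/lib/hive/aux/example-udfs.jar;
CREATE TEMPORARY FUNCTION example_udf AS 'com.example.hive.ExampleUDF';
```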





[jira] [Updated] (HIVE-8834) enable job progress monitoring of Remote Spark Context[Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8834:

Summary: enable job progress monitoring of Remote Spark Context[Spark 
Branch]  (was: enable job progress monitoring of Remote Spark Context)

> enable job progress monitoring of Remote Spark Context[Spark Branch]
> 
>
> Key: HIVE-8834
> URL: https://issues.apache.org/jira/browse/HIVE-8834
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>  Labels: Spark-M3
>
> We should enable job progress monitoring in the Remote Spark Context; the 
> Spark job progress info should fit into SparkJobStatus. SPARK-2321 supplies a 
> new Spark progress API, which should make this task easier.





[jira] [Updated] (HIVE-8833) Unify spark client API and implement remote spark client.[Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8833:

Summary: Unify spark client API and implement remote spark client.[Spark 
Branch]  (was: Unify spark client API and implement remote spark client.)

> Unify spark client API and implement remote spark client.[Spark Branch]
> ---
>
> Key: HIVE-8833
> URL: https://issues.apache.org/jira/browse/HIVE-8833
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
>  Labels: Spark-M3
>
> Hive will support submitting Spark jobs through both a local Spark client and 
> a remote Spark client. We should unify the Spark client API and implement the 
> remote Spark client through the Remote Spark Context. 





[jira] [Updated] (HIVE-8574) Enhance metrics gathering in Spark Client [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8574:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-8548

> Enhance metrics gathering in Spark Client [Spark Branch]
> 
>
> Key: HIVE-8574
> URL: https://issues.apache.org/jira/browse/HIVE-8574
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>
> The current implementation of metrics gathering in the Spark client is a 
> little hacky. First, it's awkward to use (and the implementation is also 
> pretty ugly). Second, it will just collect metrics indefinitely, so in the 
> long term it turns into a huge memory leak.
> We need a simplified interface and some mechanism for disposing of old 
> metrics.





[jira] [Updated] (HIVE-8574) Enhance metrics gathering in Spark Client [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8574:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HIVE-7292)

> Enhance metrics gathering in Spark Client [Spark Branch]
> 
>
> Key: HIVE-8574
> URL: https://issues.apache.org/jira/browse/HIVE-8574
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>
> The current implementation of metrics gathering in the Spark client is a 
> little hacky. First, it's awkward to use (and the implementation is also 
> pretty ugly). Second, it will just collect metrics indefinitely, so in the 
> long term it turns into a huge memory leak.
> We need a simplified interface and some mechanism for disposing of old 
> metrics.





[jira] [Updated] (HIVE-8834) enable job progress monitoring of Remote Spark Context

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8834:

Assignee: (was: Chengxiang Li)

> enable job progress monitoring of Remote Spark Context
> --
>
> Key: HIVE-8834
> URL: https://issues.apache.org/jira/browse/HIVE-8834
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>  Labels: Spark-M3
>
> We should enable job progress monitoring in the Remote Spark Context; the 
> Spark job progress info should fit into SparkJobStatus. SPARK-2321 supplies a 
> new Spark progress API, which should make this task easier.





[jira] [Created] (HIVE-8834) enable job progress monitoring of Remote Spark Context

2014-11-11 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-8834:
---

 Summary: enable job progress monitoring of Remote Spark Context
 Key: HIVE-8834
 URL: https://issues.apache.org/jira/browse/HIVE-8834
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


We should enable job progress monitoring in the Remote Spark Context; the Spark 
job progress info should fit into SparkJobStatus. SPARK-2321 supplies a new 
Spark progress API, which should make this task easier.





[jira] [Created] (HIVE-8833) Unify spark client API and implement remote spark client.

2014-11-11 Thread Chengxiang Li (JIRA)
Chengxiang Li created HIVE-8833:
---

 Summary: Unify spark client API and implement remote spark client.
 Key: HIVE-8833
 URL: https://issues.apache.org/jira/browse/HIVE-8833
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li


Hive will support submitting Spark jobs through both a local Spark client and a 
remote Spark client. We should unify the Spark client API and implement the 
remote Spark client through the Remote Spark Context. 





[jira] [Updated] (HIVE-8548) Integrate with remote Spark context after HIVE-8528 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8548:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HIVE-7292)

> Integrate with remote Spark context after HIVE-8528 [Spark Branch]
> --
>
> Key: HIVE-8548
> URL: https://issues.apache.org/jira/browse/HIVE-8548
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Chengxiang Li
>
> With HIVE-8528, HiveServer2 should use the remote Spark context to submit 
> jobs and monitor progress, etc. This is necessary if Hive runs on a 
> standalone cluster, YARN, or Mesos. If Hive runs with spark.master=local, we 
> should continue using SparkContext in the current way.
> We take this as the root JIRA to track all Remote Spark Context 
> integration-related subtasks.





[jira] [Updated] (HIVE-6601) alter database commands should support schema synonym keyword

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6601:

Labels:   (was: TODOC14)

> alter database commands should support schema synonym keyword
> -
>
> Key: HIVE-6601
> URL: https://issues.apache.org/jira/browse/HIVE-6601
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Navis
> Fix For: 0.14.0
>
> Attachments: HIVE-6601.1.patch.txt
>
>
> It should be possible to use "alter schema"  as an alternative to "alter 
> database".  But the syntax is not currently supported.
> {code}
> alter schema db1 set owner user x;  
> NoViableAltException(215@[])
> FAILED: ParseException line 1:6 cannot recognize input near 'schema' 'db1' 
> 'set' in alter statement
> {code}





[jira] [Updated] (HIVE-8548) Integrate with remote Spark context after HIVE-8528 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8548:

Description: 
With HIVE-8528, HiveServer2 should use the remote Spark context to submit jobs 
and monitor progress, etc. This is necessary if Hive runs on a standalone 
cluster, YARN, or Mesos. If Hive runs with spark.master=local, we should 
continue using SparkContext in the current way.
We take this as the root JIRA to track all Remote Spark Context 
integration-related subtasks.

  was:With HIVE-8528, HiverSever2 should use remote Spark context to submit job 
and monitor progress, etc. This is necessary if Hive runs on standalone 
cluster, Yarn, or Mesos. If Hive runs with spark.master=local, we should 
continue using SparkContext in current way.


> Integrate with remote Spark context after HIVE-8528 [Spark Branch]
> --
>
> Key: HIVE-8548
> URL: https://issues.apache.org/jira/browse/HIVE-8548
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Chengxiang Li
>
> With HIVE-8528, HiveServer2 should use the remote Spark context to submit 
> jobs and monitor progress, etc. This is necessary if Hive runs on a 
> standalone cluster, YARN, or Mesos. If Hive runs with spark.master=local, we 
> should continue using SparkContext in the current way.
> We take this as the root JIRA to track all Remote Spark Context 
> integration-related subtasks.





[jira] [Commented] (HIVE-7196) Configure session by single open session call

2014-11-11 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207766#comment-14207766
 ] 

Navis commented on HIVE-7196:
-

This is an internal change to processing, not related to user experience, so it 
doesn't seem to need any documentation.

> Configure session by single open session call
> -
>
> Key: HIVE-7196
> URL: https://issues.apache.org/jira/browse/HIVE-7196
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.14.0
>
> Attachments: HIVE-7196.1.patch.txt
>
>
> Currently, a jdbc2 connection executes a set command for each conf/var; these 
> could instead be embedded in TOpenSessionReq.





[jira] [Commented] (HIVE-8828) Remove hadoop 20 shims

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207764#comment-14207764
 ] 

Hive QA commented on HIVE-8828:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680954/HIVE-8828.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1746/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1746/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1746/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[WARNING] 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/src/main/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java:
 Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-shims-0.20S ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-shims-0.20S ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/tmp/conf
 [copy] Copying 8 files to 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-shims-0.20S ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-shims-0.20S ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-shims-0.20S ---
[INFO] Building jar: 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/hive-shims-0.20S-0.15.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hive-shims-0.20S ---
[INFO] 
[INFO] --- maven-install-plugin:2.4:install (default-install) @ 
hive-shims-0.20S ---
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/target/hive-shims-0.20S-0.15.0-SNAPSHOT.jar
 to 
/data/hive-ptest/working/maven/org/apache/hive/shims/hive-shims-0.20S/0.15.0-SNAPSHOT/hive-shims-0.20S-0.15.0-SNAPSHOT.jar
[INFO] Installing 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.20S/pom.xml to 
/data/hive-ptest/working/maven/org/apache/hive/shims/hive-shims-0.20S/0.15.0-SNAPSHOT/hive-shims-0.20S-0.15.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Hive Shims 0.23 0.15.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-shims-0.23 ---
[INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
hive-shims-0.23 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hive-shims-0.23 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims-0.23 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hive-shims-0.23 ---
[INFO] Compiling 5 source files to 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/classes
[WARNING] 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:
 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
 uses or overrides a deprecated API.
[WARNING] 
/data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:
 Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-shims-0.23 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-svn-trunk-s

[jira] [Commented] (HIVE-7685) Parquet memory manager

2014-11-11 Thread Dong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207765#comment-14207765
 ] 

Dong Chen commented on HIVE-7685:
-

Thanks for the reminder. :)
Since this patch cannot pass the build right now because it depends on 
PARQUET-108 being resolved, I will rename it to trigger the test later.

> Parquet memory manager
> --
>
> Key: HIVE-7685
> URL: https://issues.apache.org/jira/browse/HIVE-7685
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Reporter: Brock Noland
>Assignee: Dong Chen
> Attachments: HIVE-7685.patch.ready
>
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups". 
> This causes Hive to run out of memory during dynamic partition inserts, when 
> a reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run 
> out of memory due to writing too many row groups within a single JVM.
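A rough back-of-envelope sketch of the failure mode (all numbers below are hypothetical assumptions, not values from the issue): each open writer buffers roughly a full row group, so memory use scales with the number of files a reducer has open at once.

```python
# Hypothetical numbers: why many open Parquet writers per reducer can
# exhaust the heap when each one buffers a full row group in memory.
row_group_bytes = 128 * 1024 * 1024   # assumed per-writer row-group buffer
open_writers = 100                    # assumed open dynamic partitions
total_gib = row_group_bytes * open_writers / 1024 ** 3
print(total_gib)                      # -> 12.5 GiB for a single reducer JVM
```

A shared memory manager would cap this total by shrinking or flushing row-group buffers as the number of open writers grows.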





[jira] [Updated] (HIVE-7196) Configure session by single open session call

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-7196:

Labels:   (was: TODOC14)

> Configure session by single open session call
> -
>
> Key: HIVE-7196
> URL: https://issues.apache.org/jira/browse/HIVE-7196
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.14.0
>
> Attachments: HIVE-7196.1.patch.txt
>
>
> Currently, a jdbc2 connection executes a set command for each conf/var; these 
> could instead be embedded in TOpenSessionReq.





[jira] [Updated] (HIVE-2573) Create per-session function registry

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2573:

Attachment: HIVE-2573.13.patch.txt

Cannot reproduce the failures

> Create per-session function registry 
> -
>
> Key: HIVE-2573
> URL: https://issues.apache.org/jira/browse/HIVE-2573
> Project: Hive
>  Issue Type: Improvement
>  Components: Server Infrastructure
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2573.D3231.1.patch, 
> HIVE-2573.1.patch.txt, HIVE-2573.10.patch.txt, HIVE-2573.11.patch.txt, 
> HIVE-2573.12.patch.txt, HIVE-2573.13.patch.txt, HIVE-2573.2.patch.txt, 
> HIVE-2573.3.patch.txt, HIVE-2573.4.patch.txt, HIVE-2573.5.patch, 
> HIVE-2573.6.patch, HIVE-2573.7.patch, HIVE-2573.8.patch.txt, 
> HIVE-2573.9.patch.txt
>
>
> Currently the function registry is a shared resource and could be overridden 
> by other users when using HiveServer. Providing a per-session function 
> registry would prevent this situation.





[jira] [Commented] (HIVE-5538) Turn on vectorization by default.

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207762#comment-14207762
 ] 

Hive QA commented on HIVE-5538:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680937/HIVE-5538.62.patch

{color:red}ERROR:{color} -1 due to 116 failed/errored test(s), 6686 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_createas1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge_incompat2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_predicate_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_create
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partcols1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_boolexpr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_wise_fileformat18
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_wise_fileformat7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_constant_where
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_gb1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_minute
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_remove_15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_orderby_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_short_regress
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_15
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket3
org.a

[jira] [Updated] (HIVE-2573) Create per-session function registry

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2573:

Attachment: (was: HIVE-2573.13.patch.txt)

> Create per-session function registry 
> -
>
> Key: HIVE-2573
> URL: https://issues.apache.org/jira/browse/HIVE-2573
> Project: Hive
>  Issue Type: Improvement
>  Components: Server Infrastructure
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2573.D3231.1.patch, 
> HIVE-2573.1.patch.txt, HIVE-2573.10.patch.txt, HIVE-2573.11.patch.txt, 
> HIVE-2573.12.patch.txt, HIVE-2573.13.patch.txt, HIVE-2573.2.patch.txt, 
> HIVE-2573.3.patch.txt, HIVE-2573.4.patch.txt, HIVE-2573.5.patch, 
> HIVE-2573.6.patch, HIVE-2573.7.patch, HIVE-2573.8.patch.txt, 
> HIVE-2573.9.patch.txt
>
>
> Currently the function registry is a shared resource and could be overridden 
> by other users when using HiveServer. Providing a per-session function 
> registry would prevent this situation.





[jira] [Updated] (HIVE-2573) Create per-session function registry

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2573:

Attachment: HIVE-2573.13.patch.txt

> Create per-session function registry 
> -
>
> Key: HIVE-2573
> URL: https://issues.apache.org/jira/browse/HIVE-2573
> Project: Hive
>  Issue Type: Improvement
>  Components: Server Infrastructure
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2573.D3231.1.patch, 
> HIVE-2573.1.patch.txt, HIVE-2573.10.patch.txt, HIVE-2573.11.patch.txt, 
> HIVE-2573.12.patch.txt, HIVE-2573.13.patch.txt, HIVE-2573.2.patch.txt, 
> HIVE-2573.3.patch.txt, HIVE-2573.4.patch.txt, HIVE-2573.5.patch, 
> HIVE-2573.6.patch, HIVE-2573.7.patch, HIVE-2573.8.patch.txt, 
> HIVE-2573.9.patch.txt
>
>
> Currently the function registry is a shared resource and could be overridden 
> by other users when using HiveServer. Providing a per-session function 
> registry would prevent this situation.





[jira] [Commented] (HIVE-7685) Parquet memory manager

2014-11-11 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207758#comment-14207758
 ] 

Ferdinand Xu commented on HIVE-7685:


Hi,
I am afraid the file name extension ".ready" may not trigger the Hive QA CI 
test. Better to change it to "*.patch".

> Parquet memory manager
> --
>
> Key: HIVE-7685
> URL: https://issues.apache.org/jira/browse/HIVE-7685
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Reporter: Brock Noland
>Assignee: Dong Chen
> Attachments: HIVE-7685.patch.ready
>
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups". 
> This causes Hive to run out of memory during dynamic partition inserts, when 
> a reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run 
> out of memory due to writing too many row groups within a single JVM.





[jira] [Commented] (HIVE-7136) Allow Hive to read hive scripts from any of the supported file systems in hadoop eco-system

2014-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207755#comment-14207755
 ] 

Lefty Leverenz commented on HIVE-7136:
--

Happy to, but I'll need your Confluence ID.  If you don't have one yet, you can 
get one here:  https://cwiki.apache.org/confluence/signup.action.

* [About This Wiki -- How to get permission to edit | 
https://cwiki.apache.org/confluence/display/Hive/AboutThisWiki#AboutThisWiki-Howtogetpermissiontoedit]

> Allow Hive to read hive scripts from any of the supported file systems in 
> hadoop eco-system
> ---
>
> Key: HIVE-7136
> URL: https://issues.apache.org/jira/browse/HIVE-7136
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI
>Affects Versions: 0.13.0
>Reporter: Sumit Kumar
>Assignee: Sumit Kumar
>Priority: Minor
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-7136.01.patch, HIVE-7136.patch
>
>
> The current Hive CLI assumes that the source file (Hive script) is always on 
> the local file system. This patch implements support for reading source files 
> from other file systems in the Hadoop ecosystem (HDFS, S3, etc.) as well, 
> keeping the default behavior intact: the script is read from the default 
> (local) filesystem when no scheme is provided in the source file URL.
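The scheme dispatch described above — reading from HDFS or S3 when a scheme is present, and from the local filesystem otherwise — can be sketched with plain `java.net.URI`. The class and method names below are hypothetical stand-ins, not Hive's actual implementation:

```java
import java.net.URI;

// Illustrative sketch of the scheme-dispatch idea behind HIVE-7136: pick the
// file system based on the URI scheme of the script path, defaulting to the
// local file system when no scheme is given.
public class ScriptSource {
    // Returns the file-system scheme to use for a script path,
    // falling back to "file" (local) when none is specified.
    static String schemeFor(String scriptPath) {
        URI uri = URI.create(scriptPath);
        return uri.getScheme() == null ? "file" : uri.getScheme();
    }

    public static void main(String[] args) {
        System.out.println(schemeFor("/tmp/init.hql"));                // file
        System.out.println(schemeFor("hdfs://nn:8020/scripts/x.hql")); // hdfs
        System.out.println(schemeFor("s3://bucket/scripts/x.hql"));    // s3
    }
}
```

In Hive itself the resolved scheme would be handed to Hadoop's `FileSystem` machinery; here only the fallback logic is shown.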





[jira] [Updated] (HIVE-7685) Parquet memory manager

2014-11-11 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-7685:

Attachment: HIVE-7685.patch.ready

> Parquet memory manager
> --
>
> Key: HIVE-7685
> URL: https://issues.apache.org/jira/browse/HIVE-7685
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Reporter: Brock Noland
>Assignee: Dong Chen
> Attachments: HIVE-7685.patch.ready
>
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups". 
> This causes Hive to run out of memory during dynamic partition inserts, when 
> a reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run 
> out of memory due to writing too many row groups within a single JVM.





[jira] [Updated] (HIVE-7685) Parquet memory manager

2014-11-11 Thread Dong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Chen updated HIVE-7685:

Assignee: Dong Chen
  Status: Patch Available  (was: Open)

This patch adds a hook in Hive to use the Parquet memory manager in Parquet 
(PARQUET-108).

When PARQUET-108 gets committed into trunk and packaged in a Maven release 
(1.6.0 or 1.6.0rc3), this patch should work. I will track it then.

> Parquet memory manager
> --
>
> Key: HIVE-7685
> URL: https://issues.apache.org/jira/browse/HIVE-7685
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Reporter: Brock Noland
>Assignee: Dong Chen
>
> Similar to HIVE-4248, Parquet tries to write very large "row groups". 
> This causes Hive to run out of memory during dynamic partition inserts, when 
> a reducer may have many Parquet files open at a given time.
> As such, we should implement a memory manager which ensures that we don't run 
> out of memory due to writing too many row groups within a single JVM.
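The core idea of such a memory manager — a fixed budget divided among all open writers, so effective row-group sizes shrink as more files are opened — can be sketched in a few lines. This is purely illustrative, in the spirit of PARQUET-108, and is not the actual Parquet `MemoryManager` API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a row-group memory manager: each registered writer
// (e.g. one per dynamic partition) gets an equal share of a fixed budget,
// capped at the originally requested row-group size.
public class RowGroupMemoryManager {
    private final long totalBudgetBytes;
    private final long requestedRowGroupBytes;
    private final List<Object> writers = new ArrayList<>();

    RowGroupMemoryManager(long totalBudgetBytes, long requestedRowGroupBytes) {
        this.totalBudgetBytes = totalBudgetBytes;
        this.requestedRowGroupBytes = requestedRowGroupBytes;
    }

    // Register a writer and return the row-group size it may use
    // under the current budget.
    synchronized long register(Object writer) {
        writers.add(writer);
        return allowedRowGroupBytes();
    }

    // Equal share of the budget, never more than the requested size.
    synchronized long allowedRowGroupBytes() {
        long share = totalBudgetBytes / Math.max(1, writers.size());
        return Math.min(requestedRowGroupBytes, share);
    }

    public static void main(String[] args) {
        RowGroupMemoryManager mm = new RowGroupMemoryManager(256L << 20, 128L << 20);
        System.out.println(mm.register(new Object()));  // 1 writer: full 128 MB request
        System.out.println(mm.register(new Object()));  // 2 writers: 128 MB share
        System.out.println(mm.register(new Object()));  // 3 writers: ~85 MB, scaled down
    }
}
```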





[jira] [Commented] (HIVE-7353) HiveServer2 using embedded MetaStore leaks JDOPersistanceManager

2014-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207750#comment-14207750
 ] 

Lefty Leverenz commented on HIVE-7353:
--

bq.  ... we are removing TODOC label after doc but before you review ...

That's fine because my reviews generally turn up trivial edits that don't 
affect doc usability.  (On the other hand when I write doc, it makes sense to 
keep the label pending expert review.)

Today's flood of documentation is very encouraging -- thanks all!  I'll get to 
the reviews as soon as I can, and tackle the backlog too.

> HiveServer2 using embedded MetaStore leaks JDOPersistanceManager
> 
>
> Key: HIVE-7353
> URL: https://issues.apache.org/jira/browse/HIVE-7353
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.14.0
>
> Attachments: HIVE-7353.1.patch, HIVE-7353.2.patch, HIVE-7353.3.patch, 
> HIVE-7353.4.patch, HIVE-7353.5.patch, HIVE-7353.6.patch, HIVE-7353.7.patch, 
> HIVE-7353.8.patch, HIVE-7353.9.patch
>
>
> While using an embedded metastore, HiveServer2 creates background threads to 
> run async operations, and each ends up creating a new instance of 
> JDOPersistanceManager, which is cached in JDOPersistanceManagerFactory. Even 
> when a background thread is killed by the thread pool manager, its 
> JDOPersistanceManager is never GCed because it remains cached by 
> JDOPersistanceManagerFactory.
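The leak pattern — a factory caching one manager per thread, so managers created by short-lived pool threads stay reachable after the threads die — can be sketched generically. Class names below are hypothetical stand-ins for the JDO classes, not the actual DataNucleus API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the HIVE-7353 leak: a per-thread cache keeps each
// manager strongly reachable until it is explicitly released, even after the
// owning pool thread has been killed.
public class ManagerFactory {
    private final Map<Thread, Object> cache = new ConcurrentHashMap<>();

    // Returns the manager cached for the current thread, creating one if needed.
    Object getManager() {
        return cache.computeIfAbsent(Thread.currentThread(), t -> new Object());
    }

    // The fix amounts to releasing the cached entry when the worker is done,
    // e.g. from a thread pool's afterExecute hook.
    void release() {
        cache.remove(Thread.currentThread());
    }

    int cachedCount() {
        return cache.size();
    }
}
```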





[jira] [Commented] (HIVE-8793) Make sure multi-insert works with map join [Spark Branch]

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207749#comment-14207749
 ] 

Hive QA commented on HIVE-8793:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12681002/HIVE-8793.1-spark.patch

{color:red}ERROR:{color} -1 due to 145 failed/errored test(s), 7234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join28
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_reordering_values
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_full
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_extrapolate_part_stats_partial
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_join_breaktask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join28
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32_lessSize
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_map_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_reorder4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_star
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_multiskew_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_query_oneskew_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_louter_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_filter_on_outerjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_subquery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_subquery2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_test_outer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multiMapJoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multiMapJoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_join_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_outer_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pcr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rcfile_merge2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_router_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_acid
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union24
org.apache.hadoop.hive.cli.TestCliDri

Review Request 27908: HIVE-8780 insert1.q hangs with hadoop-1

2014-11-11 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27908/
---

Review request for hive and Xuefu Zhang.


Bugs: HIVE-8780
https://issues.apache.org/jira/browse/HIVE-8780


Repository: hive-git


Description
---

A Spark job with empty source data is never actually submitted to the Spark 
cluster, so no JobStart/JobEnd events are posted to the Spark listener bus. 
Since Hive monitors Spark job state through a Spark listener, it would never 
receive any job state; this patch uses JavaFutureAction to fetch the job state 
instead.
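The monitoring change above — polling the future returned by the async submission instead of waiting for listener events that never arrive — can be sketched with `java.util.concurrent`. `CompletableFuture` stands in for Spark's `JavaFutureAction` here; the monitor class is illustrative, not Hive's actual `SparkJobMonitor`:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: poll the submission Future until it reaches a terminal state,
// rather than relying on JobStart/JobEnd listener events.
public class JobMonitor {
    static String await(Future<?> job) throws Exception {
        while (!job.isDone()) {
            TimeUnit.MILLISECONDS.sleep(10);  // monitor loop sleeps between checks
        }
        return job.isCancelled() ? "CANCELLED" : "SUCCEEDED";
    }

    public static void main(String[] args) throws Exception {
        // An "empty" job that completes immediately without producing any events.
        Future<Void> job = CompletableFuture.completedFuture(null);
        System.out.println(await(job));  // SUCCEEDED
    }
}
```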


Diffs
-

  itests/pom.xml a15e04a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkClient.java b33f0e2 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/SimpleSparkJobStatus.java
 31a45d0 

Diff: https://reviews.apache.org/r/27908/diff/


Testing
---


Thanks,

chengxiang li



[jira] [Updated] (HIVE-8832) SessionState.getUserFromAuthenticator() should be used instead of SessionState.getName()

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8832:

Status: Patch Available  (was: Open)

> SessionState.getUserFromAuthenticator() should be used instead of 
> SessionState.getName()
> 
>
> Key: HIVE-8832
> URL: https://issues.apache.org/jira/browse/HIVE-8832
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-8832.1.patch.txt
>
>
> It's not valid sometimes.





[jira] [Created] (HIVE-8832) SessionState.getUserFromAuthenticator() should be used instead of SessionState.getName()

2014-11-11 Thread Navis (JIRA)
Navis created HIVE-8832:
---

 Summary: SessionState.getUserFromAuthenticator() should be used 
instead of SessionState.getName()
 Key: HIVE-8832
 URL: https://issues.apache.org/jira/browse/HIVE-8832
 Project: Hive
  Issue Type: Bug
  Components: Authentication
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-8832.1.patch.txt

It's not valid sometimes.





[jira] [Updated] (HIVE-8832) SessionState.getUserFromAuthenticator() should be used instead of SessionState.getName()

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8832:

Attachment: HIVE-8832.1.patch.txt

> SessionState.getUserFromAuthenticator() should be used instead of 
> SessionState.getName()
> 
>
> Key: HIVE-8832
> URL: https://issues.apache.org/jira/browse/HIVE-8832
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-8832.1.patch.txt
>
>
> It's not valid sometimes.





[jira] [Updated] (HIVE-8609) Move beeline to jline2

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8609:
---
Attachment: HIVE-8609.3.patch

> Move beeline to jline2
> --
>
> Key: HIVE-8609
> URL: https://issues.apache.org/jira/browse/HIVE-8609
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
>Priority: Blocker
> Attachments: HIVE-8609.1.patch, HIVE-8609.2.patch, HIVE-8609.3.patch, 
> HIVE-8609.patch
>
>
> We found a serious bug in jline in HIVE-8565. We should move to jline2.





[jira] [Commented] (HIVE-8831) show roles appends dummy new line

2014-11-11 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207738#comment-14207738
 ] 

Thejas M Nair commented on HIVE-8831:
-

+1

Regarding hive 0.14 , the release candidate is out, please vote. 


> show roles appends dummy new line
> -
>
> Key: HIVE-8831
> URL: https://issues.apache.org/jira/browse/HIVE-8831
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-8831.1.patch.txt
>
>
> {noformat}
> hive> show roles;
> OK
> ADMIN
> PUBLIC
> admin
> navis
> public
> r1
> role1
> role2
> s1
> src_role2
> Time taken: 0.092 seconds, Fetched: 11 row(s)
> {noformat}
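The dummy-line pattern is the classic difference between appending a separator after every row and joining rows. A generic sketch (illustrative only, not Hive's actual output code):

```java
// Sketch of the trailing-newline bug pattern behind HIVE-8831: appending a
// separator after every row leaves a dangling blank line; joining does not.
public class ShowRolesOutput {
    static String appendPerRow(String[] rows) {
        StringBuilder sb = new StringBuilder();
        for (String r : rows) {
            sb.append(r).append('\n');  // the final '\n' yields the dummy last line
        }
        return sb.toString();
    }

    static String joined(String[] rows) {
        return String.join("\n", rows);  // no trailing newline
    }

    public static void main(String[] args) {
        String[] roles = {"ADMIN", "PUBLIC"};
        System.out.println(appendPerRow(roles).endsWith("\n"));  // true
        System.out.println(joined(roles).endsWith("\n"));        // false
    }
}
```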





[jira] [Updated] (HIVE-8825) SQLCompletor catches Throwable and ignores it

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8825:
---
Summary: SQLCompletor catches Throwable and ignores it  (was: SQLCompletor 
catches Throwable and ignores is)

> SQLCompletor catches Throwable and ignores it
> -
>
> Key: HIVE-8825
> URL: https://issues.apache.org/jira/browse/HIVE-8825
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
>
> We should be catching the specific exception which is thrown.





[jira] [Updated] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8823:
---
Status: Patch Available  (was: Open)

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.1.patch, HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469, I think users may want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.
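Per-table configuration of this sort boils down to letting table properties override the global defaults when present. The property names come from the JIRA; the lookup helper below is an illustrative sketch, not the ParquetHiveSerDe code:

```java
import java.util.Properties;

// Sketch: a table-level property wins over the global default when set.
public class ParquetTableProps {
    static String effective(Properties tableProps, String key, String defaultValue) {
        return tableProps.getProperty(key, defaultValue);
    }

    public static void main(String[] args) {
        Properties tbl = new Properties();
        tbl.setProperty("parquet.block.size", "268435456");  // 256 MB for this table
        // Overridden per table:
        System.out.println(effective(tbl, "parquet.block.size", "134217728"));
        // Falls back to the global default:
        System.out.println(effective(tbl, "parquet.enable.dictionary", "true"));
    }
}
```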





[jira] [Updated] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8823:
---
Attachment: HIVE-8823.1.patch

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.1.patch, HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469, I think users may want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.





Review Request 27907: HIVE-8823: Add additional serde properties for parquet

2014-11-11 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27907/
---

Review request for hive.


Repository: hive-git


Description
---

Changes include:
1. refactor the previous implementation of table property initialization in 
ParquetOutputFormat
2. change the query tests accordingly
3. enable the "like" feature for the newly added table properties


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
8b02b42 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/ParquetRecordWriterWrapper.java
 765b5ac 
  ql/src/test/queries/clientpositive/create_like.q 7271306 
  ql/src/test/results/clientpositive/create_like.q.out 0c82cea 

Diff: https://reviews.apache.org/r/27907/diff/


Testing
---

UT passed locally


Thanks,

cheng xu



[jira] [Updated] (HIVE-5961) Add explain authorize for checking privileges

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5961:

Labels:   (was: TODOC14)

> Add explain authorize for checking privileges
> -
>
> Key: HIVE-5961
> URL: https://issues.apache.org/jira/browse/HIVE-5961
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.14.0
>
> Attachments: HIVE-5961.1.patch.txt, HIVE-5961.2.patch.txt, 
> HIVE-5961.3.patch.txt, HIVE-5961.4.patch.txt, HIVE-5961.5.patch.txt, 
> HIVE-5961.6.patch.txt
>
>
> For easy checking of the privileges needed for a query:
> {noformat}
> explain authorize select * from src join srcpart
> INPUTS: 
>   default@srcpart
>   default@srcpart@ds=2008-04-08/hr=11
>   default@srcpart@ds=2008-04-08/hr=12
>   default@srcpart@ds=2008-04-09/hr=11
>   default@srcpart@ds=2008-04-09/hr=12
>   default@src
> OUTPUTS: 
>   
> file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1
> CURRENT_USER: 
>   hive_test_user
> OPERATION: 
>   QUERY
> AUTHORIZATION_FAILURES: 
>   No privilege 'Select' found for inputs { database:default, table:srcpart, 
> columnName:key}
>   No privilege 'Select' found for inputs { database:default, table:src, 
> columnName:key}
>   No privilege 'Select' found for inputs { database:default, table:src, 
> columnName:key}
> {noformat}
> Hopefully good for debugging of authorization, which is in progress on 
> HIVE-5837.





[jira] [Updated] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6031:

Labels: TODOC14  (was: )

> explain subquery rewrite for where clause predicates 
> -
>
> Key: HIVE-6031
> URL: https://issues.apache.org/jira/browse/HIVE-6031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Butani
>Assignee: Harish Butani
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch, HIVE-6031.3.patch, 
> HIVE-6031.4.patch, HIVE-6031.5.patch
>
>






[jira] [Updated] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6037:

Labels:   (was: TODOC14)

> Synchronize HiveConf with hive-default.xml.template and support show conf
> -
>
> Key: HIVE-6037
> URL: https://issues.apache.org/jira/browse/HIVE-6037
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: CHIVE-6037.3.patch.txt, HIVE-6037-0.13.0, 
> HIVE-6037.1.patch.txt, HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, 
> HIVE-6037.12.patch.txt, HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, 
> HIVE-6037.16.patch.txt, HIVE-6037.17.patch, HIVE-6037.18.patch.txt, 
> HIVE-6037.19.patch.txt, HIVE-6037.19.patch.txt, HIVE-6037.2.patch.txt, 
> HIVE-6037.20.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, 
> HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, 
> HIVE-6037.9.patch.txt, HIVE-6037.patch
>
>
> see HIVE-5879





[jira] [Assigned] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li reassigned HIVE-8780:
---

Assignee: Chengxiang Li

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
>Assignee: Chengxiang Li
> Attachments: HIVE-8780.1-spark.patch, insert1.q-spark.png, 
> insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.





[jira] [Updated] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8780:

Status: Patch Available  (was: Open)

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
>Assignee: Chengxiang Li
> Attachments: HIVE-8780.1-spark.patch, insert1.q-spark.png, 
> insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.





[jira] [Updated] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8780:

Attachment: HIVE-8780.1-spark.patch

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
>Assignee: Chengxiang Li
> Attachments: HIVE-8780.1-spark.patch, insert1.q-spark.png, 
> insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.





[jira] [Updated] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8780:

Attachment: (was: 8780.1-spark.patch)

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
>Assignee: Chengxiang Li
> Attachments: HIVE-8780.1-spark.patch, insert1.q-spark.png, 
> insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.





[jira] [Updated] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8780:

Attachment: 8780.1-spark.patch

A Spark job with empty source data is never actually submitted to the Spark 
cluster, so no JobStart/JobEnd events are posted to the Spark listener bus. 
Since Hive monitors Spark job state through a Spark listener, it would never 
receive any job state; this patch uses JavaFutureAction to fetch the job state 
instead.

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
> Attachments: HIVE-8780.1-spark.patch, insert1.q-spark.png, 
> insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8214) Release 0.13.1 missing hwi-war file

2014-11-11 Thread sanjiv singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207714#comment-14207714
 ] 

sanjiv singh commented on HIVE-8214:


After going through the history, it appears that major refactoring of the packaging 
was done between v0.12.* and v0.13.*. As part of this, the Hive HWI packaging was 
changed from a war to a jar, and the assembly XML was not updated accordingly.

I don't know whether this was missed or done intentionally.

The attached "HIVE-8214.2.patch" resolves the issue.


> Release 0.13.1 missing hwi-war file
> ---
>
> Key: HIVE-8214
> URL: https://issues.apache.org/jira/browse/HIVE-8214
> Project: Hive
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 0.13.1
>Reporter: Naimdjon Takhirov
>Priority: Minor
>  Labels: HIVE-8214.1.patch, branch-0.14, trunk
> Attachments: HIVE-8214.1.patch, HIVE-8214.2.patch
>
>
> Starting the Hive with --service hwi option:
> $opt/hive/latest: hive --service hwi
> ls: /opt/hive/latest/lib/hive-hwi-*.war: No such file or directory
> 14/09/22 11:43:46 INFO hwi.HWIServer: HWI is starting up
> 14/09/22 11:43:46 INFO mortbay.log: Logging to 
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
> org.mortbay.log.Slf4jLog
> 14/09/22 11:43:46 INFO mortbay.log: jetty-6.1.26
> 14/09/22 11:43:47 INFO mortbay.log: Started SocketConnector@0.0.0.0:
> When navigating to localhost:, it just shows the directory index. Looking 
> at the distribution, the war file is missing in the lib directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8831) show roles appends dummy new line

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8831:

Status: Patch Available  (was: Open)

Also, this seemed trivial enough to be included in hive-0.14.

> show roles appends dummy new line
> -
>
> Key: HIVE-8831
> URL: https://issues.apache.org/jira/browse/HIVE-8831
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-8831.1.patch.txt
>
>
> {noformat}
> hive> show roles;
> OK
> ADMIN
> PUBLIC
> admin
> navis
> public
> r1
> role1
> role2
> s1
> src_role2
> Time taken: 0.092 seconds, Fetched: 11 row(s)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8831) show roles appends dummy new line

2014-11-11 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207716#comment-14207716
 ] 

Navis commented on HIVE-8831:
-

[~thejas] Could you review this?

> show roles appends dummy new line
> -
>
> Key: HIVE-8831
> URL: https://issues.apache.org/jira/browse/HIVE-8831
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-8831.1.patch.txt
>
>
> {noformat}
> hive> show roles;
> OK
> ADMIN
> PUBLIC
> admin
> navis
> public
> r1
> role1
> role2
> s1
> src_role2
> Time taken: 0.092 seconds, Fetched: 11 row(s)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8831) show roles appends dummy new line

2014-11-11 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-8831:

Attachment: HIVE-8831.1.patch.txt

> show roles appends dummy new line
> -
>
> Key: HIVE-8831
> URL: https://issues.apache.org/jira/browse/HIVE-8831
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-8831.1.patch.txt
>
>
> {noformat}
> hive> show roles;
> OK
> ADMIN
> PUBLIC
> admin
> navis
> public
> r1
> role1
> role2
> s1
> src_role2
> Time taken: 0.092 seconds, Fetched: 11 row(s)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8831) show roles appends dummy new line

2014-11-11 Thread Navis (JIRA)
Navis created HIVE-8831:
---

 Summary: show roles appends dummy new line
 Key: HIVE-8831
 URL: https://issues.apache.org/jira/browse/HIVE-8831
 Project: Hive
  Issue Type: Improvement
  Components: Authentication
Reporter: Navis
Assignee: Navis
Priority: Trivial


{noformat}
hive> show roles;
OK
ADMIN
PUBLIC
admin
navis
public
r1
role1
role2
s1
src_role2

Time taken: 0.092 seconds, Fetched: 11 row(s)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8793) Make sure multi-insert works with map join [Spark Branch]

2014-11-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8793:
-
Status: Patch Available  (was: Open)

> Make sure multi-insert works with map join [Spark Branch]
> -
>
> Key: HIVE-8793
> URL: https://issues.apache.org/jira/browse/HIVE-8793
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8793.1-spark.patch
>
>
> Currently, HIVE-8622 is implemented based on the assumption that, for a map 
> join query, a BaseWork will not have multiple children. Testing with 
> subquery_multiinsert.q suggests that is the case, but we need to 
> investigate further and make sure this holds in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8793) Make sure multi-insert works with map join [Spark Branch]

2014-11-11 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8793:
-
Attachment: HIVE-8793.1-spark.patch

Refactored the split-Spark-work logic into a physical resolver.

> Make sure multi-insert works with map join [Spark Branch]
> -
>
> Key: HIVE-8793
> URL: https://issues.apache.org/jira/browse/HIVE-8793
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8793.1-spark.patch
>
>
> Currently, HIVE-8622 is implemented based on the assumption that, for a map 
> join query, a BaseWork will not have multiple children. Testing with 
> subquery_multiinsert.q suggests that is the case, but we need to 
> investigate further and make sure this holds in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8823:
---
Status: Open  (was: Patch Available)

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469 I think that users could want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207690#comment-14207690
 ] 

Brock Noland commented on HIVE-8823:


Hi,

I don't see the new member variables in ParquetHiveSerDe being used anywhere?

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469 I think that users could want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8823:
---
Attachment: HIVE-8823.patch

The patch includes the following changes:
1. Use the default values provided by ParquetWriter to initialize the table 
properties.
2. Add the corresponding unit tests.
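For illustration, per-table settings of the kind this issue proposes would be supplied through table properties at DDL time. A hypothetical sketch — the property names come from the issue description, but the table name, columns, and values here are illustrative only:

```sql
-- Hypothetical example: per-table Parquet tuning via TBLPROPERTIES.
CREATE TABLE events (id BIGINT, payload STRING)
STORED AS PARQUET
TBLPROPERTIES (
  'parquet.enable.dictionary' = 'false',   -- disable dictionary encoding
  'parquet.block.size'        = '268435456' -- 256 MB row groups
);
```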

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469 I think that users could want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8823) Add additional serde properties for parquet

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8823:
---
Status: Patch Available  (was: Open)

> Add additional serde properties for parquet
> ---
>
> Key: HIVE-8823
> URL: https://issues.apache.org/jira/browse/HIVE-8823
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8823.patch
>
>
> Similar to HIVE-7858 and HIVE-8469 I think that users could want to configure 
> {{parquet.enable.dictionary}} and {{parquet.block.size}} on a per-table basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-2573) Create per-session function registry

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207676#comment-14207676
 ] 

Hive QA commented on HIVE-2573:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680915/HIVE-2573.12.patch.txt

{color:red}ERROR:{color} -1 due to 115 failed/errored test(s), 6686 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_udfs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_where_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_delete_whole_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_literal_decimal
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_literal_double
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_macro
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_predicate_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ptf_matchpath
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_functions
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_timestamp_comparison2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_abs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_acos
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_asin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_atan
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_between
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_bin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_conv
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_cos
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_format_number
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_hex
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_pmod
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_positive
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_repeat
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_round
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_round_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_sign
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_sin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_sort_array
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_space
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_substr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_tan
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_boolean
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_byte
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_double
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_float
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_long
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_short
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_to_string
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_update_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_between_in
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_math_funcs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_round
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_round_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_5
org.apache.hadoop.hive.cli.TestCli

[jira] [Commented] (HIVE-8826) Remove jdbm from top level license file

2014-11-11 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207669#comment-14207669
 ] 

Brock Noland commented on HIVE-8826:


+1 pending tests

> Remove jdbm from top level license file
> ---
>
> Key: HIVE-8826
> URL: https://issues.apache.org/jira/browse/HIVE-8826
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8826.patch
>
>
> HIVE-1754 removed jdbm but we did not remove it from the top level license 
> file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Patch Available  (was: Open)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Attachment: HIVE-8774.3.patch

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Open  (was: Patch Available)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Open  (was: Patch Available)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Attachment: (was: HIVE-8774.3.patch)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-11 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Patch Available  (was: Open)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8681) CBO: Column names are missing from join expression in Map join with CBO enabled

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207630#comment-14207630
 ] 

Hive QA commented on HIVE-8681:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680917/HIVE-8681.2.patch

{color:red}ERROR:{color} -1 due to 279 failed/errored test(s), 6689 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join24
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join25
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join28
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_if_with_path_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketizedhiveinputformat_auto
org.apache.hadoop.hive.cli.TestCliDriver.tes

[jira] [Updated] (HIVE-7847) query orc partitioned table fail when table column type change

2014-11-11 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-7847:
---
Attachment: vector_alter_partition_change_col.q

> query orc partitioned table fail when table column type change
> --
>
> Key: HIVE-7847
> URL: https://issues.apache.org/jira/browse/HIVE-7847
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.11.0, 0.12.0, 0.13.0
>Reporter: Zhichun Wu
>Assignee: Zhichun Wu
> Fix For: 0.14.0
>
> Attachments: HIVE-7847.1.patch, vector_alter_partition_change_col.q
>
>
> I use the following script to test orc column type change with partitioned 
> table on branch-0.13:
> {code}
> use test;
> DROP TABLE if exists orc_change_type_staging;
> DROP TABLE if exists orc_change_type;
> CREATE TABLE orc_change_type_staging (
> id int
> );
> CREATE TABLE orc_change_type (
> id int
> ) PARTITIONED BY (`dt` string)
> stored as orc;
> --- load staging table
> LOAD DATA LOCAL INPATH '../hive/examples/files/int.txt' OVERWRITE INTO TABLE 
> orc_change_type_staging;
> --- populate orc hive table
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140718') select * FROM 
> orc_change_type_staging limit 1;
> --- change column id from int to bigint
> ALTER TABLE orc_change_type CHANGE id id bigint;
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140719') select * FROM 
> orc_change_type_staging limit 1;
> SELECT id FROM orc_change_type where dt between '20140718' and '20140719';
> {code}
> It fails in the last query "SELECT id FROM orc_change_type where dt between 
> '20140718' and '20140719';" with the exception:
> {code}
> Error: java.io.IOException: java.io.IOException: 
> java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
> to org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:256)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:171)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hadoop.io.IntWritable cannot be cast to 
> org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:344)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:122)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:254)
> ... 11 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
> cannot be cast to org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$LongTreeReader.next(RecordReaderImpl.java:717)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1788)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2997)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:153)
>   

[jira] [Commented] (HIVE-7847) query orc partitioned table fail when table column type change

2014-11-11 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207622#comment-14207622
 ] 

Matt McCline commented on HIVE-7847:


I also encountered a similar type-cast exception with ORC after changing a 
column type.

I took the alter_partition_change_col.q test and created a vectorized version 
of it by making the table ORC. I started with vectorization *OFF* to try to 
get a base case running, but it fails.

And with your patch applied, the exception:

{noformat}
... ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
org.apache.hadoop.hive.serde2.io.HiveDecimalWritable
{noformat}

still occurs.
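The ClassCastException pattern in this issue boils down to reading partition files written with an older, narrower type through the table's new, wider type without a conversion step. A minimal Python sketch of the per-file widening a reader would need — this is a hypothetical illustration, not Hive's actual ORC reader, and the type table here is deliberately tiny:

```python
# Hypothetical illustration: coerce values from the file's schema to the
# table's (possibly widened) schema before handing rows to the caller.
WIDENINGS = {
    ("int", "bigint"): int,      # int -> bigint is a safe widening
    ("float", "double"): float,  # float -> double likewise
}

def read_value(value, file_type, table_type):
    if file_type == table_type:
        return value
    convert = WIDENINGS.get((file_type, table_type))
    if convert is None:
        # An unconverted read is what surfaces as a ClassCastException
        # (e.g. IntWritable cannot be cast to LongWritable).
        raise TypeError(f"cannot cast {file_type} to {table_type}")
    return convert(value)
```

The Text-to-HiveDecimalWritable failure Matt reports corresponds to a conversion pair missing from such a table, which is why the patch for the int/bigint case does not cover it.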

> query orc partitioned table fail when table column type change
> --
>
> Key: HIVE-7847
> URL: https://issues.apache.org/jira/browse/HIVE-7847
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.11.0, 0.12.0, 0.13.0
>Reporter: Zhichun Wu
>Assignee: Zhichun Wu
> Fix For: 0.14.0
>
> Attachments: HIVE-7847.1.patch
>
>
> I use the following script to test orc column type change with partitioned 
> table on branch-0.13:
> {code}
> use test;
> DROP TABLE if exists orc_change_type_staging;
> DROP TABLE if exists orc_change_type;
> CREATE TABLE orc_change_type_staging (
> id int
> );
> CREATE TABLE orc_change_type (
> id int
> ) PARTITIONED BY (`dt` string)
> stored as orc;
> --- load staging table
> LOAD DATA LOCAL INPATH '../hive/examples/files/int.txt' OVERWRITE INTO TABLE 
> orc_change_type_staging;
> --- populate orc hive table
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140718') select * FROM 
> orc_change_type_staging limit 1;
> --- change column id from int to bigint
> ALTER TABLE orc_change_type CHANGE id id bigint;
> INSERT OVERWRITE TABLE orc_change_type partition(dt='20140719') select * FROM 
> orc_change_type_staging limit 1;
> SELECT id FROM orc_change_type where dt between '20140718' and '20140719';
> {code}
> It fails in the last query "SELECT id FROM orc_change_type where dt between 
> '20140718' and '20140719';" with the exception:
> {code}
> Error: java.io.IOException: java.io.IOException: 
> java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
> to org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:256)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:171)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hadoop.io.IntWritable cannot be cast to 
> org.apache.hadoop.io.LongWritable
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:344)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:122)
> at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:254)
> ... 11 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
> cannot be cast to org.apache.ha
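The cast failure above boils down to a partition file still handing back IntWritable after the column's declared type was widened to bigint. A minimal, self-contained Java sketch of the defensive widening a reader can apply follows; note the IntWritable/LongWritable classes here are simplified stand-ins for the real org.apache.hadoop.io types, and toLong is a hypothetical helper, not a Hive API:

```java
// Simplified stand-ins for org.apache.hadoop.io.IntWritable/LongWritable so
// the sketch is self-contained; the real classes live in hadoop-common.
class IntWritable {
    private final int value;
    IntWritable(int value) { this.value = value; }
    int get() { return value; }
}

class LongWritable {
    private final long value;
    LongWritable(long value) { this.value = value; }
    long get() { return value; }
}

public class WritablePromotion {
    // Hypothetical helper: a partition written while the column was `int`
    // yields IntWritable, but after ALTER TABLE ... CHANGE id id bigint the
    // reader expects LongWritable. A blind cast throws ClassCastException;
    // explicit widening avoids it.
    static LongWritable toLong(Object w) {
        if (w instanceof LongWritable) {
            return (LongWritable) w;
        }
        if (w instanceof IntWritable) {
            return new LongWritable(((IntWritable) w).get()); // widen int -> long
        }
        throw new IllegalArgumentException("unsupported type: " + w.getClass());
    }

    public static void main(String[] args) {
        System.out.println(toLong(new IntWritable(7)).get());   // prints 7
        System.out.println(toLong(new LongWritable(9L)).get()); // prints 9
    }
}
```

The same shape of problem shows up for other widenings (e.g. Text vs. HiveDecimalWritable in the comment above): the fix has to convert per old-type/new-type pair rather than cast.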

Hive-0.14 - Build # 721 - Still Failing

2014-11-11 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)


Changes for Build #711
[gunther] HIVE-8768: CBO: Fix filter selectivity for 'in clause' & '<>' (Laljo 
John Pullokkaran via Gunther Hagleitner)


Changes for Build #712
[gunther] HIVE-8794: Hive on Tez leaks AMs when killed before first dag is run 
(Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #713
[gunther] HIVE-8798: Some Oracle deadlocks not being caught in TxnHandler (Alan 
Gates via Gunther Hagleitner)


Changes for Build #714
[gunther] HIVE-8800: Update release notes and notice for hive .14 (Gunther 
Hagleitner, reviewed by Prasanth J)

[gunther] HIVE-8799: boatload of missing apache headers (Gunther Hagleitner, 
reviewed by Thejas M Nair)


Changes for Build #715
[gunther] Preparing for release 0.14.0


Changes for Build #716
[gunther] Preparing for release 0.14.0

[gunther] Preparing for release 0.14.0


Changes for Build #717

Changes for Build #718

Changes for Build #719

Changes for Build #720
[gunther] HIVE-8811: Dynamic partition pruning can result in NPE during query 
compilation (Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #721
[gunther] HIVE-8805: CBO skipped due to SemanticException: Line 0:-1 Both left 
and right aliases encountered in JOIN 'avg_cs_ext_discount_amt' (Laljo John 
Pullokkaran via Gunther Hagleitner)

[sershe] HIVE-8715 : Hive 14 upgrade scripts can fail for statistics if 
database was created using auto-create
 ADDENDUM (Sergey Shelukhin, reviewed by Ashutosh Chauhan and Gunther 
Hagleitner)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #721)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/721/ to view 
the results.

[jira] [Updated] (HIVE-8715) Hive 14 upgrade scripts can fail for statistics if database was created using auto-create

2014-11-11 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8715:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch and trunk.

> Hive 14 upgrade scripts can fail for statistics if database was created using 
> auto-create
> -
>
> Key: HIVE-8715
> URL: https://issues.apache.org/jira/browse/HIVE-8715
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.14.1
>
> Attachments: HIVE-8715.addendum.patch, HIVE-8715.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8715) Hive 14 upgrade scripts can fail for statistics if database was created using auto-create

2014-11-11 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8715:
-
Fix Version/s: (was: 0.14.0)
   0.14.1

> Hive 14 upgrade scripts can fail for statistics if database was created using 
> auto-create
> -
>
> Key: HIVE-8715
> URL: https://issues.apache.org/jira/browse/HIVE-8715
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.14.1
>
> Attachments: HIVE-8715.addendum.patch, HIVE-8715.patch
>
>






[jira] [Commented] (HIVE-8830) hcatalog process don't exit because of non daemon thread

2014-11-11 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207601#comment-14207601
 ] 

Sushanth Sowmyan commented on HIVE-8830:


+1, much needed fix. Thanks, Thejas!

> hcatalog process don't exit because of non daemon thread
> 
>
> Key: HIVE-8830
> URL: https://issues.apache.org/jira/browse/HIVE-8830
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.15.0
>
> Attachments: HIVE-8830.1.patch, HIVE-8830.2.patch
>
>
> HiveClientCache has a cleanup thread which is not a daemon. It can cause the 
> hcat client process to hang even after its work is complete.





[jira] [Updated] (HIVE-8826) Remove jdbm from top level license file

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8826:
---
Status: Patch Available  (was: Open)

Removed the jdbm license from the Hive license file, since jdbm is no longer used.

> Remove jdbm from top level license file
> ---
>
> Key: HIVE-8826
> URL: https://issues.apache.org/jira/browse/HIVE-8826
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8826.patch
>
>
> HIVE-1754 removed jdbm but we did not remove it from the top level license 
> file.





[jira] [Updated] (HIVE-8826) Remove jdbm from top level license file

2014-11-11 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-8826:
---
Attachment: HIVE-8826.patch

> Remove jdbm from top level license file
> ---
>
> Key: HIVE-8826
> URL: https://issues.apache.org/jira/browse/HIVE-8826
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-8826.patch
>
>
> HIVE-1754 removed jdbm but we did not remove it from the top level license 
> file.





[jira] [Updated] (HIVE-8805) CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases encountered in JOIN 'avg_cs_ext_discount_amt'

2014-11-11 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8805:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch.

> CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases 
> encountered in JOIN 'avg_cs_ext_discount_amt'
> -
>
> Key: HIVE-8805
> URL: https://issues.apache.org/jira/browse/HIVE-8805
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Laljo John Pullokkaran
> Fix For: 0.14.1
>
> Attachments: HIVE-8805.patch, HIVE-8805.patch
>
>
> Query
> {code}
> set hive.cbo.enable=true
> set hive.stats.fetch.column.stats=true
> set hive.exec.dynamic.partition.mode=nonstrict
> set hive.tez.auto.reducer.parallelism=true
> set hive.auto.convert.join.noconditionaltask.size=32000
> set hive.exec.reducers.bytes.per.reducer=1
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
> set hive.support.concurrency=false
> set hive.tez.exec.print.summary=true
> explain  
> SELECT sum(cs1.cs_ext_discount_amt) as excess_discount_amount
> FROM (SELECT cs.cs_item_sk as cs_item_sk,
>  cs.cs_ext_discount_amt as cs_ext_discount_amt
>  FROM catalog_sales cs
>  JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
>  WHERE d.d_date between '2000-01-27' and '2000-04-27') cs1
> JOIN item i ON (i.i_item_sk = cs1.cs_item_sk)
> JOIN (SELECT cs2.cs_item_sk as cs_item_sk,
>   1.3 * avg(cs_ext_discount_amt) as 
> avg_cs_ext_discount_amt
>FROM (SELECT cs.cs_item_sk as cs_item_sk,
> cs.cs_ext_discount_amt as 
> cs_ext_discount_amt
> FROM catalog_sales cs
> JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
> WHERE d.d_date between '2000-01-27' and '2000-04-27') 
> cs2
> GROUP BY cs2.cs_item_sk) tmp1
> ON (i.i_item_sk = tmp1.cs_item_sk)
> WHERE i.i_manufact_id = 436 and
>cs1.cs_ext_discount_amt > tmp1.avg_cs_ext_discount_amt
> {code}
> Exception
> {code}
> 14/11/07 19:15:38 [main]: ERROR parse.SemanticAnalyzer: CBO failed, skipping 
> CBO. 
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Both left and 
> right aliases encountered in JOIN 'avg_cs_ext_discount_amt'
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2369)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2293)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2249)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:8010)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9678)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10053)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:415)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1067)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1129)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
>   at 
> 

Re: Review Request 27566: HIVE-8609: move beeline to jline2

2014-11-11 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27566/
---

(Updated Nov. 12, 2014, 2:43 a.m.)


Review request for hive.


Changes
---

The new diff includes the following change:
(1) added the license for ClassNameCompleter to the LICENSE file


Repository: hive-git


Description
---

HIVE-8609: move beeline to jline2
The following will be changed:
* MultiCompletor-> AggregateCompleter
* SimpleCompletor->StringsCompleter
* Terminal.getTerminalWidth() -> Terminal.getWidth()
* Terminal is an interface now; -> use TerminalFactory to get instances of a 
Terminal
* String -> CharSequence


Diffs (updated)
-

  LICENSE 2885945 
  beeline/src/java/org/apache/hive/beeline/AbstractCommandHandler.java a9479d5 
  beeline/src/java/org/apache/hive/beeline/BeeLine.java 8539a41 
  beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompleter.java 
PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompletor.java 52313e6 
  beeline/src/java/org/apache/hive/beeline/BeeLineCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BeeLineCompletor.java c6bb4fe 
  beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java f73fb44 
  beeline/src/java/org/apache/hive/beeline/BooleanCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BooleanCompletor.java 3e88c53 
  beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/CommandHandler.java bab1778 
  beeline/src/java/org/apache/hive/beeline/Commands.java 7e366dc 
  beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java ab67700 
  beeline/src/java/org/apache/hive/beeline/ReflectiveCommandHandler.java 
2b957f2 
  beeline/src/java/org/apache/hive/beeline/SQLCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/SQLCompletor.java 844b9ae 
  beeline/src/java/org/apache/hive/beeline/TableNameCompletor.java bc0d9be 
  cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java 0ccaacb 
  cli/src/test/org/apache/hadoop/hive/cli/TestCliDriverMethods.java 88a37d5 
  hcatalog/hcatalog-pig-adapter/pom.xml 2d959e6 
  pom.xml ec8c4fe 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java dea3460 

Diff: https://reviews.apache.org/r/27566/diff/


Testing
---


Thanks,

cheng xu



Re: Review Request 27566: HIVE-8609: move beeline to jline2

2014-11-11 Thread cheng xu


> On Nov. 11, 2014, 4:38 p.m., Brock Noland wrote:
> > beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java, line 1
> > 
> >
> > I see, this class is being copied from JLine source. We'll need special 
> > handling of this class.
> > 
> > (1) We need to add the original license BSD-2 to the top of the file 
> > with the apache license below it.
> > 
> > (2) We need to add a section to the top level license file, in the 
> > style of: https://github.com/apache/hive/blob/trunk/LICENSE#L212
> > 
> > (3) We should consider reformatting the file to comply with hive 
> > standards
> > 
> > Note that my specific example has been removed and as such we need to 
> > remove it from the top level license file. I filed HIVE-8826 to do that. 
> > Should be an easy fix.
> 
> cheng xu wrote:
> Thanks, Brock, for the kind reminder.
> (1) Added the original license to the top of the file.
> (2) The JLine part is already added; does this file need a standalone 
> section in this license file?
> (3) Formatted in Hive code style.

I saw an example in the license file and added this license for the specified 
class "ClassNameCompleter", which answers my second item. Marking this issue as 
fixed, with the license added for the ClassNameCompleter file.


- cheng


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27566/#review60788
---


On Nov. 12, 2014, 2:30 a.m., cheng xu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27566/
> ---
> 
> (Updated Nov. 12, 2014, 2:30 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-8609: move beeline to jline2
> The following will be changed:
> * MultiCompletor-> AggregateCompleter
> * SimpleCompletor->StringsCompleter
> * Terminal.getTerminalWidth() -> Terminal.getWidth()
> * Terminal is an interface now; -> use TerminalFactory to get instances of a 
> Terminal
> * String -> CharSequence
> 
> 
> Diffs
> -
> 
>   beeline/src/java/org/apache/hive/beeline/AbstractCommandHandler.java 
> a9479d5 
>   beeline/src/java/org/apache/hive/beeline/BeeLine.java 8539a41 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompleter.java 
> PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompletor.java 
> 52313e6 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCompletor.java c6bb4fe 
>   beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java f73fb44 
>   beeline/src/java/org/apache/hive/beeline/BooleanCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BooleanCompletor.java 3e88c53 
>   beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 
> PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/CommandHandler.java bab1778 
>   beeline/src/java/org/apache/hive/beeline/Commands.java 7e366dc 
>   beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java ab67700 
>   beeline/src/java/org/apache/hive/beeline/ReflectiveCommandHandler.java 
> 2b957f2 
>   beeline/src/java/org/apache/hive/beeline/SQLCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/SQLCompletor.java 844b9ae 
>   beeline/src/java/org/apache/hive/beeline/TableNameCompletor.java bc0d9be 
>   cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java 0ccaacb 
>   cli/src/test/org/apache/hadoop/hive/cli/TestCliDriverMethods.java 88a37d5 
>   hcatalog/hcatalog-pig-adapter/pom.xml 2d959e6 
>   pom.xml ec8c4fe 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java dea3460 
> 
> Diff: https://reviews.apache.org/r/27566/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> cheng xu
> 
>



[jira] [Updated] (HIVE-8811) Dynamic partition pruning can result in NPE during query compilation

2014-11-11 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8811:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch.

> Dynamic partition pruning can result in NPE during query compilation
> 
>
> Key: HIVE-8811
> URL: https://issues.apache.org/jira/browse/HIVE-8811
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.1
>
> Attachments: HIVE-8811.1.patch, HIVE-8811.1.patch, HIVE-8811.1.patch
>
>
> A bug in Tarjan's algorithm results in incorrect strongly connected 
> components. I've seen this manifest itself as an NPE in TezCompiler.
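For context on the kind of bug described above, here is a compact, self-contained Java sketch of a correct Tarjan strongly-connected-components computation (illustrative only; this is not the code in TezCompiler). The subtle invariant is that a node's lowlink may only be updated from successors still on the stack, and for such back-edges it must use the successor's index, not its lowlink:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

public class TarjanScc {
    // Computes SCCs of a directed graph given as an adjacency list.
    static List<List<Integer>> scc(List<List<Integer>> adj) {
        int n = adj.size();
        int[] index = new int[n], low = new int[n];
        Arrays.fill(index, -1);              // -1 means "not yet visited"
        boolean[] onStack = new boolean[n];
        Deque<Integer> stack = new ArrayDeque<>();
        List<List<Integer>> result = new ArrayList<>();
        int[] counter = {0};                 // DFS visitation counter
        for (int v = 0; v < n; v++) {
            if (index[v] == -1) {
                strongConnect(v, adj, index, low, onStack, stack, result, counter);
            }
        }
        return result;
    }

    private static void strongConnect(int v, List<List<Integer>> adj, int[] index,
                                      int[] low, boolean[] onStack, Deque<Integer> stack,
                                      List<List<Integer>> result, int[] counter) {
        index[v] = low[v] = counter[0]++;
        stack.push(v);
        onStack[v] = true;
        for (int w : adj.get(v)) {
            if (index[w] == -1) {            // tree edge: recurse, then take lowlink
                strongConnect(w, adj, index, low, onStack, stack, result, counter);
                low[v] = Math.min(low[v], low[w]);
            } else if (onStack[w]) {         // back edge: must use index, not lowlink
                low[v] = Math.min(low[v], index[w]);
            }
        }
        if (low[v] == index[v]) {            // v is the root of an SCC: pop it off
            List<Integer> comp = new ArrayList<>();
            int w;
            do {
                w = stack.pop();
                onStack[w] = false;
                comp.add(w);
            } while (w != v);
            result.add(comp);
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = new ArrayList<>();
        adj.add(Arrays.asList(1));        // 0 -> 1
        adj.add(Arrays.asList(2));        // 1 -> 2
        adj.add(Arrays.asList(0, 3));     // 2 -> 0, 2 -> 3
        adj.add(Collections.emptyList()); // 3 has no successors
        System.out.println(TarjanScc.scc(adj).size()); // prints 2: {0,1,2} and {3}
    }
}
```

Getting either the stack membership check or the index-vs-lowlink distinction wrong yields merged or split components, which is exactly the kind of incorrect SCC result that can surface downstream as an NPE.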





[jira] [Updated] (HIVE-8805) CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases encountered in JOIN 'avg_cs_ext_discount_amt'

2014-11-11 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8805:
-
Fix Version/s: (was: 0.14.0)
   0.14.1

> CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases 
> encountered in JOIN 'avg_cs_ext_discount_amt'
> -
>
> Key: HIVE-8805
> URL: https://issues.apache.org/jira/browse/HIVE-8805
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Laljo John Pullokkaran
> Fix For: 0.14.1
>
> Attachments: HIVE-8805.patch, HIVE-8805.patch
>
>
> Query
> {code}
> set hive.cbo.enable=true
> set hive.stats.fetch.column.stats=true
> set hive.exec.dynamic.partition.mode=nonstrict
> set hive.tez.auto.reducer.parallelism=true
> set hive.auto.convert.join.noconditionaltask.size=32000
> set hive.exec.reducers.bytes.per.reducer=1
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
> set hive.support.concurrency=false
> set hive.tez.exec.print.summary=true
> explain  
> SELECT sum(cs1.cs_ext_discount_amt) as excess_discount_amount
> FROM (SELECT cs.cs_item_sk as cs_item_sk,
>  cs.cs_ext_discount_amt as cs_ext_discount_amt
>  FROM catalog_sales cs
>  JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
>  WHERE d.d_date between '2000-01-27' and '2000-04-27') cs1
> JOIN item i ON (i.i_item_sk = cs1.cs_item_sk)
> JOIN (SELECT cs2.cs_item_sk as cs_item_sk,
>   1.3 * avg(cs_ext_discount_amt) as 
> avg_cs_ext_discount_amt
>FROM (SELECT cs.cs_item_sk as cs_item_sk,
> cs.cs_ext_discount_amt as 
> cs_ext_discount_amt
> FROM catalog_sales cs
> JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
> WHERE d.d_date between '2000-01-27' and '2000-04-27') 
> cs2
> GROUP BY cs2.cs_item_sk) tmp1
> ON (i.i_item_sk = tmp1.cs_item_sk)
> WHERE i.i_manufact_id = 436 and
>cs1.cs_ext_discount_amt > tmp1.avg_cs_ext_discount_amt
> {code}
> Exception
> {code}
> 14/11/07 19:15:38 [main]: ERROR parse.SemanticAnalyzer: CBO failed, skipping 
> CBO. 
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Both left and 
> right aliases encountered in JOIN 'avg_cs_ext_discount_amt'
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2369)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2293)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2249)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:8010)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9678)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10053)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:415)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1067)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1129)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.pro

Re: Review Request 27566: HIVE-8609: move beeline to jline2

2014-11-11 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27566/
---

(Updated Nov. 12, 2014, 2:30 a.m.)


Review request for hive.


Changes
---

The new diff includes the following changes:
(1) add license for ClassNameCompleter
(2) format the ClassNameCompleter.java file by HIVE code format


Repository: hive-git


Description
---

HIVE-8609: move beeline to jline2
The following will be changed:
* MultiCompletor-> AggregateCompleter
* SimpleCompletor->StringsCompleter
* Terminal.getTerminalWidth() -> Terminal.getWidth()
* Terminal is an interface now; -> use TerminalFactory to get instances of a 
Terminal
* String -> CharSequence


Diffs (updated)
-

  beeline/src/java/org/apache/hive/beeline/AbstractCommandHandler.java a9479d5 
  beeline/src/java/org/apache/hive/beeline/BeeLine.java 8539a41 
  beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompleter.java 
PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompletor.java 52313e6 
  beeline/src/java/org/apache/hive/beeline/BeeLineCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BeeLineCompletor.java c6bb4fe 
  beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java f73fb44 
  beeline/src/java/org/apache/hive/beeline/BooleanCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/BooleanCompletor.java 3e88c53 
  beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/CommandHandler.java bab1778 
  beeline/src/java/org/apache/hive/beeline/Commands.java 7e366dc 
  beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java ab67700 
  beeline/src/java/org/apache/hive/beeline/ReflectiveCommandHandler.java 
2b957f2 
  beeline/src/java/org/apache/hive/beeline/SQLCompleter.java PRE-CREATION 
  beeline/src/java/org/apache/hive/beeline/SQLCompletor.java 844b9ae 
  beeline/src/java/org/apache/hive/beeline/TableNameCompletor.java bc0d9be 
  cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java 0ccaacb 
  cli/src/test/org/apache/hadoop/hive/cli/TestCliDriverMethods.java 88a37d5 
  hcatalog/hcatalog-pig-adapter/pom.xml 2d959e6 
  pom.xml ec8c4fe 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java dea3460 

Diff: https://reviews.apache.org/r/27566/diff/


Testing
---


Thanks,

cheng xu



Re: Review Request 27900: HIVE-8739

2014-11-11 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27900/#review60933
---



metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java


Are we sure about Oracle? Until now this has worked on Oracle. Do we have 
evidence to the contrary?


- Ashutosh Chauhan


On Nov. 12, 2014, 2:02 a.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27900/
> ---
> 
> (Updated Nov. 12, 2014, 2:02 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see jira
> 
> 
> Diffs
> -
> 
>   metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
> b6c633c 
> 
> Diff: https://reviews.apache.org/r/27900/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[jira] [Updated] (HIVE-8706) Table statistic collection on counter failed due to table name character case.

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-8706:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, [~chengxiang li]

> Table statistic collection on counter failed due to table name character case.
> --
>
> Key: HIVE-8706
> URL: https://issues.apache.org/jira/browse/HIVE-8706
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Fix For: 0.15.0
>
> Attachments: HIVE-8706.1.patch
>
>
> Hive ignores table name character case and stores table names in lowercase in 
> the metastore, while Counters/TezCounters are case sensitive. This difference 
> can cause table statistic collection to fail, because when Hive collects table 
> statistics based on Counters, it uses the table name as the counter group name. 
> ctas.q is an example: during the INSERT OVERWRITE TABLE execution, the table 
> name contains uppercase characters, so Hive gathers table statistics in 
> FileSinkOperator with the uppercase table name (translated from the SQL) and 
> aggregates them in StatsTask with the lowercase table name (from the 
> metastore), which makes the table statistic collection fail.
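The mismatch can be pictured with a tiny, self-contained sketch: if the gather side and the aggregate side do not normalize case the same way, the lookup misses. The counterGroup helper and the "::" key layout below are illustrative, not Hive's actual counter naming:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class CounterGroupNames {
    // Counters are case sensitive while the metastore lowercases table names.
    // Normalizing the group name at both the write (FileSinkOperator) and the
    // read (StatsTask) side keeps them in agreement.
    static String counterGroup(String tableName) {
        return tableName.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        Map<String, Long> counters = new HashMap<>();
        // gather side: table name as it appears in the SQL ("MyTable")
        counters.put(counterGroup("MyTable") + "::numRows", 42L);
        // aggregate side: table name as stored in the metastore ("mytable");
        // without the shared normalization this lookup would return null
        System.out.println(counters.get(counterGroup("mytable") + "::numRows")); // prints 42
    }
}
```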





[jira] [Commented] (HIVE-8830) hcatalog process don't exit because of non daemon thread

2014-11-11 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207556#comment-14207556
 ] 

Thejas M Nair commented on HIVE-8830:
-

Changed to use a numbered thread name as well. Using Guava's ThreadFactoryBuilder 
now; Guava is already used by this class, so it does not add a new dependency.
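A plain-JDK sketch of the shape of the fix (the actual patch uses Guava's ThreadFactoryBuilder; class and method names here are illustrative): marking the cleanup thread as a daemon means it no longer keeps the JVM alive after the main work finishes.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class DaemonCleanup {
    // Factory producing named daemon threads. A non-daemon cleanup thread is
    // exactly what kept the hcat client process from exiting.
    static ThreadFactory daemonFactory(String namePrefix) {
        return r -> {
            Thread t = new Thread(r, namePrefix + "-cleanup");
            t.setDaemon(true); // key change: daemon threads don't block JVM exit
            return t;
        };
    }

    static ScheduledExecutorService newDaemonCleanupPool(String namePrefix) {
        return Executors.newSingleThreadScheduledExecutor(daemonFactory(namePrefix));
    }

    public static void main(String[] args) {
        ScheduledExecutorService pool = newDaemonCleanupPool("hcat-client-cache");
        pool.scheduleAtFixedRate(() -> { /* evict expired clients */ }, 1, 1, TimeUnit.MINUTES);
        // main() returns here; because the cleanup thread is a daemon,
        // the process exits instead of hanging.
    }
}
```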


> hcatalog process don't exit because of non daemon thread
> 
>
> Key: HIVE-8830
> URL: https://issues.apache.org/jira/browse/HIVE-8830
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.15.0
>
> Attachments: HIVE-8830.1.patch, HIVE-8830.2.patch
>
>
> HiveClientCache has a cleanup thread which is not a daemon. It can cause the 
> hcat client process to hang even after its work is complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27566: HIVE-8609: move beeline to jline2

2014-11-11 Thread cheng xu


> On Nov. 11, 2014, 4:38 p.m., Brock Noland wrote:
> > beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java, line 1
> > 
> >
> > I see, this class is being copied from JLine source. We'll need special 
> > handling of this class.
> > 
> > (1) We need to add the original license BSD-2 to the top of the file 
> > with the apache license below it.
> > 
> > (2) We need to add a section to the top level license file, in the 
> > style of: https://github.com/apache/hive/blob/trunk/LICENSE#L212
> > 
> > (3) We should consider reformatting the file to comply with hive 
> > standards
> > 
> > Note that my specific example has been removed and as such we need to 
> > remove it from the top level license file. I filed HIVE-8826 to do that. 
> > Should be an easy fix.

Thanks, Brock, for the kind reminder.
(1) The original license has been added to the top of the file.
(2) The JLine part is already added; does this file need a standalone section in 
the license file?
(3) Formatted in Hive code style.


> On Nov. 11, 2014, 4:38 p.m., Brock Noland wrote:
> > hcatalog/hcatalog-pig-adapter/pom.xml, line 57
> > 
> >
> > I am not 100% sure this will work, let's see.

Tested locally already; anyway, still waiting for the CI result :)


> On Nov. 11, 2014, 4:38 p.m., Brock Noland wrote:
> > beeline/src/java/org/apache/hive/beeline/SQLCompleter.java, line 53
> > 
> >
> > Wow, this is not good. I created HIVE-8825 to fix catching Throwable. 
> > Should be quick fix.

Let's do it in HIVE-8825 once this patch is ready.


- cheng


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27566/#review60788
---


On Nov. 11, 2014, 11:45 a.m., cheng xu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27566/
> ---
> 
> (Updated Nov. 11, 2014, 11:45 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-8609: move beeline to jline2
> The following will be changed:
> * MultiCompletor-> AggregateCompleter
> * SimpleCompletor->StringsCompleter
> * Terminal.getTerminalWidth() -> Terminal.getWidth()
> * Terminal is an interface now; -> use TerminalFactory to get instances of a 
> Terminal
> * String -> CharSequence
> 
> 
> Diffs
> -
> 
>   beeline/src/java/org/apache/hive/beeline/AbstractCommandHandler.java 
> a9479d5 
>   beeline/src/java/org/apache/hive/beeline/BeeLine.java 8539a41 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompleter.java 
> PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCommandCompletor.java 
> 52313e6 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BeeLineCompletor.java c6bb4fe 
>   beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java f73fb44 
>   beeline/src/java/org/apache/hive/beeline/BooleanCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/BooleanCompletor.java 3e88c53 
>   beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 
> PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/CommandHandler.java bab1778 
>   beeline/src/java/org/apache/hive/beeline/Commands.java 7e366dc 
>   beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java ab67700 
>   beeline/src/java/org/apache/hive/beeline/ReflectiveCommandHandler.java 
> 2b957f2 
>   beeline/src/java/org/apache/hive/beeline/SQLCompleter.java PRE-CREATION 
>   beeline/src/java/org/apache/hive/beeline/SQLCompletor.java 844b9ae 
>   beeline/src/java/org/apache/hive/beeline/TableNameCompletor.java bc0d9be 
>   cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java 0ccaacb 
>   cli/src/test/org/apache/hadoop/hive/cli/TestCliDriverMethods.java 88a37d5 
>   hcatalog/hcatalog-pig-adapter/pom.xml 2d959e6 
>   pom.xml ec8c4fe 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezJobMonitor.java dea3460 
> 
> Diff: https://reviews.apache.org/r/27566/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> cheng xu
> 
>



[jira] [Updated] (HIVE-8830) hcatalog process don't exit because of non daemon thread

2014-11-11 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8830:

Attachment: HIVE-8830.2.patch

> hcatalog process don't exit because of non daemon thread
> 
>
> Key: HIVE-8830
> URL: https://issues.apache.org/jira/browse/HIVE-8830
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.15.0
>
> Attachments: HIVE-8830.1.patch, HIVE-8830.2.patch
>
>
> HiveClientCache has a cleanup thread which is not a daemon. It can cause the 
> hcat client process to hang even after its work is complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8805) CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases encountered in JOIN 'avg_cs_ext_discount_amt'

2014-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207548#comment-14207548
 ] 

Hive QA commented on HIVE-8805:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680911/HIVE-8805.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6687 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1742/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1742/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680911 - PreCommit-HIVE-TRUNK-Build

> CBO skipped due to SemanticException: Line 0:-1 Both left and right aliases 
> encountered in JOIN 'avg_cs_ext_discount_amt'
> -
>
> Key: HIVE-8805
> URL: https://issues.apache.org/jira/browse/HIVE-8805
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Laljo John Pullokkaran
> Fix For: 0.14.0
>
> Attachments: HIVE-8805.patch, HIVE-8805.patch
>
>
> Query
> {code}
> set hive.cbo.enable=true
> set hive.stats.fetch.column.stats=true
> set hive.exec.dynamic.partition.mode=nonstrict
> set hive.tez.auto.reducer.parallelism=true
> set hive.auto.convert.join.noconditionaltask.size=32000
> set hive.exec.reducers.bytes.per.reducer=1
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
> set hive.support.concurrency=false
> set hive.tez.exec.print.summary=true
> explain  
> SELECT sum(cs1.cs_ext_discount_amt) as excess_discount_amount
> FROM (SELECT cs.cs_item_sk as cs_item_sk,
>  cs.cs_ext_discount_amt as cs_ext_discount_amt
>  FROM catalog_sales cs
>  JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
>  WHERE d.d_date between '2000-01-27' and '2000-04-27') cs1
> JOIN item i ON (i.i_item_sk = cs1.cs_item_sk)
> JOIN (SELECT cs2.cs_item_sk as cs_item_sk,
>   1.3 * avg(cs_ext_discount_amt) as 
> avg_cs_ext_discount_amt
>FROM (SELECT cs.cs_item_sk as cs_item_sk,
> cs.cs_ext_discount_amt as 
> cs_ext_discount_amt
> FROM catalog_sales cs
> JOIN date_dim d ON (d.d_date_sk = cs.cs_sold_date_sk)
> WHERE d.d_date between '2000-01-27' and '2000-04-27') 
> cs2
> GROUP BY cs2.cs_item_sk) tmp1
> ON (i.i_item_sk = tmp1.cs_item_sk)
> WHERE i.i_manufact_id = 436 and
>cs1.cs_ext_discount_amt > tmp1.avg_cs_ext_discount_amt
> {code}
> Exception
> {code}
> 14/11/07 19:15:38 [main]: ERROR parse.SemanticAnalyzer: CBO failed, skipping 
> CBO. 
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Both left and 
> right aliases encountered in JOIN 'avg_cs_ext_discount_amt'
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2369)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2293)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2249)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:8010)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9678)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9593)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9619)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10053)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(Base

Hive-0.14 - Build # 720 - Still Failing

2014-11-11 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)


Changes for Build #711
[gunther] HIVE-8768: CBO: Fix filter selectivity for 'in clause' & '<>' (Laljo 
John Pullokkaran via Gunther Hagleitner)


Changes for Build #712
[gunther] HIVE-8794: Hive on Tez leaks AMs when killed before first dag is run 
(Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #713
[gunther] HIVE-8798: Some Oracle deadlocks not being caught in TxnHandler (Alan 
Gates via Gunther Hagleitner)


Changes for Build #714
[gunther] HIVE-8800: Update release notes and notice for hive .14 (Gunther 
Hagleitner, reviewed by Prasanth J)

[gunther] HIVE-8799: boatload of missing apache headers (Gunther Hagleitner, 
reviewed by Thejas M Nair)


Changes for Build #715
[gunther] Preparing for release 0.14.0


Changes for Build #716
[gunther] Preparing for release 0.14.0

[gunther] Preparing for release 0.14.0


Changes for Build #717

Changes for Build #718

Changes for Build #719

Changes for Build #720
[gunther] HIVE-8811: Dynamic partition pruning can result in NPE during query 
compilation (Gunther Hagleitner, reviewed by Gopal V)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #720)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/720/ to view 
the results.

[jira] [Commented] (HIVE-8780) insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]

2014-11-11 Thread Chengxiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207537#comment-14207537
 ] 

Chengxiang Li commented on HIVE-8780:
-

Yes, I would like to see what's going on here.

> insert1.q and ppd_join4.q hangs with hadoop-1 [Spark Branch]
> 
>
> Key: HIVE-8780
> URL: https://issues.apache.org/jira/browse/HIVE-8780
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Jimmy Xiang
> Attachments: insert1.q-spark.png, insert1.q.jstack, itests.patch
>
>
> In working on HIVE-8758, found these tests hang at 
> {noformat}
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor.startMoni
> tor(SparkJobMonitor.java:129)
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java
> :111)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.ja
> va:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:2
> 47)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:345)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:832)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.runTest(TestSparkCliDri
> ver.java:3706)
> at 
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_join4
> (TestSparkCliDriver.java:2790)
> {noformat}
> Both tests hang at the same place. There could be other hanging tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7390) Make quote character optional and configurable in BeeLine CSV/TSV output

2014-11-11 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-7390:

Labels:   (was: TODOC14)

Doc covered in HIVE-8615

> Make quote character optional and configurable in BeeLine CSV/TSV output
> 
>
> Key: HIVE-7390
> URL: https://issues.apache.org/jira/browse/HIVE-7390
> Project: Hive
>  Issue Type: New Feature
>  Components: Clients
>Affects Versions: 0.13.1
>Reporter: Jim Halfpenny
>Assignee: Ferdinand Xu
> Fix For: 0.14.0
>
> Attachments: HIVE-7390.1.patch, HIVE-7390.2.patch, HIVE-7390.3.patch, 
> HIVE-7390.4.patch, HIVE-7390.5.patch, HIVE-7390.6.patch, HIVE-7390.7.patch, 
> HIVE-7390.8.patch, HIVE-7390.9.patch, HIVE-7390.patch
>
>
> Currently when either the CSV or TSV output formats are used in beeline each 
> column is wrapped in single quotes. Quote wrapping of columns should be 
> optional and the user should be able to choose the character used to wrap the 
> columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6928) Beeline should not chop off "describe extended" results by default

2014-11-11 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6928:

Labels:   (was: TODOC14)

Added information about truncateTable Beeline option in:
[https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions|https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions]

> Beeline should not chop off "describe extended" results by default
> --
>
> Key: HIVE-6928
> URL: https://issues.apache.org/jira/browse/HIVE-6928
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Reporter: Szehon Ho
>Assignee: Ferdinand Xu
> Fix For: 0.14.0
>
> Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 
> .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, 
> HIVE-6928.patch
>
>
> By default, beeline truncates long results based on the console width like:
> {code}
> +-+--+
> |  col_name   |   
>|
> +-+--+
> | pat_id  | string
>|
> | score   | float 
>|
> | acutes  | float 
>|
> | |   
>|
> | Detailed Table Information  | Table(tableName:refills, dbName:default, 
> owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
> +-+--+
> 5 rows selected (0.4 seconds)
> {code}
> This can be changed by !outputformat, but the default should behave better to 
> give a better experience to the first-time beeline user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8739) handle Derby errors with joins and filters in Direct SQL in a Derby-specific path

2014-11-11 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207532#comment-14207532
 ] 

Sergey Shelukhin commented on HIVE-8739:


https://reviews.apache.org/r/27900/diff/#
Note that the patch has changed a lot.

> handle Derby errors with joins and filters in Direct SQL in a Derby-specific 
> path
> -
>
> Key: HIVE-8739
> URL: https://issues.apache.org/jira/browse/HIVE-8739
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-8739.patch, HIVE-8739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 27900: HIVE-8739

2014-11-11 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27900/
---

Review request for hive.


Repository: hive-git


Description
---

see jira


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
b6c633c 

Diff: https://reviews.apache.org/r/27900/diff/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Commented] (HIVE-8830) hcatalog process don't exit because of non daemon thread

2014-11-11 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207520#comment-14207520
 ] 

Eugene Koifman commented on HIVE-8830:
--

t.setName("HiveClientCache cleaner");
It's a minor thing, but it would be good to add a counter in the ThreadFactory 
and add it to the name so that it's like this "HiveClientCache cleaner-1".
If the thread in the pool dies, a new one is created to replace it.  With the 
counter, we'll know about that in thread dumps and logs.

+1 pending tests


> hcatalog process don't exit because of non daemon thread
> 
>
> Key: HIVE-8830
> URL: https://issues.apache.org/jira/browse/HIVE-8830
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.15.0
>
> Attachments: HIVE-8830.1.patch
>
>
> HiveClientCache has a cleanup thread which is not a daemon. It can cause the 
> hcat client process to hang even after its work is complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6427) Hive Server2 should reopen Metastore client in case of any Thrift exceptions

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6427:
---
Status: Open  (was: Patch Available)

patch needs a rebase

> Hive Server2 should reopen Metastore client in case of any Thrift exceptions
> 
>
> Key: HIVE-6427
> URL: https://issues.apache.org/jira/browse/HIVE-6427
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Andrey Stepachev
>Priority: Critical
> Attachments: HIVE-6427.patch
>
>
> In case of metastore restart hive server doesn't reopen connection to 
> metastore. Any command gives broken pipe or similar exceptions.
> http://paste.ubuntu.com/6926215/
> Any subsequent command doesn't reestablish connection and tries to use stale 
> (closed) connection.
> Looks like we shouldn't blindly convert any MetaException to 
> HiveSQLException, but should distinguish between fatal exceptions and logical 
> exceptions.
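The policy the report suggests can be sketched as a small retry wrapper. The class and exception names below are hypothetical stand-ins, not HiveServer2's actual API: the idea is simply to reopen the client and retry on transport-level failures while letting logical errors propagate unchanged.

```java
import java.util.concurrent.Callable;

// Illustrative retry-on-transport-failure wrapper: reconnect and retry once
// when the connection is broken; rethrow logical errors as-is.
public class RetryOnTransportErrorSketch {
    // Stand-ins for a Thrift transport failure vs. a logical metastore error.
    static class TransportBroken extends RuntimeException {}
    static class LogicalError extends RuntimeException {}

    static <T> T callWithReconnect(Callable<T> call, Runnable reopenClient)
            throws Exception {
        try {
            return call.call();
        } catch (TransportBroken e) {
            reopenClient.run();   // e.g. re-establish the metastore connection
            return call.call();   // one retry on the fresh connection
        }
        // LogicalError (and everything else) propagates unchanged.
    }

    public static void main(String[] args) throws Exception {
        final boolean[] broken = {true};
        String result = callWithReconnect(
            () -> {
                if (broken[0]) throw new TransportBroken();
                return "ok";
            },
            () -> broken[0] = false); // the "reopen" fixes the connection
        System.out.println(result); // ok
    }
}
```

Distinguishing the two exception classes is the key design choice: retrying a logical error would just repeat it, while failing fast on a transport error leaves the server stuck with a stale connection.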



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-4963) Support in memory PTF partitions

2014-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-4963:
-
Labels: TODOC12  (was: )

> Support in memory PTF partitions
> 
>
> Key: HIVE-4963
> URL: https://issues.apache.org/jira/browse/HIVE-4963
> Project: Hive
>  Issue Type: New Feature
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
>  Labels: TODOC12
> Fix For: 0.12.0
>
> Attachments: HIVE-4963.D11955.1.patch, HIVE-4963.D12279.1.patch, 
> HIVE-4963.D12279.2.patch, HIVE-4963.D12279.3.patch, PTFRowContainer.patch
>
>
> PTF partitions apply the defensive mode of assuming that partitions will not 
> fit in memory. Because of this there is a significant deserialization 
> overhead when accessing elements. 
> Allow the user to specify that there is enough memory to hold partitions 
> through a 'hive.ptf.partition.fits.in.mem' option.  
> Savings depends on partition size and in case of windowing the number of 
> UDAFs and the window ranges. For eg for the following (admittedly extreme) 
> case the PTFOperator exec times went from 39 secs to 8 secs.
>  
> {noformat}
> select t, s, i, b, f, d,
> min(t) over(partition by 1 rows between unbounded preceding and current row), 
> min(s) over(partition by 1 rows between unbounded preceding and current row), 
> min(i) over(partition by 1 rows between unbounded preceding and current row), 
> min(b) over(partition by 1 rows between unbounded preceding and current row) 
> from over10k
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-4963) Support in memory PTF partitions

2014-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207516#comment-14207516
 ] 

Lefty Leverenz commented on HIVE-4963:
--

bq.  Could someone either document this on the Wiki or explain it to me?

The wiki doesn't have a section about PTFs yet, and the description of 
*hive.join.cache.size* hasn't been changed since Hive 0.5.0:  "How many rows in 
the joining tables (except the streaming table) should be cached in memory."

So I'm adding a TODOC12 label.  What should the wiki say?

> Support in memory PTF partitions
> 
>
> Key: HIVE-4963
> URL: https://issues.apache.org/jira/browse/HIVE-4963
> Project: Hive
>  Issue Type: New Feature
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.12.0
>
> Attachments: HIVE-4963.D11955.1.patch, HIVE-4963.D12279.1.patch, 
> HIVE-4963.D12279.2.patch, HIVE-4963.D12279.3.patch, PTFRowContainer.patch
>
>
> PTF partitions apply the defensive mode of assuming that partitions will not 
> fit in memory. Because of this there is a significant deserialization 
> overhead when accessing elements. 
> Allow the user to specify that there is enough memory to hold partitions 
> through a 'hive.ptf.partition.fits.in.mem' option.  
> Savings depends on partition size and in case of windowing the number of 
> UDAFs and the window ranges. For eg for the following (admittedly extreme) 
> case the PTFOperator exec times went from 39 secs to 8 secs.
>  
> {noformat}
> select t, s, i, b, f, d,
> min(t) over(partition by 1 rows between unbounded preceding and current row), 
> min(s) over(partition by 1 rows between unbounded preceding and current row), 
> min(i) over(partition by 1 rows between unbounded preceding and current row), 
> min(b) over(partition by 1 rows between unbounded preceding and current row) 
> from over10k
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7255) Allow partial partition spec in analyze command

2014-11-11 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207514#comment-14207514
 ] 

Ashutosh Chauhan commented on HIVE-7255:


updated the cwiki

> Allow partial partition spec in analyze command
> ---
>
> Key: HIVE-7255
> URL: https://issues.apache.org/jira/browse/HIVE-7255
> Project: Hive
>  Issue Type: New Feature
>  Components: Statistics
>Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.14.0
>
> Attachments: HIVE-7255.1.patch, HIVE-7255.2.patch
>
>
> So that stats collection can happen for multiple partitions through one 
> statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7255) Allow partial partition spec in analyze command

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-7255:
---
Affects Version/s: 0.10.0
   0.11.0
   0.12.0
   0.13.0
   0.13.1

> Allow partial partition spec in analyze command
> ---
>
> Key: HIVE-7255
> URL: https://issues.apache.org/jira/browse/HIVE-7255
> Project: Hive
>  Issue Type: New Feature
>  Components: Statistics
>Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.14.0
>
> Attachments: HIVE-7255.1.patch, HIVE-7255.2.patch
>
>
> So that stats collection can happen for multiple partitions through one 
> statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7255) Allow partial partition spec in analyze command

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-7255:
---
Labels:   (was: TODOC14)

> Allow partial partition spec in analyze command
> ---
>
> Key: HIVE-7255
> URL: https://issues.apache.org/jira/browse/HIVE-7255
> Project: Hive
>  Issue Type: New Feature
>  Components: Statistics
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.14.0
>
> Attachments: HIVE-7255.1.patch, HIVE-7255.2.patch
>
>
> So that stats collection can happen for multiple partitions through one 
> statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7168) Don't require to name all columns in analyze statements if stats collection is for all columns

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-7168:
---
Component/s: Statistics

> Don't require to name all columns in analyze statements if stats collection 
> is for all columns
> --
>
> Key: HIVE-7168
> URL: https://issues.apache.org/jira/browse/HIVE-7168
> Project: Hive
>  Issue Type: Improvement
>  Components: Statistics
>Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.14.0
>
> Attachments: HIVE-7168.1.patch, HIVE-7168.2.patch, HIVE-7168.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7168) Don't require to name all columns in analyze statements if stats collection is for all columns

2014-11-11 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-7168:
---
Affects Version/s: 0.10.0
   0.11.0
   0.12.0
   0.13.0
   0.13.1

> Don't require to name all columns in analyze statements if stats collection 
> is for all columns
> --
>
> Key: HIVE-7168
> URL: https://issues.apache.org/jira/browse/HIVE-7168
> Project: Hive
>  Issue Type: Improvement
>  Components: Statistics
>Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.14.0
>
> Attachments: HIVE-7168.1.patch, HIVE-7168.2.patch, HIVE-7168.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

