Hive-0.14 - Build # 716 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)


Changes for Build #711
[gunther] HIVE-8768: CBO: Fix filter selectivity for 'in clause' & '<>' (Laljo 
John Pullokkaran via Gunther Hagleitner)


Changes for Build #712
[gunther] HIVE-8794: Hive on Tez leaks AMs when killed before first dag is run 
(Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #713
[gunther] HIVE-8798: Some Oracle deadlocks not being caught in TxnHandler (Alan 
Gates via Gunther Hagleitner)


Changes for Build #714
[gunther] HIVE-8800: Update release notes and notice for hive .14 (Gunther 
Hagleitner, reviewed by Prasanth J)

[gunther] HIVE-8799: boatload of missing apache headers (Gunther Hagleitner, 
reviewed by Thejas M Nair)


Changes for Build #715
[gunther] Preparing for release 0.14.0


Changes for Build #716
[gunther] Preparing for release 0.14.0

[gunther] Preparing for release 0.14.0




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #716)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/716/ to view 
the results.

[jira] [Updated] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8801:
-
Attachment: HIVE-8801.2.patch

Missed the diff for tez test.

> Make orc_merge_incompat1.q deterministic across platforms
> -
>
> Key: HIVE-8801
> URL: https://issues.apache.org/jira/browse/HIVE-8801
> Project: Hive
>  Issue Type: Test
>Affects Versions: 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Attachments: HIVE-8801.1.patch, HIVE-8801.2.patch
>
>
> orc_merge_incompat1.q tests the ORC fast file merge when there are 
> incompatible files in a partition. The outcome of the merge depends on 
> the order of the files that CombineHiveInputFormat passes to 
> OrcFileMergeOperator. Since the ordering of files is not guaranteed, the 
> result of the merge operation can differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203808#comment-14203808
 ] 

Hive QA commented on HIVE-8801:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680458/HIVE-8801.1.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 6671 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_acid
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge_incompat1
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1708/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1708/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1708/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680458 - PreCommit-HIVE-TRUNK-Build

> Make orc_merge_incompat1.q deterministic across platforms
> -
>
> Key: HIVE-8801
> URL: https://issues.apache.org/jira/browse/HIVE-8801
> Project: Hive
>  Issue Type: Test
>Affects Versions: 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Attachments: HIVE-8801.1.patch
>
>
> orc_merge_incompat1.q tests the ORC fast file merge when there are 
> incompatible files in a partition. The outcome of the merge depends on 
> the order of the files that CombineHiveInputFormat passes to 
> OrcFileMergeOperator. Since the ordering of files is not guaranteed, the 
> result of the merge operation can differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Chao Sun


> On Nov. 8, 2014, 3:15 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 214
> > 
> >
> > This assumes that the resulting SparkWorks will be linearly dependent on 
> > each other, which isn't true in general. Let's say there are two works (w1 
> > and w2), each having a map join operator. w1 and w2 are connected to w3 via 
> > HTS. w3 also contains a map join operator. The dependency in this scenario 
> > will be a graph rather than linear.
> 
> Chao Sun wrote:
> I was thinking, in this case, if there's no dependency between w1 and w2, 
> they can be put in the same SparkWork, right?
> Otherwise, they will form a linear dependency too.
> 
> Xuefu Zhang wrote:
> w1 and w2 are fine; they will be in the same SparkWork. This SparkWork 
> will depend on both the SparkWork generated at w1 and the SparkWork 
> generated at w2. This dependency is not linear.
> 
> In more detail: for each work that has a map join op, we need to 
> create a SparkWork to handle its small tables. So both w1 and w2 will need 
> to create such a SparkWork. While w1 and w2 are in the same SparkWork, this 
> SparkWork depends on the two SparkWorks created.

I'm not getting it: why is this dependency not linear? Can you give a 
counterexample?
Suppose w1 (MJ_1), w2 (MJ_2), and w3 (MJ_3) are like the following:

 HTS_1   HTS_2     HTS_3   HTS_4
    \     /           \     /
     \   /             \   /
      MJ_1              MJ_2
        |                 |
        |                 |
      HTS_5             HTS_6
          \               /
           \             /
            \           /
             \         /
              \       /
               MJ_3

Then, what I'm doing is to put HTS_1, HTS_2, HTS_3, and HTS_4 in the same 
SparkWork, say SW_1; MJ_1, MJ_2, HTS_5, and HTS_6 in another SparkWork, SW_2; 
and MJ_3 in a third SparkWork, SW_3:
SW_1 -> SW_2 -> SW_3.
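For concreteness, the grouping rule described above (start a new SparkWork wherever an HTS work feeds a map-join work) can be sketched as a toy script. The graph encoding, the name-prefix test for MJ/HTS works, and the staging rule are illustrative assumptions, not Hive's actual SparkMapJoinResolver code:

```python
# Toy model of the SparkWork split: works are DAG nodes, and a new stage
# (SparkWork) begins wherever a HashTableSink (HTS) work feeds a MapJoin (MJ).
from collections import defaultdict

# parent -> children edges of the example plan above
edges = {
    "HTS_1": ["MJ_1"], "HTS_2": ["MJ_1"],
    "HTS_3": ["MJ_2"], "HTS_4": ["MJ_2"],
    "MJ_1": ["HTS_5"], "MJ_2": ["HTS_6"],
    "HTS_5": ["MJ_3"], "HTS_6": ["MJ_3"],
    "MJ_3": [],
}

def split_into_spark_works(edges):
    """Assign each work a stage; stage k corresponds to SparkWork SW_{k+1}."""
    parents = defaultdict(list)
    for p, children in edges.items():
        for c in children:
            parents[c].append(p)
    stage = {}

    def stage_of(work):
        if work not in stage:
            # A stage boundary (+1) is crossed only on HTS -> MJ edges.
            stage[work] = max(
                (stage_of(p) + (p.startswith("HTS") and work.startswith("MJ"))
                 for p in parents[work]),
                default=0,
            )
        return stage[work]

    groups = defaultdict(set)
    for work in edges:
        groups[stage_of(work)].add(work)
    return dict(groups)

# stages 0, 1, 2 correspond to SW_1, SW_2, SW_3 in the message above
print(split_into_spark_works(edges))
```

Running it groups HTS_1 through HTS_4 in stage 0, then MJ_1/MJ_2/HTS_5/HTS_6, then MJ_3, matching the SW_1 -> SW_2 -> SW_3 split.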


- Chao


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60482
---


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Xuefu Zhang


> On Nov. 8, 2014, 3:15 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 214
> > 
> >
> > This assumes that the resulting SparkWorks will be linearly dependent on 
> > each other, which isn't true in general. Let's say there are two works (w1 
> > and w2), each having a map join operator. w1 and w2 are connected to w3 via 
> > HTS. w3 also contains a map join operator. The dependency in this scenario 
> > will be a graph rather than linear.
> 
> Chao Sun wrote:
> I was thinking, in this case, if there's no dependency between w1 and w2, 
> they can be put in the same SparkWork, right?
> Otherwise, they will form a linear dependency too.

w1 and w2 are fine; they will be in the same SparkWork. This SparkWork will 
depend on both the SparkWork generated at w1 and the SparkWork generated at w2. 
This dependency is not linear.

In more detail: for each work that has a map join op, we need to create a 
SparkWork to handle its small tables. So both w1 and w2 will need to create 
such a SparkWork. While w1 and w2 are in the same SparkWork, this SparkWork 
depends on the two SparkWorks created.


- Xuefu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60482
---


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



[jira] [Updated] (HIVE-8802) acid_join and insert_nonacid_from_acid tests are failing

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8802:
-
Status: Patch Available  (was: Open)

> acid_join and insert_nonacid_from_acid tests are failing
> 
>
> Key: HIVE-8802
> URL: https://issues.apache.org/jira/browse/HIVE-8802
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-8802.patch
>
>
> HIVE-8745 and HIVE-8710 were committed around the same time, thus the changed 
> qfile results caused by HIVE-8745 weren't picked up in HIVE-8710.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8802) acid_join and insert_nonacid_from_acid tests are failing

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8802:
-
Attachment: HIVE-8802.patch

> acid_join and insert_nonacid_from_acid tests are failing
> 
>
> Key: HIVE-8802
> URL: https://issues.apache.org/jira/browse/HIVE-8802
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-8802.patch
>
>
> HIVE-8745 and HIVE-8710 were committed around the same time, thus the changed 
> qfile results caused by HIVE-8745 weren't picked up in HIVE-8710.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8802) acid_join and insert_nonacid_from_acid tests are failing

2014-11-08 Thread Alan Gates (JIRA)
Alan Gates created HIVE-8802:


 Summary: acid_join and insert_nonacid_from_acid tests are failing
 Key: HIVE-8802
 URL: https://issues.apache.org/jira/browse/HIVE-8802
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.15.0
Reporter: Alan Gates
Assignee: Alan Gates


HIVE-8745 and HIVE-8710 were committed around the same time, thus the changed 
qfile results caused by HIVE-8745 weren't picked up in HIVE-8710.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Chao Sun


> On Nov. 7, 2014, 11:07 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 100
> > 
> >
> > It seems possible that current is an MJWork, right? Are you going to add 
> > it to the target?

Yes, it's possible. But that MJWork will be one whose HTS operators have all 
been handled already, so we can go through it to reach the HTS operators of 
other MJWorks.


> On Nov. 7, 2014, 11:07 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 115
> > 
> >
> > Frankly, I'm not 100% following the logic. The diagram has operators 
> > mixed with works, which makes it hard. But I see where you're coming 
> > from. Maybe you can explain it to me better in person.

Here the operator names (MJ, HTS) stand for the works containing those 
operators, so MJ is a BaseWork containing an MJ operator, and likewise for HTS.
Yes, I think explaining in person would be better.


> On Nov. 7, 2014, 11:07 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 155
> > 
> >
> > I think there is a separate JIRA handling combining mapjoins, owned by 
> > Szehon.

In my understanding, Szehon's JIRA tries to put MJ operators in the same 
BaseWork. But there are some cases where we cannot apply this optimization, and 
the MJ operators end up in different BaseWorks. My work here tries to put them 
in the same SparkWork if there's no dependency among them.


- Chao


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60403
---


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



[jira] [Commented] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203781#comment-14203781
 ] 

Gunther Hagleitner commented on HIVE-8801:
--

LGTM +1

> Make orc_merge_incompat1.q deterministic across platforms
> -
>
> Key: HIVE-8801
> URL: https://issues.apache.org/jira/browse/HIVE-8801
> Project: Hive
>  Issue Type: Test
>Affects Versions: 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Attachments: HIVE-8801.1.patch
>
>
> orc_merge_incompat1.q tests the ORC fast file merge when there are 
> incompatible files in a partition. The outcome of the merge depends on 
> the order of the files that CombineHiveInputFormat passes to 
> OrcFileMergeOperator. Since the ordering of files is not guaranteed, the 
> result of the merge operation can differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8801:
-
Attachment: HIVE-8801.1.patch

Added one more file to the partition. Now there are 3 files written with 
version 0.11 and 3 files written with version 0.12. The outcome of the merge 
will be 4 files regardless of which input file is chosen first: the 3 files 
sharing the first file's version merge into one, and the 3 incompatible files 
are left as-is.
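As a sanity check on the arithmetic above, here is a toy model. It assumes (as the description implies) that all files compatible with the first file chosen merge into a single output file while incompatible files are left untouched; this is an illustrative sketch, not the OrcFileMergeOperator code:

```python
# Count the files left after a fast merge, under the assumption that files
# matching the first file's version merge into one and the rest are untouched.
def merged_file_count(versions):
    first = versions[0]
    compatible = sum(1 for v in versions if v == first)
    return 1 + (len(versions) - compatible)

# 3 files of version 0.11 plus 3 of version 0.12 yield 4 files either way.
print(merged_file_count(["0.11"] * 3 + ["0.12"] * 3))  # 4
print(merged_file_count(["0.12"] * 3 + ["0.11"] * 3))  # 4
```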

> Make orc_merge_incompat1.q deterministic across platforms
> -
>
> Key: HIVE-8801
> URL: https://issues.apache.org/jira/browse/HIVE-8801
> Project: Hive
>  Issue Type: Test
>Affects Versions: 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Attachments: HIVE-8801.1.patch
>
>
> orc_merge_incompat1.q tests the ORC fast file merge when there are 
> incompatible files in a partition. The outcome of the merge depends on 
> the order of the files that CombineHiveInputFormat passes to 
> OrcFileMergeOperator. Since the ordering of files is not guaranteed, the 
> result of the merge operation can differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8801:
-
Status: Patch Available  (was: Open)

> Make orc_merge_incompat1.q deterministic across platforms
> -
>
> Key: HIVE-8801
> URL: https://issues.apache.org/jira/browse/HIVE-8801
> Project: Hive
>  Issue Type: Test
>Affects Versions: 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Attachments: HIVE-8801.1.patch
>
>
> orc_merge_incompat1.q tests the ORC fast file merge when there are 
> incompatible files in a partition. The outcome of the merge depends on 
> the order of the files that CombineHiveInputFormat passes to 
> OrcFileMergeOperator. Since the ordering of files is not guaranteed, the 
> result of the merge operation can differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8542) Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]

2014-11-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203780#comment-14203780
 ] 

Xuefu Zhang commented on HIVE-8542:
---

Hi [~lirui], when the patch is ready, please provide an RB entry. Thanks.

> Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]
> 
>
> Key: HIVE-8542
> URL: https://issues.apache.org/jira/browse/HIVE-8542
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8542.1-spark.patch, HIVE-8542.2-spark.patch, 
> HIVE-8542.3-spark.patch
>
>
> Currently, in the Spark branch, results for these two test files are very 
> different from MR's. We need to find out the cause of this and identify any 
> potential bug in our current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Chao Sun


> On Nov. 8, 2014, 3:15 p.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 214
> > 
> >
> > This assumes that the resulting SparkWorks will be linearly dependent on 
> > each other, which isn't true in general. Let's say there are two works (w1 
> > and w2), each having a map join operator. w1 and w2 are connected to w3 via 
> > HTS. w3 also contains a map join operator. The dependency in this scenario 
> > will be a graph rather than linear.

I was thinking, in this case, if there's no dependency between w1 and w2, they 
can be put in the same SparkWork, right?
Otherwise, they will form a linear dependency too.


- Chao


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60482
---


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Chao Sun


> On Nov. 8, 2014, 12:44 a.m., Szehon Ho wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java,
> >  line 224
> > 
> >
> > I've been thinking about this, as you had brought up a pretty rare 
> > use-case where a big-table parent of mapjoin1 still had an HTS, but it's 
> > for another(!) mapjoin. I don't know if this is still a valid case, but do 
> > you think this handles it, as it just indiscriminately adds it to the 
> > parent map if it has an HTS?

Fixed through an offline chat.


- Chao


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60380
---


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



[jira] [Created] (HIVE-8801) Make orc_merge_incompat1.q deterministic across platforms

2014-11-08 Thread Prasanth J (JIRA)
Prasanth J created HIVE-8801:


 Summary: Make orc_merge_incompat1.q deterministic across platforms
 Key: HIVE-8801
 URL: https://issues.apache.org/jira/browse/HIVE-8801
 Project: Hive
  Issue Type: Test
Affects Versions: 0.15.0
Reporter: Prasanth J
Assignee: Prasanth J


orc_merge_incompat1.q tests the ORC fast file merge when there are incompatible 
files in a partition. The outcome of the merge depends on the order of the 
files that CombineHiveInputFormat passes to OrcFileMergeOperator. Since the 
ordering of files is not guaranteed, the result of the merge operation can 
differ across operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8622) Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203758#comment-14203758
 ] 

Xuefu Zhang commented on HIVE-8622:
---

Here is my pseudocode showing my attempt to solve this seemingly complex problem:
{code}
// Notation:
// MJWork - a work with a map join operator
// HTSWork - a work with a HashTableSinkOperator

// Each MJWork will build a SparkWork for its small-table works. This info is
// held in a map, originally empty, named childSparkWorkMap.
Map<BaseWork, SparkWork> childSparkWorkMap = new HashMap<>();

// Each work, including an MJWork, also belongs to a parent SparkWork.
// Originally, all works belong to the original SparkWork.
// The info is held in another map named parentSparkWorkMap.
Map<BaseWork, SparkWork> parentSparkWorkMap = new HashMap<>();
List<BaseWork> works = sparkWork.getAllWorks(); // sparkWork is the original SparkWork to be split
for (BaseWork work : works) {
  parentSparkWorkMap.put(work, sparkWork);
}

// Dependency map among all SparkWorks. This is our final result.
Map<SparkWork, SparkWork> dependencyMap = new HashMap<>();

// Process the original SparkWork from the leaves backwards to the roots.
List<BaseWork> leaves = sparkWork.getLeaves();
for (BaseWork leaf : leaves) {
  move(leaf, sparkWork);
}

/**
 * Move a work from the original SparkWork to the target SparkWork.
 */
void move(BaseWork work, SparkWork target) {
  List<BaseWork> parents = sparkWork.getParents(work);
  SparkWork currentParentSparkWork = parentSparkWorkMap.get(work);
  if (currentParentSparkWork != target) {
    // TODO: move the work from currentParentSparkWork to target.
    parentSparkWorkMap.put(work, target); // update the new parent
  }

  if (!(work instanceof MJWork)) {
    for (BaseWork parent : parents) {
      // move each parent to the same parent SparkWork as work
      move(parent, target);
    }
  } else {
    // it's an MJWork
    SparkWork childSparkWork = new SparkWork();
    dependencyMap.put(target, childSparkWork);
    childSparkWorkMap.put(work, childSparkWork);
    for (BaseWork parent : parents) {
      if (parent instanceof HTSWork) {
        move(parent, childSparkWork);
      } else {
        move(parent, target);
      }
    }
  }
}
{code}
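The pseudocode above can be exercised on the earlier w1/w2/w3 scenario with a small runnable Python transcription. Classifying works by name prefix and the toy graph below are assumptions for illustration; Hive's real resolver operates on BaseWork/SparkWork objects:

```python
# Python sketch of the leaves-backwards "move" algorithm from the pseudocode.
# Works are strings; "MJ*" works play MJWork, "HTS*" works play HTSWork.
from collections import defaultdict

def split(parents_of, leaves):
    """Return a map: SparkWork id -> set of SparkWork ids it depends on."""
    counter = [0]  # mints SparkWork ids SW1, SW2, ...
    def new_spark_work():
        counter[0] += 1
        return "SW%d" % counter[0]

    original = new_spark_work()       # SW1: the original SparkWork
    assigned = {}                     # work -> SparkWork holding it
    dependency = defaultdict(set)     # SparkWork -> SparkWorks it needs first

    def move(work, target):
        assigned[work] = target       # mirrors parentSparkWorkMap
        if not work.startswith("MJ"):
            for p in parents_of.get(work, []):
                move(p, target)
        else:
            # A map-join work: its small-table (HTS) parents go to a fresh
            # child SparkWork that must run before the target.
            child = new_spark_work()
            dependency[target].add(child)
            for p in parents_of.get(work, []):
                move(p, child if p.startswith("HTS") else target)

    for leaf in leaves:
        move(leaf, original)
    return dict(dependency)

# Xuefu's scenario: MJ works w1 and w2 feed w3 through HTS works.
parents_of = {
    "MJ_w3": ["HTS_a", "HTS_b"],
    "HTS_a": ["MJ_w1"], "HTS_b": ["MJ_w2"],
    "MJ_w1": ["HTS_1", "HTS_2"],
    "MJ_w2": ["HTS_3", "HTS_4"],
}
deps = split(parents_of, leaves=["MJ_w3"])
print(deps)
```

The SparkWork holding MJ_w1 and MJ_w2 ends up depending on two small-table SparkWorks, so the dependency structure is a tree rather than a chain, which is why a single linear ordering is not enough.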

> Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]
> 
>
> Key: HIVE-8622
> URL: https://issues.apache.org/jira/browse/HIVE-8622
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Suhas Satish
>Assignee: Chao
> Attachments: HIVE-8622.2-spark.patch, HIVE-8622.3-spark.patch, 
> HIVE-8622.patch
>
>
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hive-0.14 - Build # 715 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #715
[gunther] Preparing for release 0.14.0




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #715)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/715/ to view 
the results.

Hive-0.14 - Build # 714 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #714
[gunther] HIVE-8800: Update release notes and notice for hive .14 (Gunther 
Hagleitner, reviewed by Prasanth J)

[gunther] HIVE-8799: boatload of missing apache headers (Gunther Hagleitner, 
reviewed by Thejas M Nair)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #714)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/714/ to view 
the results.

[jira] [Resolved] (HIVE-8800) Update release notes and notice for hive .14

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-8800.
--
Resolution: Fixed

Committed to trunk and branch

> Update release notes and notice for hive .14
> 
>
> Key: HIVE-8800
> URL: https://issues.apache.org/jira/browse/HIVE-8800
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8800.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8800) Update release notes and notice for hive .14

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8800:
-
Fix Version/s: 0.14.0

> Update release notes and notice for hive .14
> 
>
> Key: HIVE-8800
> URL: https://issues.apache.org/jira/browse/HIVE-8800
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.0
>
> Attachments: HIVE-8800.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8800) Update release notes and notice for hive .14

2014-11-08 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203744#comment-14203744
 ] 

Prasanth J commented on HIVE-8800:
--

+1

> Update release notes and notice for hive .14
> 
>
> Key: HIVE-8800
> URL: https://issues.apache.org/jira/browse/HIVE-8800
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8800.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8800) Update release notes and notice for hive .14

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8800:
-
Attachment: HIVE-8800.1.patch

> Update release notes and notice for hive .14
> 
>
> Key: HIVE-8800
> URL: https://issues.apache.org/jira/browse/HIVE-8800
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8800.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8800) Update release notes and notice for hive .14

2014-11-08 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-8800:


 Summary: Update release notes and notice for hive .14
 Key: HIVE-8800
 URL: https://issues.apache.org/jira/browse/HIVE-8800
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6012) restore backward compatibility of arithmetic operations

2014-11-08 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203740#comment-14203740
 ] 

Jason Dere commented on HIVE-6012:
--

Yes [~leftylev] that looks correct.

> restore backward compatibility of arithmetic operations
> ---
>
> Key: HIVE-6012
> URL: https://issues.apache.org/jira/browse/HIVE-6012
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Thejas M Nair
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-6012.1.patch, HIVE-6012.2.patch, HIVE-6012.3.patch, 
> HIVE-6012.4.patch, HIVE-6012.5.patch, HIVE-6012.6.patch
>
>
> HIVE-5356 changed the behavior of some of the arithmetic operations, and the 
> change is not backward compatible, as pointed out in this [jira 
> comment|https://issues.apache.org/jira/browse/HIVE-5356?focusedCommentId=13813398&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13813398]
> {code}
> int / int => decimal
> float / float => double
> float * float => double
> float + float => double
> {code}
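The incompatibility can be sketched as a toy type-promotion table. This is a Python illustration only: the "old" column reflects the commonly cited pre-0.13 behavior, and both rule tables are assumptions for illustration, not Hive's actual type resolver.

```python
# Illustrative sketch of the type-promotion change described above
# (post-HIVE-5356 behavior vs. the earlier behavior). The rule tables
# are assumptions for illustration, not Hive internals.

OLD_RULES = {  # assumed pre-HIVE-5356 result types
    ("/", "int", "int"): "double",
    ("/", "float", "float"): "float",
    ("*", "float", "float"): "float",
    ("+", "float", "float"): "float",
}

NEW_RULES = {  # result types after HIVE-5356, per the list above
    ("/", "int", "int"): "decimal",
    ("/", "float", "float"): "double",
    ("*", "float", "float"): "double",
    ("+", "float", "float"): "double",
}

def result_type(rules, op, left, right):
    """Look up the result type of `left op right` under a rule table."""
    return rules[(op, left, right)]

# The backward incompatibility: the same expression yields a new type.
for (op, l, r) in NEW_RULES:
    print(f"{l} {op} {r}: {result_type(OLD_RULES, op, l, r)} -> "
          f"{result_type(NEW_RULES, op, l, r)}")
```

A client that expected `int / int` to come back as `double` now sees `decimal`, which is exactly the compatibility break the ticket restores.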



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8796) TestCliDriver acid tests with decimal needs benchmark to be updated

2014-11-08 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203737#comment-14203737
 ] 

Jason Dere commented on HIVE-8796:
--

Looks right, +1

> TestCliDriver acid tests with decimal needs benchmark to be updated
> ---
>
> Key: HIVE-8796
> URL: https://issues.apache.org/jira/browse/HIVE-8796
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-8796.1.patch
>
>
> testCliDriver_insert_nonacid_from_acid, testCliDriver_acid_join are failing. 
> They were committed around the same time the HIVE-8745 changes to decimal went 
> in, and didn't have the required updates to the decimal output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8542) Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203733#comment-14203733
 ] 

Hive QA commented on HIVE-8542:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680439/HIVE-8542.3-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7234 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_map_join_1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/331/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/331/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-331/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680439 - PreCommit-HIVE-SPARK-Build

> Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]
> 
>
> Key: HIVE-8542
> URL: https://issues.apache.org/jira/browse/HIVE-8542
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8542.1-spark.patch, HIVE-8542.2-spark.patch, 
> HIVE-8542.3-spark.patch
>
>
> Currently, in the Spark branch, results for these two test files are very 
> different from MR's. We need to find out the cause of this, and identify 
> potential bugs in our current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-8799.
--
   Resolution: Fixed
Fix Version/s: 0.14.0

Committed to trunk and .14

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.0
>
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203729#comment-14203729
 ] 

Prasanth J commented on HIVE-8799:
--

ha ha :) completely self-contained name.

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203728#comment-14203728
 ] 

Gunther Hagleitner commented on HIVE-8799:
--

[~prasanth_j] no i meant sit - there's a friggin shell script called "sit" in 
hive.

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203726#comment-14203726
 ] 

Prasanth J commented on HIVE-8799:
--

The change in pom.xml uses "**/sit"; did you mean the "**/site" 
directory?

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203724#comment-14203724
 ] 

Thejas M Nair commented on HIVE-8799:
-

+1

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8799:
-
Description: Adding missing apache headers to a number of files.

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>
> Adding missing apache headers to a number of files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8799:
-
Attachment: HIVE-8799.1.patch

> boatload of missing apache headers
> --
>
> Key: HIVE-8799
> URL: https://issues.apache.org/jira/browse/HIVE-8799
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8799.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8799) boatload of missing apache headers

2014-11-08 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-8799:


 Summary: boatload of missing apache headers
 Key: HIVE-8799
 URL: https://issues.apache.org/jira/browse/HIVE-8799
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hive-0.14 - Build # 713 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #713
[gunther] HIVE-8798: Some Oracle deadlocks not being caught in TxnHandler (Alan 
Gates via Gunther Hagleitner)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #713)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/713/ to view 
the results.

[jira] [Updated] (HIVE-8542) Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]

2014-11-08 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8542:
-
Attachment: HIVE-8542.3-spark.patch

Rebase patch.

> Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]
> 
>
> Key: HIVE-8542
> URL: https://issues.apache.org/jira/browse/HIVE-8542
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8542.1-spark.patch, HIVE-8542.2-spark.patch, 
> HIVE-8542.3-spark.patch
>
>
> Currently, in the Spark branch, results for these two test files are very 
> different from MR's. We need to find out the cause of this, and identify 
> potential bugs in our current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8779) Tez in-place progress UI can show wrong estimated time for sub-second queries

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8779:
-
Issue Type: New Feature  (was: Bug)

> Tez in-place progress UI can show wrong estimated time for sub-second queries
> -
>
> Key: HIVE-8779
> URL: https://issues.apache.org/jira/browse/HIVE-8779
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.14.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Trivial
> Fix For: 0.14.0, 0.15.0
>
> Attachments: HIVE-8779.1.patch
>
>
> The in-place progress update UI added as part of HIVE-8495 can show a wrong 
> estimated time for an AM-only job that goes from the INITED DAG state directly 
> to SUCCEEDED without passing through the RUNNING state.
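A plausible shape of the failure mode, sketched in Python: if the DAG never enters RUNNING, a "started" timestamp is never recorded, and naive elapsed-time math produces nonsense. The field names and the guard below are invented for illustration, not the actual Tez/Hive fix.

```python
# Illustrative guard against an unset RUNNING timestamp. Names are
# hypothetical; this is not the actual HIVE-8779 patch.
import time

def elapsed_seconds(submit_ts, running_ts, now=None):
    """Prefer the RUNNING timestamp; fall back to the submit time when
    the DAG skipped RUNNING (INITED -> SUCCEEDED directly)."""
    now = time.time() if now is None else now
    start = running_ts if running_ts is not None else submit_ts
    return max(0.0, now - start)

# Sub-second query that never reported RUNNING: still a sane estimate.
print(elapsed_seconds(100.0, None, now=100.4))
```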



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7826) Dynamic partition pruning on Tez

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7826:
-
Issue Type: New Feature  (was: Bug)

> Dynamic partition pruning on Tez
> 
>
> Key: HIVE-7826
> URL: https://issues.apache.org/jira/browse/HIVE-7826
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>  Labels: TODOC14, tez
> Fix For: 0.14.0
>
> Attachments: HIVE-7826.1.patch, HIVE-7826.2.patch, HIVE-7826.3.patch, 
> HIVE-7826.4.patch, HIVE-7826.5.patch, HIVE-7826.6.patch, HIVE-7826.7.patch
>
>
> It's natural in a star schema to map one or more dimensions to partition 
> columns. Time or location are likely candidates. 
> It can also be useful to compute the partitions one would like to scan via a 
> subquery (where p in select ... from ...).
> The resulting joins in hive require a full table scan of the large table 
> though, because partition pruning takes place before the corresponding values 
> are known.
> On Tez it's relatively straightforward to send the values needed to prune to 
> the application master - where splits are generated and tasks are submitted. 
> Using these values we can strip out any unneeded partitions dynamically, 
> while the query is running.
> The approach is straightforward:
> - Insert synthetic conditions for each join representing "x in (keys of other 
> side in join)"
> - These conditions will be pushed as far down as possible
> - If the condition hits a table scan and the column involved is a partition 
> column:
>- Setup Operator to send key events to AM
> - else:
>- Remove synthetic predicate
> Add  these properties :
> ||Property||Default Value||
> |{{hive.tez.dynamic.partition.pruning}}|true|
> |{{hive.tez.dynamic.partition.pruning.max.event.size}}|1*1024*1024L|
> |{{hive.tez.dynamic.parition.pruning.max.data.size}}|100*1024*1024L|
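The core idea can be sketched in a few lines: the values observed on the other side of the join are shipped to the AM, which then drops partitions whose key never appeared. Function and variable names below are invented for illustration; this is not Hive's actual implementation.

```python
# Minimal sketch of dynamic partition pruning as described above:
# keep only partitions whose partition-column value satisfies the
# synthetic "x in (keys of other side)" condition.

def prune_partitions(partitions, partition_col, join_key_values):
    """Drop partitions whose key was never seen on the other join side."""
    keys = set(join_key_values)
    return [p for p in partitions if p[partition_col] in keys]

partitions = [
    {"ds": "2014-11-06"},
    {"ds": "2014-11-07"},
    {"ds": "2014-11-08"},
]
# Partition-key values actually produced by the dimension side:
seen = ["2014-11-08"]
print(prune_partitions(partitions, "ds", seen))
```

With this, only the matching partition's splits would be generated, avoiding the full table scan of the large table.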



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8779) Tez in-place progress UI can show wrong estimated time for sub-second queries

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8779:
-
Issue Type: Improvement  (was: New Feature)

> Tez in-place progress UI can show wrong estimated time for sub-second queries
> -
>
> Key: HIVE-8779
> URL: https://issues.apache.org/jira/browse/HIVE-8779
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.14.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Trivial
> Fix For: 0.14.0, 0.15.0
>
> Attachments: HIVE-8779.1.patch
>
>
> The in-place progress update UI added as part of HIVE-8495 can show a wrong 
> estimated time for an AM-only job that goes from the INITED DAG state directly 
> to SUCCEEDED without passing through the RUNNING state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7509) Fast stripe level merging for ORC

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7509:
-
Issue Type: New Feature  (was: Bug)

> Fast stripe level merging for ORC
> -
>
> Key: HIVE-7509
> URL: https://issues.apache.org/jira/browse/HIVE-7509
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.14.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>  Labels: orcfile
> Fix For: 0.14.0
>
> Attachments: HIVE-7509.1.patch, HIVE-7509.2.patch, HIVE-7509.3.patch, 
> HIVE-7509.4.patch, HIVE-7509.5.patch
>
>
> Similar to HIVE-1950, add support for fast stripe-level merging of ORC files 
> through the CONCATENATE command and a conditional merge task. This fast merging 
> is ideal for merging many small ORC files into a larger file without 
> decompressing and decoding the data of the small ORC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7158) Use Tez auto-parallelism in Hive

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7158:
-
Issue Type: New Feature  (was: Bug)

> Use Tez auto-parallelism in Hive
> 
>
> Key: HIVE-7158
> URL: https://issues.apache.org/jira/browse/HIVE-7158
> Project: Hive
>  Issue Type: New Feature
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-7158.1.patch, HIVE-7158.2.patch, HIVE-7158.3.patch, 
> HIVE-7158.4.patch, HIVE-7158.5.patch
>
>
> Tez can optionally sample data from a fraction of the tasks of a vertex and 
> use that information to choose the number of downstream tasks for any given 
> scatter gather edge.
> Hive estimates the count of reducers by looking at stats and estimates for 
> each operator in the operator pipeline leading up to the reducer. However, if 
> this estimate turns out to be too large, Tez can rein in the resources used 
> to compute the reducer.
> It does so by combining partitions of the upstream vertex. It cannot, 
> however, add reducers at this stage.
> I'm proposing to let users specify whether they want to use auto-parallelism 
> or not. If they do there will be scaling factors to determine max and min 
> reducers Tez can choose from. We will then partition by max reducers, letting 
> Tez sample and rein in the count down to the specified min.
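The min/max scaling described above can be sketched as simple clamping: Hive partitions by the max reducer count, and Tez may later merge partitions down toward the min based on sampled output size. The function names and scaling factors below are illustrative assumptions, not Hive's actual configuration values.

```python
# Illustrative sketch of auto-parallelism bounds. Factors are made up.

def reducer_bounds(estimated_reducers, min_factor=0.25, max_factor=2.0):
    """Derive the [min, max] reducer range Tez may choose from."""
    max_r = max(1, int(estimated_reducers * max_factor))
    min_r = max(1, int(estimated_reducers * min_factor))
    return min_r, max_r

def choose_reducers(sampled_bytes, bytes_per_reducer, min_r, max_r):
    """Clamp the data-driven reducer count into the allowed range.
    Tez can only merge partitions, never add them, so the result can
    never exceed max_r (the count Hive originally partitioned by)."""
    wanted = max(1, sampled_bytes // bytes_per_reducer)
    return max(min_r, min(max_r, wanted))

min_r, max_r = reducer_bounds(100)               # stats estimate: 100
print(choose_reducers(10 * 10**6, 10**6, min_r, max_r))  # prints 25
```

If the sampled data turns out small, Tez merges down to the floor (25 here); if it is huge, it stays capped at the max Hive partitioned by.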



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8042) Optionally allow move tasks to run in parallel

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8042:
-
Issue Type: Improvement  (was: Bug)

> Optionally allow move tasks to run in parallel
> --
>
> Key: HIVE-8042
> URL: https://issues.apache.org/jira/browse/HIVE-8042
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.0
>
> Attachments: HIVE-8042.1.patch, HIVE-8042.2.patch, HIVE-8042.3.patch
>
>
> hive.exec.parallel allows one to run different stages of a query in parallel. 
> However that applies only to map-reduce tasks. When using large multi insert 
> queries there are many MoveTasks that are all executed in sequence on the 
> client. There's no real reason for that - they could be run in parallel as 
> well (i.e.: the stage graph captures the dependencies and knows which tasks 
> can happen in parallel).
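The observation above, that the stage graph already encodes which tasks are independent, can be sketched as wave-based parallel execution. Task names and the scheduler shape are invented for illustration; this is not Hive's actual task runner.

```python
# Illustrative sketch: run independent "move" tasks concurrently,
# respecting a dependency map (deps[t] = tasks that must finish first).
from concurrent.futures import ThreadPoolExecutor

def run_stages(tasks, deps):
    """Execute tasks wave by wave; each wave runs in parallel."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and all(d in done for d in deps.get(t, []))]
        with ThreadPoolExecutor() as pool:
            # list(...) forces the lazy map; append is thread-safe in CPython
            list(pool.map(lambda t: order.append(t), ready))
        done.update(ready)
    return order

tasks = ["move1", "move2", "move3", "final"]
deps = {"final": ["move1", "move2", "move3"]}   # moves are independent
print(run_stages(tasks, deps))
```

The three MoveTasks run as one parallel wave instead of in sequence; only "final" waits for all of them.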



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7299) Enable metadata only optimization on Tez

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7299:
-
Issue Type: New Feature  (was: Bug)

> Enable metadata only optimization on Tez
> 
>
> Key: HIVE-7299
> URL: https://issues.apache.org/jira/browse/HIVE-7299
> Project: Hive
>  Issue Type: New Feature
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.0
>
> Attachments: HIVE-7299.1.patch, HIVE-7299.2.patch, HIVE-7299.3.patch, 
> HIVE-7299.4.patch, HIVE-7299.5.patch, HIVE-7299.6.patch
>
>
> Enables the metadata-only optimization (the one with OneNullRowInputFormat, 
> not the query-result-from-stats optimization).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7495) Print dictionary size in orc file dump

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-7495:
-
Issue Type: Improvement  (was: Bug)

> Print dictionary size in orc file dump
> --
>
> Key: HIVE-7495
> URL: https://issues.apache.org/jira/browse/HIVE-7495
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.14.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Minor
>  Labels: orcfile
> Fix For: 0.14.0
>
> Attachments: HIVE-7495.1.patch
>
>
> DICTIONARY_V2 fails to print dictionary size in file dump.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8798) Some Oracle deadlocks not being caught in TxnHandler

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203692#comment-14203692
 ] 

Hive QA commented on HIVE-8798:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680418/HIVE-8798.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 6672 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1707/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1707/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1707/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680418 - PreCommit-HIVE-TRUNK-Build

> Some Oracle deadlocks not being caught in TxnHandler
> 
>
> Key: HIVE-8798
> URL: https://issues.apache.org/jira/browse/HIVE-8798
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Critical
> Fix For: 0.14.0
>
> Attachments: HIVE-8798.patch
>
>
> Oracle seems to give different error codes and different error messages at 
> different times for deadlocks.  There are still some error codes/messages we 
> are missing in TxnHandler.
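The kind of check the ticket describes can be sketched as follows. Oracle reports a deadlock as ORA-00060, but as noted above the error can surface with varying codes and messages, so matching on the message text as well as the numeric code is safer. This is illustrative Python, not the actual Java TxnHandler code.

```python
# Heuristic deadlock detection, sketched for illustration.
DEADLOCK_CODES = {60}                               # ORA-00060
DEADLOCK_MARKERS = ("deadlock detected", "ORA-00060")

def is_oracle_deadlock(error_code, message):
    """Decide whether a database error looks like an Oracle deadlock,
    checking both the vendor code and the message text."""
    if error_code in DEADLOCK_CODES:
        return True
    msg = (message or "").lower()
    return any(m.lower() in msg for m in DEADLOCK_MARKERS)

print(is_oracle_deadlock(60, ""))                              # True
print(is_oracle_deadlock(0, "ORA-00060: deadlock detected"))   # True
print(is_oracle_deadlock(1, "unique constraint violated"))     # False
```

A caller would typically retry the transaction with backoff when this returns True rather than surfacing the error.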



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8736) add ordering to cbo_correctness to make result consistent

2014-11-08 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-8736:
-
Fix Version/s: 0.15.0

> add ordering to cbo_correctness to make result consistent
> -
>
> Key: HIVE-8736
> URL: https://issues.apache.org/jira/browse/HIVE-8736
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.14.0, 0.15.0
>
> Attachments: HIVE-8736.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8797) Simultaneous dynamic inserts can result in "partition already exists" error

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203676#comment-14203676
 ] 

Hive QA commented on HIVE-8797:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680414/HIVE-8797.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6671 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1706/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1706/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1706/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680414 - PreCommit-HIVE-TRUNK-Build

> Simultaneous dynamic inserts can result in "partition already exists" error
> ---
>
> Key: HIVE-8797
> URL: https://issues.apache.org/jira/browse/HIVE-8797
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-8797.patch
>
>
> If two users attempt a dynamic insert into the same new partition at the same 
> time, a possible race condition exists where both will attempt to create the 
> partition and one will fail.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hive-0.14 - Build # 712 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #712
[gunther] HIVE-8794: Hive on Tez leaks AMs when killed before first dag is run 
(Gunther Hagleitner, reviewed by Gopal V)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #712)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/712/ to view 
the results.

[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs

2014-11-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203666#comment-14203666
 ] 

Lefty Leverenz commented on HIVE-4629:
--

Doc note:  HIVE-8785 extends the description of 
*hive.server2.logging.operation.enabled* to "When true, HS2 will save operation 
logs _and make them available for clients_" (emphasis added).

> HS2 should support an API to retrieve query logs
> 
>
> Key: HIVE-4629
> URL: https://issues.apache.org/jira/browse/HIVE-4629
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Shreepadma Venugopalan
>Assignee: Dong Chen
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-4629-no_thrift.1.patch, HIVE-4629.1.patch, 
> HIVE-4629.2.patch, HIVE-4629.3.patch.txt, HIVE-4629.4.patch, 
> HIVE-4629.5.patch, HIVE-4629.6.patch, HIVE-4629.7.patch, HIVE-4629.8.patch, 
> HIVE-4629.9.patch
>
>
> HiveServer2 should support an API to retrieve query logs. This is 
> particularly relevant because HiveServer2 supports async execution but 
> doesn't provide a way to report progress. Providing an API to retrieve query 
> logs will help report progress to the client.
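The client-side polling pattern this API enables can be sketched as follows. This is a hedged illustration, not the actual Thrift API: {{fetchLogs}} stands in for the proposed log-retrieval call, and {{finished}} for an async-status check.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch: while a statement runs asynchronously, repeatedly fetch
// new log lines and surface them as progress until the operation finishes.
public class LogPoller {
    static List<String> drain(Supplier<List<String>> fetchLogs,
                              Supplier<Boolean> finished) {
        List<String> all = new ArrayList<>();
        do {
            // Each call is assumed to return only the lines since the last fetch.
            all.addAll(fetchLogs.get());
        } while (!finished.get());
        return all;
    }
}
```

In a real client the two suppliers would wrap the HS2 calls; here they are plain lambdas so the loop itself can be exercised in isolation.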





[jira] [Updated] (HIVE-8768) CBO: Fix filter selectivity for "in clause" & "<>"

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8768:
-
   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and .14

> CBO: Fix filter selectivity for "in clause" & "<>" 
> ---
>
> Key: HIVE-8768
> URL: https://issues.apache.org/jira/browse/HIVE-8768
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: Laljo John Pullokkaran
>Assignee: Laljo John Pullokkaran
>Priority: Critical
> Fix For: 0.14.0
>
> Attachments: HIVE-8768.1.patch, HIVE-8768.patch
>
>






[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203657#comment-14203657
 ] 

Lefty Leverenz commented on HIVE-8785:
--

Doc note:  This changes the description of 
*hive.server2.logging.operation.enabled* (created by HIVE-4629) and adds 
*hive.server2.logging.operation.verbose* in HiveConf.java, so they need to be 
documented in the HiveServer2 section of Configuration Properties.

* [Configuration Properties -- HiveServer2 | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> ++--+
> |  Explain   |
> ++--+
> | STAGE DEPENDENCIES:|
> |   Stage-0 is a root stage  |
> ||
> | STAGE PLANS:   |
> |   Stage: Stage-0   |
> | Fetch Operator |
> |   limit: 1 |
> |   Processor Tree:  |
> | ListSink   |
> ||
> ++--+
> 10 rows selected (0.1 seconds)
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Concurrency mode is disabled, not creating 
> a lock manage

[jira] [Commented] (HIVE-2691) Specify location of log4j configuration files via configuration properties

2014-11-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203653#comment-14203653
 ] 

Lefty Leverenz commented on HIVE-2691:
--

Doc note:  This added two configuration parameters (*hive.log4j.file* and 
*hive.exec.log4j.file*) to HiveConf.java and the template file in 0.11, so they 
need to be documented in the wiki.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

> Specify location of log4j configuration files via configuration properties
> --
>
> Key: HIVE-2691
> URL: https://issues.apache.org/jira/browse/HIVE-2691
> Project: Hive
>  Issue Type: New Feature
>  Components: Configuration, Logging
>Reporter: Carl Steinbach
>Assignee: Zhenxiao Luo
>  Labels: TODOC11
> Fix For: 0.11.0
>
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1131.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.2.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.3.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.4.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.5.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.6.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D2667.1.patch, HIVE-2691.1.patch.txt, 
> HIVE-2691.2.patch.txt, HIVE-2691.D2667.1.patch
>
>
> Oozie needs to be able to override the default location of the log4j 
> configuration
> files from the Hive command line, e.g:
> {noformat}
> hive -hiveconf hive.log4j.file=/home/carl/hive-log4j.properties -hiveconf 
> hive.log4j.exec.file=/home/carl/hive-exec-log4j.properties
> {noformat}
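The override amounts to a lookup with a fallback, which can be sketched as below. The {{resolve}} helper and the default path are illustrative assumptions; only the *hive.log4j.file* property name comes from this issue.

```java
import java.util.Properties;

// Hedged sketch (not Hive's actual code): prefer an explicitly configured
// log4j file, else fall back to the bundled default configuration.
public class Log4jFileResolver {
    static String resolve(Properties conf, String defaultPath) {
        String explicit = conf.getProperty("hive.log4j.file");
        return (explicit != null && !explicit.isEmpty()) ? explicit : defaultPath;
    }
}
```

The resolved path would then be handed to log4j's configurator at startup, which is what lets Oozie point Hive at its own configuration file.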





[jira] [Updated] (HIVE-2691) Specify location of log4j configuration files via configuration properties

2014-11-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-2691:
-
Labels: TODOC11  (was: )

> Specify location of log4j configuration files via configuration properties
> --
>
> Key: HIVE-2691
> URL: https://issues.apache.org/jira/browse/HIVE-2691
> Project: Hive
>  Issue Type: New Feature
>  Components: Configuration, Logging
>Reporter: Carl Steinbach
>Assignee: Zhenxiao Luo
>  Labels: TODOC11
> Fix For: 0.11.0
>
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1131.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.2.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.3.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.4.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.5.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.6.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2691.D2667.1.patch, HIVE-2691.1.patch.txt, 
> HIVE-2691.2.patch.txt, HIVE-2691.D2667.1.patch
>
>
> Oozie needs to be able to override the default location of the log4j 
> configuration
> files from the Hive command line, e.g:
> {noformat}
> hive -hiveconf hive.log4j.file=/home/carl/hive-log4j.properties -hiveconf 
> hive.log4j.exec.file=/home/carl/hive-exec-log4j.properties
> {noformat}





[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8794:
-
   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to .14 and trunk.

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.14.0
>
> Attachments: HIVE-8794.1.patch, HIVE-8794.2.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that 
> happens, it may be too late.
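As a hedged illustration of that mechanism (the class names and the list-based stand-in for {{Runtime.addShutdownHook}} are mine, not Hive's code): a hook registered from a static initializer exists only once that class happens to be loaded, whereas registering it eagerly at session start closes the window.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the leak and the fix. Registration is modeled as
// appending to a list so the class-loading behavior is observable in a test;
// real code would call Runtime.getRuntime().addShutdownHook(...).
public class HookDemo {
    static final List<String> registered = new ArrayList<>();

    // Lazy style: the hook appears only when Monitor is first used,
    // because the static initializer runs at class-load time.
    static class Monitor {
        static { registered.add("monitor-hook"); }
        static void use() { }
    }

    // Eager style: called unconditionally when the session starts,
    // so the cleanup hook exists even if no dag is ever run.
    static void startSession() {
        registered.add("session-hook");
    }
}
```

Until something touches {{Monitor}}, the lazy hook simply does not exist, which is the gap the patch closes.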





[jira] [Updated] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-8785:
-
Labels: TODOC14  (was: )

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> ++--+
> |  Explain   |
> ++--+
> | STAGE DEPENDENCIES:|
> |   Stage-0 is a root stage  |
> ||
> | STAGE PLANS:   |
> |   Stage: Stage-0   |
> | Fetch Operator |
> |   limit: 1 |
> |   Processor Tree:  |
> | ListSink   |
> ||
> ++--+
> 10 rows selected (0.1 seconds)
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Concurrency mode is disabled, not creating 
> a lock manager
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Starting command: explain select count(1) 
> from testing.foo
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959403 end=1415262959405 duration=2 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apach

[jira] [Commented] (HIVE-8772) zookeeper info logs are always printed from beeline with service discovery mode

2014-11-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203630#comment-14203630
 ] 

Lefty Leverenz commented on HIVE-8772:
--

bq.  it should be documented in HiveServer2 Clients – Beeline

Okay, although a link from the logging section in Getting Started wouldn't 
hurt.  Thanks.

> zookeeper info logs are always printed from beeline with service discovery 
> mode
> ---
>
> Key: HIVE-8772
> URL: https://issues.apache.org/jira/browse/HIVE-8772
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-8772.1.patch
>
>
> Log messages like the following are being printed by ZooKeeper for beeline 
> commands, and there is no way to suppress them using beeline command-line 
> options (--silent or --verbose).
> {noformat}
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.vendor=Oracle Corporation
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.class.path=/usr/hdp/2.2.0.0-1756/hadoop/conf:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.0.0-1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jetty-util-6.1.26.hwx
> .jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.0.0-1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/mysql-connector-java.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/mockito-all-1.8
> .5.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.0.0-
> 1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.0.
> 0-1756/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/protobuf-jav
> a-2.5.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jettison-1.1.jar:/us
> r/hdp/2.2.0.0-1756/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.0.
> 0-1
> {noformat}





[jira] [Commented] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203625#comment-14203625
 ] 

Hive QA commented on HIVE-8794:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680406/HIVE-8794.2.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 6671 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1705/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1705/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1705/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680406 - PreCommit-HIVE-TRUNK-Build

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch, HIVE-8794.2.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that 
> happens, it may be too late.





[jira] [Updated] (HIVE-8772) zookeeper info logs are always printed from beeline with service discovery mode

2014-11-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-8772:
-
Labels: TODOC14  (was: )

> zookeeper info logs are always printed from beeline with service discovery 
> mode
> ---
>
> Key: HIVE-8772
> URL: https://issues.apache.org/jira/browse/HIVE-8772
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>  Labels: TODOC14
> Fix For: 0.14.0
>
> Attachments: HIVE-8772.1.patch
>
>
> Log messages like the following are being printed by ZooKeeper for beeline 
> commands, and there is no way to suppress them using beeline command-line 
> options (--silent or --verbose).
> {noformat}
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.vendor=Oracle Corporation
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre
> 14/11/04 16:05:47 INFO zookeeper.ZooKeeper: Client 
> environment:java.class.path=/usr/hdp/2.2.0.0-1756/hadoop/conf:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-plugins-cred-0.4.0.2.2.0.0-1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jetty-util-6.1.26.hwx
> .jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-hdfs-plugin-0.4.0.2.2.0.0-1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/mysql-connector-java.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/mockito-all-1.8
> .5.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-framework-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/ranger-plugins-audit-0.4.0.2.2.0.0-
> 1756.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-client-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/junit-4.11.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.2.0.
> 0-1756/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/asm-3.2.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/protobuf-jav
> a-2.5.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jettison-1.1.jar:/us
> r/hdp/2.2.0.0-1756/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/htrace-core-3.0.4.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.2.0.0-1756/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.2.0.
> 0-1
> {noformat}





Hive-0.14 - Build # 711 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)


Changes for Build #711
[gunther] HIVE-8768: CBO: Fix filter selectivity for 'in clause' & '<>' (Laljo 
John Pullokkaran via Gunther Hagleitner)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #711)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/711/ to view 
the results.

[jira] [Commented] (HIVE-8796) TestCliDriver acid tests with decimal needs benchmark to be updated

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203591#comment-14203591
 ] 

Hive QA commented on HIVE-8796:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680402/HIVE-8796.1.patch

{color:green}SUCCESS:{color} +1 6672 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1704/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1704/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1704/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680402 - PreCommit-HIVE-TRUNK-Build

> TestCliDriver acid tests with decimal needs benchmark to be updated
> ---
>
> Key: HIVE-8796
> URL: https://issues.apache.org/jira/browse/HIVE-8796
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-8796.1.patch
>
>
> testCliDriver_insert_nonacid_from_acid and testCliDriver_acid_join are failing. 
> They were committed around the same time the HIVE-8745 decimal changes went in, 
> and didn't have the required updates to the decimal output.





[jira] [Commented] (HIVE-8798) Some Oracle deadlocks not being caught in TxnHandler

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203584#comment-14203584
 ] 

Gunther Hagleitner commented on HIVE-8798:
--

+1

> Some Oracle deadlocks not being caught in TxnHandler
> 
>
> Key: HIVE-8798
> URL: https://issues.apache.org/jira/browse/HIVE-8798
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Critical
> Fix For: 0.14.0
>
> Attachments: HIVE-8798.patch
>
>
> Oracle seems to give different error codes and different error messages at 
> different times for deadlocks.  There are still some error codes/messages we 
> are missing in TxnHandler.





[jira] [Updated] (HIVE-8798) Some Oracle deadlocks not being caught in TxnHandler

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8798:
-
Fix Version/s: 0.14.0
   Status: Patch Available  (was: Open)

> Some Oracle deadlocks not being caught in TxnHandler
> 
>
> Key: HIVE-8798
> URL: https://issues.apache.org/jira/browse/HIVE-8798
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Critical
> Fix For: 0.14.0
>
> Attachments: HIVE-8798.patch
>
>
> Oracle seems to give different error codes and different error messages at 
> different times for deadlocks.  There are still some error codes/messages we 
> are missing in TxnHandler.





[jira] [Updated] (HIVE-8798) Some Oracle deadlocks not being caught in TxnHandler

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8798:
-
Attachment: HIVE-8798.patch

Added another message to the list of messages interpreted as a deadlock when 
coming from Oracle.

> Some Oracle deadlocks not being caught in TxnHandler
> 
>
> Key: HIVE-8798
> URL: https://issues.apache.org/jira/browse/HIVE-8798
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Critical
> Fix For: 0.14.0
>
> Attachments: HIVE-8798.patch
>
>
> Oracle seems to give different error codes and different error messages at 
> different times for deadlocks.  There are still some error codes/messages we 
> are missing in TxnHandler.





[jira] [Created] (HIVE-8798) Some Oracle deadlocks not being caught in TxnHandler

2014-11-08 Thread Alan Gates (JIRA)
Alan Gates created HIVE-8798:


 Summary: Some Oracle deadlocks not being caught in TxnHandler
 Key: HIVE-8798
 URL: https://issues.apache.org/jira/browse/HIVE-8798
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
Priority: Critical


Oracle seems to give different error codes and different error messages at 
different times for deadlocks.  There are still some error codes/messages we 
are missing in TxnHandler.
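A hedged sketch of the kind of check involved (illustrative, not the actual TxnHandler code): Oracle's canonical deadlock error is ORA-00060 ("deadlock detected while waiting for resource"), but since, as noted above, the codes and messages vary, detection has to match on both.

```java
import java.sql.SQLException;

// Illustrative deadlock classifier for Oracle errors. The method name and
// structure are assumptions; only ORA-00060 and the message text are
// standard Oracle behavior.
public class DeadlockCheck {
    static boolean looksLikeOracleDeadlock(SQLException e) {
        String msg = e.getMessage() == null ? "" : e.getMessage().toLowerCase();
        return e.getErrorCode() == 60            // vendor code for ORA-00060
            || msg.contains("deadlock detected"); // fallback on message text
    }
}
```

A caller would typically retry the transaction when this returns true, which is why missing a code or message variant leaves deadlocks unhandled.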





Hive-0.14 - Build # 710 - Still Failing

2014-11-08 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)




No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #710)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/710/ to view 
the results.

[jira] [Commented] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203571#comment-14203571
 ] 

Gunther Hagleitner commented on HIVE-8794:
--

Nice. I was using Class.forName first, but didn't like the string. This is 
better.

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch, HIVE-8794.2.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that 
> class is loaded, the hook is never installed, which may be too late.





[jira] [Updated] (HIVE-8797) Simultaneous dynamic inserts can result in "partition already exists" error

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8797:
-
Status: Patch Available  (was: Open)

> Simultaneous dynamic inserts can result in "partition already exists" error
> ---
>
> Key: HIVE-8797
> URL: https://issues.apache.org/jira/browse/HIVE-8797
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-8797.patch
>
>
> If two users attempt a dynamic insert into the same new partition at the same 
> time, a possible race condition exists where both will attempt to create the 
> partition and one will fail.  





[jira] [Updated] (HIVE-8797) Simultaneous dynamic inserts can result in "partition already exists" error

2014-11-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-8797:
-
Attachment: HIVE-8797.patch

This patch changes Hive.getPartition to fall back from creating the partition 
to adding its files to the existing partition when the createPartition call 
fails with an "already exists" error.
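The fallback described above can be sketched as a create-or-reuse pattern. The sketch below uses an illustrative in-memory model (`PartitionRace`, `getOrCreatePartition`); it is not the actual Hive metastore API.

```java
// Sketch of the create-or-reuse pattern HIVE-8797 describes: if two writers
// race to create the same partition, the loser falls back to using the
// partition that now exists instead of failing. Names are illustrative.
import java.util.concurrent.ConcurrentHashMap;

public class PartitionRace {
    static final ConcurrentHashMap<String, String> partitions = new ConcurrentHashMap<>();

    // Stand-in for metastore createPartition: throws if the partition exists.
    static void createPartition(String name) {
        if (partitions.putIfAbsent(name, name) != null) {
            throw new IllegalStateException("AlreadyExistsException: " + name);
        }
    }

    // The patched behavior: try to create, and on "already exists" reuse the
    // partition another writer just created rather than propagating the error.
    static String getOrCreatePartition(String name) {
        try {
            createPartition(name);
        } catch (IllegalStateException alreadyExists) {
            // Another writer won the race; fall through and use its partition.
        }
        return partitions.get(name);
    }

    public static void main(String[] args) {
        // Two "simultaneous" dynamic inserts targeting the same new partition:
        // both succeed, and both end up writing to the same partition.
        String p1 = getOrCreatePartition("ds=2014-11-08");
        String p2 = getOrCreatePartition("ds=2014-11-08");
        System.out.println(p1.equals(p2));
    }
}
```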






[jira] [Created] (HIVE-8797) Simultaneous dynamic inserts can result in "partition already exists" error

2014-11-08 Thread Alan Gates (JIRA)
Alan Gates created HIVE-8797:


 Summary: Simultaneous dynamic inserts can result in "partition 
already exists" error
 Key: HIVE-8797
 URL: https://issues.apache.org/jira/browse/HIVE-8797
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates


If two users attempt a dynamic insert into the same new partition at the same 
time, a possible race condition exists where both will attempt to create the 
partition and one will fail.  





[jira] [Updated] (HIVE-8764) Windows: HiveServer2 TCP SSL cannot recognize localhost

2014-11-08 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-8764:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk and the 0.14 branch. Thanks for the review [~thejas].

> Windows: HiveServer2 TCP SSL cannot recognize localhost 
> 
>
> Key: HIVE-8764
> URL: https://issues.apache.org/jira/browse/HIVE-8764
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.14.0
>
> Attachments: HIVE-8764.1.patch
>
>
> Seen on Windows with HS2 running in binary mode (http mode works fine; so 
> does using dynamic service discovery). Previously, JDBC clients could use 
> localhost:port to connect to the server; now they explicitly need to specify 
> hostname:port. With ZooKeeper indirection, however, this is not an issue 
> because URIs on ZK are added in the hostname:port format anyway.





Hive-0.14 - Build # 709 - Still Failing

2014-11-08 Thread Apache Jenkins Server


No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #709)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/709/ to view 
the results.

[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8794:
--
Status: Patch Available  (was: Open)






[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8794:
--
Status: Open  (was: Patch Available)






[jira] [Updated] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8785:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk and 0.14 branch.
Thanks for the reviews [~gopalv] [~hagleitn]

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> ++--+
> |  Explain   |
> ++--+
> | STAGE DEPENDENCIES:|
> |   Stage-0 is a root stage  |
> ||
> | STAGE PLANS:   |
> |   Stage: Stage-0   |
> | Fetch Operator |
> |   limit: 1 |
> |   Processor Tree:  |
> | ListSink   |
> ||
> ++--+
> 10 rows selected (0.1 seconds)
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Concurrency mode is disabled, not creating 
> a lock manager
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Starting command: explain select count(1) 
> from testing.foo
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959403 end=1415262959405 duration=2 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:
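The selectivity being requested can be sketched as a logger-name allow-list. The sketch below is illustrative (`SelectiveDivert` and the chosen logger names are assumptions); the real change is inside HiveServer2's LogDivertAppender.

```java
// Sketch of making a log-diverting appender selective: only events from a
// small set of high-signal loggers (e.g. Driver progress) are forwarded to
// beeline getLogs, while parser/optimizer chatter like the dump above is
// dropped. Prefixes are illustrative, not the actual allow-list.
import java.util.Set;

public class SelectiveDivert {
    // Loggers considered useful operation progress for the client.
    static final Set<String> ALLOWED = Set.of(
            "org.apache.hadoop.hive.ql.Driver",
            "org.apache.hadoop.hive.ql.exec.Task");

    // Decide per event whether it should reach the per-operation log buffer.
    static boolean shouldDivert(String loggerName) {
        return ALLOWED.stream().anyMatch(loggerName::startsWith);
    }
}
```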

[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-8794:
--
Attachment: HIVE-8794.2.patch






[jira] [Commented] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203524#comment-14203524
 ] 

Gopal V commented on HIVE-8794:
---

Don't like it, it is too complicated: the private final + static was 
sufficient and thread-safe (because static class initialization is).

Now this isn't. An easier fix would be to do

{code}
public static void initShutdownHook() {
  Preconditions.checkArgument(shutdownList != null,
      "Unexpected initialization case for shutdown sessions list");
}
{code}

This would run the previous code exactly as before, under the JVM class lock, 
but won't need any active code within the init calls.
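The class-initialization idiom under discussion can be sketched as below. This is a hedged sketch: `MonitorShutdown` stands in for TezJobMonitor, and all names are illustrative.

```java
// Sketch of the idiom: the shutdown hook is registered in a static
// initializer, which the JVM runs exactly once under the class-init lock, so
// no extra synchronization is needed. An init method whose body does no real
// work exists only to force the class to load (and thus install the hook)
// early, before any sessions are started.
import java.util.ArrayList;
import java.util.List;

public class MonitorShutdown {
    private static final List<String> shutdownList = new ArrayList<>();

    static {
        // Runs once, at class load time, serialized by the JVM.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            for (String session : shutdownList) {
                System.out.println("closing leaked session: " + session);
            }
        }));
    }

    // Calling this early forces the static block above to run, so the hook
    // is in place before the first DAG is submitted.
    public static void initShutdownHook() {
        if (shutdownList == null) {
            throw new IllegalStateException(
                "Unexpected initialization case for shutdown sessions list");
        }
    }

    public static int register(String session) {
        shutdownList.add(session);
        return shutdownList.size();
    }
}
```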

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that 
> class is loaded, the hook is never installed, which may be too late.





[jira] [Updated] (HIVE-8796) TestCliDriver acid tests with decimal needs benchmark to be updated

2014-11-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8796:

Status: Patch Available  (was: Open)

> TestCliDriver acid tests with decimal needs benchmark to be updated
> ---
>
> Key: HIVE-8796
> URL: https://issues.apache.org/jira/browse/HIVE-8796
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-8796.1.patch
>
>
> testCliDriver_insert_nonacid_from_acid and testCliDriver_acid_join are 
> failing. They were committed around the same time the HIVE-8745 changes to 
> decimal went in, and didn't have the required updates to the decimal output.





[jira] [Updated] (HIVE-8796) TestCliDriver acid tests with decimal needs benchmark to be updated

2014-11-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-8796:

Attachment: HIVE-8796.1.patch

[~jdere] Can you please review this q.out file update?


> TestCliDriver acid tests with decimal needs benchmark to be updated
> ---
>
> Key: HIVE-8796
> URL: https://issues.apache.org/jira/browse/HIVE-8796
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.15.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-8796.1.patch
>
>
> testCliDriver_insert_nonacid_from_acid and testCliDriver_acid_join are 
> failing. They were committed around the same time the HIVE-8745 changes to 
> decimal went in, and didn't have the required updates to the decimal output.





[jira] [Created] (HIVE-8796) TestCliDriver acid tests with decimal needs benchmark to be updated

2014-11-08 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-8796:
---

 Summary: TestCliDriver acid tests with decimal needs benchmark to 
be updated
 Key: HIVE-8796
 URL: https://issues.apache.org/jira/browse/HIVE-8796
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.15.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


testCliDriver_insert_nonacid_from_acid and testCliDriver_acid_join are failing. 
They were committed around the same time the HIVE-8745 changes to decimal went 
in, and didn't have the required updates to the decimal output.






[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203510#comment-14203510
 ] 

Thejas M Nair commented on HIVE-8785:
-

Test failures are unrelated.


> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch

Re: Review Request 27699: HIVE-8435: Add identity project remover optimization

2014-11-08 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27699/
---

(Updated Nov. 8, 2014, 5:26 p.m.)


Review request for hive and Ashutosh Chauhan.


Repository: hive-git


Description
---

Patch with the most conservative approach of project remover optimization.

Still four tests failing with CliDriver:
- lateral_view.q
- load_dyn_part15_test.q
- multi_insert_lateral_view.q
- ppd_field_garbage.q
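The conservative check this review describes can be sketched as below. The model is illustrative (`IdentityProject`, `isIdentity` are assumed names); the real logic lives in the new IdentityProjectRemover and the SelectOperator changes in the diff.

```java
// Sketch of the identity-project-remover idea: a SELECT that emits exactly
// its input columns, unchanged and in their original order, adds nothing and
// can be spliced out of the operator tree (parent wired directly to child).
// The conservative approach only removes projections that pass this check.
import java.util.List;

public class IdentityProject {
    // colRefs models a projection: output column i reads input column
    // colRefs.get(i). Identity means a full, in-order, uncomputed pass-through.
    static boolean isIdentity(List<Integer> colRefs, int numInputCols) {
        if (colRefs.size() != numInputCols) {
            return false; // drops or duplicates columns
        }
        for (int i = 0; i < colRefs.size(); i++) {
            if (colRefs.get(i) != i) {
                return false; // reordered or computed column
            }
        }
        return true;
    }
}
```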


Diffs (updated)
-

  accumulo-handler/src/test/results/positive/accumulo_queries.q.out 
254eeaba4b8d633c63c706c0c74bb1165089 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
87f67d128e01117c5b950c2a3d25a662427b230d 
  contrib/src/test/results/clientpositive/lateral_view_explode2.q.out 
74a7e1719f8e026aaecd53fc147258620a75ccc4 
  hbase-handler/src/test/results/positive/hbase_queries.q.out 
b1e7936738b1121c14132909178646290ee8b4d5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SelectOperator.java 
95d2d76c80aa59b62e9464f704523d921302d401 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/IdentityProjectRemover.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java 
5be0e4540a6843c6b40cb5c22db6e90e1f0da922 
  ql/src/test/queries/clientpositive/identity_proj_remove.q PRE-CREATION 
  ql/src/test/results/clientnegative/udf_assert_true.q.out 
4a5b30de3b20e560b3f064d6a4e5ccab8539f85e 
  ql/src/test/results/clientnegative/udf_assert_true2.q.out 
3684a3f6c5c4d8f49c3cd2dd3fdfbcd85afda473 
  ql/src/test/results/clientpositive/annotate_stats_groupby.q.out 
718b43c6e0fc2c28981f8caf0f38c1360e69837d 
  ql/src/test/results/clientpositive/auto_join0.q.out 
9261ce02f3cfcfd9f048f15fe7357846bb386c31 
  ql/src/test/results/clientpositive/auto_join10.q.out 
3d2bcc216dea80522002f149e5777a73ca52fe5b 
  ql/src/test/results/clientpositive/auto_join11.q.out 
8dbad6724475b71dc53d1198c77e36dfe752484e 
  ql/src/test/results/clientpositive/auto_join12.q.out 
037116c2c6994fe8bc7bccbb89950a13854cd9af 
  ql/src/test/results/clientpositive/auto_join13.q.out 
0cb9b4ffc460121584887f395eb1697bd53013c3 
  ql/src/test/results/clientpositive/auto_join16.q.out 
f96bae3590f5e26b059458650dd508b3dd4b1235 
  ql/src/test/results/clientpositive/auto_join18.q.out 
0de3f2a2c8ca5646071fb852c838337b76aab9f9 
  ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out 
46559a746f51fa3ad516629220bcf0f31bef685a 
  ql/src/test/results/clientpositive/auto_join24.q.out 
1fa3e6ea54f809c529d4ec7b50d5d5191284939f 
  ql/src/test/results/clientpositive/auto_join26.q.out 
d494d95785283b7083820d0defaadb351f783085 
  ql/src/test/results/clientpositive/auto_join27.q.out 
c16992f2bed4de9dd23dcfbe004825f37abbe56e 
  ql/src/test/results/clientpositive/auto_join30.q.out 
608ca22323e3b4f1900dd5077a7aecf54d8a8ca2 
  ql/src/test/results/clientpositive/auto_join31.q.out 
b0df20270ba3dbb9115c529c50aaca5d13d57a95 
  ql/src/test/results/clientpositive/auto_join32.q.out 
bc2d56c0199133e84efd213dff1538173f1686c7 
  ql/src/test/results/clientpositive/auto_smb_mapjoin_14.q.out 
2583d9a50d4a07db50dca7f88c6db141c392a3b8 
  ql/src/test/results/clientpositive/auto_sortmerge_join_1.q.out 
5a7f174a52d60028f524a7aac14a9b326d060af8 
  ql/src/test/results/clientpositive/auto_sortmerge_join_10.q.out 
7606dd2adcd43ca410e66e0c8f1799084fa4f39e 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out 
8372a6312a2fe85fd78f0c6da0665164b49b320c 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out 
3c30a315d9028fda114def015e41a6171341153a 
  ql/src/test/results/clientpositive/auto_sortmerge_join_14.q.out 
69bd43af9a8210b19cbea17181f90bf707d93e85 
  ql/src/test/results/clientpositive/auto_sortmerge_join_15.q.out 
10b20d84eb06a30ed3655e346431bc52dfb486fe 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 
72242bbd713baa216d41c40749f9c732271102cb 
  ql/src/test/results/clientpositive/auto_sortmerge_join_3.q.out 
35fa02fa60f6c50d6acf55ed3fae1570a644c1e1 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 
4fea70d4e47bbd75530e92f5b2a8be2edd66bdbd 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out 
1904cc246729a8d3fd2cd1815e563b50e261da6a 
  ql/src/test/results/clientpositive/auto_sortmerge_join_6.q.out 
e5e2a6a770d5064df944c69576d81d07b1d95c77 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out 
abb1db4a87e6b8e820ff7df53d21a4036254b098 
  ql/src/test/results/clientpositive/auto_sortmerge_join_8.q.out 
9226dc6b2929c2b185f5904bd607a7b18e356dca 
  ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out 
1a7fdf9650f3e5650400ecc24177637856701536 
  ql/src/test/results/clientpositive/bucket_map_join_1.q.out 
b194a2be3e39c0294df14c00fe69c6d6f9283702 
  ql/src/test/results/clientpositive/bucket_map_join_2.q.out 
07c887854179e333e4c68d02c247216b1c06dee7 
  ql/src/test/results/clientpositive/bucketcontext_1.q.out 
0ea304dbff3

Hive-0.14 - Build # 708 - Still Failing

2014-11-08 Thread Apache Jenkins Server


No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #708)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/708/ to view 
the results.

[jira] [Commented] (HIVE-8542) Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203483#comment-14203483
 ] 

Hive QA commented on HIVE-8542:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680390/HIVE-8542.2-spark.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/330/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/330/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-330/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-SPARK-Build-330/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-spark-source ]]
+ [[ ! -d apache-svn-spark-source/.svn ]]
+ [[ ! -d apache-svn-spark-source ]]
+ cd apache-svn-spark-source
+ svn revert -R .
Reverted 'itests/src/test/resources/testconfiguration.properties'
Reverted 'ql/src/test/results/clientpositive/spark/semijoin.q.out'
Reverted 'ql/src/test/results/clientpositive/spark/groupby_sort_skew_1_23.q.out'
Reverted 'ql/src/test/results/clientpositive/spark/groupby_sort_1_23.q.out'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkProcContext.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java'
++ svn status --no-ignore
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target shims/scheduler/target 
packaging/target hbase-handler/target testutils/target jdbc/target 
metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target 
itests/src/test/resources/testconfiguration.properties.orig 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
itests/qtest-spark/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen spark-client/target contrib/target service/target 
serde/target beeline/target cli/target odbc/target 
ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientpositive/spark/stats1.q.out
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1637574.

At revision 1637574.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680390 - PreCommit-HIVE-SPARK-Build

> Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]
> 
>
> Key: HIVE-8542
> URL: https://issues.apache.org/jira/browse/HIVE-8542
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8542.1-spark.patch, HIVE-8542.2-spark.patch
>

[jira] [Updated] (HIVE-8542) Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]

2014-11-08 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8542:
-
Attachment: HIVE-8542.2-spark.patch

Fix some bugs and update golden files.
Most of the changes made to the golden files are in the shuffle edge type, e.g. 
SHUFFLE_GROUP is not added as the default type, SHUFFLE_GROUP won't work with 
SHUFFLE_SORT, etc. Only one golden file has a new result: 
{{groupby3_map_skew.q.out}}. I checked the new result and it's the same as the 
MR version, so I suppose that's alright.

> Enable groupby_map_ppr.q and groupby_map_ppr_multi_distinct.q [Spark Branch]
> 
>
> Key: HIVE-8542
> URL: https://issues.apache.org/jira/browse/HIVE-8542
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chao
>Assignee: Rui Li
> Attachments: HIVE-8542.1-spark.patch, HIVE-8542.2-spark.patch
>
>
> Currently, in the Spark branch, results for these two test files are very 
> different from MR's. We need to find out the cause of this and identify the 
> potential bug in our current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27627: Split map-join plan into 2 SparkTasks in 3 stages [Spark Branch]

2014-11-08 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27627/#review60482
---



ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java


This assumes that the resulting SparkWorks will be linearly dependent on each 
other, which isn't true in general. Let's say there are two works (w1 and w2), 
each having a map join operator. w1 and w2 are connected to w3 via HTS. w3 also 
contains a map join operator. The dependency in this scenario will be a graph 
rather than linear.
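To illustrate the comment above, here is a minimal, hypothetical sketch (the names `WorkGraph`, `link`, and `topoOrder` are invented and are not Hive's classes) of why a DAG of works needs a topological traversal rather than a linear one:

```java
import java.util.*;

// Hypothetical model: each work may have several parents (e.g. w1 and w2
// both feeding w3 via HTS), so dependencies form a DAG, not a chain.
public class WorkGraph {
    static Map<String, List<String>> parents = new HashMap<>();

    static void link(String parent, String child) {
        parents.computeIfAbsent(child, k -> new ArrayList<>()).add(parent);
    }

    // Topological order: repeatedly emit works whose parents have all been
    // emitted. A linear scan assuming a chain would miss this structure.
    static List<String> topoOrder(Set<String> works) {
        List<String> order = new ArrayList<>();
        while (order.size() < works.size()) {
            for (String w : works) {
                if (!order.contains(w)
                        && order.containsAll(parents.getOrDefault(w, List.of()))) {
                    order.add(w);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // The scenario from the comment: w1 and w2 both feed w3.
        link("w1", "w3");
        link("w2", "w3");
        System.out.println(topoOrder(new LinkedHashSet<>(List.of("w1", "w2", "w3"))));
    }
}
```

The point is only that w3 must be processed after both w1 and w2; the actual SparkMapJoinResolver handling is what the review is discussing.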


- Xuefu Zhang


On Nov. 7, 2014, 6:07 p.m., Chao Sun wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/27627/
> ---
> 
> (Updated Nov. 7, 2014, 6:07 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-8622
> https://issues.apache.org/jira/browse/HIVE-8622
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This is a sub-task of map-join for spark 
> https://issues.apache.org/jira/browse/HIVE-7613
> This can use the baseline patch for map-join
> https://issues.apache.org/jira/browse/HIVE-8616
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/SparkWork.java 66fd6b6 
> 
> Diff: https://reviews.apache.org/r/27627/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Chao Sun
> 
>



[jira] [Commented] (HIVE-8793) Make sure multi-insert works with map join [Spark Branch]

2014-11-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203451#comment-14203451
 ] 

Xuefu Zhang commented on HIVE-8793:
---

We need to treat this as a general case. Handling it seems straightforward: we 
just need to make the processing (splitSparkWork) from HIVE-8118 happen before 
MapJoinResolver.

> Make sure multi-insert works with map join [Spark Branch]
> -
>
> Key: HIVE-8793
> URL: https://issues.apache.org/jira/browse/HIVE-8793
> Project: Hive
>  Issue Type: Task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chao
>
> Currently, HIVE-8622 is implemented based on the assumption that, for a map 
> join query, a BaseWork would not have multiple children. Testing with 
> subquery_multiinsert.q did suggest that's the case, but we need to 
> investigate this further and make sure it holds in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8795) Switch precommit test from local to local-cluster [Spark Branch]

2014-11-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8795:
--
Issue Type: Sub-task  (was: Task)
Parent: HIVE-7292

> Switch precommit test from local to local-cluster [Spark Branch]
> 
>
> Key: HIVE-8795
> URL: https://issues.apache.org/jira/browse/HIVE-8795
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>
>  It seems unlikely that the Spark community will provide an MRMiniCluster 
> equivalent (SPARK-3691), and Spark local-cluster was the recommendation. The 
> latest research shows that Spark local-cluster works with Hive. Therefore, 
> for now, we use Spark local-cluster (instead of the current local mode) for 
> our precommit test.
> It was previously believed (HIVE-7382) that a Spark installation is required 
> and the SPARK_HOME env variable needs to be set. Since Hive pulls in Spark's 
> assembly jar, it's now believed we only need a few scripts from the Spark 
> installation instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8795) Switch precommit test from local to local-cluster [Spark Branch]

2014-11-08 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-8795:
-

 Summary: Switch precommit test from local to local-cluster [Spark 
Branch]
 Key: HIVE-8795
 URL: https://issues.apache.org/jira/browse/HIVE-8795
 Project: Hive
  Issue Type: Task
  Components: Spark
Reporter: Xuefu Zhang


 It seems unlikely that the Spark community will provide an MRMiniCluster 
equivalent (SPARK-3691), and Spark local-cluster was the recommendation. The 
latest research shows that Spark local-cluster works with Hive. Therefore, for 
now, we use Spark local-cluster (instead of the current local mode) for our 
precommit test.

It was previously believed (HIVE-7382) that a Spark installation is required 
and the SPARK_HOME env variable needs to be set. Since Hive pulls in Spark's 
assembly jar, it's now believed we only need a few scripts from the Spark 
installation instead.
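As a sketch of what the switch might look like (illustrative values only; the exact ptest wiring is not shown in this thread), Spark's master URL syntax for this mode is {{local-cluster[workers,coresPerWorker,memoryPerWorkerMB]}}:

```properties
# before: plain local mode
spark.master=local
# after: a pseudo-distributed cluster of 2 workers, 2 cores / 1024 MB each
spark.master=local-cluster[2,2,1024]
```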



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203408#comment-14203408
 ] 

Hive QA commented on HIVE-8794:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680384/HIVE-8794.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6672 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1703/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1703/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1703/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680384 - PreCommit-HIVE-TRUNK-Build

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that - 
> that might be too late.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203371#comment-14203371
 ] 

Hive QA commented on HIVE-8785:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680378/HIVE-8785.5.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6672 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1702/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1702/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1702/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680378 - PreCommit-HIVE-TRUNK-Build

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
>

[jira] [Commented] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203348#comment-14203348
 ] 

Gunther Hagleitner commented on HIVE-8794:
--

[~gopalv] can you take a look?

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that - 
> that might be too late.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8794:
-
Attachment: HIVE-8794.1.patch

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that - 
> that might be too late.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8794:
-
Status: Patch Available  (was: Open)

> Hive on Tez leaks AMs when killed before first dag is run
> -
>
> Key: HIVE-8794
> URL: https://issues.apache.org/jira/browse/HIVE-8794
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8794.1.patch
>
>
> The shutdown hook that guards against this kind of leakage is only set up 
> when the TezJobMonitor class is loaded. If you kill the shell before that - 
> that might be too late.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8794) Hive on Tez leaks AMs when killed before first dag is run

2014-11-08 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-8794:


 Summary: Hive on Tez leaks AMs when killed before first dag is run
 Key: HIVE-8794
 URL: https://issues.apache.org/jira/browse/HIVE-8794
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


The shutdown hook that guards against this kind of leakage is only set up when 
the TezJobMonitor class is loaded. If the shell is killed before that class is 
loaded, the hook was never installed and the AM leaks.
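A minimal Java sketch of the failure mode (the `HookDemo`/`Monitor` names are hypothetical, not Hive's code): a shutdown hook registered in a static initializer is only installed once the class is actually loaded, so a process killed before that point never runs the cleanup:

```java
public class HookDemo {
    static volatile boolean cleaned = false;

    // Mirrors the lazy pattern: the hook is added when the class
    // initializes, i.e. the first time Monitor is referenced.
    static class Monitor {
        static final Thread HOOK = new Thread(() -> cleaned = true);
        static {
            Runtime.getRuntime().addShutdownHook(HOOK);
        }
    }

    public static void main(String[] args) {
        // Up to this line no hook exists: killing the process here would
        // leak whatever the hook was meant to clean up (here, the AM).
        Thread t = Monitor.HOOK;  // forces class init; hook now installed
        System.out.println("hook installed: " + (t != null));
    }
}
```

This only illustrates lazy class initialization; the actual fix in HIVE-8794.1.patch may take a different approach.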



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8484) HCatalog throws an exception if Pig job is of type 'fetch'

2014-11-08 Thread Lorand Bendig (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203332#comment-14203332
 ] 

Lorand Bendig commented on HIVE-8484:
-

Thank you, Daniel!

> HCatalog throws an exception if Pig job is of type 'fetch'
> --
>
> Key: HIVE-8484
> URL: https://issues.apache.org/jira/browse/HIVE-8484
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.14.0
>Reporter: Lorand Bendig
> Fix For: 0.14.0
>
> Attachments: HIVE-8484.patch
>
>
> When Pig tries to retrieve result in fetch mode through HCatalog then 
> HCatLoader#setLocation(String location, Job job) can't set the outputschema 
> because HCatUtil#checkJobContextIfRunningFromBackend(job) always returns 
> false :
> {code}
> public static boolean checkJobContextIfRunningFromBackend(JobContext j) {
>   if (j.getConfiguration().get("mapred.task.id", "").equals("") &&
>       !("true".equals(j.getConfiguration().get("pig.illustrating")))) {
>     return false;
>   }
>   return true;
> }
> {code}
> This is because in fetch mode we don't have a mapred.task.id. A null 
> outputschema will raise an exception when HCatBaseLoader#getNext() is called: 
> (ERROR 6018: Error converting read value to tuple).
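Restated outside Hadoop for illustration (a plain `Map` stands in for the JobContext configuration; `BackendCheck` is a hypothetical name), the check reduces to:

```java
import java.util.*;

// Standalone restatement of checkJobContextIfRunningFromBackend: the job is
// considered "backend" unless there is no task id and Pig isn't illustrating.
public class BackendCheck {
    static boolean runningFromBackend(Map<String, String> conf) {
        boolean noTaskId = conf.getOrDefault("mapred.task.id", "").equals("");
        boolean illustrating = "true".equals(conf.get("pig.illustrating"));
        return !(noTaskId && !illustrating);
    }

    public static void main(String[] args) {
        // Fetch mode sets no mapred.task.id, so the check reports "not
        // backend" and HCatLoader never sets the output schema.
        System.out.println(runningFromBackend(new HashMap<>()));     // false
        System.out.println(runningFromBackend(
            Map.of("mapred.task.id", "attempt_x")));                 // true
    }
}
```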



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hive-0.14 - Build # 707 - Still Failing

2014-11-08 Thread Apache Jenkins Server


No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #707)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/707/ to view 
the results.

[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203317#comment-14203317
 ] 

Hive QA commented on HIVE-8785:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12680353/HIVE-8785.3.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6672 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_nonacid_from_acid
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1701/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1701/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1701/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12680353 - PreCommit-HIVE-TRUNK-Build

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
>

[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203300#comment-14203300
 ] 

Gunther Hagleitner commented on HIVE-8785:
--

Lol. Thanks [~thejas]

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> ++--+
> |  Explain   |
> ++--+
> | STAGE DEPENDENCIES:|
> |   Stage-0 is a root stage  |
> ||
> | STAGE PLANS:   |
> |   Stage: Stage-0   |
> | Fetch Operator |
> |   limit: 1 |
> |   Processor Tree:  |
> | ListSink   |
> ||
> ++--+
> 10 rows selected (0.1 seconds)
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Concurrency mode is disabled, not creating 
> a lock manager
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Starting command: explain select count(1) 
> from testing.foo
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959403 end=1415262959405 duration=2 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log

[jira] [Commented] (HIVE-8785) HiveServer2 LogDivertAppender should be more selective for beeline getLogs

2014-11-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14203301#comment-14203301
 ] 

Gunther Hagleitner commented on HIVE-8785:
--

Yes - that's my only change.

> HiveServer2 LogDivertAppender should be more selective for beeline getLogs
> --
>
> Key: HIVE-8785
> URL: https://issues.apache.org/jira/browse/HIVE-8785
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Thejas M Nair
> Fix For: 0.14.0
>
> Attachments: HIVE-8785.1.patch, HIVE-8785.2.patch, HIVE-8785.3.patch, 
> HIVE-8785.4.patch, HIVE-8785.4.patch, HIVE-8785.5.patch
>
>
> A simple query run via beeline JDBC like {{explain select count(1) from 
> testing.foo;}} produces 50 lines of output which looks like 
> {code}
> 0: jdbc:hive2://localhost:10002> explain select count(1) from testing.foo;
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parsing command: explain select 
> count(1) from testing.foo
> 14/11/06 00:35:59 INFO parse.ParseDriver: Parse Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959379 end=1415262959380 duration=1 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic 
> Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Get metadata for destination 
> tables
> 14/11/06 00:35:59 INFO ql.Context: New scratch dir is 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed getting MetaData in 
> Semantic Analysis
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Set stats collection dir : 
> hdfs://cn041-10.l42scl.hortonworks.com:8020/tmp/hive/gopal/6b3980f6-3238-4e91-ae53-cb3f54092dab/hive_2014-11-06_00-35-59_379_317426424610374080-1/-ext-10002
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for FS(16)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(15)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(14)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for RS(13)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for GBY(12)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for SEL(11)
> 14/11/06 00:35:59 INFO ppd.OpProcFactory: Processing for TS(10)
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> oldColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO optimizer.ColumnPrunerProcFactory: RS 13 
> newColExprMap: {VALUE._col0=Column[_col0]}
> 14/11/06 00:35:59 INFO parse.SemanticAnalyzer: Completed plan generation
> 14/11/06 00:35:59 INFO ql.Driver: Semantic Analysis Completed
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959381 end=1415262959401 duration=20 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], 
> properties:null)
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959378 end=1415262959402 duration=24 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> ++--+
> |  Explain   |
> ++--+
> | STAGE DEPENDENCIES:|
> |   Stage-0 is a root stage  |
> ||
> | STAGE PLANS:   |
> |   Stage: Stage-0   |
> | Fetch Operator |
> |   limit: 1 |
> |   Processor Tree:  |
> | ListSink   |
> ||
> ++--+
> 10 rows selected (0.1 seconds)
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Concurrency mode is disabled, not creating 
> a lock manager
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO ql.Driver: Starting command: explain select count(1) 
> from testing.foo
> 14/11/06 00:35:59 INFO log.PerfLogger:  start=1415262959403 end=1415262959405 duration=2 
> from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 INFO log.PerfLogger:  from=org.apache.hadoop.hive.ql.Driver>
> 14/11/06 00:35:59 I
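
The fix discussed here makes HiveServer2's LogDivertAppender selective about which logger names are forwarded to beeline's getLogs stream, instead of diverting everything. As a rough illustration of that idea only (the class name and whitelist below are hypothetical, not the actual HIVE-8785 patch), a name-prefix filter might look like:

```java
import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical sketch of a selective log filter: only events whose logger
 * name starts with a whitelisted prefix are diverted to the client-visible
 * operation log. The prefixes are illustrative, not the exact list used
 * by the actual patch.
 */
public class SelectiveLogFilter {
    // Loggers that report query progress and are useful to a beeline user.
    private static final List<String> WHITELIST = Arrays.asList(
        "org.apache.hadoop.hive.ql.Driver",
        "org.apache.hadoop.hive.ql.exec.Task",
        "org.apache.hadoop.hive.ql.session.SessionState");

    /** Returns true if an event from this logger should reach getLogs. */
    public boolean shouldDivert(String loggerName) {
        for (String prefix : WHITELIST) {
            if (loggerName.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```

With a filter along these lines, the verbose log.PerfLogger and ppd.OpProcFactory lines in the output above would no longer be streamed to the client unless verbose operation logging were explicitly enabled.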
