[jira] [Assigned] (HIVE-9711) ORC Vectorization DoubleColumnVector.isRepeating=false if all entries are NaN

2015-03-02 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V reassigned HIVE-9711:
-

Assignee: Gopal V

> ORC Vectorization DoubleColumnVector.isRepeating=false if all entries are NaN
> -
>
> Key: HIVE-9711
> URL: https://issues.apache.org/jira/browse/HIVE-9711
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 1.2.0
>Reporter: Gopal V
>Assignee: Gopal V
>
> The isRepeating=true check uses Java == equality, under which NaN != NaN, so 
> an all-NaN column is reported as non-repeating.
> The noNulls case needs the current check folded into the previous loop, while 
> the hasNulls case needs a logical AND of the isNull[] field instead of == 
> comparisons.
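The pitfall can be reproduced outside Hive. A minimal sketch follows; the class and method names are illustrative, not Hive's actual DoubleColumnVector code:

```java
// Demonstrates the bug class described above: with Java's ==, NaN is unequal
// to itself, so an equality-based isRepeating scan reports an all-NaN column
// as non-repeating. Comparing raw bits instead (Double.doubleToLongBits
// collapses every NaN to one canonical bit pattern) treats NaN entries as equal.
public class NaNRepeatCheck {
    // Buggy variant: relies on !=, which is true for any NaN pair.
    static boolean isRepeatingNaive(double[] vector) {
        for (int i = 1; i < vector.length; i++) {
            if (vector[i] != vector[0]) {
                return false; // taken for NaN even when all entries are NaN
            }
        }
        return true;
    }

    // NaN-safe variant: compare bit patterns instead of numeric values.
    static boolean isRepeatingBitwise(double[] vector) {
        long first = Double.doubleToLongBits(vector[0]);
        for (int i = 1; i < vector.length; i++) {
            if (Double.doubleToLongBits(vector[i]) != first) {
                return false;
            }
        }
        return true;
    }
}
```

Note that the bitwise variant also treats NaNs with different payloads as equal, which is usually the desired semantics for a repeat check.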



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344635#comment-14344635
 ] 

Hive QA commented on HIVE-9182:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702036/HIVE-9182.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7587 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_context_ngrams
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2925/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2925/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2925/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702036 - PreCommit-HIVE-TRUNK-Build

> avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
> -
>
> Key: HIVE-9182
> URL: https://issues.apache.org/jira/browse/HIVE-9182
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Abdelrahman Shettia
> Fix For: 1.2.0
>
> Attachments: HIVE-9182.1.patch
>
>
> File systems such as s3 and wasb (Azure) don't implement the Hadoop FileSystem 
> ACL functionality.
> Hadoop23Shims has code that calls getAclStatus on file systems.
> Instead of calling getAclStatus and catching the exception, we can first check 
> FsPermission#getAclBit.
> Additionally, instead of catching all exceptions for calls to getAclStatus 
> and ignoring them, it is better to catch only UnsupportedOperationException.
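The proposed call pattern can be sketched as follows. The Fs interface below is a simplified stand-in for Hadoop's FileSystem/FsPermission API, used only to keep the sketch self-contained:

```java
// Sketch of the pattern proposed above: consult the ACL bit before issuing
// the getAclStatus RPC, and catch only UnsupportedOperationException rather
// than swallowing every exception. Fs is a stand-in, not Hadoop's real class.
final class AclProbeSketch {
    interface Fs {
        boolean aclBit();        // analog of FsPermission#getAclBit
        String aclStatus();      // analog of FileSystem#getAclStatus (an RPC)
    }

    // Returns the ACL status, or null when the filesystem has no ACLs.
    static String tryGetAclStatus(Fs fs) {
        if (!fs.aclBit()) {
            return null;         // no ACL entries: skip the RPC entirely
        }
        try {
            return fs.aclStatus();
        } catch (UnsupportedOperationException e) {
            return null;         // e.g. s3/wasb: ACLs not implemented
        }
        // Any other exception propagates instead of being silently ignored.
    }
}
```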





[jira] [Commented] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344576#comment-14344576
 ] 

Hive QA commented on HIVE-9779:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702033/HIVE-9779.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7586 tests executed
*Failed tests:*
{noformat}
TestCustomAuthentication - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2924/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2924/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2924/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702033 - PreCommit-HIVE-TRUNK-Build

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
>  Labels: patch
> Fix For: 1.2.0
>
> Attachments: HIVE-9779-testing.xlsx, HIVE-9779.2.patch
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.
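The fix direction can be sketched as below. HookCtx is a simplified stand-in for Hive's HookContext, not its real API; the sketch only illustrates the idea of preferring the context-supplied end user over the process identity:

```java
// Illustrates the fix direction described above: with doAs=false every query
// runs as the HiveServer2 process user, so the user recorded in ATS must come
// from the hook context (which carries the end user), not the process identity.
final class AtsUserSketch {
    interface HookCtx {
        String endUserName();   // submitting user, as seen at the HS2 front end
    }

    static String userToLog(HookCtx ctx, String processUser) {
        String endUser = ctx.endUserName();
        // Prefer the end user; fall back to the process user only if absent.
        return (endUser != null && !endUser.isEmpty()) ? endUser : processUser;
    }
}
```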





[jira] [Commented] (HIVE-9792) Support interval type in expressions/predicates

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344478#comment-14344478
 ] 

Hive QA commented on HIVE-9792:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701256/HIVE-9792.2.patch

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 7635 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_interval_arithmetic
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_interval_2
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_interval_3
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_arithmetic_type
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFOPMinus.testIntervalDayTimeMinusIntervalDayTime
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFOPMinus.testIntervalYearMonthMinusIntervalYearMonth
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFOPPlus.testIntervalDayTimePlusIntervalDayTime
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFOPPlus.testIntervalYearMonthPlusIntervalYearMonth
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2923/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2923/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2923/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701256 - PreCommit-HIVE-TRUNK-Build

> Support interval type in expressions/predicates 
> 
>
> Key: HIVE-9792
> URL: https://issues.apache.org/jira/browse/HIVE-9792
> Project: Hive
>  Issue Type: Sub-task
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-9792.1.patch, HIVE-9792.2.patch
>
>
> Provide partial support for the interval year-month/interval day-time types 
> in Hive. Intervals will be usable in expressions/predicates/joins:
> {noformat}
>   select birthdate + interval '30-0' year to month as thirtieth_birthday
>   from table
>   where (current_timestamp - ts1 < interval '3 0:0:0' day to second)
> {noformat}
> This stops short of making the interval types usable as a storable column 
> type.





[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-02 Thread Xin Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344456#comment-14344456
 ] 

Xin Hao commented on HIVE-9659:
---

Hi Rui, I tried to verify this issue based on HIVE-9659.1-spark.patch, and it 
seems the issue still exists. Could you update Big-Bench to the latest version 
and double-check? (Q12 was updated recently.) Thanks.

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> ---
>
> Key: HIVE-9659
> URL: https://issues.apache.org/jira/browse/HIVE-9659
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xin Hao
>Assignee: Rui Li
> Attachments: HIVE-9659.1-spark.patch
>
>
> We found that 'Error while trying to create table container' occurs during 
> Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
> If hive.optimize.skewjoin is set to 'false', the case passes.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run Big-Bench case Q12 and it will fail. 
> Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you will 
> find the error 'Error while trying to create table container' in the log, and 
> also a NullPointerException near the end of the log.
> (a) Detail error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>   at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>   ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>   ... 22 more
> 15/02/12 01:29

[jira] [Resolved] (HIVE-9837) LLAP: Decision to use llap or uber is being lost in some reducers

2015-03-02 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-9837.
--
   Resolution: Fixed
Fix Version/s: llap

Committed to branch.

> LLAP: Decision to use llap or uber is being lost in some reducers
> -
>
> Key: HIVE-9837
> URL: https://issues.apache.org/jira/browse/HIVE-9837
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: llap
>
> Attachments: HIVE-9837.1.patch
>
>






[jira] [Commented] (HIVE-9837) LLAP: Decision to use llap or uber is being lost in some reducers

2015-03-02 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344436#comment-14344436
 ] 

Gunther Hagleitner commented on HIVE-9837:
--

cc [~sseth] [~vikram.dixit]

> LLAP: Decision to use llap or uber is being lost in some reducers
> -
>
> Key: HIVE-9837
> URL: https://issues.apache.org/jira/browse/HIVE-9837
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9837.1.patch
>
>






[jira] [Updated] (HIVE-9837) LLAP: Decision to use llap or uber is being lost in some reducers

2015-03-02 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-9837:
-
Attachment: HIVE-9837.1.patch

> LLAP: Decision to use llap or uber is being lost in some reducers
> -
>
> Key: HIVE-9837
> URL: https://issues.apache.org/jira/browse/HIVE-9837
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9837.1.patch
>
>






[jira] [Commented] (HIVE-9810) prep object registry for multi threading

2015-03-02 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344433#comment-14344433
 ] 

Gunther Hagleitner commented on HIVE-9810:
--

Test failures are unrelated, committed to trunk.

> prep object registry for multi threading
> 
>
> Key: HIVE-9810
> URL: https://issues.apache.org/jira/browse/HIVE-9810
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 1.2.0
>
> Attachments: HIVE-9810.1.patch, HIVE-9810.2.patch
>
>
> The object registry relies on the fact that only one thread at a time is 
> active in a container. With llap that's not the case: multiple threads may 
> try to generate the same cache object at the same time, etc.
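One standard way to make such a registry safe for concurrent generators is a per-key compute-once map. A minimal sketch, not Hive's actual ObjectCache:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of a registry that tolerates concurrent lookups: with
// ConcurrentHashMap.computeIfAbsent, the generator for a given key runs at
// most once even when several threads request the same cache object at the
// same time. Illustrative only.
final class ThreadSafeRegistrySketch {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    Object retrieve(String key, Supplier<Object> generator) {
        // Concurrent callers for the same key block until the first
        // computation finishes, then all observe the same instance.
        return cache.computeIfAbsent(key, k -> generator.get());
    }
}
```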





[jira] [Updated] (HIVE-9729) LLAP: design and implement proper metadata cache

2015-03-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9729:
---
Fix Version/s: llap

> LLAP: design and implement proper metadata cache
> 
>
> Key: HIVE-9729
> URL: https://issues.apache.org/jira/browse/HIVE-9729
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> Simple approach: add external priorities to data cache, read metadata parts 
> of orc file into it. Advantage: simple; consistent management (no need to 
> coordinate sizes and eviction between data and metadata caches, etc); 
> disadvantage - have to decode every time.
> Maybe add decoded metadata cache on top - fixed size, small and 
> opportunistic? Or some other approach.
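The "single cache with external priorities" idea above can be sketched as follows. All names, the fixed priority levels, and the eviction policy details are illustrative assumptions, not Hive's cache implementation:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch: data and metadata share one byte budget; metadata entries carry a
// higher priority; eviction removes the lowest-priority (then oldest) entry
// first, so metadata stays resident while data is reclaimed.
final class PriorityCacheSketch {
    static final int PRI_DATA = 0, PRI_METADATA = 1;

    private static final class Entry {
        final String key; final int priority, sizeBytes; final long seq;
        Entry(String key, int priority, int sizeBytes, long seq) {
            this.key = key; this.priority = priority;
            this.sizeBytes = sizeBytes; this.seq = seq;
        }
    }

    private final long capacityBytes;
    private long usedBytes = 0, nextSeq = 0;
    private final Map<String, Entry> byKey = new HashMap<>();
    // Eviction order: lowest priority first, oldest first within a priority.
    private final TreeMap<Entry, byte[]> evictOrder = new TreeMap<>(
        Comparator.<Entry>comparingInt(e -> e.priority).thenComparingLong(e -> e.seq));

    PriorityCacheSketch(long capacityBytes) { this.capacityBytes = capacityBytes; }

    void put(String key, byte[] value, int priority) {
        // Evict until the new entry fits within the shared byte budget.
        while (usedBytes + value.length > capacityBytes && !evictOrder.isEmpty()) {
            Entry victim = evictOrder.firstKey();
            evictOrder.remove(victim);
            byKey.remove(victim.key);
            usedBytes -= victim.sizeBytes;
        }
        Entry e = new Entry(key, priority, value.length, nextSeq++);
        byKey.put(key, e);
        evictOrder.put(e, value);
        usedBytes += value.length;
    }

    boolean contains(String key) { return byKey.containsKey(key); }
}
```

As the description notes, the trade-off of keeping serialized metadata in this shared cache is re-decoding on every read; a small decoded cache layered on top would address that.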





[jira] [Commented] (HIVE-9729) LLAP: design and implement proper metadata cache

2015-03-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344404#comment-14344404
 ] 

Sergey Shelukhin commented on HIVE-9729:


The approach from the description will be taken.

> LLAP: design and implement proper metadata cache
> 
>
> Key: HIVE-9729
> URL: https://issues.apache.org/jira/browse/HIVE-9729
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> Simple approach: add external priorities to data cache, read metadata parts 
> of orc file into it. Advantage: simple; consistent management (no need to 
> coordinate sizes and eviction between data and metadata caches, etc); 
> disadvantage - have to decode every time.
> Maybe add decoded metadata cache on top - fixed size, small and 
> opportunistic? Or some other approach.





[jira] [Commented] (HIVE-9836) Hive on tez: fails when virtual columns are present in the join conditions (for e.g. partition columns)

2015-03-02 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344389#comment-14344389
 ] 

Gunther Hagleitner commented on HIVE-9836:
--

+1

> Hive on tez: fails when virtual columns are present in the join conditions 
> (for e.g. partition columns)
> ---
>
> Key: HIVE-9836
> URL: https://issues.apache.org/jira/browse/HIVE-9836
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-9836.1.patch
>
>
> {code}
> explain
> select a.key, a.value, b.value
> from tab a join tab_part b on a.key = b.key and a.ds = b.ds;
> {code}
> fails.





[jira] [Commented] (HIVE-9831) HiveServer2 should use ConcurrentHashMap in ThreadFactory

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344387#comment-14344387
 ] 

Hive QA commented on HIVE-9831:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702003/HIVE-9831.1.patch

{color:green}SUCCESS:{color} +1 7587 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2922/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2922/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2922/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702003 - PreCommit-HIVE-TRUNK-Build

> HiveServer2 should use ConcurrentHashMap in ThreadFactory
> -
>
> Key: HIVE-9831
> URL: https://issues.apache.org/jira/browse/HIVE-9831
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.2.0
>
> Attachments: HIVE-9831.1.patch
>
>






[jira] [Commented] (HIVE-9832) Merge join followed by union and a map join in hive on tez fails.

2015-03-02 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344380#comment-14344380
 ] 

Gunther Hagleitner commented on HIVE-9832:
--

+1, although I think sum(hash(...)) would seriously reduce the output size of 
the test.

> Merge join followed by union and a map join in hive on tez fails.
> -
>
> Key: HIVE-9832
> URL: https://issues.apache.org/jira/browse/HIVE-9832
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Critical
> Attachments: HIVE-9832.1.patch
>
>
> {code}
> select a.key, b.value from (select x.key as key, y.value as value from
> srcpart x join srcpart y on (x.key = y.key)
> union all
> select key, value from srcpart z) a join src b on (a.value = b.value);
> {code}
> {code}
> TaskAttempt 3 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime 
> Error while closing operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:214)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:177)
> ... 13 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:317)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:196)
> ... 14 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1425055721029_0048_4_09 [Reducer 5] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 7, vertexId=vertex_1425055721029_0048_4_11, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_11 
> [Reducer 7] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 4, vertexId=vertex_1425055721029_0048_4_07, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_07 
> [Reducer 4] killed/failed due to:null]
> DAG failed due to vertex failure. failedVertices:1 killedVertices:2
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> {code}





[jira] [Updated] (HIVE-9836) Hive on tez: fails when virtual columns are present in the join conditions (for e.g. partition columns)

2015-03-02 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-9836:
-
Affects Version/s: 1.1.0
   1.2.0

> Hive on tez: fails when virtual columns are present in the join conditions 
> (for e.g. partition columns)
> ---
>
> Key: HIVE-9836
> URL: https://issues.apache.org/jira/browse/HIVE-9836
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-9836.1.patch
>
>
> {code}
> explain
> select a.key, a.value, b.value
> from tab a join tab_part b on a.key = b.key and a.ds = b.ds;
> {code}
> fails.





[jira] [Updated] (HIVE-9832) Merge join followed by union and a map join in hive on tez fails.

2015-03-02 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-9832:
-
Affects Version/s: 1.1.0
   1.0.0

> Merge join followed by union and a map join in hive on tez fails.
> -
>
> Key: HIVE-9832
> URL: https://issues.apache.org/jira/browse/HIVE-9832
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Critical
> Attachments: HIVE-9832.1.patch
>
>
> {code}
> select a.key, b.value from (select x.key as key, y.value as value from
> srcpart x join srcpart y on (x.key = y.key)
> union all
> select key, value from srcpart z) a join src b on (a.value = b.value);
> {code}
> {code}
> TaskAttempt 3 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime 
> Error while closing operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:214)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:177)
> ... 13 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:317)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:196)
> ... 14 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1425055721029_0048_4_09 [Reducer 5] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 7, vertexId=vertex_1425055721029_0048_4_11, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_11 
> [Reducer 7] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 4, vertexId=vertex_1425055721029_0048_4_07, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_07 
> [Reducer 4] killed/failed due to:null]
> DAG failed due to vertex failure. failedVertices:1 killedVertices:2
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> {code}





[jira] [Updated] (HIVE-9836) Hive on tez: fails when virtual columns are present in the join conditions (for e.g. partition columns)

2015-03-02 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-9836:
-
Attachment: HIVE-9836.1.patch

> Hive on tez: fails when virtual columns are present in the join conditions 
> (for e.g. partition columns)
> ---
>
> Key: HIVE-9836
> URL: https://issues.apache.org/jira/browse/HIVE-9836
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-9836.1.patch
>
>
> {code}
> explain
> select a.key, a.value, b.value
> from tab a join tab_part b on a.key = b.key and a.ds = b.ds;
> {code}
> fails.





[jira] [Commented] (HIVE-9624) NullPointerException in MapJoinOperator.processOp(MapJoinOperator.java:253) for TPC-DS Q75 against un-partitioned schema

2015-03-02 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344329#comment-14344329
 ] 

Vikram Dixit K commented on HIVE-9624:
--

Same fix for both the issues.

> NullPointerException in MapJoinOperator.processOp(MapJoinOperator.java:253) 
> for TPC-DS Q75 against un-partitioned schema
> 
>
> Key: HIVE-9624
> URL: https://issues.apache.org/jira/browse/HIVE-9624
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Gunther Hagleitner
> Fix For: 1.2.0
>
>
> Running TPC-DS Q75 against a non-partitioned schema fails with 
> {code}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: null
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:314)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:638)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.createForwardJoinObject(CommonJoinOperator.java:433)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:525)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:522)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genJoinObject(CommonJoinOperator.java:451)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:752)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:248)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:213)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.processOp(CommonMergeJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:328)
>   ... 16 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:253)
>   ... 27 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1422895755428_0924_1_29 [Reducer 27] killed/failed due to:null]
> {code}
> This line maps to hashMapRowGetters = new 
> ReusableGetAdaptor[mapJoinTables.length] in the code snippet below
> {code}
>  alias = (byte) tag;
>   if (hashMapRowGetters == null) {
> hashMapRowGetters = new ReusableGetAdaptor[mapJoinTables.length];
> MapJoinKey refKey = getRefKey(alias);
> for (byte pos = 0; pos < order.length; pos++) {
>   if (pos != alias) {
> hashMapRowGetters[pos] = mapJoinTables[pos].createGetter(refKey);
>   }
> }
>   }
> {code}
> Query 
> {code}
> WITH all_sales AS (
>  SELECT d_year
>,i_brand_id
>,i_class_id
>,i_category_id
>,i_manufact_id
>,SUM(sales_cnt) AS sales_cnt
>,SUM(sales_amt) AS sales_amt
>  FROM (SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,cs_quantity - COALESCE(cr_return_quantity,0) AS sales_cnt
>  ,cs_ext_sales_price - COALESCE(cr_return_amount,0.0) AS sales_amt
>FROM catalog_sales JOIN item ON i_item_sk=cs_item_sk
>   JOIN date_dim ON d_date_sk=cs_sold_date_sk
>   LEFT JOIN catalog_returns ON 
> (cs_order_number=cr_order_number 
> AND cs_item_sk=cr_item_sk)
>WHERE i_category='Sports'
>UNION ALL
>SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,ss_quantity - COALESCE(sr_return_quantity,0) AS sales_cnt
>  ,ss_ext_sales_price - COALESCE(sr_return_amt,0.0) AS sales_amt
>FROM store_sales JOIN item ON i_item_sk=ss_item_sk
> JOIN date_dim ON d_date_sk=ss_sold_date_sk
> LEFT JOIN store_returns ON 
> (ss_ticket_number=sr_ticket_number 
> AND ss_item_sk=sr_item_sk)
>WHERE i_category='Sports'
>UNION ALL
>SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,ws_quantity - COALESCE(wr_return_quantity,0) AS sales_cnt
>  ,ws_ext_sales_price - COALESCE(wr_return_amt,0.0) AS sales_am
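The NullPointerException at MapJoinOperator.java:253 reported above is the dereference of mapJoinTables inside the lazy initialization of hashMapRowGetters: if the hash tables were never loaded, the array is still null when the getter array is sized. A minimal self-contained sketch of that pattern with a defensive guard; the class and method names here are illustrative stand-ins, not Hive's actual API:

```java
// Illustrative stand-in for Hive's MapJoinOperator lazy-init path; the class
// and method names are hypothetical, not Hive's actual classes.
public class MapJoinSketch {
    private Object[] mapJoinTables;      // normally populated by a load step
    private Object[] hashMapRowGetters;  // lazily sized from mapJoinTables

    // Mirrors the snippet above: sizing hashMapRowGetters dereferences
    // mapJoinTables, so a never-loaded table array throws NPE right there.
    boolean initGetters() {
        if (mapJoinTables == null) {
            return false;                // guard instead of NullPointerException
        }
        if (hashMapRowGetters == null) {
            hashMapRowGetters = new Object[mapJoinTables.length];
        }
        return true;
    }

    void load(int numTables) {
        mapJoinTables = new Object[numTables];
    }

    public static void main(String[] args) {
        MapJoinSketch op = new MapJoinSketch();
        System.out.println(op.initGetters());  // prints false: tables never loaded
        op.load(2);
        System.out.println(op.initGetters());  // prints true after loading
    }
}
```

The guard shown is only one option; the actual fix for this code path may instead ensure the tables are always loaded before processOp runs.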

[jira] [Resolved] (HIVE-9624) NullPointerException in MapJoinOperator.processOp(MapJoinOperator.java:253) for TPC-DS Q75 against un-partitioned schema

2015-03-02 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K resolved HIVE-9624.
--
Resolution: Duplicate
  Assignee: Vikram Dixit K  (was: Gunther Hagleitner)

> NullPointerException in MapJoinOperator.processOp(MapJoinOperator.java:253) 
> for TPC-DS Q75 against un-partitioned schema
> 
>
> Key: HIVE-9624
> URL: https://issues.apache.org/jira/browse/HIVE-9624
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Vikram Dixit K
> Fix For: 1.2.0
>
>
> Running TPC-DS Q75 against a non-partitioned schema fails with 
> {code}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: null
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:314)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:638)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.createForwardJoinObject(CommonJoinOperator.java:433)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:525)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:522)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genJoinObject(CommonJoinOperator.java:451)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:752)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:248)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:213)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.processOp(CommonMergeJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:328)
>   ... 16 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:253)
>   ... 27 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1422895755428_0924_1_29 [Reducer 27] killed/failed due to:null]
> {code}
> This line maps to hashMapRowGetters = new 
> ReusableGetAdaptor[mapJoinTables.length] in the code snippet below
> {code}
>  alias = (byte) tag;
>   if (hashMapRowGetters == null) {
> hashMapRowGetters = new ReusableGetAdaptor[mapJoinTables.length];
> MapJoinKey refKey = getRefKey(alias);
> for (byte pos = 0; pos < order.length; pos++) {
>   if (pos != alias) {
> hashMapRowGetters[pos] = mapJoinTables[pos].createGetter(refKey);
>   }
> }
>   }
> {code}
> Query 
> {code}
> WITH all_sales AS (
>  SELECT d_year
>,i_brand_id
>,i_class_id
>,i_category_id
>,i_manufact_id
>,SUM(sales_cnt) AS sales_cnt
>,SUM(sales_amt) AS sales_amt
>  FROM (SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,cs_quantity - COALESCE(cr_return_quantity,0) AS sales_cnt
>  ,cs_ext_sales_price - COALESCE(cr_return_amount,0.0) AS sales_amt
>FROM catalog_sales JOIN item ON i_item_sk=cs_item_sk
>   JOIN date_dim ON d_date_sk=cs_sold_date_sk
>   LEFT JOIN catalog_returns ON 
> (cs_order_number=cr_order_number 
> AND cs_item_sk=cr_item_sk)
>WHERE i_category='Sports'
>UNION ALL
>SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,ss_quantity - COALESCE(sr_return_quantity,0) AS sales_cnt
>  ,ss_ext_sales_price - COALESCE(sr_return_amt,0.0) AS sales_amt
>FROM store_sales JOIN item ON i_item_sk=ss_item_sk
> JOIN date_dim ON d_date_sk=ss_sold_date_sk
> LEFT JOIN store_returns ON 
> (ss_ticket_number=sr_ticket_number 
> AND ss_item_sk=sr_item_sk)
>WHERE i_category='Sports'
>UNION ALL
>SELECT d_year
>  ,i_brand_id
>  ,i_class_id
>  ,i_category_id
>  ,i_manufact_id
>  ,ws_quantity - COALESCE(wr_return_quantity,0) AS sales_cnt
>  ,ws_ext_sales_price - COALESCE(wr_return_amt,0.0) AS sales_amt
>   

[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-02 Thread Xin Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344318#comment-14344318
 ] 

Xin Hao commented on HIVE-9659:
---

Sure, I'm working on verifying it and will provide feedback soon. Thanks.

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> ---
>
> Key: HIVE-9659
> URL: https://issues.apache.org/jira/browse/HIVE-9659
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xin Hao
>Assignee: Rui Li
> Attachments: HIVE-9659.1-spark.patch
>
>
> We found that 'Error while trying to create table container'  occurs during 
> Big-Bench Q12 case execution when hive.optimize.skewjoin set to 'true'.
> If hive.optimize.skewjoin set to 'false', the case could pass.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run BigBench case Q12 and it will fail. 
> Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
> will find the error 'Error while trying to create table container' in the log 
> and also a NullPointerException near the end of the log.
> (a) Detail error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>   at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>   ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>   ... 22 more
> 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
> 15/02/12 01:29:49 INFO PerfLogger:  from=org.apache.hadoop.hive.ql.exec.spark.Sp

[jira] [Commented] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344300#comment-14344300
 ] 

Hive QA commented on HIVE-9830:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702030/HIVE-9830.2-spark.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7567 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/754/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/754/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-754/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702030 - PreCommit-HIVE-SPARK-Build

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
> Attachments: HIVE-9830.1-spark.patch, HIVE-9830.2-spark.patch
>
>
> We found auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Updated] (HIVE-9835) [CBO] Add a rule to insert exchange operator in front of group by in calcite tree

2015-03-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9835:
---
Attachment: HIVE-9835.cbo.patch

[~jcamachorodriguez] can you take a look?

> [CBO] Add a rule to insert exchange operator in front of group by in calcite 
> tree
> -
>
> Key: HIVE-9835
> URL: https://issues.apache.org/jira/browse/HIVE-9835
> Project: Hive
>  Issue Type: Task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-9835.cbo.patch
>
>
> Takes into account map side aggregation.





[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344253#comment-14344253
 ] 

Hive QA commented on HIVE-3454:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701998/HIVE-3454.4.patch

{color:green}SUCCESS:{color} +1 7580 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2921/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2921/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2921/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701998 - PreCommit-HIVE-TRUNK-Build

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.
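The 1970-01-16 result described above is consistent with epoch seconds being interpreted as epoch milliseconds: unix_timestamp() returns seconds, but treating that BIGINT as milliseconds lands a 2012-era value about 15.5 days after the epoch. A stdlib-only demonstration of the arithmetic (the method names here are illustrative, not Hive's):

```java
import java.time.Instant;
import java.time.ZoneOffset;

// Why CAST(unix_timestamp() AS TIMESTAMP) can land in January 1970:
// unix_timestamp() yields epoch *seconds*, but interpreting the BIGINT as
// epoch *milliseconds* divides the elapsed time by 1000.
public class EpochUnits {
    // Interpret a unix_timestamp()-style value the wrong way (as millis)...
    static Instant asMillis(long epochSeconds) {
        return Instant.ofEpochMilli(epochSeconds);
    }

    // ...and the intended way (as seconds).
    static Instant asSeconds(long epochSeconds) {
        return Instant.ofEpochSecond(epochSeconds);
    }

    public static void main(String[] args) {
        long ts = 1346000000L;  // a 2012 timestamp, in seconds
        System.out.println(asMillis(ts).atZone(ZoneOffset.UTC));   // 1970-01-16T13:53:20Z
        System.out.println(asSeconds(ts).atZone(ZoneOffset.UTC));  // 2012-08-26T16:53:20Z
    }
}
```

1,346,000,000 treated as milliseconds is only about 1.35 million seconds, i.e. roughly 15.6 days after 1970-01-01, matching the reported date.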





[jira] [Updated] (HIVE-9832) Merge join followed by union and a map join in hive on tez fails.

2015-03-02 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-9832:
-
Attachment: HIVE-9832.1.patch

> Merge join followed by union and a map join in hive on tez fails.
> -
>
> Key: HIVE-9832
> URL: https://issues.apache.org/jira/browse/HIVE-9832
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Critical
> Attachments: HIVE-9832.1.patch
>
>
> {code}
> select a.key, b.value from (select x.key as key, y.value as value from
> srcpart x join srcpart y on (x.key = y.key)
> union all
> select key, value from srcpart z) a join src b on (a.value = b.value);
> {code}
> {code}
> TaskAttempt 3 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime 
> Error while closing operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing 
> operators: null
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:214)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:177)
> ... 13 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:317)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:196)
> ... 14 more
> ]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
> vertex_1425055721029_0048_4_09 [Reducer 5] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 7, vertexId=vertex_1425055721029_0048_4_11, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_11 
> [Reducer 7] killed/failed due to:null]
> Vertex killed, vertexName=Reducer 4, vertexId=vertex_1425055721029_0048_4_07, 
> diagnostics=[Vertex received Kill while in RUNNING state., Vertex killed as 
> other vertex failed. failedTasks:0, Vertex vertex_1425055721029_0048_4_07 
> [Reducer 4] killed/failed due to:null]
> DAG failed due to vertex failure. failedVertices:1 killedVertices:2
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> {code}





[jira] [Commented] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-02 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344229#comment-14344229
 ] 

Wei Zheng commented on HIVE-9277:
-

Right now I'm using HIVECONVERTJOINNOCONDITIONALTASK as a threshold to do 
estimation. Once the memory management part is ready, I can rely on that to 
provide me an exact number.

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
> HIVE-9277.06.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when hash table grows
> * Join execution using a Map join operator even though the small table 
> doesn't fit in memory as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design was based on Hadoop’s parallel processing capability and 
> significant amount of memory available.
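The spill idea behind the feature list above can be sketched generically: hash the small (build) side into partitions, keep as many partitions resident as the memory budget allows, and defer the rest to disk instead of failing. This is an illustration of textbook grace-hash partitioning under an assumed row-count budget, not Hive's implementation (see the attached design doc for that):

```java
import java.util.*;

// Generic grace-hash partitioning sketch: partition the build side, keep what
// fits in an assumed memory budget, and mark the remainder as spilled. All
// names and the row-count memory model are assumptions for illustration.
public class GraceJoinSketch {
    static final int PARTITIONS = 4;

    // Hash-partition the build-side rows.
    static List<List<String>> partition(List<String> buildRows) {
        List<List<String>> parts = new ArrayList<>();
        for (int i = 0; i < PARTITIONS; i++) parts.add(new ArrayList<>());
        for (String row : buildRows) {
            parts.get(Math.floorMod(row.hashCode(), PARTITIONS)).add(row);
        }
        return parts;
    }

    // Greedily keep partitions in memory until the budget is exhausted;
    // non-resident partitions would be spilled and re-joined in a later pass.
    static boolean[] inMemory(List<List<String>> parts, int budgetRows) {
        boolean[] resident = new boolean[parts.size()];
        int used = 0;
        for (int i = 0; i < parts.size(); i++) {
            if (used + parts.get(i).size() <= budgetRows) {
                used += parts.get(i).size();
                resident[i] = true;   // joinable immediately
            }
        }
        return resident;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("a", "b", "c", "d", "e", "f");
        System.out.println(Arrays.toString(inMemory(partition(rows), 4)));
    }
}
```

Because only the spilled partitions hit disk, a slightly wrong size estimate degrades gracefully instead of failing the query, which is the first benefit listed above.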





[jira] [Commented] (HIVE-4940) udaf_percentile_approx.q is not deterministic

2015-03-02 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344219#comment-14344219
 ] 

Wei Zheng commented on HIVE-4940:
-

HIVE-9833 has been opened to report the same issue. It seems the problem is 
still there.

> udaf_percentile_approx.q is not deterministic
> -
>
> Key: HIVE-4940
> URL: https://issues.apache.org/jira/browse/HIVE-4940
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.12.0
>
> Attachments: HIVE-4940.D12189.1.patch
>
>
> Makes different result for 20(S) and 23.





[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-03-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344206#comment-14344206
 ] 

Chris Nauroth commented on HIVE-9182:
-

Hi [~ashettia] and [~thejas].  Do you think {{setFullFileStatus}} needs to be 
changed too?

> avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
> -
>
> Key: HIVE-9182
> URL: https://issues.apache.org/jira/browse/HIVE-9182
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Abdelrahman Shettia
> Fix For: 1.2.0
>
> Attachments: HIVE-9182.1.patch
>
>
> File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
> functionality.
> Hadoop23Shims has code that calls getAclStatus on file systems.
> Instead of calling getAclStatus and catching the exception, we can also check 
> FsPermission#getAclBit .
> Additionally, instead of catching all exceptions for calls to getAclStatus 
> and ignoring them, it is better to just catch UnsupportedOperationException.
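The pattern described above can be sketched in a self-contained way. PermissionStub below stands in for Hadoop's FsPermission (the real check is FsPermission#getAclBit on the FileStatus's permission, and the real RPC is FileSystem#getAclStatus); pulling in Hadoop itself is out of scope here:

```java
// Sketch of the change described above: consult the ACL bit before making an
// RPC, and catch only UnsupportedOperationException rather than all errors.
// PermissionStub is a stand-in for Hadoop's FsPermission, illustration only.
public class AclCheck {
    static final class PermissionStub {
        private final boolean aclBit;
        PermissionStub(boolean aclBit) { this.aclBit = aclBit; }
        boolean getAclBit() { return aclBit; }  // mirrors FsPermission#getAclBit
    }

    // Only ask the filesystem for ACLs when the permission bits say some exist;
    // this avoids a getAclStatus RPC on filesystems (s3, wasb) without ACL support.
    static String describeAcls(PermissionStub perm) {
        if (!perm.getAclBit()) {
            return "no extended ACL entries";
        }
        try {
            return fetchAclStatus();            // would be fs.getAclStatus(path)
        } catch (UnsupportedOperationException e) {
            // Narrow catch: an unrelated failure should still propagate.
            return "filesystem does not support ACLs";
        }
    }

    static String fetchAclStatus() {
        throw new UnsupportedOperationException("ACLs not implemented");
    }

    public static void main(String[] args) {
        System.out.println(describeAcls(new PermissionStub(false)));
        System.out.println(describeAcls(new PermissionStub(true)));
    }
}
```

The bit check turns the common case into a purely local decision, which is the RPC saving the issue title asks for.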





[jira] [Commented] (HIVE-9834) VectorGroupByOperator logs too much

2015-03-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344204#comment-14344204
 ] 

Ashutosh Chauhan commented on HIVE-9834:


+1

> VectorGroupByOperator logs too much
> ---
>
> Key: HIVE-9834
> URL: https://issues.apache.org/jira/browse/HIVE-9834
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Trivial
> Attachments: HIVE-9834.patch
>
>






[jira] [Updated] (HIVE-9834) VectorGroupByOperator logs too much

2015-03-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9834:
---
Attachment: HIVE-9834.patch

[~ashutoshc] you appear to have added this. Can you take a look? It logs every 
row at debug level, causing even q tests to be slow.

> VectorGroupByOperator logs too much
> ---
>
> Key: HIVE-9834
> URL: https://issues.apache.org/jira/browse/HIVE-9834
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-9834.patch
>
>






[jira] [Updated] (HIVE-9834) VectorGroupByOperator logs too much

2015-03-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9834:
---
Priority: Trivial  (was: Major)

> VectorGroupByOperator logs too much
> ---
>
> Key: HIVE-9834
> URL: https://issues.apache.org/jira/browse/HIVE-9834
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Trivial
> Attachments: HIVE-9834.patch
>
>






[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-03-02 Thread Abdelrahman Shettia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344189#comment-14344189
 ] 

Abdelrahman Shettia commented on HIVE-9182:
---

Hi [~thejas],

I have uploaded the patch file called HIVE-9182.1.patch and used the 
recommended FsPermission#getAclBit. 

Thanks
-Rahman

> avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
> -
>
> Key: HIVE-9182
> URL: https://issues.apache.org/jira/browse/HIVE-9182
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Abdelrahman Shettia
> Fix For: 1.2.0
>
> Attachments: HIVE-9182.1.patch
>
>
> File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
> functionality.
> Hadoop23Shims has code that calls getAclStatus on file systems.
> Instead of calling getAclStatus and catching the exception, we can also check 
> FsPermission#getAclBit .
> Additionally, instead of catching all exceptions for calls to getAclStatus 
> and ignoring them, it is better to just catch UnsupportedOperationException.





[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-03-02 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9182:
--
Attachment: HIVE-9182.1.patch

> avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
> -
>
> Key: HIVE-9182
> URL: https://issues.apache.org/jira/browse/HIVE-9182
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Thejas M Nair
>Assignee: Abdelrahman Shettia
> Fix For: 1.2.0
>
> Attachments: HIVE-9182.1.patch
>
>
> File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
> functionality.
> Hadoop23Shims has code that calls getAclStatus on file systems.
> Instead of calling getAclStatus and catching the exception, we can also check 
> FsPermission#getAclBit .
> Additionally, instead of catching all exceptions for calls to getAclStatus 
> and ignoring them, it is better to just catch UnsupportedOperationException.





[jira] [Commented] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344180#comment-14344180
 ] 

Thejas M Nair commented on HIVE-9779:
-

+1

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
>  Labels: patch
> Fix For: 1.2.0
>
> Attachments: HIVE-9779-testing.xlsx, HIVE-9779.2.patch
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: HIVE-9779.2.patch

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
>  Labels: patch
> Fix For: 1.2.0
>
> Attachments: HIVE-9779-testing.xlsx, HIVE-9779.2.patch
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: (was: 9979.001.patch)

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
> Attachments: HIVE-9779-testing.xlsx
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: (was: 9979.002.patch)

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
> Attachments: 9979.001.patch, HIVE-9779-testing.xlsx
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Commented] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344161#comment-14344161
 ] 

Hive QA commented on HIVE-9830:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12702002/HIVE-9830.1-spark.patch

{color:green}SUCCESS:{color} +1 7567 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/753/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/753/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-753/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12702002 - PreCommit-HIVE-SPARK-Build

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
> Attachments: HIVE-9830.1-spark.patch, HIVE-9830.2-spark.patch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Commented] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344157#comment-14344157
 ] 

Jimmy Xiang commented on HIVE-9830:
---

Patch v2 is posted on RB: https://reviews.apache.org/r/31648/

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
> Attachments: HIVE-9830.1-spark.patch, HIVE-9830.2-spark.patch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Updated] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9830:
--
Attachment: HIVE-9830.2-spark.patch

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
> Attachments: HIVE-9830.1-spark.patch, HIVE-9830.2-spark.patch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Updated] (HIVE-9775) LLAP: Add a MiniLLAPCluster for tests

2015-03-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-9775:
-
Attachment: HIVE-9775.1.patch

Patch to add a MiniLLAPCluster. This isn't wired into the tests and shims just 
yet - that needs some more work with circular dependencies and such. Will 
figure that out in a separate jira.
Applies on top of HIVE-9808.

> LLAP: Add a MiniLLAPCluster for tests
> -
>
> Key: HIVE-9775
> URL: https://issues.apache.org/jira/browse/HIVE-9775
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: llap
>
> Attachments: HIVE-9775.1.patch
>
>






[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-03-02 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9480:
-
Issue Type: Improvement  (was: Bug)

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 1.2.0
>
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch, HIVE-9480.7.patch, HIVE-9480.8.patch, 
> HIVE-9480.9.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is also 
> needed for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g., SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01, and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as Oracle's: only 'MM' and 
> 'YEAR' are supported as formats. However, it is a base on which other 
> formats can be added.
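The semantics of the two supported formats can be sketched in Python (this mirrors the behavior described above; it is not the Hive implementation):

```python
from datetime import date

def trunc(d: date, fmt: str) -> date:
    # Truncate a date to the first day of its month or year,
    # matching the two formats the UDF supports.
    if fmt == 'MM':
        return d.replace(day=1)
    if fmt == 'YEAR':
        return d.replace(month=1, day=1)
    raise ValueError("unsupported format: " + fmt)

print(trunc(date(2009, 12, 12), 'MM'))    # 2009-12-01
print(trunc(date(2009, 12, 12), 'YEAR'))  # 2009-01-01
```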





[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-03-02 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9480:
-
Affects Version/s: (was: 0.14.0)

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 1.2.0
>
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch, HIVE-9480.7.patch, HIVE-9480.8.patch, 
> HIVE-9480.9.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is also 
> needed for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g., SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01, and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as Oracle's: only 'MM' and 
> 'YEAR' are supported as formats. However, it is a base on which other 
> formats can be added.





[jira] [Updated] (HIVE-9674) *DropPartitionEvent should handle partition-sets.

2015-03-02 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-9674:
---
Attachment: HIVE-9736.3.patch

[~cdrome] let me know (thank you!) that I'd neglected to update 
{{TestMetaStoreEventListener}} for this change. Here's the amended patch.

> *DropPartitionEvent should handle partition-sets.
> -
>
> Key: HIVE-9674
> URL: https://issues.apache.org/jira/browse/HIVE-9674
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-9674.2.patch, HIVE-9736.3.patch
>
>
> Dropping a set of N partitions from a table currently results in N 
> DropPartitionEvents (and N PreDropPartitionEvents) being fired serially. This 
> is wasteful, especially so for large N. It also makes it impossible to even 
> try to run authorization-checks on all partitions in a batch.
> Taking the cue from HIVE-9609, we should compose an {{Iterable}} of the 
> dropped partitions in the event, and expose them via an {{Iterator}}.
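The difference can be sketched as follows (a hypothetical listener API, not the actual MetaStoreEventListener interface):

```python
class CountingListener:
    # Counts how many drop-partition events it receives.
    def __init__(self):
        self.events = 0
        self.partitions_seen = 0

    def on_drop_partitions(self, partitions):
        self.events += 1
        self.partitions_seen += len(list(partitions))

def drop_serial(listener, partitions):
    # Current behavior: one event per partition -- N events for N partitions.
    for p in partitions:
        listener.on_drop_partitions([p])

def drop_batched(listener, partitions):
    # Proposed behavior: a single event carrying an iterable of partitions,
    # so authorization checks can see the whole batch at once.
    listener.on_drop_partitions(partitions)

serial, batched = CountingListener(), CountingListener()
parts = ["p=1", "p=2", "p=3"]
drop_serial(serial, parts)
drop_batched(batched, parts)
print(serial.events, batched.events)  # 3 1
```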





[jira] [Commented] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-02 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344091#comment-14344091
 ] 

Wei Zheng commented on HIVE-9277:
-

Yes, already updated it.

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
> HIVE-9277.06.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when hash table grows
> * Join execution using a Map join operator even though the small table 
> doesn't fit in memory as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design is based on Hadoop’s parallel processing capability and the 
> significant amount of memory typically available.
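A toy in-memory sketch of the grace-hash-join partitioning idea behind this proposal (spilling to disk is elided, and the names are illustrative, not Hive's):

```python
def grace_hash_join(build_rows, probe_rows, key, n_partitions=4):
    # Partition both sides by hashing the join key so that each build
    # partition can be joined independently; in a real implementation,
    # partitions that do not fit in memory would be spilled to disk
    # and processed later instead of failing the query.
    build_parts = [[] for _ in range(n_partitions)]
    probe_parts = [[] for _ in range(n_partitions)]
    for row in build_rows:
        build_parts[hash(key(row)) % n_partitions].append(row)
    for row in probe_rows:
        probe_parts[hash(key(row)) % n_partitions].append(row)
    result = []
    for b, p in zip(build_parts, probe_parts):
        # Build a hash table on the (small) build side of this partition,
        # then probe it with the matching probe-side partition.
        table = {}
        for row in b:
            table.setdefault(key(row), []).append(row)
        for row in p:
            for match in table.get(key(row), []):
                result.append((match, row))
    return result

rows = grace_hash_join(
    build_rows=[(1, 'a'), (2, 'b')],
    probe_rows=[(1, 'x'), (3, 'y')],
    key=lambda r: r[0])
print(rows)  # [((1, 'a'), (1, 'x'))]
```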





[jira] [Commented] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344079#comment-14344079
 ] 

Hive QA commented on HIVE-9277:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701966/HIVE-9277.06.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7579 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_hybridhashjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2920/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2920/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701966 - PreCommit-HIVE-TRUNK-Build

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
> HIVE-9277.06.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when hash table grows
> * Join execution using a Map join operator even though the small table 
> doesn't fit in memory as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design is based on Hadoop’s parallel processing capability and the 
> significant amount of memory typically available.





[jira] [Commented] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Abdelrahman Shettia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344027#comment-14344027
 ] 

Abdelrahman Shettia commented on HIVE-9779:
---

I have uploaded an Excel sheet called HIVE-9779-testing. It has all the 
details of the test cases. 

Thanks
-Rahman

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
> Attachments: 9979.001.patch, 9979.002.patch, HIVE-9779-testing.xlsx
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Updated] (HIVE-9779) ATSHook does not log the end user if doAs=false (it logs the hs2 server user)

2015-03-02 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9779:
--
Attachment: HIVE-9779-testing.xlsx

> ATSHook does not log the end user if doAs=false (it logs the hs2 server user)
> -
>
> Key: HIVE-9779
> URL: https://issues.apache.org/jira/browse/HIVE-9779
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Abdelrahman Shettia
>Assignee: Abdelrahman Shettia
> Attachments: 9979.001.patch, 9979.002.patch, HIVE-9779-testing.xlsx
>
>
> When doAs=false, ATSHook should log the end username in ATS instead of 
> logging the hiveserver2 user's name.
> The way things are, it is not possible for an admin to identify which query 
> is being run by which user. The end user information is already available in 
> the HookContext.





[jira] [Commented] (HIVE-9831) HiveServer2 should use ConcurrentHashMap in ThreadFactory

2015-03-02 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343949#comment-14343949
 ] 

Thejas M Nair commented on HIVE-9831:
-

+1

> HiveServer2 should use ConcurrentHashMap in ThreadFactory
> -
>
> Key: HIVE-9831
> URL: https://issues.apache.org/jira/browse/HIVE-9831
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.2.0
>
> Attachments: HIVE-9831.1.patch
>
>






[jira] [Updated] (HIVE-9831) HiveServer2 should use ConcurrentHashMap in ThreadFactory

2015-03-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9831:
---
Attachment: HIVE-9831.1.patch

> HiveServer2 should use ConcurrentHashMap in ThreadFactory
> -
>
> Key: HIVE-9831
> URL: https://issues.apache.org/jira/browse/HIVE-9831
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.2.0
>
> Attachments: HIVE-9831.1.patch
>
>






[jira] [Updated] (HIVE-9831) HiveServer2 should use ConcurrentHashMap in ThreadFactory

2015-03-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-9831:
---
Fix Version/s: 1.2.0

> HiveServer2 should use ConcurrentHashMap in ThreadFactory
> -
>
> Key: HIVE-9831
> URL: https://issues.apache.org/jira/browse/HIVE-9831
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.0.0, 1.2.0, 1.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 1.2.0
>
>






[jira] [Updated] (HIVE-9829) LLAP: fix unit tests

2015-03-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9829:
---
Fix Version/s: llap

> LLAP: fix unit tests
> 
>
> Key: HIVE-9829
> URL: https://issues.apache.org/jira/browse/HIVE-9829
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> Unit tests are broken. 





[jira] [Updated] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9830:
--
Attachment: HIVE-9830.1-spark.patch

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
> Attachments: HIVE-9830.1-spark.patch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Updated] (HIVE-9830) Map join could dump a small table multiple times [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9830:
--
Summary: Map join could dump a small table multiple times [Spark Branch]  
(was: Test auto_sortmerge_join_8 is flaky [Spark Branch])

> Map join could dump a small table multiple times [Spark Branch]
> ---
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Commented] (HIVE-9118) Support auto-purge for tables, when dropping tables/partitions.

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343923#comment-14343923
 ] 

Hive QA commented on HIVE-9118:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701792/HIVE-9118.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7580 tests executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2919/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2919/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2919/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701792 - PreCommit-HIVE-TRUNK-Build

> Support auto-purge for tables, when dropping tables/partitions.
> ---
>
> Key: HIVE-9118
> URL: https://issues.apache.org/jira/browse/HIVE-9118
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.0.0, 1.1
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-9118.1.patch, HIVE-9118.2.patch, HIVE-9118.3.patch
>
>
> HIVE-7100 introduced a way to skip the trash directory, when deleting 
> table-data, while dropping tables.
> In HIVE-9083/HIVE-9086, I extended this to work when partitions are dropped.
> Here, I propose a table-parameter ({{"auto.purge"}}) to set up tables to 
> skip-trash when table/partition data is deleted, without needing to say 
> "PURGE" on the Hive CLI. Apropos, on {{dropTable()}} and {{dropPartition()}}, 
> table data is deleted directly (and not moved to trash) if the following hold 
> true:
> # The table is MANAGED.
> # The {{deleteData}} parameter to the {{HMSC.drop*()}} methods is true.
> # Either PURGE is explicitly specified on the command-line (or rather, 
> {{"ifPurge"}} is set in the environment context, OR
> # TBLPROPERTIES contains {{"auto.purge"="true"}}
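The four conditions above combine into a single predicate, which can be sketched as (names are illustrative, not the actual HiveMetaStore code):

```python
def should_skip_trash(is_managed, delete_data, if_purge, tbl_properties):
    # Table data bypasses the trash only for managed tables where data
    # deletion was requested, and either PURGE was given explicitly
    # (ifPurge in the environment context) or the table opted in via
    # TBLPROPERTIES ("auto.purge"="true").
    auto_purge = tbl_properties.get("auto.purge", "false").lower() == "true"
    return is_managed and delete_data and (if_purge or auto_purge)

print(should_skip_trash(True, True, False, {"auto.purge": "true"}))  # True
print(should_skip_trash(True, True, False, {}))                      # False
```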





[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-02 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343913#comment-14343913
 ] 

Aihua Xu commented on HIVE-3454:


Updated to initialize for local mode in Driver.runInternal() and configure() of 
ExecMapper and ExecReducer.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.
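The 1970-01-16 result is consistent with a seconds-based epoch value being interpreted on a millisecond scale: a circa-2012 unix_timestamp() value, divided by 1000, lands about 15 days after the epoch. A quick illustration (this is not Hive code, just arithmetic):

```python
from datetime import datetime, timezone

seconds = 1346000000  # a plausible unix_timestamp() value from 2012

# Interpreted correctly, as seconds since the epoch:
as_seconds = datetime.fromtimestamp(seconds, tz=timezone.utc)
# Interpreted as if the number were milliseconds (i.e. divided by 1000):
as_millis = datetime.fromtimestamp(seconds / 1000.0, tz=timezone.utc)

print(as_seconds.date())  # 2012-08-26
print(as_millis.date())   # 1970-01-16
```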





[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-02 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: (was: HIVE-3454.4.patch)

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.





[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-02 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: HIVE-3454.4.patch

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.





[jira] [Commented] (HIVE-9809) Fix FindBugs found bugs in hive-exec

2015-03-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343888#comment-14343888
 ] 

Jason Dere commented on HIVE-9809:
--

Do you have a list of the errors reported by FindBugs?

> Fix FindBugs found bugs in hive-exec
> 
>
> Key: HIVE-9809
> URL: https://issues.apache.org/jira/browse/HIVE-9809
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
> Attachments: HIVE-9809.1.patch, HIVE-9809.2.patch
>
>
> FindBugs finds several bugs in hive-exec project





[jira] [Commented] (HIVE-9817) fix DateFormat pattern in hive-exec

2015-03-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343877#comment-14343877
 ] 

Jason Dere commented on HIVE-9817:
--

+1

> fix DateFormat pattern in hive-exec
> ---
>
> Key: HIVE-9817
> URL: https://issues.apache.org/jira/browse/HIVE-9817
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9817.1.patch
>
>
> Some classes use mm for month and hh for hours; it should be MM and HH, 
> since in SimpleDateFormat patterns mm means minutes and hh the 12-hour clock.
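Python's strftime has the same class of pitfall, which makes for a compact illustration of the bug (%m is the month, %M the minute, analogous to MM vs. mm in Java's SimpleDateFormat):

```python
from datetime import datetime

ts = datetime(2015, 3, 2, 9, 5, 0)  # March 2, 09:05

# Correct: %m formats the month.
print(ts.strftime('%Y-%m-%d'))  # 2015-03-02
# Buggy: %M formats the minute where the month belongs.
print(ts.strftime('%Y-%M-%d'))  # 2015-05-02
```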





[jira] [Updated] (HIVE-9830) Test auto_sortmerge_join_8 is flaky [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9830:
--
Summary: Test auto_sortmerge_join_8 is flaky [Spark Branch]  (was: Test 
auto_sortmerge_join_8 is flaky)

> Test auto_sortmerge_join_8 is flaky [Spark Branch]
> --
>
> Key: HIVE-9830
> URL: https://issues.apache.org/jira/browse/HIVE-9830
> Project: Hive
>  Issue Type: Bug
>  Components: spark-branch
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: spark-branch
>
>
> We found that auto_sortmerge_join_8 is flaky for Spark. Sometimes, the 
> output could be wrong.





[jira] [Commented] (HIVE-9744) Move common arguments validation and value extraction code to GenericUDF

2015-03-02 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343824#comment-14343824
 ] 

Jason Dere commented on HIVE-9744:
--

+1

> Move common arguments validation and value extraction code to GenericUDF
> 
>
> Key: HIVE-9744
> URL: https://issues.apache.org/jira/browse/HIVE-9744
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9744.1.patch, HIVE-9744.2.patch, HIVE-9744.3.patch, 
> HIVE-9744.5.patch
>
>
> most of the UDFs 
> - check if arguments are primitive / complex
> - check if arguments are particular type or type_group
> - get converters to read values
> - check if argument is constant
> - extract arguments values
> Probably we should move these common methods to GenericUDF
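The proposal amounts to hoisting shared validation into the base class, which can be sketched like this (hypothetical method names in Python, not the actual GenericUDF Java API):

```python
class GenericUDF:
    # Shared argument-validation helpers of the kind the issue proposes,
    # so each concrete UDF no longer carries its own copy.
    def check_args_size(self, args, min_n, max_n):
        if not (min_n <= len(args) <= max_n):
            raise ValueError(
                f"expected {min_n}..{max_n} arguments, got {len(args)}")

    def check_arg_type(self, args, i, allowed):
        if not isinstance(args[i], allowed):
            raise TypeError(f"argument {i} must be {allowed}")

class UpperUDF(GenericUDF):
    # A concrete UDF now only states its constraints and its logic.
    def evaluate(self, args):
        self.check_args_size(args, 1, 1)
        self.check_arg_type(args, 0, str)
        return args[0].upper()

print(UpperUDF().evaluate(["hive"]))  # HIVE
```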





[jira] [Resolved] (HIVE-9757) LLAP: Reuse string dictionaries per column per stripe when processing row groups

2015-03-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-9757.
-
Resolution: Fixed

Fixed by HIVE-9782

> LLAP: Reuse string dictionaries per column per stripe when processing row 
> groups
> 
>
> Key: HIVE-9757
> URL: https://issues.apache.org/jira/browse/HIVE-9757
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
>
> Dictionary streams are cached by low level cache. When creating column 
> vectors from streams we create dictionary for every row group that we 
> process. We should add per query cache to cache the deserialized 
> representation of dictionary per stripe per column. 





[jira] [Commented] (HIVE-9677) Implement privileges call in HBaseStore

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343780#comment-14343780
 ] 

Hive QA commented on HIVE-9677:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701935/HIVE-9677.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2918/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2918/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2918/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2918/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java'
Reverted 'ql/pom.xml'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target conf/ivysettings.xml 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/thirdparty itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-jmh/target itests/hive-unit/target 
itests/custom-serde/target itests/util/target itests/qtest-spark/target 
hcatalog/target hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
accumulo-handler/target hwi/target common/target common/src/gen 
spark-client/target service/target contrib/target serde/target beeline/target 
odbc/target cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/org/apache/hadoop/hive/ql/session/TestAddResource.java 
ql/src/java/org/apache/hadoop/hive/ql/session/GetArtifacts.java
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1663434.

At revision 1663434.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701935 - PreCommit-HIVE-TRUNK-Build

> Implement privileges call in HBaseStore
> ---
>
> Key: HIVE-9677
> URL: https://issues.apache.org/jira/browse/HIVE-9677
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9677.patch
>
>
> All of the list*Grants methods, grantPrivileges, and revokePrivileges need to 
> be implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343779#comment-14343779
 ] 

Hive QA commented on HIVE-9664:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701920/HIVE-9664.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7581 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2917/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2917/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2917/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701920 - PreCommit-HIVE-TRUNK-Build

> Hive "add jar" command should be able to download and add jars from a 
> repository
> 
>
> Key: HIVE-9664
> URL: https://issues.apache.org/jira/browse/HIVE-9664
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.14.0
>Reporter: Anant Nag
>  Labels: hive, patch
> Attachments: HIVE-9664.patch
>
>
> Currently Hive's "add jar" command takes a local path to the dependency jar. 
> This clutters the local file system, as users may forget to remove the jar 
> later.
> It would be nice if Hive supported a Gradle-like notation to download the jar 
> from a repository.
> Example:  add jar org:module:version
> 
> It should also be backward compatible and should accept a jar from the local 
> file system as well.
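A coordinate like org:module:version could be resolved to a repository path using the conventional Maven repository layout. A minimal sketch of the mapping (illustrative only; the repository URL and layout are assumptions, not Hive's actual resolver):

```python
def coordinate_to_path(coordinate, repo="https://repo1.maven.org/maven2"):
    """Map a Gradle-style 'group:module:version' coordinate to a
    Maven-layout jar URL (illustrative only; not Hive's resolver)."""
    group, module, version = coordinate.split(":")
    # Maven layout: group dots become path separators
    return "{}/{}/{}/{}/{}-{}.jar".format(
        repo, group.replace(".", "/"), module, version, module, version)

print(coordinate_to_path("org.apache.commons:commons-lang3:3.4"))
# https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar
```

An actual implementation would of course also handle transitive dependencies and download caching, which this sketch omits.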



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9721) Hadoop23Shims.setFullFileStatus should check for null

2015-03-02 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343752#comment-14343752
 ] 

Sergio Peña commented on HIVE-9721:
---

What variable in setFullFileStatus may be null?

> Hadoop23Shims.setFullFileStatus should check for null
> -
>
> Key: HIVE-9721
> URL: https://issues.apache.org/jira/browse/HIVE-9721
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>
> {noformat}
> 2015-02-18 22:46:10,209 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: 
> Skipping ACL inheritance: File system for path 
> file:/tmp/hive/f1a28dee-70e8-4bc3-bd35-9be13834d1fc/hive_2015-02-18_22-46-10_065_3348083202601156561-1
>  does not support ACLs but dfs.namenode.acls.enabled is set to true: 
> java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support 
> getAclStatus
> java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support 
> getAclStatus
>   at org.apache.hadoop.fs.FileSystem.getAclStatus(FileSystem.java:2429)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getAclStatus(FilterFileSystem.java:562)
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.getFullFileStatus(Hadoop23Shims.java:645)
>   at org.apache.hadoop.hive.common.FileUtils.mkdir(FileUtils.java:524)
>   at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:234)
>   at 
> org.apache.hadoop.hive.ql.Context.getExtTmpPathRelTo(Context.java:424)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:6290)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:9069)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8961)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9807)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9700)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10136)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:284)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10147)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:190)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1106)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:101)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:172)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:379)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:366)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:271)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:415)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> 2015-02-18 17:30:58,753 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: 
> Skipping ACL inheritance: File system for path 
> file:/tmp/hive/e3eb01f0-bb58-45a8-b773-8f4f3420457c/hive_2015-02-18_17-30-58_346_5020255420422913166-1/-mr-1
>  does not support ACLs but dfs.namenode.acls.enabled is set to true: 
> java.lang.NullPointerException
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.setFullFil

[jira] [Commented] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343736#comment-14343736
 ] 

Sergey Shelukhin commented on HIVE-9277:


Are you updating patches on RB?

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
> HIVE-9277.06.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when the hash table 
> grows
> * Join execution can use a map join operator even though the small table 
> doesn't fit in memory, as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design is based on Hadoop’s parallel processing capability and the 
> significant amount of memory available.
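The spilling idea described above can be sketched as a toy model (hypothetical parameters and a deliberately simplified policy; not Hive's actual operator): rows are hash-partitioned on the join key, and when the in-memory budget is exceeded, the largest partition is spilled so the join can continue instead of failing.

```python
def build_hash_partitions(rows, key, num_partitions=4, budget=6):
    """Toy build phase of a grace-style hash join: hash-partition the rows,
    and when more than `budget` rows are held in memory, spill the largest
    partition (later rows for a spilled partition also go to 'disk')."""
    in_mem = {p: [] for p in range(num_partitions)}
    spilled = {}
    total = 0
    for row in rows:
        p = hash(key(row)) % num_partitions
        if p in spilled:
            spilled[p].append(row)  # partition already spilled
            continue
        in_mem[p].append(row)
        total += 1
        if total > budget:
            # evict the largest in-memory partition to disk
            victim = max(in_mem, key=lambda q: len(in_mem[q]))
            spilled[victim] = in_mem.pop(victim)
            total -= len(spilled[victim])
    return in_mem, spilled

rows = [(i, "val%d" % i) for i in range(10)]
in_mem, spilled = build_hash_partitions(rows, key=lambda r: r[0])
print(sum(len(v) for v in in_mem.values()), "rows in memory,",
      sum(len(v) for v in spilled.values()), "rows spilled")
```

Spilled partitions are later joined partition-by-partition against the matching spilled probe-side partitions, which is the "grace" part of the algorithm.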



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-03-02 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-9277:

Attachment: HIVE-9277.06.patch

Upload 6th patch for testing

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, HIVE-9277.04.patch, HIVE-9277.05.patch, 
> HIVE-9277.06.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when the hash table 
> grows
> * Join execution can use a map join operator even though the small table 
> doesn't fit in memory, as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design is based on Hadoop’s parallel processing capability and the 
> significant amount of memory available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9518) Implement MONTHS_BETWEEN aligned with Oracle one

2015-03-02 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343682#comment-14343682
 ] 

Alexander Pivovarov commented on HIVE-9518:
---

I reviewed it on Feb. 27, 2015, 7:38 p.m. There are several minor issues; check RB.

> Implement MONTHS_BETWEEN aligned with Oracle one
> 
>
> Key: HIVE-9518
> URL: https://issues.apache.org/jira/browse/HIVE-9518
> Project: Hive
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9518.1.patch, HIVE-9518.2.patch, HIVE-9518.3.patch, 
> HIVE-9518.4.patch
>
>
> This is used to track work to build an Oracle-like months_between. Here are the 
> semantics:
> MONTHS_BETWEEN returns the number of months between dates date1 and date2. If 
> date1 is later than date2, then the result is positive. If date1 is earlier 
> than date2, then the result is negative. If date1 and date2 are either the 
> same days of the month or both last days of months, then the result is always 
> an integer. Otherwise, Oracle Database calculates the fractional portion of 
> the result based on a 31-day month and considers the difference in the time 
> components of date1 and date2.
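The rules above can be sketched directly for the date-only case (a rough Python illustration of the described semantics, not the Hive UDF; time-of-day handling is omitted):

```python
import calendar
import datetime

def months_between(date1, date2):
    """Oracle-style MONTHS_BETWEEN per the rules above (dates only;
    time components are ignored in this sketch)."""
    last1 = date1.day == calendar.monthrange(date1.year, date1.month)[1]
    last2 = date2.day == calendar.monthrange(date2.year, date2.month)[1]
    whole = (date1.year - date2.year) * 12 + (date1.month - date2.month)
    if date1.day == date2.day or (last1 and last2):
        return float(whole)  # same day-of-month or both month-ends: integer result
    return whole + (date1.day - date2.day) / 31.0  # fraction uses a 31-day month

print(months_between(datetime.date(1995, 2, 2), datetime.date(1995, 1, 1)))
# about 1.0323, i.e. 1 + 1/31
```

Note that the sign falls out of the arithmetic: swapping the arguments negates the result, matching the "positive if date1 is later" rule.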



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9788) Make double quote optional in tsv/csv/dsv output

2015-03-02 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343683#comment-14343683
 ] 

Naveen Gangam commented on HIVE-9788:
-

[~Ferd] 
Thank you for re-working the fix.
"What if the value contains a separator char? "
Based on my limited testing, if the data returned by the result set contains 
the separator char, the formatted string returned by the CsvListWriter 
accounted for those characters. Looking at the code, it appears to add a 
quote char before each _quote char_ found in the column value, and then 
wraps the entire column in a pair of _quotes_. So to me, it appears that it 
should work with all sorts of characters within the data.
Could you please elaborate on the use case where this fails?

Also, if we do want to retain backward compatibility, would making 
"disableQuotingForSV" a system property instead of a command-line option be 
better suited? It is easier to stop supporting a system property (just ignore 
it) than a command-line switch, if and when we stop supporting this option (or 
if a new version of super-csv becomes available that does not support it). Just 
my opinion on this. Thanks again.
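The quote-doubling behavior described above can be demonstrated with any standard CSV writer; for example with Python's csv module (illustrative only; beeline uses super-csv, a Java library, but the escaping rule is the same idea):

```python
import csv
import io

# Illustrative only: Python's csv module, not super-csv. Each embedded
# quote char is doubled and the whole field is wrapped in quotes, so
# separator chars inside the data survive intact.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(["a,b", 'he said "hi"'])
print(buf.getvalue().strip())
# "a,b","he said ""hi"""
```

A reader applying the same dialect recovers the original two fields, including the embedded comma and quotes.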

> Make double quote optional in tsv/csv/dsv output
> 
>
> Key: HIVE-9788
> URL: https://issues.apache.org/jira/browse/HIVE-9788
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Ferdinand Xu
> Attachments: HIVE-9788.1.patch, HIVE-9788.patch
>
>
> Similar to HIVE-7390 some customers would like the double quotes to be 
> optional. So if the data is {{"A"}} then the output from beeline should be 
> {{"A"}} which is the same as the Hive CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9647) Discrepancy in cardinality estimates between partitioned and un-partitioned tables

2015-03-02 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343668#comment-14343668
 ] 

Ashutosh Chauhan commented on HIVE-9647:


+1 code changes look good.

> Discrepancy in cardinality estimates between partitioned and un-partitioned 
> tables 
> ---
>
> Key: HIVE-9647
> URL: https://issues.apache.org/jira/browse/HIVE-9647
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: Mostafa Mokhtar
>Assignee: Pengcheng Xiong
> Fix For: 1.2.0
>
> Attachments: HIVE-9647.01.patch
>
>
> High-level summary
> HiveRelMdSelectivity.computeInnerJoinSelectivity relies on per column number 
> of distinct value to estimate join selectivity.
> The way statistics are aggregated for partitioned tables results in 
> discrepancy in number of distinct values which results in different plans 
> between partitioned and un-partitioned schemas.
> The table below summarizes the NDVs in computeInnerJoinSelectivity which are 
> used to estimate selectivity of joins.
> ||Column||Partitioned count distincts||Un-partitioned count distincts||
> |sr_customer_sk   |71,245 |1,415,625|
> |sr_item_sk   |38,846|62,562|
> |sr_ticket_number |71,245 |34,931,085|
> |ss_customer_sk   |88,476|1,415,625|
> |ss_item_sk   |38,846|62,562|
> |ss_ticket_number|100,756 |56,256,175|
>   
> The discrepancy arises because NDV calculation for a partitioned table assumes 
> that the NDV range is contained within each partition and is calculated as 
> "select max(NUM_DISTINCTS) from PART_COL_STATS".
> This is problematic for columns like ticket number, which naturally 
> increase with the partitioned date column ss_sold_date_sk.
> Suggestions:
> Use HyperLogLog as suggested by Gopal; there is an HLL implementation for 
> HBase co-processors which we can use as a reference here.
> Using the global stats from TAB_COL_STATS and the per-partition stats from 
> PART_COL_STATS, extrapolate the NDV for the qualified partitions as in:
> Max( (NUM_DISTINCTS from TAB_COL_STATS) x (Number of qualified partitions) / 
> (Number of partitions), max(NUM_DISTINCTS from PART_COL_STATS) )
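As a quick sanity check, the suggested extrapolation in code form (the qualified/total partition counts below are hypothetical numbers, not from the actual run):

```python
def extrapolate_ndv(tab_ndv, qualified_parts, total_parts, max_part_ndv):
    """NDV estimate for the qualified partitions per the suggestion above:
    the global NDV scaled by the fraction of qualified partitions, floored
    by the largest per-partition NDV."""
    scaled = tab_ndv * qualified_parts / float(total_parts)
    return max(scaled, max_part_ndv)

# ss_ticket_number: global NDV 56,256,175; per-partition max 100,756
# (hypothetically, 50 of 200 partitions qualify)
print(extrapolate_ndv(56256175, 50, 200, 100756))
# 14064043.75
```

The max() term guards the estimate from dropping below what at least one partition is known to contain, while the scaling term keeps it proportional to how much of the table actually qualifies.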
> More details
> While doing TPC-DS Partitioned vs. Un-Partitioned runs I noticed that many of 
> the plans are different, then I dumped the CBO logical plan and I found that 
> join estimates are drastically different
> Unpartitioned schema :
> {code}
> 2015-02-10 11:33:27,624 DEBUG [main]: parse.SemanticAnalyzer 
> (SemanticAnalyzer.java:apply(12624)) - Plan After Join Reordering:
> HiveProjectRel(store_sales_quantitycount=[$0], store_sales_quantityave=[$1], 
> store_sales_quantitystdev=[$2], store_sales_quantitycov=[/($2, $1)], 
> as_store_returns_quantitycount=[$3], as_store_returns_quantityave=[$4], 
> as_store_returns_quantitystdev=[$5], store_returns_quantitycov=[/($5, $4)]): 
> rowcount = 1.0, cumulative cost = {6.056835407771381E8 rows, 0.0 cpu, 0.0 
> io}, id = 2956
>   HiveAggregateRel(group=[{}], agg#0=[count($0)], agg#1=[avg($0)], 
> agg#2=[stddev_samp($0)], agg#3=[count($1)], agg#4=[avg($1)], 
> agg#5=[stddev_samp($1)]): rowcount = 1.0, cumulative cost = 
> {6.056835407771381E8 rows, 0.0 cpu, 0.0 io}, id = 2954
> HiveProjectRel($f0=[$4], $f1=[$8]): rowcount = 40.05611776795562, 
> cumulative cost = {6.056835407771381E8 rows, 0.0 cpu, 0.0 io}, id = 2952
>   HiveProjectRel(ss_sold_date_sk=[$0], ss_item_sk=[$1], 
> ss_customer_sk=[$2], ss_ticket_number=[$3], ss_quantity=[$4], 
> sr_item_sk=[$5], sr_customer_sk=[$6], sr_ticket_number=[$7], 
> sr_return_quantity=[$8], d_date_sk=[$9], d_quarter_name=[$10]): rowcount = 
> 40.05611776795562, cumulative cost = {6.056835407771381E8 rows, 0.0 cpu, 0.0 
> io}, id = 2982
> HiveJoinRel(condition=[=($9, $0)], joinType=[inner]): rowcount = 
> 40.05611776795562, cumulative cost = {6.056835407771381E8 rows, 0.0 cpu, 0.0 
> io}, id = 2980
>   HiveJoinRel(condition=[AND(AND(=($2, $6), =($1, $5)), =($3, $7))], 
> joinType=[inner]): rowcount = 28880.460910696, cumulative cost = 
> {6.05654559E8 rows, 0.0 cpu, 0.0 io}, id = 2964
> HiveProjectRel(ss_sold_date_sk=[$0], ss_item_sk=[$2], 
> ss_customer_sk=[$3], ss_ticket_number=[$9], ss_quantity=[$10]): rowcount = 
> 5.50076554E8, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io}, id = 2920
>   HiveTableScanRel(table=[[tpcds_bin_orc_200.store_sales]]): 
> rowcount = 5.50076554E8, cumulative cost = {0}, id = 2822
> HiveProjectRel(sr_item_sk=[$2], sr_customer_sk=[$3], 
> sr_ticket_number=[$9], sr_return_quantity=[$10]): rowcount = 5.5578005E7, 
> cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io}, id = 2923
>   HiveTableScanRel(table=[[tpcd

[jira] [Commented] (HIVE-9826) Firing insert event fails on temporary table

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343666#comment-14343666
 ] 

Hive QA commented on HIVE-9826:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701921/HIVE-9826.patch

{color:green}SUCCESS:{color} +1 7578 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2916/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2916/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2916/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701921 - PreCommit-HIVE-TRUNK-Build

> Firing insert event fails on temporary table
> 
>
> Key: HIVE-9826
> URL: https://issues.apache.org/jira/browse/HIVE-9826
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Minor
> Attachments: HIVE-9826.patch
>
>
> When hive.metastore.dml.events=true and MoveTask attempts to fire an insert 
> event on insert to a temporary table, this fails because the db event 
> listener cannot find the temporary table.  This is because temporary tables 
> are only stored in the client, not in the server, thus the metastore listener 
> will never be able to find them.
> The proper fix is to not fire events for temporary tables, as they have no 
> duration beyond the current client session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9827) LLAP: Make stripe level column readers thread safe

2015-03-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-9827.
-
Resolution: Fixed

> LLAP: Make stripe level column readers thread safe
> --
>
> Key: HIVE-9827
> URL: https://issues.apache.org/jira/browse/HIVE-9827
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9827-llap.patch
>
>
> previousStripeIndex used in OrcColumnVectorProducer is not thread safe as 
> OrcColumnVectorProducer is singleton. Move it to OrcEncodedDataConsumer which 
> is per query object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9518) Implement MONTHS_BETWEEN aligned with Oracle one

2015-03-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343627#comment-14343627
 ] 

Xiaobing Zhou commented on HIVE-9518:
-

[~apivovarov] and [~jdere] can you help to review it? Thanks.

> Implement MONTHS_BETWEEN aligned with Oracle one
> 
>
> Key: HIVE-9518
> URL: https://issues.apache.org/jira/browse/HIVE-9518
> Project: Hive
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9518.1.patch, HIVE-9518.2.patch, HIVE-9518.3.patch, 
> HIVE-9518.4.patch
>
>
> This is used to track work to build an Oracle-like months_between. Here are the 
> semantics:
> MONTHS_BETWEEN returns the number of months between dates date1 and date2. If 
> date1 is later than date2, then the result is positive. If date1 is earlier 
> than date2, then the result is negative. If date1 and date2 are either the 
> same days of the month or both last days of months, then the result is always 
> an integer. Otherwise, Oracle Database calculates the fractional portion of 
> the result based on a 31-day month and considers the difference in the time 
> components of date1 and date2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7292) Hive on Spark

2015-03-02 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343577#comment-14343577
 ] 

Ruslan Dautkhanov commented on HIVE-7292:
-

Exciting. Hopefully it will be released some time soon.

> Hive on Spark
> -
>
> Key: HIVE-7292
> URL: https://issues.apache.org/jira/browse/HIVE-7292
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
>  Labels: Spark-M1, Spark-M2, Spark-M3, Spark-M4, Spark-M5
> Attachments: Hive-on-Spark.pdf
>
>
> Spark, an open-source data analytics cluster computing framework, has gained 
> significant momentum recently. Many Hive users already have Spark installed 
> as their computing backbone. To take advantage of Hive, they still need to 
> have either MapReduce or Tez on their cluster. This initiative will provide 
> users a new alternative so that they can consolidate their backends. 
> Secondly, providing such an alternative further increases Hive's adoption, as 
> it exposes Spark users to a viable, feature-rich, de facto standard SQL tool 
> on Hadoop.
> Finally, allowing Hive to run on Spark also has performance benefits. Hive 
> queries, especially those involving multiple reducer stages, will run faster, 
> thus improving the user experience as Tez does.
> This is an umbrella JIRA which will cover many coming subtasks. The design doc 
> will be attached here shortly, and will be on the wiki as well. Feedback from 
> the community is greatly appreciated!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9083) New metastore API to support to purge partition-data directly in dropPartitions().

2015-03-02 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9083:
---
Affects Version/s: (was: 0.15.0)
   1.1.0
   1.0.0

> New metastore API to support to purge partition-data directly in 
> dropPartitions().
> --
>
> Key: HIVE-9083
> URL: https://issues.apache.org/jira/browse/HIVE-9083
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Fix For: 1.2.0
>
> Attachments: HIVE-9083.3.patch, HIVE-9083.4.patch, HIVE-9083.5.patch
>
>
> HIVE-7100 adds the option to purge table-data when dropping a table (from 
> Hive CLI.)
> This patch adds HiveMetaStoreClient APIs to support the same for 
> {{dropPartitions()}}.
> (I'll add a follow-up to support a command-line option for the same.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9677) Implement privileges call in HBaseStore

2015-03-02 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9677:
-
Attachment: HIVE-9677.patch

This patch is more complicated than many of the previous ones.  Due to the 
hierarchical nature of roles and the fact that users can belong to multiple 
roles it was not possible to do all operations by a direct key lookup as it is 
with fetching tables, partitions, etc.  Obviously this makes things more 
complicated for HBase.

To resolve this I stored the information in two different ways:
1) In the ROLES table, each role stores all users and roles that have been 
directly included in it (that is, granted that role).
2) I added a new table USER_TO_ROLE that for each user, lists all roles the 
user is in either directly or indirectly.

The USER_TO_ROLES table is built to be very efficient for DML/select queries 
where we need to quickly know what roles the user participates in.  However, it 
is expensive to build, as each row requires a multi-pass walk of the ROLES 
table.  This is alleviated somewhat by reading the entire ROLES table in memory 
before rebuilding the table.

This does mean that adding a user to a role or dropping him is somewhat 
expensive as the row for that user in the USER_TO_ROLES table has to be 
rebuilt.  Adding a role to another role, dropping a role from another role, or 
dropping a role altogether is very expensive because multiple rows in the 
USER_TO_ROLE table have to be rebuilt.

Given that grant/revoke statements are very rare compared to DML/select queries 
and rarely performance sensitive, it makes sense to let grants and revokes 
take a few more seconds in order to shave milliseconds off each DML or select 
operation.
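The expensive rebuild described above is essentially a transitive closure over role membership. A rough sketch of the walk (illustrative Python; the real implementation is Java against the HBase ROLES table):

```python
from collections import deque

def roles_for_user(user, direct_members):
    """Compute all roles a user holds, directly or indirectly, from a map of
    role -> directly granted members (users or roles), as in the ROLES table
    described above.  Illustrative sketch, not the HBaseStore code."""
    result = set()
    queue = deque(r for r, members in direct_members.items() if user in members)
    while queue:
        role = queue.popleft()
        if role in result:
            continue
        result.add(role)
        # any role that directly contains this role is also held transitively
        queue.extend(r for r, members in direct_members.items() if role in members)
    return result

grants = {"analyst": {"alice"}, "admin": {"analyst"}, "superuser": {"admin"}}
print(sorted(roles_for_user("alice", grants)))
# ['admin', 'analyst', 'superuser']
```

This per-user result is what the USER_TO_ROLE table materializes, so DML/select queries can answer "what roles does this user hold" with a single key lookup instead of repeating the walk.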


> Implement privileges call in HBaseStore
> ---
>
> Key: HIVE-9677
> URL: https://issues.apache.org/jira/browse/HIVE-9677
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9677.patch
>
>
> All of the list*Grants methods, grantPrivileges, and revokePrivileges need to 
> be implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9827) LLAP: Make stripe level column readers thread safe

2015-03-02 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343524#comment-14343524
 ] 

Prasanth Jayachandran commented on HIVE-9827:
-

Committed patch to llap branch.

> LLAP: Make stripe level column readers thread safe
> --
>
> Key: HIVE-9827
> URL: https://issues.apache.org/jira/browse/HIVE-9827
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9827-llap.patch
>
>
> previousStripeIndex used in OrcColumnVectorProducer is not thread safe, as 
> OrcColumnVectorProducer is a singleton. Move it to OrcEncodedDataConsumer, which 
> is a per-query object.
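The fix pattern the description above points at — keeping no per-query mutable 
state on a shared singleton — can be shown generically. The class names below are 
simplified stand-ins, not the real LLAP classes:

```java
// Hypothetical sketch: the shared singleton producer holds no per-query
// mutable state; each query gets its own consumer, which is the safe home
// for bookkeeping such as the previous stripe index.
public class PerQueryState {

    // Shared across all queries: must stay stateless w.r.t. any one query.
    static class ColumnVectorProducer {
        Consumer newConsumer() { return new Consumer(); }  // one per query
    }

    // Per-query object: mutable fields here cannot race across queries.
    static class Consumer {
        private int previousStripeIndex = -1;

        // Records the new stripe index and returns the one seen before it.
        int advanceTo(int stripeIndex) {
            int prev = previousStripeIndex;
            previousStripeIndex = stripeIndex;
            return prev;
        }
    }
}
```

With the field on the singleton, two concurrent queries would overwrite each 
other's index; with one consumer per query, each sees only its own history.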





[jira] [Updated] (HIVE-9827) LLAP: Make stripe level column readers thread safe

2015-03-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-9827:

Attachment: HIVE-9827-llap.patch

> LLAP: Make stripe level column readers thread safe
> --
>
> Key: HIVE-9827
> URL: https://issues.apache.org/jira/browse/HIVE-9827
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9827-llap.patch
>
>
> previousStripeIndex used in OrcColumnVectorProducer is not thread safe, as 
> OrcColumnVectorProducer is a singleton. Move it to OrcEncodedDataConsumer, which 
> is a per-query object.





[jira] [Commented] (HIVE-9821) Having the consistent physical execution plan , which using explain command with disable CBO and enable CBO.

2015-03-02 Thread Laljo John Pullokkaran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343514#comment-14343514
 ] 

Laljo John Pullokkaran commented on HIVE-9821:
--

[~asko] What is the issue here?
If the issue you are raising is that the physical plans differ with CBO 
ON/OFF, then that is expected, as the join order may change.

> Having the consistent physical execution plan  , which using explain command  
> with disable CBO and enable CBO.
> --
>
> Key: HIVE-9821
> URL: https://issues.apache.org/jira/browse/HIVE-9821
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 0.14.0
>Reporter: asko
>Priority: Critical
>
> bq. Test case (the JOIN subtree has been flattened after CBO in the final 
> plan stage of the Calcite optimizer):  
> {code:sql}
> --set  hive.cbo.enable=true;
> --ANALYZE TABLE customer COMPUTE STATISTICS for columns;
> --ANALYZE TABLE orders COMPUTE STATISTICS for columns;
> --ANALYZE TABLE lineitem COMPUTE STATISTICS for columns;
> --ANALYZE TABLE region COMPUTE STATISTICS for columns;
> --ANALYZE TABLE supplier COMPUTE STATISTICS for columns;
> --ANALYZE TABLE partsupp COMPUTE STATISTICS for columns;
> --ANALYZE TABLE part COMPUTE STATISTICS for columns;
> --ANALYZE TABLE nation COMPUTE STATISTICS for columns;
> explain select
>   o_year, sum(case when nation = 'BRAZIL' then volume else 0.0 end) / 
> sum(volume) as mkt_share
> from
>   (
> select
>   year(o_orderdate) as o_year, l_extendedprice * (1-l_discount) as volume,
>   n2.n_name as nation
> from
> nation n1 join region r
>   on n1.n_regionkey = r.r_regionkey and r.r_name = 'AMERICA'
> join customer c
>   on c.c_nationkey = n1.n_nationkey
> join orders o
>   on c.c_custkey = o.o_custkey
> join lineitem l
>   on l.l_orderkey = o.o_orderkey and o.o_orderdate >= '1995-01-01'
>  and o.o_orderdate < '1996-12-31'
> join part p
>   on p.p_partkey = l.l_partkey and p.p_type = 'ECONOMY ANODIZED STEEL'
> join supplier s
>   on s.s_suppkey = l.l_suppkey
> join  nation n2
>   on s.s_nationkey = n2.n_nationkey
>   ) all_nation
> group by o_year
> order by o_year;
> {code}
> bq. This test is a modified q8 from TPC-H_full. Uncommenting the lines above 
> enables CBO. Both runs produce the same plan:
> {quote}
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-2 depends on stages: Stage-1, Stage-7
>   Stage-3 depends on stages: Stage-2, Stage-10
>   Stage-4 depends on stages: Stage-3
>   Stage-5 depends on stages: Stage-4
>   Stage-7 is a root stage
>   Stage-9 is a root stage
>   Stage-10 depends on stages: Stage-9, Stage-12
>   Stage-12 is a root stage
>   Stage-0 depends on stages: Stage-5
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: l
> Statistics: Num rows: 27137974 Data size: 759863296 Basic stats: 
> COMPLETE Column stats: NONE
> Filter Operator
>   predicate: ((l_partkey is not null and l_suppkey is not null) 
> and l_orderkey is not null) (type: boolean)
>   Statistics: Num rows: 3392247 Data size: 94982919 Basic stats: 
> COMPLETE Column stats: NONE
>   Select Operator
> expressions: l_orderkey (type: int), l_partkey (type: int), 
> l_suppkey (type: int), l_extendedprice (type: double), l_discount (type: 
> double)
> outputColumnNames: _col0, _col1, _col2, _col3, _col4
> Statistics: Num rows: 3392247 Data size: 94982919 Basic 
> stats: COMPLETE Column stats: NONE
> Reduce Output Operator
>   key expressions: _col1 (type: int)
>   sort order: +
>   Map-reduce partition columns: _col1 (type: int)
>   Statistics: Num rows: 3392247 Data size: 94982919 Basic 
> stats: COMPLETE Column stats: NONE
>   value expressions: _col0 (type: int), _col2 (type: int), 
> _col3 (type: double), _col4 (type: double)
>   TableScan
> alias: p
> Statistics: Num rows: 928322 Data size: 24136384 Basic stats: 
> COMPLETE Column stats: NONE
> Filter Operator
>   predicate: ((p_type = 'ECONOMY ANODIZED STEEL') and p_partkey 
> is not null) (type: boolean)
>   Statistics: Num rows: 232081 Data size: 6034109 Basic stats: 
> COMPLETE Column stats: NONE
>   Select Operator
> expressions: p_partkey (type: int)
> outputColumnNames: _col0
> Statistics: Num rows: 232081 Data size: 6034109 Basic stats: 
> COMPLETE Column stats: NONE
> Reduce Output Operator
>   key expressions: _c

[jira] [Commented] (HIVE-9826) Firing insert event fails on temporary table

2015-03-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343489#comment-14343489
 ] 

Sushanth Sowmyan commented on HIVE-9826:


+1.

Thanks for the fix, Alan!

> Firing insert event fails on temporary table
> 
>
> Key: HIVE-9826
> URL: https://issues.apache.org/jira/browse/HIVE-9826
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Minor
> Attachments: HIVE-9826.patch
>
>
> When hive.metastore.dml.events=true and MoveTask attempts to fire an insert 
> event on insert to a temporary table, this fails because the db event 
> listener cannot find the temporary table.  This is because temporary tables 
> are only stored in the client, not in the server, so the metastore listener 
> will never be able to find them.
> The proper fix is to not fire events for temporary tables, as they have no 
> duration beyond the current client session.
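The proper fix described above amounts to a guard before the event fires. Below 
is a minimal sketch; the Table stand-in and method names are assumptions for 
illustration, not Hive's actual API:

```java
public class InsertEventGuard {

    // Minimal stand-in for a table descriptor; the real Hive Table class differs.
    static class Table {
        final String name;
        final boolean temporary;
        Table(String name, boolean temporary) {
            this.name = name;
            this.temporary = temporary;
        }
    }

    // Returns true only when an insert event should be fired. Temporary
    // tables exist only in the client session, so the metastore listener
    // could never resolve them: skip the event entirely.
    public static boolean shouldFireInsertEvent(Table t, boolean dmlEventsEnabled) {
        return dmlEventsEnabled && !t.temporary;
    }
}
```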





[jira] [Updated] (HIVE-9826) Firing insert event fails on temporary table

2015-03-02 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9826:
-
Attachment: HIVE-9826.patch

> Firing insert event fails on temporary table
> 
>
> Key: HIVE-9826
> URL: https://issues.apache.org/jira/browse/HIVE-9826
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Minor
> Attachments: HIVE-9826.patch
>
>
> When hive.metastore.dml.events=true and MoveTask attempts to fire an insert 
> event on insert to a temporary table, this fails because the db event 
> listener cannot find the temporary table.  This is because temporary tables 
> are only stored in the client, not in the server, so the metastore listener 
> will never be able to find them.
> The proper fix is to not fire events for temporary tables, as they have no 
> duration beyond the current client session.





[jira] [Updated] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository

2015-03-02 Thread Anant Nag (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anant Nag updated HIVE-9664:

Attachment: HIVE-9664.patch

> Hive "add jar" command should be able to download and add jars from a 
> repository
> 
>
> Key: HIVE-9664
> URL: https://issues.apache.org/jira/browse/HIVE-9664
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.14.0
>Reporter: Anant Nag
>  Labels: hive, patch
> Attachments: HIVE-9664.patch
>
>
> Currently Hive's "add jar" command takes a local path to the dependency jar. 
> This clutters the local file-system, as users may forget to remove the jar 
> later.
> It would be nice if Hive supported a Gradle-like notation to download the jar 
> from a repository.
> Example:  add jar org:module:version
> 
> It should also be backward compatible and take a jar from the local 
> file-system as well. 
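A coordinate like the example above splits into group, module, and version. A 
minimal parsing sketch follows, with the repository download itself omitted and 
all class and method names hypothetical:

```java
public class Coordinate {
    final String group, module, version;

    Coordinate(String group, String module, String version) {
        this.group = group;
        this.module = module;
        this.version = version;
    }

    // Parses "org:module:version"; returns null for anything else (e.g. a
    // local file-system path), preserving backward compatibility of "add jar".
    public static Coordinate parse(String spec) {
        String[] parts = spec.split(":");
        if (parts.length != 3) return null;
        for (String p : parts) {
            if (p.isEmpty()) return null;
        }
        return new Coordinate(parts[0], parts[1], parts[2]);
    }

    // The conventional Maven-style repository layout for the artifact path.
    public String toPath() {
        return group.replace('.', '/') + "/" + module + "/" + version
                + "/" + module + "-" + version + ".jar";
    }
}
```

A caller could first try `parse`; when it returns null, fall back to treating 
the argument as a local path, keeping the existing behavior intact.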





[jira] [Updated] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository

2015-03-02 Thread Anant Nag (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anant Nag updated HIVE-9664:

Attachment: (was: HIVE-9664.patch)

> Hive "add jar" command should be able to download and add jars from a 
> repository
> 
>
> Key: HIVE-9664
> URL: https://issues.apache.org/jira/browse/HIVE-9664
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.14.0
>Reporter: Anant Nag
>  Labels: hive, patch
> Attachments: HIVE-9664.patch
>
>
> Currently Hive's "add jar" command takes a local path to the dependency jar. 
> This clutters the local file-system, as users may forget to remove the jar 
> later.
> It would be nice if Hive supported a Gradle-like notation to download the jar 
> from a repository.
> Example:  add jar org:module:version
> 
> It should also be backward compatible and take a jar from the local 
> file-system as well. 





[jira] [Updated] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository

2015-03-02 Thread Anant Nag (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anant Nag updated HIVE-9664:

Attachment: HIVE-9664.patch

> Hive "add jar" command should be able to download and add jars from a 
> repository
> 
>
> Key: HIVE-9664
> URL: https://issues.apache.org/jira/browse/HIVE-9664
> Project: Hive
>  Issue Type: Improvement
>Reporter: Anant Nag
>  Labels: hive
> Attachments: HIVE-9664.patch
>
>
> Currently Hive's "add jar" command takes a local path to the dependency jar. 
> This clutters the local file-system, as users may forget to remove the jar 
> later.
> It would be nice if Hive supported a Gradle-like notation to download the jar 
> from a repository.
> Example:  add jar org:module:version
> 
> It should also be backward compatible and take a jar from the local 
> file-system as well. 





[jira] [Updated] (HIVE-9825) CBO (Calcite Return Path): Translate PTFs and Windowing to Hive Op [CBO branch]

2015-03-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9825:
--
Summary: CBO (Calcite Return Path): Translate PTFs and Windowing to Hive Op 
[CBO branch]  (was: CBO (Calcite Return Path): Translate PTFs to Hive Op [CBO 
branch])

> CBO (Calcite Return Path): Translate PTFs and Windowing to Hive Op [CBO 
> branch]
> ---
>
> Key: HIVE-9825
> URL: https://issues.apache.org/jira/browse/HIVE-9825
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
>






[jira] [Commented] (HIVE-9664) Hive "add jar" command should be able to download and add jars from a repository

2015-03-02 Thread Anant Nag (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343446#comment-14343446
 ] 

Anant Nag commented on HIVE-9664:
-

rb link : https://reviews.apache.org/r/31628/

> Hive "add jar" command should be able to download and add jars from a 
> repository
> 
>
> Key: HIVE-9664
> URL: https://issues.apache.org/jira/browse/HIVE-9664
> Project: Hive
>  Issue Type: Improvement
>Reporter: Anant Nag
>  Labels: hive
>
> Currently Hive's "add jar" command takes a local path to the dependency jar. 
> This clutters the local file-system, as users may forget to remove the jar 
> later.
> It would be nice if Hive supported a Gradle-like notation to download the jar 
> from a repository.
> Example:  add jar org:module:version
> 
> It should also be backward compatible and take a jar from the local 
> file-system as well. 





[jira] [Commented] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343358#comment-14343358
 ] 

Jimmy Xiang commented on HIVE-9659:
---

[~lirui], sure. Assigned the issue to you since you are working on it.

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> ---
>
> Key: HIVE-9659
> URL: https://issues.apache.org/jira/browse/HIVE-9659
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xin Hao
>Assignee: Rui Li
> Attachments: HIVE-9659.1-spark.patch
>
>
> We found that 'Error while trying to create table container' occurs during 
> Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
> If hive.optimize.skewjoin is set to 'false', the case passes.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run BigBench case Q12 and it will fail. 
> Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
> will find the error 'Error while trying to create table container' in the log, 
> and also a NullPointerException near the end of the log.
> (a) Detail error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>   at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>   ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>   ... 22 more
> 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
> 15/02/12 01:29:49 INFO PerfLogger:  from=org.apache.hadoop.hive.ql.exec.spa

[jira] [Updated] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-02 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HIVE-9659:
--
Assignee: Rui Li  (was: Jimmy Xiang)

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> ---
>
> Key: HIVE-9659
> URL: https://issues.apache.org/jira/browse/HIVE-9659
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xin Hao
>Assignee: Rui Li
> Attachments: HIVE-9659.1-spark.patch
>
>
> We found that 'Error while trying to create table container' occurs during 
> Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
> If hive.optimize.skewjoin is set to 'false', the case passes.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run BigBench case Q12 and it will fail. 
> Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
> will find the error 'Error while trying to create table container' in the log, 
> and also a NullPointerException near the end of the log.
> (a) Detail error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>   at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>   ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>   ... 22 more
> 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
> 15/02/12 01:29:49 INFO PerfLogger:  from=org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler>
> {noformat}
> (b) Detail error message for NullPointerException:

[jira] [Commented] (HIVE-9641) Fill out remaining partition functions in HBaseStore

2015-03-02 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343328#comment-14343328
 ] 

Vaibhav Gumashta commented on HIVE-9641:


+1

> Fill out remaining partition functions in HBaseStore
> 
>
> Key: HIVE-9641
> URL: https://issues.apache.org/jira/browse/HIVE-9641
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9641.patch
>
>
> A number of the listPartition and getPartition methods are not implemented.  
> These need to be implemented.





[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with "add jar" command

2015-03-02 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14343213#comment-14343213
 ] 

Yongzhi Chen commented on HIVE-9813:


HIVE-9252 adds a permanent way to register a jar with a table at create time; 
after creating a table in the way proposed in HIVE-9252, there is no need 
to use the add jar command. So the two jiras are different, but both will solve 
the metastore's problem. 


> Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
> "add jar" command
> ---
>
> Key: HIVE-9813
> URL: https://issues.apache.org/jira/browse/HIVE-9813
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>
> Execute following JDBC client program:
> {code}
> import java.sql.*;
> public class TestAddJar {
> private static Connection makeConnection(String connString, String 
> classPath) throws ClassNotFoundException, SQLException
> {
> System.out.println("Current Connection info: "+ connString);
> Class.forName(classPath);
> System.out.println("Current driver info: "+ classPath);
> return DriverManager.getConnection(connString);
> }
> public static void main(String[] args)
> {
> if(2 != args.length)
> {
> System.out.println("Two arguments needed: connection string, path 
> to jar to be added (include jar name)");
> System.out.println("Example: java -jar TestApp.jar 
> jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
> return;
> }
> Connection conn;
> try
> {
> conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
> 
> System.out.println("---");
> System.out.println("DONE");
> 
> System.out.println("---");
> System.out.println("Execute query: add jar " + args[1] + ";");
> Statement stmt = conn.createStatement();
> int c = stmt.executeUpdate("add jar " + args[1]);
> System.out.println("Returned value is: [" + c + "]\n");
> 
> System.out.println("---");
> final String createTableQry = "Create table if not exists 
> json_test(id int, content string) " +
> "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
> System.out.println("Execute query:" + createTableQry + ";");
> stmt.execute(createTableQry);
> 
> System.out.println("---");
> System.out.println("getColumn() 
> Call---\n");
> DatabaseMetaData md = conn.getMetaData();
> System.out.println("Test get all column in a schema:");
> ResultSet rs = md.getColumns("Hive", "default", "json_test", 
> null);
> while (rs.next()) {
> System.out.println(rs.getString(1));
> }
> conn.close();
> }
> catch (ClassNotFoundException e)
> {
> e.printStackTrace();
> }
> catch (SQLException e)
> {
> e.printStackTrace();
> }
> }
> }
> {code}
> We get an exception, and from the metastore log:
> 7:41:30.316 PM  ERROR  hive.log
> error in initSerDe: java.lang.ClassNotFoundException Class 
> org.openx.data.jsonserde.JsonSerDe not found
> java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
> not found
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
> at 
> org

[jira] [Updated] (HIVE-9659) 'Error while trying to create table container' occurs during hive query case execution when hive.optimize.skewjoin set to 'true' [Spark Branch]

2015-03-02 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-9659:
-
Attachment: HIVE-9659.1-spark.patch

Hi Jimmy and Xin, could you help verify whether the patch solves the issue? 
Thanks!

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> ---
>
> Key: HIVE-9659
> URL: https://issues.apache.org/jira/browse/HIVE-9659
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xin Hao
>Assignee: Jimmy Xiang
> Attachments: HIVE-9659.1-spark.patch
>
>
> We found that 'Error while trying to create table container' occurs during 
> Big-Bench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
> If hive.optimize.skewjoin is set to 'false', the case passes.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run BigBench case Q12 and it will fail. 
> Check the executor log (e.g. /usr/lib/spark/work/app-/2/stderr) and you 
> will find the error 'Error while trying to create table container' in the log, 
> and also a NullPointerException near the end of the log.
> (a) Detail error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>   at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>   at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>   at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>   ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>   at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>   ... 22 more
> 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
> 15/02/12 01:29:49 INFO PerfLogger:  from=org.apache.hadoop.hive.ql.exec.spark.Spar

[jira] [Commented] (HIVE-8626) Extend HDFS super-user checks to dropPartitions

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342974#comment-14342974
 ] 

Hive QA commented on HIVE-8626:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701799/HIVE-8626.2.patch

{color:green}SUCCESS:{color} +1 7576 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2915/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2915/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2915/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701799 - PreCommit-HIVE-TRUNK-Build

> Extend HDFS super-user checks to dropPartitions
> ---
>
> Key: HIVE-8626
> URL: https://issues.apache.org/jira/browse/HIVE-8626
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-8626.1.patch, HIVE-8626.2.patch
>
>
> HIVE-6392 takes care of allowing HDFS super-user accounts to register 
> partitions in tables whose HDFS paths don't explicitly grant 
> write-permissions to the super-user.
> However, the dropPartitions()/dropTable()/dropDatabase() use-cases don't 
> handle this at all; i.e., an HDFS super-user ({{kal...@dev.grid.myth.net}}) 
> can't drop the very partitions that were added to a table directory owned by 
> the user ({{mithunr}}). The result is the following error:
> {quote}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Table metadata 
> not deleted since 
> hdfs://mythcluster-nn1.grid.myth.net:8020/user/mithunr/myth.db/myth_table is 
> not writable by kal...@dev.grid.myth.net)
> {quote}
> This is the result of redundant checks in 
> {{HiveMetaStore::dropPartitionsAndGetLocations()}}:
> {code:title=HiveMetaStore.java|borderStyle=solid}
> if (!wh.isWritable(partPath.getParent())) {
>   throw new MetaException("Table metadata not deleted since the partition "
>       + Warehouse.makePartName(partitionKeys, part.getValues())
>       + " has parent location " + partPath.getParent()
>       + " which is not writable by " + hiveConf.getUser());
> }
> {code}
> This check is already made in StorageBasedAuthorizationProvider. If the 
> argument is that the SBAP isn't guaranteed to be in play, then this check 
> shouldn't be in the HMS either. If HDFS permissions need to be checked in 
> addition to, say, ACLs, then perhaps a recursively-composed auth-provider 
> ought to be used.
> For the moment, I'll get {{Warehouse.isWritable()}} to handle HDFS 
> super-users. But I think {{isWritable()}} checks ought not to live in 
> HiveMetaStore. (Perhaps fix this in another JIRA?)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-03-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342901#comment-14342901
 ] 

Hive QA commented on HIVE-6617:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12701790/HIVE-6617.21.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7574 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_select_charliteral
org.apache.hadoop.hive.ql.parse.TestIUD.testSelectStarFromAnonymousVirtTable1Row
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2914/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2914/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2914/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12701790 - PreCommit-HIVE-TRUNK-Build

> Reduce ambiguity in grammar
> ---
>
> Key: HIVE-6617
> URL: https://issues.apache.org/jira/browse/HIVE-6617
> Project: Hive
>  Issue Type: Task
>Reporter: Ashutosh Chauhan
>Assignee: Pengcheng Xiong
> Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
> HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
> HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
> HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
> HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
> HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, 
> HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, HIVE-6617.21.patch
>
>
> CLEAR LIBRARY CACHE
> As of today, ANTLR reports 214 warnings. We need to bring this number down, 
> ideally to 0.


