[jira] [Updated] (HIVE-9644) CASE comparison operator rotation optimization

2015-02-20 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-9644:
--
Summary: CASE comparison operator rotation optimization  (was: Constant 
folding case for CASE/WHEN)

> CASE comparison operator rotation optimization
> --
>
> Key: HIVE-9644
> URL: https://issues.apache.org/jira/browse/HIVE-9644
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 1.2.0
>Reporter: Gopal V
>
> Constant folding doesn't kick in for some automatically generated query 
> patterns that look like this:
> {code}
> hive> explain select count(1) from store_sales where (case ss_sold_date when 
> '1998-01-01' then 1 else null end)=1;
> {code}
> This should get rewritten by pushing the equality into the case branches.
> {code}
> select count(1) from store_sales where (case ss_sold_date when '1998-01-01' 
> then 1=1 else null=1 end);
> {code}
> This leaves a simplified filter condition that resolves to:
> {code}
> select count(1) from store_sales where ss_sold_date= '1998-01-01' ;
> {code}
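The intended rewrite can be sketched as a toy rule over literal CASE predicates (an illustration of the simplification, not Hive's optimizer code):

```python
def rotate_case_equality(col, when_value, then_value, compared_to):
    """Simplify (CASE col WHEN when_value THEN then_value ELSE NULL END)
    = compared_to for literal then_value/compared_to. In a WHERE clause,
    NULL = compared_to is never true, so only the THEN branch can match."""
    if then_value == compared_to:
        # THEN folds to TRUE, ELSE folds to NULL: the predicate reduces
        # to the WHEN comparison itself.
        return ("=", col, when_value)
    # THEN folds to FALSE and ELSE to NULL: never true.
    return ("literal", False)
```

Here `rotate_case_equality("ss_sold_date", "1998-01-01", 1, 1)` yields `("=", "ss_sold_date", "1998-01-01")`, i.e. the simplified `ss_sold_date = '1998-01-01'` filter from the description.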



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9680) GlobalLimitOptimizer is not checking filters correctly

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9680:
---
   Resolution: Fixed
Fix Version/s: 1.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Navis!

> GlobalLimitOptimizer is not checking filters correctly 
> ---
>
> Key: HIVE-9680
> URL: https://issues.apache.org/jira/browse/HIVE-9680
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 1.2.0
>
> Attachments: HIVE-9680.1.patch.txt
>
>
> Some predicates may not be included in opToPartPruner.





[jira] [Updated] (HIVE-9628) HiveMetaStoreClient.dropPartitions(...List>...) doesn't take (boolean needResult)

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9628:
---
   Resolution: Fixed
Fix Version/s: 1.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Mithun!

> HiveMetaStoreClient.dropPartitions(...List>...) 
> doesn't take (boolean needResult)
> 
>
> Key: HIVE-9628
> URL: https://issues.apache.org/jira/browse/HIVE-9628
> Project: Hive
>  Issue Type: Bug
>  Components: API, Metastore
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Fix For: 1.2.0
>
> Attachments: HIVE-9628.1.patch
>
>
> {{HiveMetaStoreClient::dropPartitions()}} assumes that the dropped 
> {{List}} must be returned to the caller. That's a lot of Thrift 
> traffic that the caller might choose not to pay for.
> I propose an overload that retains the default behaviour, but allows 
> {{needResult}} to be overridden.
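The proposed overload shape can be illustrated with a toy client (names hypothetical; the real API is HiveMetaStoreClient.dropPartitions in Java over Thrift):

```python
class ToyMetaStoreClient:
    """Sketch of an overload that retains the default behaviour but lets
    the caller opt out of receiving the dropped-partition list."""

    def __init__(self, partitions):
        self._partitions = dict(partitions)

    def drop_partitions(self, names, need_result=True):
        """Drop the named partitions. With need_result=False the dropped
        partition objects are never collected, so nothing extra would go
        back over the wire to the caller."""
        dropped = [] if need_result else None
        for name in names:
            part = self._partitions.pop(name, None)
            if need_result and part is not None:
                dropped.append(part)
        return dropped
```

Callers that don't care about the result pass `need_result=False` and skip the serialization cost; existing call sites keep the old return value.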





[jira] [Commented] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1433#comment-1433
 ] 

Hive QA commented on HIVE-9480:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/1267/HIVE-9480.7.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7575 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2840/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2840/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2840/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 1267 - PreCommit-HIVE-TRUNK-Build

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch, HIVE-9480.7.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g. SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01 and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as the Oracle one: only 'MM' 
> and 'YEAR' are supported as formats. However, it is a base on which to add 
> other formats.
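The two supported formats can be sketched in a few lines (a toy stand-in for the proposed Java UDF, covering only the 'MM' and 'YEAR' cases named in the description):

```python
from datetime import date

def trunc(d: date, fmt: str) -> date:
    """Toy TRUNC: return the first day of the month or year containing d.
    Only 'MM' and 'YEAR' are supported, mirroring the JIRA description."""
    if fmt == "MM":
        return d.replace(day=1)
    if fmt == "YEAR":
        return d.replace(month=1, day=1)
    raise ValueError("unsupported format: " + fmt)
```

So `trunc(date(2009, 12, 12), "MM")` gives 2009-12-01 and `trunc(date(2009, 12, 12), "YEAR")` gives 2009-01-01, matching the examples above.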





[jira] [Commented] (HIVE-9730) LLAP: make sure logging is never called when not needed

2015-02-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329991#comment-14329991
 ] 

Sergey Shelukhin commented on HIVE-9730:


I have an epic patch in the works that extends this to the operator pipeline and 
assorted other parts of the code. It will probably make it into trunk too, next 
week...

> LLAP: make sure logging is never called when not needed
> ---
>
> Key: HIVE-9730
> URL: https://issues.apache.org/jira/browse/HIVE-9730
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: log4j-llap.png
>
>
> log4j logging has really inefficient serialization
> !log4j-llap.png!





[jira] [Resolved] (HIVE-9737) Issue come while creating the table in hbase using java Impla API

2015-02-20 Thread Mohit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Sharma resolved HIVE-9737.

  Resolution: Fixed
Release Note: Thanks

> Issue come while creating the table in hbase using java Impla API
> -
>
> Key: HIVE-9737
> URL: https://issues.apache.org/jira/browse/HIVE-9737
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
> Environment: Linux OS, Using Java Impala API with thrift, Cloudera 
> Hbase database
>Reporter: Mohit Sharma
>Assignee: Damien Carol
>
> I am trying to create an HBase table using this query:
> {code}
> CREATE TABLE foo4(rowkey STRING, a STRING, b STRING) STORED BY 
> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES 
> ('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES 
> ('hbase.table.name' = 'bar4');
> {code}
> with the help of the Java Impala API. The client code I am using is here:
> https://github.com/pauldeschacht/impala-java-client
> When I try to create the table in HBase I get this error:
> {noformat}
> AnalysisException: Syntax error in line 2:
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>^
> Encountered: BY
> Expected: AS
> CAUSED BY: Exception: Syntax error,HY000,0,false
> {noformat}
> Please help; what should I do?





[jira] [Commented] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329984#comment-14329984
 ] 

Hive QA commented on HIVE-9277:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699983/HIVE-9277.03.patch

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 7567 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_hybridhashjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_left_outer_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_context
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2839/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2839/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2839/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 24 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699983 - PreCommit-HIVE-TRUNK-Build

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when the hash table 
> grows
> * A map join operator can be used even though the small table doesn't fit in 
> memory, as spilling some data from the build and probe sides will still be 
> cheaper than shuffling the large fact table
> The design is based on Hadoop’s parallel processing capability and the 
> significant amount of memory available.
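The grace-style partitioning behind the proposal can be sketched as a toy join (an illustration of the partitioning idea only, not Hive's implementation; here "spilled" partitions are just in-memory lists processed in a later pass):

```python
from collections import defaultdict

def grace_hash_join(build_rows, probe_rows, num_partitions=4):
    """Toy grace hash join: partition both inputs by join key so each
    partition pair can be joined independently. In a real engine, build
    partitions that exceed memory are spilled to disk and joined later;
    here every partition is simply processed in a second pass."""
    build_parts = defaultdict(list)
    probe_parts = defaultdict(list)
    for key, val in build_rows:
        build_parts[hash(key) % num_partitions].append((key, val))
    for key, val in probe_rows:
        probe_parts[hash(key) % num_partitions].append((key, val))

    results = []
    for p in range(num_partitions):
        # Build an in-memory hash table for this partition only.
        table = defaultdict(list)
        for key, val in build_parts[p]:
            table[key].append(val)
        # Probe it with the matching probe-side partition.
        for key, val in probe_parts[p]:
            for build_val in table[key]:
                results.append((key, build_val, val))
    return results
```

Because only one build partition must be resident at a time, the join survives a build side larger than memory, which is the property the description claims for the map join case.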





Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Alexander Pivovarov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/#review73391
---



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


According to Java coding conventions, UPPER_CASE variable names are used 
only for constants (static final).



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


Why do you read FORMAT_INPUT only for the first row and then cache it? 
What if the format is dynamic?

As I mentioned before, use a ConstantObjectInspector to get the format in 
the initialize() method.

If it's not constant, then you have to read the format value in the 
evaluate() method for every single row; don't cache the format if it is 
not constant.

Look at the decode UDF as an example.
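The constant-vs-dynamic distinction the reviewer is asking for can be sketched generically (a Python stand-in; in Hive the constant case would be detected via ConstantObjectInspector in initialize(), and the class and method names here are hypothetical):

```python
class ToyTruncUDF:
    """Cache the format only when the planner can prove it constant;
    otherwise re-read it for every row, as the review suggests."""

    def __init__(self, constant_format=None):
        # Set only when the format argument is a compile-time constant.
        self._constant_format = constant_format

    def evaluate(self, value, format_arg):
        # Cached constant wins; a dynamic format is re-read per row.
        fmt = self._constant_format if self._constant_format is not None else format_arg
        if fmt == "MM":
            return value[:8] + "01"      # 'YYYY-MM-DD' -> first of month
        if fmt == "YEAR":
            return value[:5] + "01-01"   # 'YYYY-MM-DD' -> first of year
        raise ValueError("unsupported format: " + fmt)
```

With a cached constant, a per-row dynamic format argument is deliberately ignored; without it, each row's format is honored, which is exactly the bug caching-the-first-row would introduce.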



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


I think ("MM".equals(formatInput) || "MON".equals(formatInput) || 
"MONTH".equals(formatInput)) is faster than calling contains() on an ArrayList.



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


("YEAR".equals(formatInput) || "".equals(formatInput) || 
"YY".equals(formatInput))


- Alexander Pivovarov


On Feb. 21, 2015, 2:54 a.m., XIAOBING ZHOU wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30335/
> ---
> 
> (Updated Feb. 21, 2015, 2:54 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> for date/timestamp-related computation. This JIRA tracks such an 
> implementation.
> 
> https://issues.apache.org/jira/browse/HIVE-9480
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfeb33c 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
>   ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
>   ql/src/test/results/clientpositive/show_functions.q.out d4b0650 
>   ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30335/diff/
> 
> 
> Testing
> ---
> 
> Unit tests done in 
> ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java
> 
> 
> Thanks,
> 
> XIAOBING ZHOU
> 
>



[jira] [Commented] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329964#comment-14329964
 ] 

Xiaobing Zhou commented on HIVE-9480:
-

V7 addressed latest comments.
Thanks [~apivovarov] and [~jdere] for reviews.

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch, HIVE-9480.7.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g. SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01 and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as the Oracle one: only 'MM' 
> and 'YEAR' are supported as formats. However, it is a base on which to add 
> other formats.





[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9480:

Attachment: HIVE-9480.7.patch

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch, HIVE-9480.7.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g. SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01 and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as the Oracle one: only 'MM' 
> and 'YEAR' are supported as formats. However, it is a base on which to add 
> other formats.





Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread XIAOBING ZHOU

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/
---

(Updated Feb. 21, 2015, 2:54 a.m.)


Review request for hive.


Changes
---

V7 addressed latest comments.


Repository: hive-git


Description
---

Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
for date/timestamp-related computation. This JIRA tracks such an 
implementation.

https://issues.apache.org/jira/browse/HIVE-9480


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfeb33c 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
  ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/show_functions.q.out d4b0650 
  ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/30335/diff/


Testing
---

Unit tests done in 
ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java


Thanks,

XIAOBING ZHOU



[jira] [Commented] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329948#comment-14329948
 ] 

Hive QA commented on HIVE-9480:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699980/HIVE-9480.6.patch

{color:green}SUCCESS:{color} +1 7572 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2838/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2838/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2838/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699980 - PreCommit-HIVE-TRUNK-Build

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> for date/timestamp-related computation. This JIRA tracks such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g. SELECT TRUNC('2009-12-12', 'MM'); returns 
> 2009-12-01 and SELECT TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as the Oracle one: only 'MM' 
> and 'YEAR' are supported as formats. However, it is a base on which to add 
> other formats.





[jira] [Updated] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-9747:

Attachment: (was: HIVE-7926-llap.patch)

> LLAP: Merge from trunk to llap branch 2/20/2015
> ---
>
> Key: HIVE-9747
> URL: https://issues.apache.org/jira/browse/HIVE-9747
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9747-llap.patch
>
>
> Rebase llap branch against trunk.





[jira] [Updated] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-9747:

Attachment: HIVE-9747-llap.patch

Renamed the patch.

> LLAP: Merge from trunk to llap branch 2/20/2015
> ---
>
> Key: HIVE-9747
> URL: https://issues.apache.org/jira/browse/HIVE-9747
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9747-llap.patch
>
>
> Rebase llap branch against trunk.





[jira] [Resolved] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-9747.
-
Resolution: Fixed

> LLAP: Merge from trunk to llap branch 2/20/2015
> ---
>
> Key: HIVE-9747
> URL: https://issues.apache.org/jira/browse/HIVE-9747
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-7926-llap.patch
>
>
> Rebase llap branch against trunk.





[jira] [Commented] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329944#comment-14329944
 ] 

Prasanth Jayachandran commented on HIVE-9747:
-

Committed to llap branch.

> LLAP: Merge from trunk to llap branch 2/20/2015
> ---
>
> Key: HIVE-9747
> URL: https://issues.apache.org/jira/browse/HIVE-9747
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-7926-llap.patch
>
>
> Rebase llap branch against trunk.





[jira] [Updated] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-9747:

Attachment: HIVE-7926-llap.patch

> LLAP: Merge from trunk to llap branch 2/20/2015
> ---
>
> Key: HIVE-9747
> URL: https://issues.apache.org/jira/browse/HIVE-9747
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-7926-llap.patch
>
>
> Rebase llap branch against trunk.





[jira] [Commented] (HIVE-9730) LLAP: make sure logging is never called when not needed

2015-02-20 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329934#comment-14329934
 ] 

Gopal V commented on HIVE-9730:
---

That is true; the initialize path won't be sped up at all.

Instead, what you propose would prevent any unnecessary Appender code from 
appearing in the JIT tracing, and the JIT will remove all logging as dead 
code instead.

> LLAP: make sure logging is never called when not needed
> ---
>
> Key: HIVE-9730
> URL: https://issues.apache.org/jira/browse/HIVE-9730
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: log4j-llap.png
>
>
> log4j logging has really inefficient serialization
> !log4j-llap.png!





Re: Is it allowed to include GPLv2 software to Apache projects

2015-02-20 Thread Alan Gates

No.

Alan.


Alexander Pivovarov 
February 20, 2015 at 14:30
Hi Everyone

The "Apache License v2.0 and GPL Compatibility" page clearly says "GPLv3 
software cannot be included in Apache projects":

http://www.apache.org/licenses/GPL-compatibility.html

What about GPLv2 software? Can it be included in Apache projects?



[jira] [Created] (HIVE-9747) LLAP: Merge from trunk to llap branch 2/20/2015

2015-02-20 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-9747:
---

 Summary: LLAP: Merge from trunk to llap branch 2/20/2015
 Key: HIVE-9747
 URL: https://issues.apache.org/jira/browse/HIVE-9747
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
 Fix For: llap


Rebase llap branch against trunk.





[jira] [Commented] (HIVE-9745) predicate evaluation of character fields with spaces and literals with spaces returns unexpected result

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329913#comment-14329913
 ] 

Jason Dere commented on HIVE-9745:
--

This is unfortunately due to the fact that string literals in Hive are of type 
string, whereas most other databases treat them as type char. When comparing 
string to char, both sides are converted to string before comparison; trailing 
spaces are stripped when converting from char to string, and trailing spaces 
are significant during string comparison.

It would have been nice to change string literals to char type, but that would 
mean changing a pretty fundamental behavior in Hive, and I'm not really sure 
what the consequences would be.

As you have already noticed, one solution is to cast string literals to char. 
Another option that might work here is to strip trailing whitespace from the 
string literals.

As with HIVE-9537, better documentation would probably be helpful here.
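The comparison semantics described above can be simulated in a few lines (a sketch of the described behavior, not Hive code):

```python
def char_to_string(char_value: str) -> str:
    # Converting char to string strips trailing spaces, per the
    # behavior described in the comment above.
    return char_value.rstrip(" ")

def char_equals_string_literal(char_value: str, literal: str) -> bool:
    # Both sides are compared as strings, where trailing spaces are
    # significant, so a char column holding ' ' strips to '' and
    # fails to match the string literal ' '.
    return char_to_string(char_value) == literal
```

Casting the literal to char(1) instead puts both sides under char semantics, where trailing spaces are insignificant, which is why the explicit cast in the report below changes the result.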

> predicate evaluation of character fields with spaces and literals with spaces 
> returns unexpected result
> ---
>
> Key: HIVE-9745
> URL: https://issues.apache.org/jira/browse/HIVE-9745
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.14.0
>Reporter: N Campbell
>
> The following query should return 5 rows but Hive returns 3
> select rnum, tchar.cchar from tchar where not (  tchar.cchar = ' ' or ( 
> tchar.cchar is null and ' ' is null ))
> Consider the following project of the base table
> select rnum, tchar.cchar, 
> case tchar.cchar when ' ' then 'space' else 'not space' end, 
> case when tchar.cchar is null then 'is null' else 'not null' end, case when ' 
> ' is null then 'is null' else 'not null' end
> from tchar
> order by rnum
> Row 0 is a NULL
> Row 1 was loaded with a zero length string ''
> Row 2 was loaded with a single space ' '
> rnum  tchar.cchar  _c2        _c3      _c4
> 0                  not space  is null  not null
> 1                  not space  not null not null
> 2                  not space  not null not null
> 3     BB           not space  not null not null
> 4     EE           not space  not null not null
> 5     FF           not space  not null not null
> Explicitly type-casting the literal works, although many SQL developers would 
> not expect to need to do this.
> select rnum, tchar.cchar, 
> case tchar.cchar when cast(' ' as char(1)) then 'space' else 'not space' end, 
> case when tchar.cchar is null then 'is null' else 'not null' end, case when 
> cast( ' ' as char(1)) is null then 'is null' else 'not null' end
> from tchar
> order by rnum
> rnum  tchar.cchar  _c2        _c3      _c4
> 0                  not space  is null  not null
> 1                  space      not null not null
> 2                  space      not null not null
> 3     BB           not space  not null not null
> 4     EE           not space  not null not null
> 5     FF           not space  not null not null
> create table  if not exists T_TCHAR ( RNUM int , CCHAR char(32 ))
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS TEXTFILE  ;
> 0|\N
> 1|
> 2| 
> 3|BB
> 4|EE
> 5|FF
> create table  if not exists TCHAR ( RNUM int , CCHAR char(32 ))
>  STORED AS orc  ;
> insert overwrite table TCHAR select * from  T_TCHAR;





[jira] [Commented] (HIVE-9730) LLAP: make sure logging is never called when not needed

2015-02-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329906#comment-14329906
 ] 

Sergey Shelukhin commented on HIVE-9730:


Actually, this screenshot can only be dealt with by disabling logging or by 
removing/downgrading the level of the offending statements.
My main change is caching the log-level-enabled state... I will look at the 
operator stuff too
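Caching the level-enabled state can be sketched like this (a toy wrapper, not the actual patch; Hive's code is Java on log4j, and the class name here is hypothetical):

```python
import logging

class GuardedLogger:
    """Caches the level-enabled check once, so hot-path code pays only a
    boolean field test and never builds log arguments when the level is
    off. Trade-off: level changes after construction are not picked up."""

    def __init__(self, logger):
        self._logger = logger
        self.debug_enabled = logger.isEnabledFor(logging.DEBUG)  # cached once

    def debug(self, msg, *args):
        if self.debug_enabled:              # cheap guard on the hot path
            self._logger.debug(msg, *args)  # formatting happens only here
```

Per-row code then checks a plain boolean instead of walking the logger hierarchy, and when debug is off, the JIT can treat the guarded call as dead code, as discussed above.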

> LLAP: make sure logging is never called when not needed
> ---
>
> Key: HIVE-9730
> URL: https://issues.apache.org/jira/browse/HIVE-9730
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: log4j-llap.png
>
>
> log4j logging has really inefficient serialization
> !log4j-llap.png!





[jira] [Updated] (HIVE-9746) Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating running query

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9746:

Description: Run a hive query and then hit Ctrl+C twice. This kills the 
hive query instantly before recording with ATS that it has finished. The hive 
query should attempt recording with ATS before going down.  (was: Run a hive 
query and then hit Ctrl+C twice. This kills the hive query instantly before 
recording with ATS that it has finished. The hive query should attempt 
recording with ATS before going down.

Hive query should record with ATS when Ctrl+C is pressed)

> Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating 
> running query
> --
>
> Key: HIVE-9746
> URL: https://issues.apache.org/jira/browse/HIVE-9746
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: Jobs1.png, Jobs2.png
>
>
> Run a hive query and then hit Ctrl+C twice. This kills the hive query 
> instantly before recording with ATS that it has finished. The hive query 
> should attempt recording with ATS before going down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9746) Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating running query

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9746:

Description: 
Run a hive query and then hit Ctrl+C twice. This kills the hive query instantly 
before recording with ATS that it has finished. The hive query should attempt 
recording with ATS before going down.

Hive query should record with ATS when Ctrl+C is pressed

> Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating 
> running query
> --
>
> Key: HIVE-9746
> URL: https://issues.apache.org/jira/browse/HIVE-9746
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Run a hive query and then hit Ctrl+C twice. This kills the hive query 
> instantly before recording with ATS that it has finished. The hive query 
> should attempt recording with ATS before going down.
> Hive query should record with ATS when Ctrl+C is pressed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9746) Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating running query

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9746:

Attachment: Jobs2.png
Jobs1.png

> Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating 
> running query
> --
>
> Key: HIVE-9746
> URL: https://issues.apache.org/jira/browse/HIVE-9746
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: Jobs1.png, Jobs2.png
>
>
> Run a hive query and then hit Ctrl+C twice. This kills the hive query 
> instantly before recording with ATS that it has finished. The hive query 
> should attempt recording with ATS before going down.
> Hive query should record with ATS when Ctrl+C is pressed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329897#comment-14329897
 ] 

Brock Noland commented on HIVE-9726:


Thanks everyone for your help!

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Brock Noland
Both solutions are reasonable from my perspective...

Brock

On Fri, Feb 20, 2015 at 2:01 PM, Thejas Nair  wrote:

> Thanks for finding the reason for this Brock!
> I think the ATSHook contents should be shimmed (in trunk) so that it
> is not excluded from a hadoop-1 build. (Or maybe, we should start
> surveying if people are still using newer versions of hive on Hadoop
> 1.x).
>
> I also ran a few simple queries using the RC in a single node cluster
> and everything looked good.
>
>
> On Fri, Feb 20, 2015 at 1:00 PM, Brock Noland  wrote:
> > Hi,
> >
> > That is true and by design when built with the hadoop-1 profile:
> >
> >
> https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38
> >
> > Brock
> >
> > On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair 
> wrote:
> >> A few classes seem to be missing from the hive-exec*jar in binary
> >> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
> >> those. ie, the source tar.gz looks fine.
> >>
> >> It is the ATSHook classes that are missing. Those are needed to be
> >> able to register job progress information with Yarn timeline server.
> >>
> >>  diff /tmp/src.txt /tmp/bin.txt
> >> 4768,4775d4767
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
> >>
> >>
> >> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
> >>> +1
> >>>
> >>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with
> some
> >>> DDL/DML queries.
> >>> 2. Tested the bin with some DDL/DML queries.
> >>> 3. Verified signature for bin and src, both asc and md5.
> >>>
> >>> Chao
> >>>
> >>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho 
> wrote:
> >>>
>  +1
> 
>  1.  Verified signature for bin and src
>  2.  Built src with hadoop2
>  3.  Ran few queries from beeline with src
>  4.  Ran few queries from beeline with bin
>  5.  Verified no SNAPSHOT deps
> 
>  Thanks
>  Szehon
> 
>  On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang 
> wrote:
> 
>  > +1
>  >
>  > 1. downloaded the src tarball and built w/ -Phadoop-1/2
>  > 2. verified no binary (jars) in the src tarball
>  >
>  > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
>  wrote:
>  >
>  > > +1
>  > >
>  > > verified sigs, hashes, created tables, ran MR on YARN jobs
>  > >
>  > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland  >
>  > wrote:
>  > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
>  > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>  > > >
>  > > > Maven artifacts are available here:
>  > > >
> 
> https://repository.apache.org/content/repositories/orgapachehive-1026/
>  > > >
>  > > > Source tag for RC3 is at:
>  > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>  > > >
>  > > > My key is located here:
>  https://people.apache.org/keys/group/hive.asc
>  > > >
>  > > > Voting will conclude in 72 hours
>  > >
>  >
> 
> >>>
> >>>
> >>>
> >>> --
> >>> Best,
> >>> Chao
>


[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9746) Refactor ATSHook constructor to avoid issues of twice CTRL+C terminating running query

2015-02-20 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HIVE-9746:
---

 Summary: Refactor ATSHook constructor to avoid issues of twice 
CTRL+C terminating running query
 Key: HIVE-9746
 URL: https://issues.apache.org/jira/browse/HIVE-9746
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9277) Hybrid Hybrid Grace Hash Join

2015-02-20 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-9277:

Attachment: HIVE-9277.03.patch

Uploaded 3rd patch for testing

> Hybrid Hybrid Grace Hash Join
> -
>
> Key: HIVE-9277
> URL: https://issues.apache.org/jira/browse/HIVE-9277
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: join
> Attachments: HIVE-9277.01.patch, HIVE-9277.02.patch, 
> HIVE-9277.03.patch, High-leveldesignforHybridHybridGraceHashJoinv1.0.pdf
>
>
> We are proposing an enhanced hash join algorithm called _“hybrid hybrid grace 
> hash join”_.
> We can benefit from this feature as illustrated below:
> * The query will not fail even if the estimated memory requirement is 
> slightly wrong
> * Expensive garbage collection overhead can be avoided when hash table grows
> * Join execution using a Map join operator even though the small table 
> doesn't fit in memory as spilling some data from the build and probe sides 
> will still be cheaper than having to shuffle the large fact table
> The design was based on Hadoop’s parallel processing capability and 
> significant amount of memory available.
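The spilling idea behind a grace-style hash join can be sketched roughly: partition both sides by hash, keep as many build partitions in memory as fit, and join the spilled partitions in a later pass. A minimal Python sketch under assumed parameters (the partition count and in-memory budget are made up for illustration; this is not the Hive implementation):

```python
def hybrid_grace_hash_join(build, probe, key, n_parts=4, in_mem_parts=2):
    """Join two lists of dicts on `key`. Build partitions with index
    >= in_mem_parts are treated as 'spilled' and joined in a second
    pass, mimicking rows that did not fit in memory."""
    part = lambda row: hash(row[key]) % n_parts
    build_parts = [[] for _ in range(n_parts)]
    for row in build:
        build_parts[part(row)].append(row)

    # Pass 1: hash tables for the in-memory build partitions.
    mem_tables = {}
    for p in range(in_mem_parts):
        tbl = {}
        for row in build_parts[p]:
            tbl.setdefault(row[key], []).append(row)
        mem_tables[p] = tbl

    out = []
    spilled_probe = [[] for _ in range(n_parts)]
    for row in probe:
        p = part(row)
        if p < in_mem_parts:
            for b in mem_tables[p].get(row[key], []):
                out.append({**b, **row})
        else:
            spilled_probe[p].append(row)  # defer to pass 2

    # Pass 2: join each spilled partition pair on its own.
    for p in range(in_mem_parts, n_parts):
        tbl = {}
        for row in build_parts[p]:
            tbl.setdefault(row[key], []).append(row)
        for row in spilled_probe[p]:
            for b in tbl.get(row[key], []):
                out.append({**b, **row})
    return out

build = [{"k": 1, "b": "x"}, {"k": 2, "b": "y"}]
probe = [{"k": 1, "p": "a"}, {"k": 2, "p": "b"}, {"k": 3, "p": "c"}]
rows = hybrid_grace_hash_join(build, probe, "k")
assert len(rows) == 2
```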



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9741) Refactor MetaStoreDirectSql by using getProductName instead of querying DB to determine DbType

2015-02-20 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329884#comment-14329884
 ] 

Xiaobing Zhou commented on HIVE-9741:
-

Thanks [~ashutoshc], will do that in an upcoming patch.

> Refactor MetaStoreDirectSql by using getProductName instead of querying DB to 
> determine DbType
> --
>
> Key: HIVE-9741
> URL: https://issues.apache.org/jira/browse/HIVE-9741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9741.1.patch, HIVE-9741.2.patch
>
>
> The MetaStoreDirectSql constructor queries the DB to determine the dbType, 
> which leads to too many DB queries and can make the metastore slow or hang 
> if the constructor is called frequently. This proposes using getProductName 
> to get the dbType info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329881#comment-14329881
 ] 

Hive QA commented on HIVE-9726:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699961/HIVE-9726.1-spark.patch

{color:green}SUCCESS:{color} +1 7553 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/740/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/740/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-740/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699961 - PreCommit-HIVE-SPARK-Build

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9741) Refactor MetaStoreDirectSql by using getProductName instead of querying DB to determine DbType

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9741:

Attachment: HIVE-9741.2.patch

Patch V2.

> Refactor MetaStoreDirectSql by using getProductName instead of querying DB to 
> determine DbType
> --
>
> Key: HIVE-9741
> URL: https://issues.apache.org/jira/browse/HIVE-9741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9741.1.patch, HIVE-9741.2.patch
>
>
> The MetaStoreDirectSql constructor queries the DB to determine the dbType, 
> which leads to too many DB queries and can make the metastore slow or hang 
> if the constructor is called frequently. This proposes using getProductName 
> to get the dbType info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31249: HIVE-9741: Refactor MetaStoreDirectSql by using getProductName instead of querying DB to determine DbType

2015-02-20 Thread XIAOBING ZHOU

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31249/
---

Review request for hive.


Repository: hive-git


Description
---

The MetaStoreDirectSql constructor queries the DB to determine the dbType, 
which leads to too many DB queries and can make the metastore slow or hang if 
the constructor is called frequently. This proposes using getProductName to 
get the dbType info.
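A rough sketch of the approach: read the product name once from JDBC metadata (`DatabaseMetaData.getDatabaseProductName()` in Java) and map it to an internal db type, instead of issuing a probe query each time the constructor runs. Below is an illustrative Python version of the mapping step only; the type tags are hypothetical, not Hive's actual constants:

```python
def db_type_from_product_name(product_name):
    """Map a JDBC-style product name string to an internal db type tag.
    Tag names here are illustrative, not Hive's."""
    name = product_name.lower()
    if "mysql" in name:
        return "MYSQL"
    if "postgresql" in name:
        return "POSTGRES"
    if "oracle" in name:
        return "ORACLE"
    if "microsoft sql server" in name:
        return "MSSQL"
    if "derby" in name:
        return "DERBY"
    return "OTHER"  # fall back to generic SQL behavior

assert db_type_from_product_name("MySQL") == "MYSQL"
assert db_type_from_product_name("PostgreSQL") == "POSTGRES"
```

No round-trip to the database is needed: the product name is already cached by the JDBC driver's connection metadata.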


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
574141c 

Diff: https://reviews.apache.org/r/31249/diff/


Testing
---


Thanks,

XIAOBING ZHOU



[jira] [Created] (HIVE-9745) predicate evaluation of character fields with spaces and literals with spaces returns unexpected result

2015-02-20 Thread N Campbell (JIRA)
N Campbell created HIVE-9745:


 Summary: predicate evaluation of character fields with spaces and 
literals with spaces returns unexpected result
 Key: HIVE-9745
 URL: https://issues.apache.org/jira/browse/HIVE-9745
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.14.0
Reporter: N Campbell


The following query should return 5 rows but Hive returns 3


select rnum, tchar.cchar from tchar where not (  tchar.cchar = ' ' or ( 
tchar.cchar is null and ' ' is null ))

Consider the following project of the base table


select rnum, tchar.cchar, 
case tchar.cchar when ' ' then 'space' else 'not space' end, 
case when tchar.cchar is null then 'is null' else 'not null' end, case when ' ' 
is null then 'is null' else 'not null' end
from tchar
order by rnum

Row 0 is a NULL
Row 1 was loaded with a zero length string ''
Row 2 was loaded with a single space ' '

rnum  tchar.cchar  _c2        _c3      _c4
0                  not space  is null  not null
1                  not space  not null not null
2                  not space  not null not null
3     BB           not space  not null not null
4     EE           not space  not null not null
5     FF           not space  not null not null

Explicitly type-casting the literal gives the expected result, something many 
SQL developers would not expect to need to do.

select rnum, tchar.cchar, 
case tchar.cchar when cast(' ' as char(1)) then 'space' else 'not space' end, 
case when tchar.cchar is null then 'is null' else 'not null' end, case when 
cast( ' ' as char(1)) is null then 'is null' else 'not null' end
from tchar
order by rnum

rnum  tchar.cchar  _c2        _c3      _c4
0                  not space  is null  not null
1                  space      not null not null
2                  space      not null not null
3     BB           not space  not null not null
4     EE           not space  not null not null
5     FF           not space  not null not null


create table  if not exists T_TCHAR ( RNUM int , CCHAR char(32 ))
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
 STORED AS TEXTFILE  ;

0|\N
1|
2| 
3|BB
4|EE
5|FF


create table  if not exists TCHAR ( RNUM int , CCHAR char(32 ))
 STORED AS orc  ;

insert overwrite table TCHAR select * from  T_TCHAR;
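The behavior above hinges on comparison semantics: under SQL CHAR rules, both operands are blank-padded to a common length before comparing, so '' and ' ' both equal a CHAR ' ', while an untyped string literal may be compared under STRING rules where trailing spaces are significant. A rough Python sketch of the two rules (a model of the semantics, not Hive's implementation):

```python
def char_eq(a, b):
    """CHAR-style comparison: pad with blanks to equal length first,
    so trailing spaces are insignificant."""
    n = max(len(a), len(b))
    return a.ljust(n) == b.ljust(n)

def string_eq(a, b):
    """STRING-style comparison: trailing spaces are significant."""
    return a == b

# Row 1 was loaded as '' and row 2 as ' ':
assert char_eq("", " ")        # equal under CHAR semantics -> 'space'
assert char_eq(" ", " ")
assert not string_eq("", " ")  # distinct under STRING semantics -> 'not space'
```

This is why the un-cast literal ' ' produces 'not space' for rows 1 and 2, while `cast(' ' as char(1))` produces 'space'.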




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329876#comment-14329876
 ] 

Jason Dere commented on HIVE-3454:
--

TimestampWritable.intToTimeStampInSeconds is still static, which means we could 
run into issues in HiveServer2 with concurrent queries. Maybe this should be 
thread-local.
Hmm, yeah I'm not sure where the best place is to call 
TimestampWritable.initialize() .. I was going to recommend Driver.compile(), 
though we would also have to add it to MR task initialization somewhere if 
there is such a place.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.
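The '1970-01-16' symptom is characteristic of a unit mismatch: an epoch value in seconds being interpreted as milliseconds (effectively divided by 1000), which collapses roughly 42 years into about 15 days. A quick Python illustration (the constant is an arbitrary 2012 epoch-seconds value chosen for the demo):

```python
from datetime import datetime, timezone

epoch_seconds = 1346000000  # a moment in 2012, expressed in seconds

# Correct: interpret the value as seconds since the epoch.
right = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
# Wrong: treat the same number as milliseconds (divide by 1000).
wrong = datetime.fromtimestamp(epoch_seconds / 1000, tz=timezone.utc)

assert right.year == 2012
assert (wrong.year, wrong.month, wrong.day) == (1970, 1, 16)
```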



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9480:

Attachment: HIVE-9480.6.patch

Thanks [~jdere]. Here's V6 after rebase.

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch, HIVE-9480.6.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases FIRST_DAY is necessary 
> for date/timestamp related computation. This JIRA is to track such an 
> implementation. We chose to implement TRUNC, a more standard way to get the 
> first day of a month, e.g., SELECT TRUNC('2009-12-12', 'MM'); will return 
> 2009-12-01 and SELECT TRUNC('2009-12-12', 'YEAR'); will return 2009-01-01.
> BTW, this TRUNC is not as feature-complete as the Oracle one; only 'MM' and 
> 'YEAR' are supported as formats, but it is a base for adding other formats.
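The described truncation behavior can be sketched as follows — a hypothetical Python helper covering only the 'MM' and 'YEAR' formats the patch supports, not the UDF itself:

```python
from datetime import date

def trunc(d, fmt):
    """Truncate a date to the first day of its month or year,
    mirroring TRUNC(date, 'MM') and TRUNC(date, 'YEAR')."""
    if fmt == "MM":
        return d.replace(day=1)
    if fmt == "YEAR":
        return d.replace(month=1, day=1)
    raise ValueError("unsupported format: " + fmt)

assert trunc(date(2009, 12, 12), "MM") == date(2009, 12, 1)
assert trunc(date(2009, 12, 12), "YEAR") == date(2009, 1, 1)
```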



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread XIAOBING ZHOU

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/
---

(Updated Feb. 21, 2015, 12:49 a.m.)


Review request for hive.


Changes
---

patch V6, did a rebase.


Repository: hive-git


Description
---

Hive already supports LAST_DAY UDF, in some cases, FIRST_DAY is necessary to do 
date/timestamp related computation. This JIRA is to track such an 
implementation.

https://issues.apache.org/jira/browse/HIVE-9480


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfeb33c 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
  ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/show_functions.q.out d4b0650 
  ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/30335/diff/


Testing
---

Unit tests done in 
ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java


Thanks,

XIAOBING ZHOU



[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329857#comment-14329857
 ] 

Brock Noland commented on HIVE-3454:


+1 LGTM

best we can do in this situation.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-9744) Move common arguments validation and value extraction code to GenericUDF

2015-02-20 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-9744 started by Alexander Pivovarov.
-
> Move common arguments validation and value extraction code to GenericUDF
> 
>
> Key: HIVE-9744
> URL: https://issues.apache.org/jira/browse/HIVE-9744
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> most of the UDFs 
> - check if arguments are primitive / complex
> - check if arguments are particular type or type_group
> - get converters to read values
> - check if argument is constant
> - extract argument values
> Probably we should move these common methods to GenericUDF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Alexander Pivovarov


> On Feb. 20, 2015, 9:21 p.m., Jason Dere wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java, 
> > line 94
> > 
> >
> > We do this same check in so many different date functions .. we should 
> > eventually add a utility method to do this date type parameter checking. We 
> > don't have to do it in this Jira, it can be future work.

working on it https://issues.apache.org/jira/browse/HIVE-9744


- Alexander


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/#review73309
---


On Feb. 20, 2015, 1:18 a.m., XIAOBING ZHOU wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30335/
> ---
> 
> (Updated Feb. 20, 2015, 1:18 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive already supports LAST_DAY UDF, in some cases, FIRST_DAY is necessary to 
> do date/timestamp related computation. This JIRA is to track such an 
> implementation.
> 
> https://issues.apache.org/jira/browse/HIVE-9480
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfb4dc2 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
>   ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
>   ql/src/test/results/clientpositive/show_functions.q.out e21b54b 
>   ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30335/diff/
> 
> 
> Testing
> ---
> 
> Unit tests done in 
> ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java
> 
> 
> Thanks,
> 
> XIAOBING ZHOU
> 
>



[jira] [Created] (HIVE-9744) Move common arguments validation and value extraction code to GenericUDF

2015-02-20 Thread Alexander Pivovarov (JIRA)
Alexander Pivovarov created HIVE-9744:
-

 Summary: Move common arguments validation and value extraction 
code to GenericUDF
 Key: HIVE-9744
 URL: https://issues.apache.org/jira/browse/HIVE-9744
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor


most of the UDFs 
- check if arguments are primitive / complex
- check if arguments are particular type or type_group
- get converters to read values
- check if argument is constant
- extract argument values

Probably we should move these common methods to GenericUDF
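The kind of shared helpers proposed could look roughly like this — a hypothetical base-class sketch in Python (the real change would be Java methods on GenericUDF; all method names here are illustrative):

```python
class GenericUDF:
    """Hypothetical base class holding the argument checks most UDFs repeat."""

    def check_args_size(self, args, lo, hi):
        # Shared arity validation, instead of per-UDF copies.
        if not (lo <= len(args) <= hi):
            raise ValueError(f"expected {lo}..{hi} arguments, got {len(args)}")

    def check_arg_type(self, args, i, allowed):
        # Shared type-group validation.
        if type(args[i]).__name__ not in allowed:
            raise TypeError(f"argument {i} must be one of {allowed}")

class UpperUDF(GenericUDF):
    """Example UDF that reuses the shared checks."""
    def evaluate(self, args):
        self.check_args_size(args, 1, 1)
        self.check_arg_type(args, 0, {"str"})
        return args[0].upper()

assert UpperUDF().evaluate(["hive"]) == "HIVE"
```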




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9743) incorrect result set for left outer join when executed with tez versus mapreduce

2015-02-20 Thread N Campbell (JIRA)
N Campbell created HIVE-9743:


 Summary: incorrect result set for left outer join when executed 
with tez versus mapreduce
 Key: HIVE-9743
 URL: https://issues.apache.org/jira/browse/HIVE-9743
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.14.0
Reporter: N Campbell


This query is supposed to return 3 rows and will when run without Tez but 
returns 2 rows when run with Tez.

select tjoin1.rnum, tjoin1.c1, tjoin1.c2, tjoin2.c2 as c2j2 from tjoin1 left 
outer join tjoin2 on ( tjoin1.c1 = tjoin2.c1 and tjoin1.c2 > 15 )

tjoin1.rnum  tjoin1.c1  tjoin1.c2  c2j2
1            20         25
2                       50

instead of

tjoin1.rnum  tjoin1.c1  tjoin1.c2  c2j2
0            10         15
1            20         25
2                       50

create table  if not exists TJOIN1 (RNUM int , C1 int, C2 int)
 STORED AS orc ;

0|10|15
1|20|25
2|\N|50

create table  if not exists TJOIN2 (RNUM int , C1 int, C2 char(2))
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
 STORED AS TEXTFILE ;

0|10|BB
1|15|DD
2|\N|EE
3|10|FF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329805#comment-14329805
 ] 

Hive QA commented on HIVE-9480:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699968/HIVE-9480.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2837/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2837/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2837/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-2837/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java'
Reverted 
'serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorUtils.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/io/TimestampWritable.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/thirdparty 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-jmh/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target itests/qtest-spark/target hcatalog/target 
hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
accumulo-handler/target hwi/target common/target common/src/gen 
spark-client/target contrib/target service/target serde/target beeline/target 
odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update
U    serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazyMapObjectInspector.java
U    ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1661252.

Updated to revision 1661250.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699968 - PreCommit-HIVE-TRUNK-Build

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF

[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329802#comment-14329802
 ] 

Hive QA commented on HIVE-3454:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699952/HIVE-3454.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7568 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2836/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2836/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2836/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699952 - PreCommit-HIVE-TRUNK-Build

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.
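The 1970-01-16 result is consistent with a seconds-vs-milliseconds mix-up: unix_timestamp() returns seconds since the epoch, and interpreting that value as milliseconds collapses a 2012 date to mid-January 1970. A minimal sketch (the sample value is hypothetical):

```python
from datetime import datetime, timezone

# Hypothetical sample value: unix_timestamp() output (seconds since epoch)
# from early September 2012.
secs = 1346970000

# Interpreted correctly, as seconds:
as_seconds = datetime.fromtimestamp(secs, tz=timezone.utc)
# Interpreted incorrectly, as milliseconds (i.e. divided by 1000):
as_millis = datetime.fromtimestamp(secs / 1000.0, tz=timezone.utc)

print(as_seconds.date())  # 2012-09-06
print(as_millis.date())   # 1970-01-16
```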



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329784#comment-14329784
 ] 

Jason Dere commented on HIVE-9480:
--

The patch does not apply on trunk. You probably need to rebase it because a new 
function was added to FunctionRegistry.java

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is needed 
> for date/timestamp computation. This JIRA tracks such an implementation. We 
> chose to implement TRUNC, a more standard way to get the first day of a 
> month, e.g., SELECT TRUNC('2009-12-12', 'MM'); returns 2009-12-01 and SELECT 
> TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as Oracle's; only 'MM' and 
> 'YEAR' are supported as formats, but it is a base for adding other formats.





[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9480:

Attachment: HIVE-9480.5.patch

Re-submitting the patch to trigger a unit test run. Not sure why it wasn't 
triggered by the previous patch submission.

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch, 
> HIVE-9480.5.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is needed 
> for date/timestamp computation. This JIRA tracks such an implementation. We 
> chose to implement TRUNC, a more standard way to get the first day of a 
> month, e.g., SELECT TRUNC('2009-12-12', 'MM'); returns 2009-12-01 and SELECT 
> TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as Oracle's; only 'MM' and 
> 'YEAR' are supported as formats, but it is a base for adding other formats.





[jira] [Updated] (HIVE-9480) Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HIVE-9480:

Attachment: (was: HIVE-9480.5.patch)

> Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY
> 
>
> Key: HIVE-9480
> URL: https://issues.apache.org/jira/browse/HIVE-9480
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.14.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9480.1.patch, HIVE-9480.3.patch, HIVE-9480.4.patch
>
>
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is needed 
> for date/timestamp computation. This JIRA tracks such an implementation. We 
> chose to implement TRUNC, a more standard way to get the first day of a 
> month, e.g., SELECT TRUNC('2009-12-12', 'MM'); returns 2009-12-01 and SELECT 
> TRUNC('2009-12-12', 'YEAR'); returns 2009-01-01.
> Note that this TRUNC is not as feature-complete as Oracle's; only 'MM' and 
> 'YEAR' are supported as formats, but it is a base for adding other formats.





[jira] [Created] (HIVE-9742) LLAP: ORC decoding of row groups for complex types

2015-02-20 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-9742:
---

 Summary: LLAP: ORC decoding of row groups for complex types
 Key: HIVE-9742
 URL: https://issues.apache.org/jira/browse/HIVE-9742
 Project: Hive
  Issue Type: Sub-task
Reporter: Prasanth Jayachandran


HIVE-9419 only added support for primitive types. Structs, Union, Maps and 
Lists are yet to be supported.





[jira] [Assigned] (HIVE-9742) LLAP: ORC decoding of row groups for complex types

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-9742:
---

Assignee: Prasanth Jayachandran

> LLAP: ORC decoding of row groups for complex types
> --
>
> Key: HIVE-9742
> URL: https://issues.apache.org/jira/browse/HIVE-9742
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> HIVE-9419 only added support for primitive types. Structs, Union, Maps and 
> Lists are yet to be supported.





[jira] [Resolved] (HIVE-9419) LLAP: ORC decoding of row-groups

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-9419.
-
Resolution: Fixed

Committed patch in parts.

> LLAP: ORC decoding of row-groups
> 
>
> Key: HIVE-9419
> URL: https://issues.apache.org/jira/browse/HIVE-9419
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-9419.patch
>
>
> ORC should decode row-groups sent up by encoded data producer, and make VRBs





[jira] [Updated] (HIVE-9419) LLAP: ORC decoding of row-groups

2015-02-20 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-9419:

Attachment: HIVE-9419.patch

This patch was committed in parts to the llap branch so as not to block 
development. Attaching a consolidated patch.

> LLAP: ORC decoding of row-groups
> 
>
> Key: HIVE-9419
> URL: https://issues.apache.org/jira/browse/HIVE-9419
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-9419.patch
>
>
> ORC should decode row-groups sent up by encoded data producer, and make VRBs





[jira] [Commented] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329761#comment-14329761
 ] 

Jason Dere commented on HIVE-9739:
--

Are you using Hive 0.14? This looks like it might be the same issue as 
HIVE-9249. Since the failure occurs during vectorization, one workaround (if 
you cannot move to a later Hive version) might be to set 
hive.vectorized.execution.enabled=false.

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
> java.lang.ClassCastException
> -
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC.
> It runs when text files are used, or when text/ORC is used with MapReduce 
> rather than Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF





[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Attachment: HIVE-9726.1-spark.patch

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329750#comment-14329750
 ] 

Brock Noland commented on HIVE-9726:


Sandy helped me debug this. Basically, we need to set: 

{{yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler}}

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Commented] (HIVE-9497) Implicit conversion between Varchar and Char should result in Varchar

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329746#comment-14329746
 ] 

Jason Dere commented on HIVE-9497:
--

The logic in FunctionRegistry that determines implicit conversion/common type 
is a bit complicated. I was thinking of replacing the logic in those methods 
with lookup tables for each type, which might make it easier to see and change 
the implicit/common type behavior. This might be one issue that could benefit 
from that.
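One way to picture the lookup-table idea (purely illustrative; the names and table contents below are hypothetical and do not reflect Hive's actual FunctionRegistry code):

```python
# Hypothetical sketch: resolve the common type of two primitive types via a
# lookup table instead of branchy method logic.
COMMON_TYPE = {
    ("varchar", "char"): "varchar",   # the behavior this issue asks for
    ("char", "varchar"): "varchar",
    ("int", "bigint"): "bigint",
    ("bigint", "int"): "bigint",
}

def common_type(a, b):
    if a == b:
        return a
    # Fall back to string when no entry exists, mirroring today's behavior.
    return COMMON_TYPE.get((a, b), "string")

print(common_type("varchar", "char"))  # varchar
print(common_type("char", "string"))   # string
```

Making the table the single source of truth would let a change like "varchar+char yields varchar" be a one-line edit instead of a code-path change.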

> Implicit conversion between Varchar and Char should result in Varchar
> -
>
> Key: HIVE-9497
> URL: https://issues.apache.org/jira/browse/HIVE-9497
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pavas Garg
>  Labels: Hive
>
> In situations where implicit conversion happens between varchar and char, the 
> result should be either varchar or char, but not string.
> A string-typed result causes a major issue for SAS, an analytics application.
> The implicit conversion should yield varchar with the larger length.





[jira] [Updated] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-02-20 Thread N Campbell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

N Campbell updated HIVE-9739:
-
Summary: Various queries fails with Tez/ORC file 
org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
java.lang.ClassCastException  (was: Various queries fails with Tez/ORC file 
org.apache.hadoop.hive.ql.exec.tez.TezTask)

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
> java.lang.ClassCastException
> -
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC.
> It runs when text files are used, or when text/ORC is used with MapReduce 
> rather than Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF





[jira] [Updated] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask

2015-02-20 Thread N Campbell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

N Campbell updated HIVE-9739:
-
Summary: Various queries fails with Tez/ORC file 
org.apache.hadoop.hive.ql.exec.tez.TezTask  (was: Quantified query fails with 
Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask)

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> --
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC.
> It runs when text files are used, or when text/ORC is used with MapReduce 
> rather than Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF





[jira] [Resolved] (HIVE-9722) CBO (Calcite Return Path): Translate Sort/Limit to Hive Op [CBO branch]

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-9722.

   Resolution: Fixed
Fix Version/s: (was: 1.2.0)
   cbo-branch

Committed to branch.

> CBO (Calcite Return Path): Translate Sort/Limit to Hive Op [CBO branch]
> ---
>
> Key: HIVE-9722
> URL: https://issues.apache.org/jira/browse/HIVE-9722
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9722.cbo.patch
>
>






[jira] [Updated] (HIVE-9322) Make null-checks consistent for MapObjectInspector subclasses.

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9322:
---
   Resolution: Fixed
Fix Version/s: 1.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Mithun!

> Make null-checks consistent for MapObjectInspector subclasses.
> --
>
> Key: HIVE-9322
> URL: https://issues.apache.org/jira/browse/HIVE-9322
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: HIVE-9322.1.patch
>
>
> {{LazyBinaryMapObjectInspector}}, {{DeepParquetHiveMapInspector}}, etc. check 
> both the map-column value and the map-key for null, before dereferencing 
> them. {{OrcMapObjectInspector}} and {{LazyMapObjectInspector}} do not.
> This patch brings them all in sync. Might not be a real problem, unless (for 
> example) the lookup key is itself a (possibly null) value from another column.





[jira] [Commented] (HIVE-9727) GroupingID translation from Calcite

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329699#comment-14329699
 ] 

Hive QA commented on HIVE-9727:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699919/HIVE-9727.01.patch

{color:green}SUCCESS:{color} +1 7566 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2835/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2835/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2835/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699919 - PreCommit-HIVE-TRUNK-Build

> GroupingID translation from Calcite
> ---
>
> Key: HIVE-9727
> URL: https://issues.apache.org/jira/browse/HIVE-9727
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-9727.01.patch, HIVE-9727.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules.





[jira] [Commented] (HIVE-8297) Wrong results with JDBC direct read of TIMESTAMP column in RCFile and ORC format

2015-02-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329694#comment-14329694
 ] 

Aihua Xu commented on HIVE-8297:


I can't reproduce this by following your steps; the results look correct on my 
side. Do you still see the issue?

> Wrong results with JDBC direct read of TIMESTAMP column in RCFile and ORC 
> format
> 
>
> Key: HIVE-8297
> URL: https://issues.apache.org/jira/browse/HIVE-8297
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, JDBC
>Affects Versions: 0.13.0
> Environment: Linux
>Reporter: Doug Sedlak
>
> For the case:
> SELECT * FROM [table]
> JDBC reads the table's backing data directly, rather than launching an MR 
> job and creating a result set. When the table format is RCFile or ORC, the 
> JDBC direct read delivers incorrect results for TIMESTAMP columns. If you 
> force a result set, correct data is returned.
> To reproduce using beeline:
> 1) Create this file as follows in HDFS.
> $ cat > /tmp/ts.txt
> 2014-09-28 00:00:00
> 2014-09-29 00:00:00
> 2014-09-30 00:00:00
> 
> $ hadoop fs -copyFromLocal /tmp/ts.txt /tmp/ts.txt
> 2) In beeline load above HDFS data to a TEXTFILE table, and verify ok:
> $ beeline
> > !connect jdbc:hive2://:/ hive pass 
> > org.apache.hive.jdbc.HiveDriver
> > drop table `TIMESTAMP_TEXT`;
> > CREATE TABLE `TIMESTAMP_TEXT` (`ts` TIMESTAMP) ROW FORMAT DELIMITED FIELDS 
> > TERMINATED BY '\001'
> LINES TERMINATED BY '\012' STORED AS TEXTFILE;
> > LOAD DATA INPATH '/tmp/ts.txt' OVERWRITE INTO TABLE
> `TIMESTAMP_TEXT`;
> > select * from `TIMESTAMP_TEXT`;
> 3) In beeline create and load an RCFile from the TEXTFILE:
> > drop table `TIMESTAMP_RCFILE`;
> > CREATE TABLE `TIMESTAMP_RCFILE` (`ts` TIMESTAMP) stored as rcfile;
> > INSERT INTO TABLE `TIMESTAMP_RCFILE` SELECT * FROM `TIMESTAMP_TEXT`;
> 4) Demonstrate incorrect direct JDBC read versus good read by inducing result 
> set creation:
> > SELECT * FROM `TIMESTAMP_RCFILE`;
> ++
> |  timestamp_rcfile.ts   |
> ++
> | 2014-09-30 00:00:00.0  |
> | 2014-09-30 00:00:00.0  |
> | 2014-09-30 00:00:00.0  |
> ++
> >  SELECT * FROM `TIMESTAMP_RCFILE` where ts is not NULL;
> ++
> |  timestamp_rcfile.ts   |
> ++
> | 2014-09-28 00:00:00.0  |
> | 2014-09-29 00:00:00.0  |
> | 2014-09-30 00:00:00.0  |
> ++
> Note 1: The incorrect conduct demonstrated above replicates with a standalone 
> Java/JDBC program.
>  
> Note 2: Don't know whether this is an issue with any other data types, or 
> which releases are affected; however, it occurs in Hive 0.13.  Direct JDBC 
> reads of TEXTFILE and SEQUENCEFILE work fine.  As above, for RCFile and ORC 
> wrong results are delivered; no other file types were tested.
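The symptom above (every row showing the last value, 2014-09-30) is consistent with a reader that reuses one mutable record object while the consumer stores references instead of copies. This is only a hypothesis, sketched in Python; it is not Hive's actual fetch-path code:

```python
class ReusedRecord:
    """Stand-in for a reused mutable record buffer (hypothetical)."""
    def __init__(self):
        self.ts = None

def read_all(rows):
    rec = ReusedRecord()   # one buffer, reused for every row
    out = []
    for ts in rows:
        rec.ts = ts
        out.append(rec)    # bug: stores the shared reference, not a copy
    return [r.ts for r in out]

print(read_all(["2014-09-28 00:00:00", "2014-09-29 00:00:00",
                "2014-09-30 00:00:00"]))
# every element shows the last value read
```

If this is the failure mode, the fix is to copy the value (or the record) before storing it.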





Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Alexander Pivovarov


> On Feb. 20, 2015, 9:21 p.m., Jason Dere wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java, 
> > line 73
> > 
> >
> > If you are only going to support a single format string for month, I 
> > think "MONTH" would be a better choice and consistent with using "YEAR" for 
> > years.

Similar to Oracle, we can support MM, MON, MONTH, YY, YYYY, YEAR:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions230.htm#i1002084


- Alexander


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/#review73309
---


On Feb. 20, 2015, 1:18 a.m., XIAOBING ZHOU wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30335/
> ---
> 
> (Updated Feb. 20, 2015, 1:18 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> to do date/timestamp-related computation. This JIRA is to track such an 
> implementation.
> 
> https://issues.apache.org/jira/browse/HIVE-9480
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfb4dc2 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
>   ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
>   ql/src/test/results/clientpositive/show_functions.q.out e21b54b 
>   ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30335/diff/
> 
> 
> Testing
> ---
> 
> Unit tests done in 
> ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java
> 
> 
> Thanks,
> 
> XIAOBING ZHOU
> 
>



Is it allowed to include GPLv2 software to Apache projects

2015-02-20 Thread Alexander Pivovarov
Hi Everyone

The "Apache License v2.0 and GPL Compatibility" page clearly says that "GPLv3
software cannot be included in Apache projects":

http://www.apache.org/licenses/GPL-compatibility.html

What about GPLv2 software? Can it be included in Apache projects?


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Status: Patch Available  (was: In Progress)

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.1, 
> 0.8.0
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.
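The 1970-01-16 result is consistent with the BIGINT being interpreted as milliseconds rather than seconds since the epoch: a seconds value of roughly 1.3 billion, read as milliseconds, lands about 16 days after 1970-01-01. A sketch of that arithmetic (the concrete timestamp value is illustrative):

```python
from datetime import datetime, timezone

epoch_seconds = 1347000000  # a plausible unix_timestamp() value (Sept 2012)

# Intended interpretation: seconds since the epoch.
correct = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

# Interpreting the same number as milliseconds collapses ~42 years
# into ~15.6 days, producing the reported 1970-01-16.
buggy = datetime.fromtimestamp(epoch_seconds / 1000, tz=timezone.utc)

print(correct.date())  # 2012-09-07
print(buggy.date())    # 1970-01-16
```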





[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: (was: HIVE-3454.4.patch)

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.





[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: HIVE-3454.4.patch

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.





[jira] [Work started] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-3454 started by Aihua Xu.
--
> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.





[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Status: Open  (was: Patch Available)

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.1, 
> 0.8.0
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.





Re: Fix version for hbase-metastore branch

2015-02-20 Thread Lefty Leverenz
D'oh!  I'd forgotten about the idea of a doc JIRA.  In that case, we don't
really need the label.  (Less clutter in the label drop-down.)

-- Lefty

On Fri, Feb 20, 2015 at 8:38 AM, Alan Gates  wrote:

> TODOC-HBMETA works for me.  I'll change that at the same time I fix the
> change versions.  I'll also open a JIRA for docs on this stuff with links
> to the JIRAs that need documentation.
>
> Alan.
>
>   Lefty Leverenz 
>  February 19, 2015 at 23:01
> Also, what should we use for a documentation label?  (HIVE-9606
>  needs one.)
>
> TODOC labels are proliferating for all the releases and branches, but I
> don't think a generic TODOC label would be helpful.  So what would be a
> good abbreviation for the hbase-metastore branch?  Maybe TODOC-HBMETA?
>
> -- Lefty
>
>
>   Alan Gates 
>  February 19, 2015 at 19:12
>  Could someone with admin permissions on our JIRA add an
> hbase-metastore-branch label?  I'll take care of changing all the fix
> versions for the few JIRA's we've already committed.  Thanks.
>
> Alan.
>
>   Ashutosh Chauhan 
>  February 19, 2015 at 11:22
> This is what we have been doing for cbo work. e.g.
> https://issues.apache.org/jira/browse/HIVE-9581
>
>
>   Thejas Nair 
>  February 19, 2015 at 11:17
> I agree, using a label for fix version makes sense in this case. I believe
> that is what had been done for hive-on-spark and hive-on-tez.
>
>
>
>   Alan Gates 
>  February 19, 2015 at 10:56
> I've been marking JIRAs on this branch as fixed in 1.2, since that's the
> next version.  But that seems wrong as I doubt this code will be in by
> 1.2.  What's the usual practice here?  It seems it would make sense to make
> a label for this branch and mark them as fixed with that label and then
> when we actually release this in a version we can update all the JIRAs with
> that label.
>
> Alan.
>
>


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Thejas Nair
Thanks for finding the reason for this Brock!
I think the ATSHook contents should be shimmed (in trunk) so that they
are not excluded from a hadoop-1 build. (Or maybe we should start
surveying whether people are still using newer versions of Hive on Hadoop
1.x.)

I also ran a few simple queries using the RC in a single node cluster
and everything looked good.


On Fri, Feb 20, 2015 at 1:00 PM, Brock Noland  wrote:
> Hi,
>
> That is true and by design when built with the hadoop-1 profile:
>
> https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38
>
> Brock
>
> On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair  wrote:
>> A few classes seem to be missing from the hive-exec*jar in binary
>> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
>> those. ie, the source tar.gz looks fine.
>>
>> It is the ATSHook classes that are missing. Those are needed to be
>> able to register job progress information with Yarn timeline server.
>>
>>  diff /tmp/src.txt /tmp/bin.txt
>> 4768,4775d4767
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
>> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
>>
>>
>> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
>>> +1
>>>
>>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with some
>>> DDL/DML queries.
>>> 2. Tested the bin with some DDL/DML queries.
>>> 3. Verified signature for bin and src, both asc and md5.
>>>
>>> Chao
>>>
>>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho  wrote:
>>>
 +1

 1.  Verified signature for bin and src
 2.  Built src with hadoop2
 3.  Ran few queries from beeline with src
 4.  Ran few queries from beeline with bin
 5.  Verified no SNAPSHOT deps

 Thanks
 Szehon

 On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang  wrote:

 > +1
 >
 > 1. downloaded the src tarball and built w/ -Phadoop-1/2
 > 2. verified no binary (jars) in the src tarball
 >
 > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
 wrote:
 >
 > > +1
 > >
 > > verified sigs, hashes, created tables, ran MR on YARN jobs
 > >
 > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland 
 > wrote:
 > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
 > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
 > > >
 > > > Maven artifacts are available here:
 > > >
 https://repository.apache.org/content/repositories/orgapachehive-1026/
 > > >
 > > > Source tag for RC3 is at:
 > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
 > > >
 > > > My key is located here:
 https://people.apache.org/keys/group/hive.asc
 > > >
 > > > Voting will conclude in 72 hours
 > >
 >

>>>
>>>
>>>
>>> --
>>> Best,
>>> Chao


[jira] [Commented] (HIVE-9497) Implicit conversion between Varchar and Char should result in Varchar

2015-02-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329622#comment-14329622
 ] 

Aihua Xu commented on HIVE-9497:


Can you give an example of this issue?

> Implicit conversion between Varchar and Char should result in Varchar
> -
>
> Key: HIVE-9497
> URL: https://issues.apache.org/jira/browse/HIVE-9497
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pavas Garg
>  Labels: Hive
>
> In situations where implicit conversion happens between varchar and char, the 
> result should be either varchar or char, but not string.
> A string-typed result causes a major issue for SAS, an analytics application.
> The varchar with the bigger length should be the result of the implicit 
> conversion.
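The proposed rule can be sketched as a small type-combination function (the function name and the tuple encoding are illustrative, not Hive's internals):

```python
def implicit_common_type(a, b):
    # a and b are (kind, max_length) pairs, e.g. ("char", 10).
    # Proposed rule: combining char/varchar yields varchar with the
    # larger declared length, instead of falling back to string.
    assert a[0] in ("char", "varchar") and b[0] in ("char", "varchar")
    return ("varchar", max(a[1], b[1]))

print(implicit_common_type(("char", 10), ("varchar", 20)))  # ('varchar', 20)
print(implicit_common_type(("varchar", 8), ("char", 3)))    # ('varchar', 8)
```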





[jira] [Commented] (HIVE-9716) Map job fails when table's LOCATION does not have scheme

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329583#comment-14329583
 ] 

Hive QA commented on HIVE-9716:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699915/HIVE-9716.1.patch

{color:green}SUCCESS:{color} +1 7566 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2834/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2834/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2834/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699915 - PreCommit-HIVE-TRUNK-Build

> Map job fails when table's LOCATION does not have scheme
> 
>
> Key: HIVE-9716
> URL: https://issues.apache.org/jira/browse/HIVE-9716
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0, 0.13.0, 0.14.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-9716.1.patch
>
>
> When a table's location (the value of column 'LOCATION' in SDS table in 
> metastore) does not have a scheme, map job returns error. For example, 
> when do select count ( * ) from t1, get following exception:
> {noformat}
> 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: 
> job_local2120192529_0001
> java.lang.Exception: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
> Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: 
> Invalid input path file:/user/hive/warehouse/t1/data
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
>   ... 9 more
> {noformat}
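The trace suggests a comparison between a fully qualified runtime path (`file:/...`) and the schemeless LOCATION stored in the metastore. A minimal sketch of that mismatch, and of qualifying the stored path before comparing (the helper name is hypothetical, not Hive's code):

```python
from urllib.parse import urlparse

def qualify(path, default_scheme="file"):
    """Prefix a scheme onto a path that lacks one (illustrative helper)."""
    return path if urlparse(path).scheme else default_scheme + ":" + path

stored_location = "/user/hive/warehouse/t1"          # schemeless, as in the bug
runtime_path = "file:/user/hive/warehouse/t1/data"   # fully qualified

print(runtime_path.startswith(stored_location))           # False: no match
print(runtime_path.startswith(qualify(stored_location)))  # True after qualifying
```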





[jira] [Commented] (HIVE-9537) string expressions on a fixed length character do not preserve trailing spaces

2015-02-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329581#comment-14329581
 ] 

Aihua Xu commented on HIVE-9537:


[~the6campbells] Are you satisfied with the explanation? This is considered to 
be by design. If you are concerned about the documentation, I will find out how 
to get it updated to reflect your point about trailing spaces. Let me know.

> string expressions on a fixed length character do not preserve trailing spaces
> --
>
> Key: HIVE-9537
> URL: https://issues.apache.org/jira/browse/HIVE-9537
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>Assignee: Aihua Xu
>
> When a string expression such as upper or lower is applied to a fixed length 
> column the trailing spaces of the fixed length character are not preserved.
> {code:sql}
> CREATE TABLE  if not exists TCHAR ( 
> RNUM int, 
> CCHAR char(32)
> )
> ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '|' 
> LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE;
> {code}
> {{cchar}} as a {{char(32)}}.
> {code:sql}
> select cchar, concat(cchar, cchar), concat(lower(cchar), cchar), 
> concat(upper(cchar), cchar) 
> from tchar;
> {code}
> 0|\N
> 1|
> 2| 
> 3|BB
> 4|EE
> 5|FF
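The CHAR(n) padding semantics at issue can be sketched as follows; in this Python model the padded value keeps its trailing spaces through upper(), whereas per the report Hive's upper()/lower() on a char(32) column returned the unpadded value:

```python
def as_char(value, length=32):
    # CHAR(n) semantics: pad with trailing spaces to the declared length.
    return value.ljust(length)[:length]

c = as_char("BB")
print(repr(c))         # 'BB' followed by 30 spaces
print(len(c.upper()))  # 32: the pad survives upper() here; the reported
                       # Hive behavior drops it (length 2 instead)
```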





Re: Review Request 30335: Build UDF TRUNC to implement FIRST_DAY as compared with LAST_DAY

2015-02-20 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30335/#review73309
---



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


If you are only going to support a single format string for month, I think 
"MONTH" would be a better choice and consistent with using "YEAR" for years.
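The truncation semantics under discussion amount to something like the sketch below; the set of accepted format aliases is illustrative, following the Oracle-style list mentioned elsewhere in the thread, and is not the final Hive behavior:

```python
from datetime import date

def trunc(d, fmt):
    # Truncate a date to the first day of its month or year.
    fmt = fmt.upper()
    if fmt in ("MM", "MON", "MONTH"):
        return d.replace(day=1)
    if fmt in ("YY", "YYYY", "YEAR"):
        return d.replace(month=1, day=1)
    raise ValueError("unsupported format: " + fmt)

print(trunc(date(2015, 2, 20), "MONTH"))  # 2015-02-01
print(trunc(date(2015, 2, 20), "YEAR"))   # 2015-01-01
```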



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java


We do this same check in so many different date functions that we should 
eventually add a utility method to do this date-type parameter checking. We 
don't have to do it in this JIRA; it can be future work.


- Jason Dere


On Feb. 20, 2015, 1:18 a.m., XIAOBING ZHOU wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30335/
> ---
> 
> (Updated Feb. 20, 2015, 1:18 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Hive already supports the LAST_DAY UDF; in some cases, FIRST_DAY is necessary 
> to do date/timestamp-related computation. This JIRA is to track such an 
> implementation.
> 
> https://issues.apache.org/jira/browse/HIVE-9480
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java bfb4dc2 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFTrunc.java 
> PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error1.q PRE-CREATION 
>   ql/src/test/queries/clientnegative/udf_trunc_error2.q PRE-CREATION 
>   ql/src/test/queries/clientpositive/udf_trunc.q PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error1.q.out PRE-CREATION 
>   ql/src/test/results/clientnegative/udf_trunc_error2.q.out PRE-CREATION 
>   ql/src/test/results/clientpositive/show_functions.q.out e21b54b 
>   ql/src/test/results/clientpositive/udf_trunc.q.out PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30335/diff/
> 
> 
> Testing
> ---
> 
> Unit tests done in 
> ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFFirstDay.java
> 
> 
> Thanks,
> 
> XIAOBING ZHOU
> 
>



[jira] [Commented] (HIVE-9739) Quantified query fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask

2015-02-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329561#comment-14329561
 ] 

Ashutosh Chauhan commented on HIVE-9739:


This looks unrelated to HIVE-9735

> Quantified query fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask
> ---
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC. 
> It will run when text files are used, or when text/ORC is used with MapReduce 
> rather than Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF
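For reference, the EXISTS query's expected result on the sample data above can be checked with a quick semi-join sketch (rows of TSET1 whose c1 has a match in TSET2; a NULL c1 never matches):

```python
tset1 = [(0, 10, "AAA"), (1, 10, "AAA"), (2, 10, "AAA"), (3, 20, "BBB"),
         (4, 30, "CCC"), (5, 40, "DDD"), (6, 50, None), (7, 60, None),
         (8, None, "AAA"), (9, None, "AAA"), (10, None, None), (11, None, None)]
tset2_c1 = {10, 40, 50, 60}  # distinct c1 values in TSET2

# EXISTS (select c1 from tset2 where c1 = t1.c1) as a semi-join:
matching_rnums = [rnum for (rnum, c1, c2) in tset1
                  if c1 is not None and c1 in tset2_c1]
print(matching_rnums)  # [0, 1, 2, 5, 6, 7]
```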





[jira] [Updated] (HIVE-9735) aggregate ( smallint ) fails when ORC file used: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9735:
---
Affects Version/s: 1.1

> aggregate ( smallint ) fails when ORC file used: java.lang.ClassCastException: 
> java.lang.Long cannot be cast to java.lang.Short
> --
>
> Key: HIVE-9735
> URL: https://issues.apache.org/jira/browse/HIVE-9735
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.14.0, 1.1
>Reporter: N Campbell
> Fix For: 1.2.0
>
>
> select min( tsint.csint ) from tsint 
> select max( tsint.csint ) from tsint
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short
> select min( t_tsint.csint ) from t_tsint 
> create table  if not exists T_TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile  ;
> create table  if not exists TSINT ( RNUM int , CSINT smallint   )
>  STORED AS orc  ;
> input data loaded into text file and then inserted into ORC table from text 
> based table
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10
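For reference, the expected aggregate results on that sample data (SQL min/max ignore NULLs):

```python
csint = [None, -1, 0, 1, 10]  # TSINT.csint from the sample rows above

non_null = [v for v in csint if v is not None]
print(min(non_null))  # -1
print(max(non_null))  # 10
```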





[jira] [Resolved] (HIVE-9735) aggregate ( smallint ) fails when ORC file used: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short

2015-02-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-9735.

   Resolution: Fixed
Fix Version/s: 1.2.0

Yeah, then it's the same issue.

> aggregate ( smallint ) fails when ORC file used: java.lang.ClassCastException: 
> java.lang.Long cannot be cast to java.lang.Short
> --
>
> Key: HIVE-9735
> URL: https://issues.apache.org/jira/browse/HIVE-9735
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.14.0
>Reporter: N Campbell
> Fix For: 1.2.0
>
>
> select min( tsint.csint ) from tsint 
> select max( tsint.csint ) from tsint
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short
> select min( t_tsint.csint ) from t_tsint 
> create table  if not exists T_TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile  ;
> create table  if not exists TSINT ( RNUM int , CSINT smallint   )
>  STORED AS orc  ;
> input data loaded into text file and then inserted into ORC table from text 
> based table
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10





Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Brock Noland
Hi,

That is true and by design when built with the hadoop-1 profile:

https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38

Brock

On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair  wrote:
> A few classes seem to be missing from the hive-exec*jar in binary
> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
> those. ie, the source tar.gz looks fine.
>
> It is the ATSHook classes that are missing. Those are needed to be
> able to register job progress information with Yarn timeline server.
>
>  diff /tmp/src.txt /tmp/bin.txt
> 4768,4775d4767
> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
>
>
> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
>> +1
>>
>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with some
>> DDL/DML queries.
>> 2. Tested the bin with some DDL/DML queries.
>> 3. Verified signature for bin and src, both asc and md5.
>>
>> Chao
>>
>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho  wrote:
>>
>>> +1
>>>
>>> 1.  Verified signature for bin and src
>>> 2.  Built src with hadoop2
>>> 3.  Ran few queries from beeline with src
>>> 4.  Ran few queries from beeline with bin
>>> 5.  Verified no SNAPSHOT deps
>>>
>>> Thanks
>>> Szehon
>>>
>>> On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang  wrote:
>>>
>>> > +1
>>> >
>>> > 1. downloaded the src tarball and built w/ -Phadoop-1/2
>>> > 2. verified no binary (jars) in the src tarball
>>> >
>>> > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
>>> wrote:
>>> >
>>> > > +1
>>> > >
>>> > > verified sigs, hashes, created tables, ran MR on YARN jobs
>>> > >
>>> > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland 
>>> > wrote:
>>> > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
>>> > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>>> > > >
>>> > > > Maven artifacts are available here:
>>> > > >
>>> https://repository.apache.org/content/repositories/orgapachehive-1026/
>>> > > >
>>> > > > Source tag for RC3 is at:
>>> > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>>> > > >
>>> > > > My key is located here:
>>> https://people.apache.org/keys/group/hive.asc
>>> > > >
>>> > > > Voting will conclude in 72 hours
>>> > >
>>> >
>>>
>>
>>
>>
>> --
>> Best,
>> Chao


[jira] [Commented] (HIVE-9735) aggregate ( smallint ) fails when ORC file used java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short

2015-02-20 Thread N Campbell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329533#comment-14329533
 ] 

N Campbell commented on HIVE-9735:
--

Using JDBC with SQuirreL SQL

> aggregate ( smallint ) fails when ORC file used java.lang.ClassCastException: 
> java.lang.Long cannot be cast to java.lang.Short
> --
>
> Key: HIVE-9735
> URL: https://issues.apache.org/jira/browse/HIVE-9735
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.14.0
>Reporter: N Campbell
>
> select min( tsint.csint ) from tsint 
> select max( tsint.csint ) from tsint
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short
> select min( t_tsint.csint ) from t_tsint 
> create table  if not exists T_TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile  ;
> create table  if not exists TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS orc  ;
> Input data was loaded into the text file and then inserted into the ORC table 
> from the text-based table.
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Release Note: 
The behaviors of converting from BOOLEAN/TINYINT/SMALLINT/INT/BIGINT and from 
FLOAT/DOUBLE to TIMESTAMP have been inconsistent: the value of a 
BOOLEAN/TINYINT/SMALLINT/INT/BIGINT is treated as the time in milliseconds, 
while the value of a FLOAT/DOUBLE is treated as the time in seconds.

With the change of HIVE-3454, an additional configuration property, 
"hive.int.timestamp.conversion.in.seconds", enables interpreting the 
BOOLEAN/TINYINT/SMALLINT/INT/BIGINT value as seconds during timestamp 
conversion without breaking existing users. By default, the existing behavior 
is kept.

  was:
The behaviors of converting from BOOLEAN/BYTE/SHORT/INT/BIGINT and converting 
from FLOAT/DOUBLE to TIMESTAMP have been inconsistent. The value of a 
BOOLEAN/BYTE/SHORT/INT/BIGINT is treated as the time in milliseconds while  the 
value of a FLOAT/DOUBLE is treated as the time in seconds. 

With the change of HIVE-3454, we support an additional configuration 
"hive.int.timestamp.conversion.in.seconds" to enable the interpretation the 
BOOLEAN/BYTE/SHORT/INT/BIGINT value in seconds during the timestamp conversion 
without breaking the existing customers. By default, the existing functionality 
is kept.

  Status: Patch Available  (was: In Progress)
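
The inconsistency described in the release note can be sketched in plain Java. This is an illustrative sketch only, not Hive source; the helper names are hypothetical:

```java
import java.sql.Timestamp;

public class TimestampConversionDemo {
    // Integral types (BOOLEAN/TINYINT/SMALLINT/INT/BIGINT): the value is
    // interpreted as milliseconds since the epoch.
    static Timestamp fromIntegral(long value) {
        return new Timestamp(value);
    }

    // Floating-point types (FLOAT/DOUBLE): the value is interpreted as
    // seconds since the epoch.
    static Timestamp fromDouble(double value) {
        return new Timestamp((long) (value * 1000.0));
    }

    public static void main(String[] args) {
        long unixSeconds = 1424390400L; // a 2015 date expressed in seconds
        // The same number lands roughly 45 years apart depending on the type:
        System.out.println(fromIntegral(unixSeconds)); // interpreted as ms -> early 1970
        System.out.println(fromDouble(unixSeconds));   // interpreted as s  -> 2015
    }
}
```

With the new flag enabled, the integral path would behave like the floating-point one (value multiplied by 1000 to obtain milliseconds).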

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.11.0, 0.10.0, 0.9.0, 0.8.1, 
> 0.8.0
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: HIVE-3454.4.patch

The patch reads the configuration from the Configuration object passed into 
initialize() in the AbstractSerDe class. Since AvroSerDe overrides that 
function and OrcSerDe doesn't currently inherit from AbstractSerDe, the change 
was added to those two classes as well. It supports session-level overrides.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9740) Unable to find my UDTF class

2015-02-20 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-9740:
--
Affects Version/s: 0.13.0

> Unable to find my  UDTF class
> -
>
> Key: HIVE-9740
> URL: https://issues.apache.org/jira/browse/HIVE-9740
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
> Environment: CentOS release 6.4 (Final), Hortonworks 2.1, Tez
> Hive 0.13.0.2.1.3.0-563
> Subversion 
> git://ip-10-0-0-91/grid/0/jenkins/workspace/BIGTOP-HDP_RPM_REPO-baikal-GA-centos6/bigtop/build/hive/rpm/BUILD/h
>  ive-0.13.0.2.1.3.0 -r a738a76c72d6d9dd304691faada57a94429256bc
> Compiled by jenkins on Thu Jun 26 18:28:50 EDT 2014
> From source with checksum 4dbd99dd254f0c521ad8ab072045325d
>Reporter: Max Zuevskiy
>
> I added a UDTF class to Hive from a jar.
> With hive.execution.engine=tez by default, I get "Unable to find my class: 
> my.udtf.class".
> If I set hive.execution.engine=mr, my query completes; if I then change 
> hive.execution.engine to "tez" in the same session, the query also completes.
> After reopening the session, the query fails again with "Unable to find my 
> class: my.udtf.class".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()

2015-02-20 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329458#comment-14329458
 ] 

Lars Francke commented on HIVE-8342:


Ted, thanks for the reminder.

Looks mostly good. I'd suggest

{code}
if (conf == null) {
  throw new HiveException("FileSinkDesc cannot be null");
}
{code}

instead. It adheres to the coding standard and removes the extra period at the 
end of the message.

The only problem with this patch is that 
{{AbstractFileMergeOperator#jobCloseOp}} calls the method {{mvFileToFinalPath}} 
with {{null}}. I didn't follow the code to see if this can actually happen, 
though.

> Potential null dereference in ColumnTruncateMapper#jobClose()
> -
>
> Key: HIVE-8342
> URL: https://issues.apache.org/jira/browse/HIVE-8342
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
> Attachments: HIVE-8342_001.patch, HIVE-8342_002.patch
>
>
> {code}
> Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, 
> null,
>   reporter);
> {code}
> Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is 
> dereferenced:
> {code}
> boolean isCompressed = conf.getCompressed();
> TableDesc tableInfo = conf.getTableInfo();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar

2015-02-20 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329468#comment-14329468
 ] 

Hive QA commented on HIVE-6617:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699912/HIVE-6617.18.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7705 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_cannot_create_all_role
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_cannot_create_none_role
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_lateral_view_join
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2833/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2833/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2833/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699912 - PreCommit-HIVE-TRUNK-Build

> Reduce ambiguity in grammar
> ---
>
> Key: HIVE-6617
> URL: https://issues.apache.org/jira/browse/HIVE-6617
> Project: Hive
>  Issue Type: Task
>Reporter: Ashutosh Chauhan
>Assignee: Pengcheng Xiong
> Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, 
> HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, 
> HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, 
> HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, 
> HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, 
> HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, HIVE-6617.18.patch
>
>
> CLEAR LIBRARY CACHE
> As of today, antlr reports 214 warnings. Need to bring down this number, 
> ideally to 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9322) Make null-checks consistent for MapObjectInspector subclasses.

2015-02-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329443#comment-14329443
 ] 

Ashutosh Chauhan commented on HIVE-9322:


I see; the changes are concerned with the read side only. I think it makes 
sense to have this null check, in case the underlying map implementation 
changes for these OIs. The problem is that Java's Map interface, which most of 
the OIs use, is lax about nulls as keys. As a data point, HashMap allows them 
while ConcurrentHashMap doesn't (it throws NPE). Since the key here is provided 
by the user, it's better that Hive returns null in those cases instead of 
throwing NPE. The performance consideration is secondary; we should concentrate 
on what semantics we want to provide to users.

+1
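
The guarded lookup under discussion can be sketched as follows. This is an illustrative sketch, not Hive source; the class is hypothetical and only the null-check pattern mirrors the patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GuardedMapLookup {
    // Return null for a null map or a null key instead of letting the
    // underlying Map implementation decide (HashMap would tolerate a null
    // key, but ConcurrentHashMap throws NPE on get(null)).
    static Object getMapValueElement(Map<Object, Object> data, Object key) {
        if (data == null || key == null) {
            return null;
        }
        return data.get(key);
    }

    public static void main(String[] args) {
        Map<Object, Object> m = new ConcurrentHashMap<>();
        m.put("k", 1);
        System.out.println(getMapValueElement(m, "k"));  // 1
        System.out.println(getMapValueElement(m, null)); // null, no NPE
    }
}
```

This keeps `map_column[ string_column ]` deterministic when `string_column IS NULL`, regardless of which Map implementation backs the OI.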

> Make null-checks consistent for MapObjectInspector subclasses.
> --
>
> Key: HIVE-9322
> URL: https://issues.apache.org/jira/browse/HIVE-9322
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Attachments: HIVE-9322.1.patch
>
>
> {{LazyBinaryMapObjectInspector}}, {{DeepParquetHiveMapInspector}}, etc. check 
> both the map-column value and the map-key for null, before dereferencing 
> them. {{OrcMapObjectInspector}} and {{LazyMapObjectInspector}} do not.
> This patch brings them all in sync. Might not be a real problem, unless (for 
> example) the lookup key is itself a (possibly null) value from another column.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9727) GroupingID translation from Calcite

2015-02-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9727:
--
Status: Open  (was: Patch Available)

> GroupingID translation from Calcite
> ---
>
> Key: HIVE-9727
> URL: https://issues.apache.org/jira/browse/HIVE-9727
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-9727.01.patch, HIVE-9727.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31209: HIVE-9727

2015-02-20 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31209/
---

(Updated Feb. 20, 2015, 7:42 p.m.)


Review request for hive and John Pullokkaran.


Bugs: HIVE-9727
https://issues.apache.org/jira/browse/HIVE-9727


Repository: hive-git


Description
---

GroupingID translation from Calcite


Diffs (updated)
-

  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveGroupingID.java
 345b64af8514466c84e9899e9c019b679b761ba6 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java
 ea5918110fa1255f105c646c08e7d307afb3f94b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
47a209f4d79fb4d73f8bd69f8edc18cd834bf940 

Diff: https://reviews.apache.org/r/31209/diff/


Testing
---

Existing tests (groupby*.q)


Thanks,

Jesús Camacho Rodríguez



[jira] [Updated] (HIVE-9727) GroupingID translation from Calcite

2015-02-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9727:
--
Attachment: HIVE-9727.01.patch

> GroupingID translation from Calcite
> ---
>
> Key: HIVE-9727
> URL: https://issues.apache.org/jira/browse/HIVE-9727
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-9727.01.patch, HIVE-9727.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9727) GroupingID translation from Calcite

2015-02-20 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9727:
--
Status: Patch Available  (was: Open)

> GroupingID translation from Calcite
> ---
>
> Key: HIVE-9727
> URL: https://issues.apache.org/jira/browse/HIVE-9727
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-9727.01.patch, HIVE-9727.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9322) Make null-checks consistent for MapObjectInspector subclasses.

2015-02-20 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329418#comment-14329418
 ] 

Mithun Radhakrishnan commented on HIVE-9322:


@[~ashutoshc]: You're right about the deja vu. :]
https://issues.apache.org/jira/browse/HIVE-6389?focusedCommentId=13917716&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13917716

(The problem in HIVE-6389 was that we were returning -1, when the data was 
NULL, even if it wasn't an integer-map. We discussed the data vs key null-check 
as an aside.)

At the moment, the semantics aren't uniform across OIs. {{LazyBinaryMapOI}} and 
{{DeepParquetHiveMapOI}} already guard against null-keys, while the others 
don't. Wouldn't uniformity be best? In light of your performance concern, 
should we consider removing the null-checks in all MapOIs?

I don't think we're changing semantics of what can be stored in a Map because 
I'd expect an NPE when writing a null-key (although I might be mistaken). We're 
only guarding against non-deterministic behaviour for stuff like:

{code:sql}
SELECT map_column[ string_column ] FROM my_table; 
{code}

... in cases where {{string_column IS NULL}}.

> Make null-checks consistent for MapObjectInspector subclasses.
> --
>
> Key: HIVE-9322
> URL: https://issues.apache.org/jira/browse/HIVE-9322
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.14.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Minor
> Attachments: HIVE-9322.1.patch
>
>
> {{LazyBinaryMapObjectInspector}}, {{DeepParquetHiveMapInspector}}, etc. check 
> both the map-column value and the map-key for null, before dereferencing 
> them. {{OrcMapObjectInspector}} and {{LazyMapObjectInspector}} do not.
> This patch brings them all in sync. Might not be a real problem, unless (for 
> example) the lookup key is itself a (possibly null) value from another column.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9735) aggregate ( smallint ) fails when ORC file used java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short

2015-02-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329416#comment-14329416
 ] 

Ashutosh Chauhan commented on HIVE-9735:


[~the6campbells] Can you confirm whether you are running your queries through 
beeline? Also, it would help if you could post the full stack trace.

> aggregate ( smallint ) fails when ORC file used java.lang.ClassCastException: 
> java.lang.Long cannot be cast to java.lang.Short
> --
>
> Key: HIVE-9735
> URL: https://issues.apache.org/jira/browse/HIVE-9735
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.14.0
>Reporter: N Campbell
>
> select min( tsint.csint ) from tsint 
> select max( tsint.csint ) from tsint
> java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Short
> select min( t_tsint.csint ) from t_tsint 
> create table  if not exists T_TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile  ;
> create table  if not exists TSINT ( RNUM int , CSINT smallint   )
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS orc  ;
> Input data was loaded into the text file and then inserted into the ORC table 
> from the text-based table.
> 0|\N
> 1|-1
> 2|0
> 3|1
> 4|10



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9741) Refactor MetaStoreDirectSql by using getProductName instead of querying DB to determine DbType

2015-02-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329403#comment-14329403
 ] 

Ashutosh Chauhan commented on HIVE-9741:


You may also delete runDbCheck() with this patch, since that's not used anymore. 
But before that, pay attention to the MySQL-specific query here: it does more 
than determine the db type, it also sets ANSI mode for MySQL. We still need to 
run that query, but outside of the lock acquired in ObjectStore::setConf(). One 
way to do that is to keep your current changes as they are and then, after 
doing unlock() in setConf() in ObjectStore, call a method on MetaStoreDirectSql 
to run this query.
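
The sequencing being suggested can be sketched like this. All names are illustrative, not the actual Hive API, and the exact MySQL statement text is an assumption:

```java
public class DeferredAnsiModeSketch {
    enum DbType { MYSQL, DERBY, ORACLE, OTHER }

    // Determine the db type from the JDBC product name instead of probing
    // the DB with extra queries (the refactoring this issue proposes).
    static DbType dbTypeFromProductName(String productName) {
        String p = productName.toLowerCase();
        if (p.contains("mysql")) return DbType.MYSQL;
        if (p.contains("derby")) return DbType.DERBY;
        if (p.contains("oracle")) return DbType.ORACLE;
        return DbType.OTHER;
    }

    // Would be invoked only after unlock() in ObjectStore.setConf(); returns
    // the MySQL-specific setup statement, or null when nothing is needed.
    // (The statement text here is an assumption for illustration.)
    static String postConfigStatement(DbType type) {
        return type == DbType.MYSQL ? "SET @@session.sql_mode=ANSI_QUOTES" : null;
    }

    public static void main(String[] args) {
        System.out.println(postConfigStatement(dbTypeFromProductName("MySQL")));
        System.out.println(postConfigStatement(dbTypeFromProductName("Apache Derby")));
    }
}
```

The point of the split is that type detection becomes a cheap string check, while the one statement that must actually reach the DB runs outside the setConf() lock.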

> Refactor MetaStoreDirectSql by using getProductName instead of querying DB to 
> determine DbType
> --
>
> Key: HIVE-9741
> URL: https://issues.apache.org/jira/browse/HIVE-9741
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.0.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HIVE-9741.1.patch
>
>
> The MetaStoreDirectSql constructor queries the DB to determine the dbType, 
> which leads to too many DB queries and makes the metastore slow or hang if 
> the MetaStoreDirectSql constructor is called frequently. This proposes using 
> getProductName to get the dbType info instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9020) When dropping external tables, Hive should not verify whether user has access to the data.

2015-02-20 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329397#comment-14329397
 ] 

Anthony Hsu commented on HIVE-9020:
---

Patch looks fine to me, apart from some formatting issues (indentation and 
spaces around {{&&}}). I agree with Thejas that we should add a unit test for 
this.

> When dropping external tables, Hive should not verify whether user has access 
> to the data. 
> ---
>
> Key: HIVE-9020
> URL: https://issues.apache.org/jira/browse/HIVE-9020
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: Anant Nag
> Attachments: dropExternal.patch
>
>
> When dropping tables, Hive verifies whether the user has access to the data 
> on HDFS and fails if the user doesn't have access. This makes sense for 
> internal tables, since the data has to be deleted when they are dropped, but 
> for external tables Hive should not check for data access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9537) string expressions on a fixed length character do not preserve trailing spaces

2015-02-20 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329382#comment-14329382
 ] 

Aihua Xu commented on HIVE-9537:


Got it. Thanks.

> string expressions on a fixed length character do not preserve trailing spaces
> --
>
> Key: HIVE-9537
> URL: https://issues.apache.org/jira/browse/HIVE-9537
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>Assignee: Aihua Xu
>
> When a string expression such as upper or lower is applied to a fixed length 
> column the trailing spaces of the fixed length character are not preserved.
> {code:sql}
> CREATE TABLE  if not exists TCHAR ( 
> RNUM int, 
> CCHAR char(32)
> )
> ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '|' 
> LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE;
> {code}
> {{cchar}} as a {{char(32)}}.
> {code:sql}
> select cchar, concat(cchar, cchar), concat(lower(cchar), cchar), 
> concat(upper(cchar), cchar) 
> from tchar;
> {code}
> 0|\N
> 1|
> 2| 
> 3|BB
> 4|EE
> 5|FF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Thejas Nair
A few classes seem to be missing from the hive-exec*jar in binary
tar.gz. When I build from the source tar.gz , the hive-exec*jar has
those. ie, the source tar.gz looks fine.

It is the ATSHook classes that are missing. Those are needed to be
able to register job progress information with Yarn timeline server.

 diff /tmp/src.txt /tmp/bin.txt
4768,4775d4767
< org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
< org/apache/hadoop/hive/ql/hooks/ATSHook.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
< org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class


On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
> +1
>
> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with some
> DDL/DML queries.
> 2. Tested the bin with some DDL/DML queries.
> 3. Verified signature for bin and src, both asc and md5.
>
> Chao
>
> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho  wrote:
>
>> +1
>>
>> 1.  Verified signature for bin and src
>> 2.  Built src with hadoop2
>> 3.  Ran few queries from beeline with src
>> 4.  Ran few queries from beeline with bin
>> 5.  Verified no SNAPSHOT deps
>>
>> Thanks
>> Szehon
>>
>> On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang  wrote:
>>
>> > +1
>> >
>> > 1. downloaded the src tarball and built w/ -Phadoop-1/2
>> > 2. verified no binary (jars) in the src tarball
>> >
>> > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
>> wrote:
>> >
>> > > +1
>> > >
>> > > verified sigs, hashes, created tables, ran MR on YARN jobs
>> > >
>> > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland 
>> > wrote:
>> > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
>> > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>> > > >
>> > > > Maven artifacts are available here:
>> > > >
>> https://repository.apache.org/content/repositories/orgapachehive-1026/
>> > > >
>> > > > Source tag for RC3 is at:
>> > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>> > > >
>> > > > My key is located here:
>> https://people.apache.org/keys/group/hive.asc
>> > > >
>> > > > Voting will conclude in 72 hours
>> > >
>> >
>>
>
>
>
> --
> Best,
> Chao


[jira] [Commented] (HIVE-9537) string expressions on a fixed length character do not preserve trailing spaces

2015-02-20 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329374#comment-14329374
 ] 

Jason Dere commented on HIVE-9537:
--

- char(64) is correct per the rules mentioned above, since the combined length 
(32+32 = 64) is less than the char max length of 255. If the lengths exceed 
that (like concat(cchar, cchar, cchar, cchar)), I believe it reverts to string 
type.
- As for (2), that depends on what we decide the semantics regarding trailing 
spaces for char should be. Currently, Hive ignores them for the purposes of 
comparison, length, and concatenation. As mentioned earlier, this is similar to 
the MySQL/Postgres char semantics (which Hive development has often followed). 
It's not impossible to change this, though we might have to think about 
backward-compatibility issues again if we do.
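
The length rule described in the first bullet can be sketched as a small type-computation helper. This is an illustrative sketch under the stated rule (char max length 255), not Hive source:

```java
public class CharConcatTypeDemo {
    static final int MAX_CHAR_LENGTH = 255;

    // Result type of concat over fixed-length char operands: char(sum of
    // lengths) while the sum fits within the char max length, else string.
    static String concatResultType(int... charLengths) {
        int total = 0;
        for (int len : charLengths) {
            total += len;
        }
        return total <= MAX_CHAR_LENGTH ? "char(" + total + ")" : "string";
    }

    public static void main(String[] args) {
        System.out.println(concatResultType(32, 32));   // char(64)
        System.out.println(concatResultType(200, 100)); // string (300 > 255)
    }
}
```

So concat(cchar, cchar) over char(32) columns types as char(64), and only combinations whose summed length exceeds 255 fall back to string.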

> string expressions on a fixed length character do not preserve trailing spaces
> --
>
> Key: HIVE-9537
> URL: https://issues.apache.org/jira/browse/HIVE-9537
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>Assignee: Aihua Xu
>
> When a string expression such as upper or lower is applied to a fixed length 
> column the trailing spaces of the fixed length character are not preserved.
> {code:sql}
> CREATE TABLE  if not exists TCHAR ( 
> RNUM int, 
> CCHAR char(32)
> )
> ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '|' 
> LINES TERMINATED BY '\n' 
> STORED AS TEXTFILE;
> {code}
> {{cchar}} as a {{char(32)}}.
> {code:sql}
> select cchar, concat(cchar, cchar), concat(lower(cchar), cchar), 
> concat(upper(cchar), cchar) 
> from tchar;
> {code}
> 0|\N
> 1|
> 2| 
> 3|BB
> 4|EE
> 5|FF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9716) Map job fails when table's LOCATION does not have scheme

2015-02-20 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-9716:
---
Status: Patch Available  (was: Open)

Need code review. 

> Map job fails when table's LOCATION does not have scheme
> 
>
> Key: HIVE-9716
> URL: https://issues.apache.org/jira/browse/HIVE-9716
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0, 0.13.0, 0.12.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-9716.1.patch
>
>
> When a table's location (the value of the 'LOCATION' column in the SDS table 
> in the metastore) does not have a scheme, the map job fails. For example, 
> running select count ( * ) from t1 yields the following exception:
> {noformat}
> 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: 
> job_local2120192529_0001
> java.lang.Exception: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
> Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: 
> Invalid input path file:/user/hive/warehouse/t1/data
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
>   ... 9 more
> {noformat}
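
The failure mode can be illustrated with plain java.net.URI: a scheme-less location only resolves correctly once it is qualified against the intended default filesystem. This is an illustrative sketch, not the actual fix:

```java
import java.net.URI;

public class PathSchemeDemo {
    // Qualify a scheme-less location against the default filesystem URI;
    // locations that already carry a scheme are returned unchanged.
    static URI qualify(URI defaultFs, URI location) {
        return location.getScheme() != null ? location : defaultFs.resolve(location);
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://namenode:8020/");
        URI bare = URI.create("/user/hive/warehouse/t1");
        // Without qualification the bare path can fall back to the local
        // filesystem (file:/user/hive/warehouse/t1), as in the stack trace.
        System.out.println(qualify(defaultFs, bare)); // hdfs://namenode:8020/user/hive/warehouse/t1
    }
}
```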



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9716) Map job fails when table's LOCATION does not have scheme

2015-02-20 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-9716:
---
Attachment: HIVE-9716.1.patch

> Map job fails when table's LOCATION does not have scheme
> 
>
> Key: HIVE-9716
> URL: https://issues.apache.org/jira/browse/HIVE-9716
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0, 0.13.0, 0.14.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-9716.1.patch
>
>
> When a table's location (the value of the 'LOCATION' column in the SDS table 
> in the metastore) does not have a scheme, the map job fails. For example, 
> running select count ( * ) from t1 yields the following exception:
> {noformat}
> 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: 
> job_local2120192529_0001
> java.lang.Exception: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
> Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: 
> Invalid input path file:/user/hive/warehouse/t1/data
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Invalid input path 
> file:/user/hive/warehouse/t1/data
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
>   ... 9 more
> {noformat}
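The quoted trace shows the scheme-less path being picked up with a file: scheme, which then fails to match the table's real input path. The effect can be sketched with plain java.net.URI (a simplification for illustration, not Hive's actual fix; the namenode address is hypothetical): a path stored without a scheme stays scheme-less until it is explicitly qualified against a default filesystem URI.

```java
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        // Path as stored in the metastore: no scheme, so consumers fall back
        // to the local default (file:), which is what the stack trace shows.
        URI stored = URI.create("/user/hive/warehouse/t1/data");
        System.out.println(stored.getScheme());   // prints: null

        // Qualifying against the cluster's default filesystem (hypothetical
        // namenode address) yields the path the map job actually needs.
        URI defaultFs = URI.create("hdfs://namenode:8020/");
        URI qualified = defaultFs.resolve(stored);
        System.out.println(qualified);  // hdfs://namenode:8020/user/hive/warehouse/t1/data
    }
}
```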



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9021) Hive should not allow any user to create tables in other hive DB's that user doesn't own

2015-02-20 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329339#comment-14329339
 ] 

Anthony Hsu commented on HIVE-9021:
---

I don't think this feature is necessary. Hive has a [Storage-Based 
Authorization 
Model|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Authorization#LanguageManualAuthorization-1StorageBasedAuthorizationintheMetastoreServer]
 that uses HDFS permissions for authorization. If a user does not want other 
users to be able to create tables in his database, he should set the 
permissions for his database's directory on HDFS accordingly (such as to 
rwxr-xr-x).
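The storage-based model described above boils down to ordinary directory permission bits. A minimal local sketch (plain chmod on a temporary directory standing in for the warehouse path; on a real cluster the equivalent command is hdfs dfs -chmod):

```shell
# Sketch of the permission setup the comment describes; /tmp/warehouse_demo
# stands in for the HDFS warehouse path (on a cluster: hdfs dfs -chmod).
mkdir -p /tmp/warehouse_demo/mydb.db
chmod 755 /tmp/warehouse_demo/mydb.db     # rwxr-xr-x: owner writes, others read-only
stat -c '%a' /tmp/warehouse_demo/mydb.db  # prints: 755
```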

> Hive should not allow any user to create tables in other hive DB's that user 
> doesn't own
> 
>
> Key: HIVE-9021
> URL: https://issues.apache.org/jira/browse/HIVE-9021
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: Anant Nag
>  Labels: patch
> Attachments: db.patch
>
>
> Hive allows users to create tables in databases owned by other users. This 
> should not be allowed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

