[jira] [Work logged] (HIVE-22010) Clean up ShowCreateTableOperation

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22010?focusedWorklogId=280849&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280849
 ]

ASF GitHub Bot logged work on HIVE-22010:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 07:06
Start Date: 23/Jul/19 07:06
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #732: 
HIVE-22010 - Clean up ShowCreateTableOperation
URL: https://github.com/apache/hive/pull/732#discussion_r306157517
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 ##
 @@ -186,6 +186,8 @@ static void internalBeforeClassSetup(Map additionalProperties, b
 });
 
 MetaStoreTestUtils.startMetaStoreWithRetry(hconf);
+// re set the WAREHOUSE property to the test dir, as the previous command added a random port to it
+hconf.set(MetastoreConf.ConfVars.WAREHOUSE.getVarname(), System.getProperty("test.warehouse.dir", "/tmp"));
 
 Review comment:
  In theory it wouldn't, yes; still, the class already references this property 
4 times before this change, twice specifying /tmp as the fallback value, twice 
not. I thought we should be consistent, and it can't hurt to have it there, so 
I put it here and also fixed the missing one at line 3273 - but missed doing so 
for the 4th occurrence at line 3293. So we can either have it at all 
occurrences or remove it from all of them; let's just be consistent. What do 
you suggest?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280849)
Time Spent: 40m  (was: 0.5h)

> Clean up ShowCreateTableOperation
> -
>
> Key: HIVE-22010
> URL: https://issues.apache.org/jira/browse/HIVE-22010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22010.01.patch, HIVE-22010.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22010) Clean up ShowCreateTableOperation

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22010?focusedWorklogId=280850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280850
 ]

ASF GitHub Bot logged work on HIVE-22010:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 07:12
Start Date: 23/Jul/19 07:12
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #732: HIVE-22010 - 
Clean up ShowCreateTableOperation
URL: https://github.com/apache/hive/pull/732#discussion_r306159482
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 ##
 @@ -186,6 +186,8 @@ static void internalBeforeClassSetup(Map additionalProperties, b
 });
 
 MetaStoreTestUtils.startMetaStoreWithRetry(hconf);
+// re set the WAREHOUSE property to the test dir, as the previous command added a random port to it
+hconf.set(MetastoreConf.ConfVars.WAREHOUSE.getVarname(), System.getProperty("test.warehouse.dir", "/tmp"));
 
 Review comment:
  Consistency is good indeed... We can include `tmp` in all occurrences.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280850)
Time Spent: 50m  (was: 40m)

> Clean up ShowCreateTableOperation
> -
>
> Key: HIVE-22010
> URL: https://issues.apache.org/jira/browse/HIVE-22010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22010.01.patch, HIVE-22010.02.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-23 Thread Naresh P R (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890786#comment-16890786
 ] 

Naresh P R commented on HIVE-22009:
---

[~maheshk114], I updated the test case. Could you please review the latest 
patch?

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.3.patch, HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> +-------------------------------------------------------------------+
> |                          createtab_stmt                            |
> +-------------------------------------------------------------------+
> | CREATE EXTERNAL TABLE `emp_ext_table`(                             |
> | `id` int,                                                          |
> | `name` string,                                                     |
> | `salary` int)                                                      |
> | ROW FORMAT SERDE                                                   |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'               |
> | STORED AS INPUTFORMAT                                              |
> | 'org.apache.hadoop.mapred.TextInputFormat'                         |
> | OUTPUTFORMAT                                                       |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'       |
> | LOCATION                                                           |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table'  |
> | TBLPROPERTIES (                                                    |
> | 'bucketing_version'='2',                                           |
> | 'transient_lastDdlTime'='1563467962')                              |
> +-------------------------------------------------------------------+{code}
> The table location is not '/tmp/emp_ext_table'; instead it is set to the 
> default warehouse path.
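>  
> For reference, a hedged sketch of verifying the fix by reading the location 
> straight from the metastore instead of parsing SHOW CREATE TABLE output (the 
> class name is illustrative):
> {code:java}
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
> import org.apache.hadoop.hive.metastore.api.Table;
> 
> public class CheckCtlvLocation {
>   public static void main(String[] args) throws Exception {
>     HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
>     Table t = client.getTable("default", "emp_ext_table");
>     // With the fix, this should print the user-specified
>     // '/tmp/emp_ext_table' rather than the default warehouse path.
>     System.out.println(t.getSd().getLocation());
>     client.close();
>   }
> }
> {code}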
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21960:
--
Status: In Progress  (was: Patch Available)

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, Replication 
> and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # they are required to be performed on the replicated data
>  # performing them on the replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21960:
--
Attachment: HIVE-21960.03.patch
Status: Patch Available  (was: In Progress)

Fixed test failures from the previous ptest run. Also fixed some checkstyle 
issues.

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, 
> HIVE-21960.03.patch, Replication and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # they are required to be performed on the replicated data
>  # performing them on the replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-07-23 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HIVE-21987:
-
Description: 
When I tried to read a Parquet file from a Hive table with a small decimal 
column (with Tez as the execution engine), I got the following exception:
{code}
Caused by: java.lang.UnsupportedOperationException: 
org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
at 
org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
at 
org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
at 
org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
at 
org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
at 
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
... 28 more
{code}

Steps to reproduce:
- Create a Hive table with a single decimal(4, 2) column
- Create a Parquet file with int32 column annotated with decimal(4, 2) logical 
type, put it into the previously created table location (or use the attached 
parquet file, in this case the column should be named as 'd', to match the Hive 
schema with the Parquet schema in the file)
- Execute a {{select *}} on this table

Also, I'm afraid that similar problems can happen with int64 decimals too. 
[Parquet specification | 
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
both of these cases.

  was:
When I tried to read a Parquet file from a Hive table with a small decimal 
column (with Tez as the execution engine), I got the following exception:
{code}
Caused by: java.lang.UnsupportedOperationException: 
org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
at 
org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
at 
org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
at 
org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
at 
org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
at 
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
... 28 more
{code}

Steps to reproduce:
- Create a Hive table with a single decimal(4, 2) column
- Create a Parquet file with int32 column annotated with decimal(4, 2) logical 
type, put it into the previously created table location (or use the attached 
parquet file)
- Execute a {{select *}} on this table

Also, I'm afraid that similar problems can happen with int64 decimals too. 
[Parquet specification | 
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
both of these cases.


> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: 
> part-0-d6ee992d-ef56-4384-8855-5a170d3e3660-c000.snappy.parquet
>
>
> When I tried to read a Parquet file from a Hive table with a small decimal 
> column (with Tez as the execution engine), I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with int32 column annotated with decimal(4, 2) 
> logical type, put it into the previously created table location (or use the 
> attached parquet file, in this case the column should be named as 'd', to 
> match the Hive schema with the Parquet schema in the file)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.
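> For the second reproduction step, a hedged sketch of producing such a file 
> with parquet-mr's example writer; the output path and the unscaled value are 
> illustrative:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.parquet.example.data.Group;
> import org.apache.parquet.example.data.simple.SimpleGroupFactory;
> import org.apache.parquet.hadoop.ParquetWriter;
> import org.apache.parquet.hadoop.example.ExampleParquetWriter;
> import org.apache.parquet.hadoop.example.GroupWriteSupport;
> import org.apache.parquet.schema.MessageType;
> import org.apache.parquet.schema.MessageTypeParser;
> 
> public class WriteInt32Decimal {
>   public static void main(String[] args) throws Exception {
>     // int32 column annotated with the decimal(4, 2) logical type; the
>     // column is named 'd' to match the Hive schema from the steps above.
>     MessageType schema = MessageTypeParser.parseMessageType(
>         "message test { required int32 d (DECIMAL(4,2)); }");
>     Configuration conf = new Configuration();
>     GroupWriteSupport.setSchema(schema, conf);
>     try (ParquetWriter<Group> writer =
>         ExampleParquetWriter.builder(new Path("/tmp/int32_decimal.parquet"))
>             .withConf(conf)
>             .withType(schema)
>             .build()) {
>       SimpleGroupFactory groups = new SimpleGroupFactory(schema);
>       writer.write(groups.newGroup().append("d", 1234)); // unscaled 1234 = 12.34
>     }
>   }
> }
> {code}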



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-23 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Status: In Progress  (was: Patch Available)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.D

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-23 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: HIVE-22031.3.patch
Status: Patch Available  (was: In Progress)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, 
> HIVE-22031.3.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke

[jira] [Updated] (HIVE-1355) Hive should use NullOutputFormat for hadoop jobs

2019-07-23 Thread Ryan Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Wu updated HIVE-1355:
--
Description: 
* see https://issues.apache.org/jira/browse/MAPREDUCE-1802

hive doesn't depend on hadoop job output folder. it produces output exclusively 
via side effect folders. we should use an outputformat that can request hadoop 
skip cleanup/setup. this could be nulloutputformat (unless there are any 
objections in hadoop to changing nulloutputformat behavior).

as a small side effect, it also avoids some totally unnecessary hdfs file 
creates and deletes in hdfs.

  was:
see https://issues.apache.org/jira/browse/MAPREDUCE-1802

hive doesn't depend on hadoop job output folder. it produces output exclusively 
via side effect folders. we should use an outputformat that can request hadoop 
skip cleanup/setup. this could be nulloutputformat (unless there are any 
objections in hadoop to changing nulloutputformat behavior).

as a small side effect, it also avoids some totally unnecessary hdfs file 
creates and deletes in hdfs.


> Hive should use NullOutputFormat for hadoop jobs
> 
>
> Key: HIVE-1355
> URL: https://issues.apache.org/jira/browse/HIVE-1355
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Joydeep Sen Sarma
>Assignee: Joydeep Sen Sarma
>Priority: Major
> Fix For: 0.6.0
>
> Attachments: 1355.1.patch
>
>
> * see https://issues.apache.org/jira/browse/MAPREDUCE-1802
> hive doesn't depend on hadoop job output folder. it produces output 
> exclusively via side effect folders. we should use an outputformat that can 
> request hadoop skip cleanup/setup. this could be nulloutputformat (unless 
> there are any objections in hadoop to changing nulloutputformat behavior).
> as a small side effect, it also avoids some totally unnecessary hdfs file 
> creates and deletes in hdfs.
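> For illustration, a minimal sketch of the proposed wiring with the old 
> mapred API (job setup only; whether nulloutputformat's setup/cleanup 
> behavior can be changed in hadoop is exactly the open question above):
> {code:java}
> import org.apache.hadoop.mapred.JobConf;
> import org.apache.hadoop.mapred.lib.NullOutputFormat;
> 
> public class NullOutputJobSetup {
>   public static JobConf configure(JobConf job) {
>     // Discard the job-level output; Hive produces its real output via
>     // side-effect folders, so the Hadoop output directory is never read.
>     job.setOutputFormat(NullOutputFormat.class);
>     return job;
>   }
> }
> {code}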



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280998&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280998
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 12:40
Start Date: 23/Jul/19 12:40
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306294736
 
 

 ##
 File path: itests/qtest-kudu/pom.xml
 ##
 @@ -0,0 +1,494 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-it</artifactId>
+    <version>4.0.0-SNAPSHOT</version>
+    <relativePath>../pom.xml</relativePath>
+  </parent>
+
+  <artifactId>hive-it-qfile-kudu</artifactId>
+  <packaging>jar</packaging>
+  <name>Hive Integration - QFile Kudu Tests</name>
+
+  
+../..
+None
+
+
+
+false
+
+-mkdir -p
+  
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-common</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-contrib</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-exec</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-standalone-metastore-common</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-standalone-metastore-common</artifactId>
+      <version>${project.version}</version>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-standalone-metastore-server</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-standalone-metastore-server</artifactId>
+      <version>${project.version}</version>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-it-custom-serde</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-it-util</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-exec</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-it-druid</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-serde</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-exec</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+      <classifier>core</classifier>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-exec</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+      <classifier>tests</classifier>
+    </dependency>
+
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>${junit.version}</version>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>com.esotericsoftware</groupId>
+      <artifactId>kryo</artifactId>
+      <version>${kryo.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-lang3</artifactId>
+      <version>${commons-lang3.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>javolution</groupId>
+      <artifactId>javolution</artifactId>
+      <version>${javolution.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-servlet</artifactId>
+      <version>${jersey.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-archives</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-log4j12</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>${hadoop.version}</version>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-log4j12</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs</artifactId>
+      <version>${hadoop.version}</version>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
+      <version>${hadoop.version}</version>
+      <classifier>tests</classifier>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-log4j12</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-hs</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-log4j12</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-core</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-tests</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+      <classifier>tests</classifier>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-client</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
 
 Review comment:
   Removed the hbase dependencies. 
 

This is an automated message from the Apache Git Service.
To respond to the message, pleas

[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281002&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281002
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 12:59
Start Date: 23/Jul/19 12:59
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306303135
 
 

 ##
 File path: 
itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java
 ##
 @@ -778,4 +778,27 @@ public MiniDruidLlapLocalCliConfig() {
 }
   }
 
+  /**
+   * The CliConfig implementation for Kudu.
+   */
+  public static class KuduCliConfig extends AbstractCliConfig {
+    public KuduCliConfig() {
+      super(CoreKuduCliDriver.class);
+      try {
+        setQueryDir("kudu-handler/src/test/queries/positive");
+
+        setResultsDir("kudu-handler/src/test/results/positive");
+        setLogDir("itests/qtest/target/qfile-results/kudu-handler/positive");
+
+        setInitScript("q_test_init_src.sql");
+        setCleanupScript("q_test_cleanup_src.sql");
+
+        setHiveConfDir("");
+        setClusterType(MiniClusterType.NONE);
 
 Review comment:
   I created a KUDU type that uses Tez type and local fs.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281002)
Time Spent: 2.5h  (was: 2h 20m)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.
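> As a rough illustration of what such an integration wraps (not the patch's 
> code), a table scan through the Kudu Java client looks like this; the master 
> address and table name are placeholders:
> {code:java}
> import org.apache.kudu.client.KuduClient;
> import org.apache.kudu.client.KuduScanner;
> import org.apache.kudu.client.KuduTable;
> import org.apache.kudu.client.RowResult;
> 
> public class KuduScanSketch {
>   public static void main(String[] args) throws Exception {
>     KuduClient client = new KuduClient.KuduClientBuilder("localhost:7051").build();
>     try {
>       KuduTable table = client.openTable("default.kudu_kv");
>       KuduScanner scanner = client.newScannerBuilder(table).build();
>       while (scanner.hasMoreRows()) {
>         // Each batch of scan results is iterable row by row.
>         for (RowResult row : scanner.nextRows()) {
>           System.out.println(row.rowToString());
>         }
>       }
>     } finally {
>       client.close();
>     }
>   }
> }
> {code}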



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281004&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281004
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 13:00
Start Date: 23/Jul/19 13:00
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306303498
 
 

 ##
 File path: 
itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduQTestUtil.java
 ##
 @@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.kudu;
+
+import org.apache.hadoop.hive.ql.QTestArguments;
+import org.apache.hadoop.hive.ql.QTestMiniClusters.MiniClusterType;
+import org.apache.hadoop.hive.ql.QTestUtil;
+
+/**
+ * KuduQTestUtil initializes Kudu-specific test fixtures.
+ */
+public class KuduQTestUtil extends QTestUtil {
+
+  public KuduQTestUtil(String outDir, String logDir, MiniClusterType miniMr,
+      KuduTestSetup setup, String initScript, String cleanupScript) throws Exception {
+
+    super(
+        QTestArguments.QTestArgumentsBuilder.instance()
 
 Review comment:
  I removed this extra wrapper class and moved the logic into the CliDriver, 
given that it wasn't doing much. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281004)
Time Spent: 2h 40m  (was: 2.5h)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281005
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 13:03
Start Date: 23/Jul/19 13:03
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306304930
 
 

 ##
 File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
 ##
 @@ -2900,6 +2900,11 @@ private static void populateLlapDaemonVarsSet(Set<String> llapDaemonVarsSetLocal
     HIVE_HBASE_SNAPSHOT_RESTORE_DIR("hive.hbase.snapshot.restoredir", "/tmp", "The directory in which to " +
         "restore the HBase table snapshot."),
 
+    // For Kudu storage handler
+    HIVE_KUDU_MASTER_ADDRESSES_DEFAULT("hive.kudu.master.addresses.default", "localhost:7050",
 
 Review comment:
   This is correct. I will update. 
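
A hedged sketch of how the handler side might read this default, assuming the 
enum constant from the hunk above; the fallback logic and names are 
illustrative, not the patch's code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;

public final class KuduMasterAddresses {
  private KuduMasterAddresses() {}

  /** Prefer the table-level setting, else the cluster-wide default. */
  public static String resolve(Configuration conf, String tableLevelValue) {
    if (tableLevelValue != null && !tableLevelValue.isEmpty()) {
      return tableLevelValue;
    }
    return HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_KUDU_MASTER_ADDRESSES_DEFAULT);
  }
}
{code}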
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281005)
Time Spent: 2h 50m  (was: 2h 40m)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281009
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 13:08
Start Date: 23/Jul/19 13:08
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306307087
 
 

 ##
 File path: kudu-handler/src/test/queries/positive/kudu_queries.q
 ##
 @@ -0,0 +1,148 @@
+--! qt:dataset:src
+
+-- Create table specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS kv_table;
+CREATE EXTERNAL TABLE kv_table(key int, value string)
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_kv");
+
+DESCRIBE EXTENDED kv_table;
+
+-- Verify INSERT support.
+INSERT INTO TABLE kv_table VALUES
+(1, "1"), (2, "2");
+
+SELECT * FROM kv_table;
+SELECT count(*) FROM kv_table;
+SELECT count(*) FROM kv_table LIMIT 1;
+SELECT count(1) FROM kv_table;
+
+-- Verify projection and case insensitivity.
+SELECT kEy FROM kv_table;
+
+DROP TABLE kv_table;
+
+-- Create table without specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS all_types_table;
+CREATE EXTERNAL TABLE all_types_table
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_all_types");
+
+DESCRIBE EXTENDED all_types_table;
+
+INSERT INTO TABLE all_types_table VALUES
+(1, 1, 1, 1, true, 1.1, 1.1, "one", 'one', '2011-11-11 11:11:11', 1.111, null, 1),
+(2, 2, 2, 2, false, 2.2, 2.2, "two", 'two', '2012-12-12 12:12:12', 2.222, null, 2);
+
+SELECT * FROM all_types_table;
+SELECT count(*) FROM all_types_table;
+
+-- Verify comparison predicates on byte.
+SELECT key FROM all_types_table WHERE key = 1;
 
 Review comment:
  I added some at the bottom but didn't want to go overboard. How many of the 
predicate tests should I add explain to? One per type?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281009)
Time Spent: 3h  (was: 2h 50m)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890985#comment-16890985
 ] 

Hive QA commented on HIVE-21960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
30s{color} | {color:blue} standalone-metastore/metastore-common in master has 
31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch metastore-common passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} ql: The patch generated 0 new + 68 unchanged - 1 
fixed = 68 total (was 69) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 12 new + 237 
unchanged - 12 fixed = 249 total (was 249) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18142/dev-support/hive-personality.sh
 |
| git revision | master / 11c79af |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18142/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18142/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generat

[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281023&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281023
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 13:35
Start Date: 23/Jul/19 13:35
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306321072
 
 

 ##
 File path: 
kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduPredicateHandler.java
 ##
 @@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.kudu;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.ql.exec.SerializationUtilities;
+import org.apache.hadoop.hive.ql.index.IndexPredicateAnalyzer;
+import org.apache.hadoop.hive.ql.index.IndexSearchCondition;
+import org.apache.hadoop.hive.ql.metadata.HiveStoragePredicateHandler.DecomposedPredicate;
+import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc;
+import org.apache.hadoop.hive.ql.plan.TableScanDesc;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqual;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNotNull;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNull;
+import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.Schema;
+import org.apache.kudu.client.KuduPredicate;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Contains static methods for decomposing predicate/filter expressions and
+ * getting the equivalent Kudu predicates.
+ */
+public final class KuduPredicateHandler {
+  static final Logger LOG = LoggerFactory.getLogger(KuduPredicateHandler.class);
+
+  private KuduPredicateHandler() {}
+
+  /**
+   * Analyzes the predicates and returns the portion of them which
+   * cannot be evaluated by Kudu during table access.
+   *
+   * @param predicateExpr predicate to be decomposed
+   * @param schema the schema of the Kudu table
+   * @return decomposed form of predicate, or null if no pushdown is possible at all
+   */
+  public static DecomposedPredicate decompose(ExprNodeDesc predicateExpr, Schema schema) {
+    IndexPredicateAnalyzer analyzer = newAnalyzer(schema);
+    List<IndexSearchCondition> sConditions = new ArrayList<>();
+    ExprNodeDesc residualPredicate = analyzer.analyzePredicate(predicateExpr, sConditions);
+
+    // Nothing to decompose.
+    if (sConditions.size() == 0) {
+      return null;
+    }
+
+    DecomposedPredicate decomposedPredicate = new DecomposedPredicate();
+    decomposedPredicate.pushedPredicate = analyzer.translateSearchConditions(sConditions);
+    decomposedPredicate.residualPredicate = (ExprNodeGenericFuncDesc) residualPredicate;
+    return decomposedPredicate;
+  }
+
+  /**
+   * Returns the list of Kudu predicates from the passed configuration.
+   *
+   * @param conf the execution configuration
+   * @param schema the schema of the Kudu table
+   * @return the list of Kudu predicates
+   */
+  public static List<KuduPredicate> getPredicates(Configuration conf, Schema schema) {
+    List<KuduPredicate> predicates = new ArrayList<>();
+    for (IndexSearchCondition sc : getSearchConditions(conf, schema)) {
+      predicates.add(conditionToPredicate(sc, schema));
+    }
+    return predicates;
+  }
+
+  private static List<IndexSearchCondition> getSearchConditions(Configuration conf, Schema schema) {
+    List<IndexSearchCondition> conditions = new ArrayList<>();
+    ExprNodeDesc f

[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281024&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281024
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 13:37
Start Date: 23/Jul/19 13:37
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306322299
 
 

 ##
 File path: 
kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduPredicateHandler.java
 ##
 @@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.kudu;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.ql.exec.SerializationUtilities;
+import org.apache.hadoop.hive.ql.index.IndexPredicateAnalyzer;
+import org.apache.hadoop.hive.ql.index.IndexSearchCondition;
+import org.apache.hadoop.hive.ql.metadata.HiveStoragePredicateHandler.DecomposedPredicate;
+import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc;
+import org.apache.hadoop.hive.ql.plan.TableScanDesc;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqual;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNotNull;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNull;
+import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.Schema;
+import org.apache.kudu.client.KuduPredicate;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Contains static methods for decomposing predicate/filter expressions and
+ * getting the equivalent Kudu predicates.
+ */
+public final class KuduPredicateHandler {
+  static final Logger LOG = LoggerFactory.getLogger(KuduPredicateHandler.class);
+
+  private KuduPredicateHandler() {}
+
+  /**
+   * Analyzes the predicates and returns the portion of them which
+   * cannot be evaluated by Kudu during table access.
+   *
+   * @param predicateExpr predicate to be decomposed
+   * @param schema the schema of the Kudu table
+   * @return decomposed form of predicate, or null if no pushdown is possible at all
+   */
+  public static DecomposedPredicate decompose(ExprNodeDesc predicateExpr, Schema schema) {
+    IndexPredicateAnalyzer analyzer = newAnalyzer(schema);
+    List<IndexSearchCondition> sConditions = new ArrayList<>();
+    ExprNodeDesc residualPredicate = analyzer.analyzePredicate(predicateExpr, sConditions);
+
+    // Nothing to decompose.
+    if (sConditions.size() == 0) {
+      return null;
+    }
+
+    DecomposedPredicate decomposedPredicate = new DecomposedPredicate();
+    decomposedPredicate.pushedPredicate = analyzer.translateSearchConditions(sConditions);
+    decomposedPredicate.residualPredicate = (ExprNodeGenericFuncDesc) residualPredicate;
+    return decomposedPredicate;
+  }
+
+  /**
+   * Returns the list of Kudu predicates from the passed configuration.
+   *
+   * @param conf the execution configuration
+   * @param schema the schema of the Kudu table
+   * @return the list of Kudu predicates
+   */
+  public static List<KuduPredicate> getPredicates(Configuration conf, Schema schema) {
+    List<KuduPredicate> predicates = new ArrayList<>();
+    for (IndexSearchCondition sc : getSearchConditions(conf, schema)) {
+      predicates.add(conditionToPredicate(sc, schema));
+    }
+    return predicates;
+  }
+
+  private static List<IndexSearchCondition> getSearchConditions(Configuration conf, Schema schema) {
+    List<IndexSearchCondition> conditions = new ArrayList<>();
+    ExprNodeDesc f

[jira] [Updated] (HIVE-22028) Clean up Add Partition

2019-07-23 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22028:
--
Attachment: HIVE-22028.03.patch

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch, 
> HIVE-22028.03.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This can not be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891019#comment-16891019
 ] 

Hive QA commented on HIVE-21960:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975487/HIVE-21960.03.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16688 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18142/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18142/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18142/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975487 - PreCommit-HIVE-Build

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, 
> HIVE-21960.03.patch, Replication and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # They are required to be performed on the replicated data
>  # Performing them on replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-07-23 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HIVE-21987:
-
Attachment: (was: 
part-0-d6ee992d-ef56-4384-8855-5a170d3e3660-c000.snappy.parquet)

> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Marta Kuczora
>Priority: Major
>
> When I tried to read a Parquet file from a Hive table (with the Tez execution 
> engine) with a small decimal column, I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with int32 column annotated with decimal(4, 2) 
> logical type, put it into the previously created table location (or use the 
> attached parquet file, in this case the column should be named as 'd', to 
> match the Hive schema with the Parquet schema in the file)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.
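
For illustration, a sketch of the missing piece (a hypothetical converter, not Hive's actual ETypeConverter code): for an int32 column annotated as decimal(p, s), addInt() receives the unscaled value and must rescale it, rather than inherit the UnsupportedOperationException default visible in the stack trace above.

{code}
import java.math.BigDecimal;

import org.apache.parquet.io.api.PrimitiveConverter;

// Hypothetical converter for int32-backed decimals; names and buffering
// strategy are illustrative, only the rescaling idea is the point.
public class Int32DecimalConverter extends PrimitiveConverter {
  private final int scale;
  private BigDecimal current;

  public Int32DecimalConverter(int scale) {
    this.scale = scale;
  }

  @Override
  public void addInt(int value) {
    // decimal(4, 2) stored as int32: unscaled 1234 becomes 12.34
    current = BigDecimal.valueOf(value, scale);
  }

  public BigDecimal getCurrent() {
    return current;
  }
}
{code}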



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-07-23 Thread Nandor Kollar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandor Kollar updated HIVE-21987:
-
Attachment: 
part-0-e5287735-8dcf-4dda-9c6e-4d5c98dc15f2-c000.snappy.parquet

> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: 
> part-0-e5287735-8dcf-4dda-9c6e-4d5c98dc15f2-c000.snappy.parquet
>
>
> When I tried to read a Parquet file from a Hive table (with the Tez execution 
> engine) with a small decimal column, I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with int32 column annotated with decimal(4, 2) 
> logical type, put it into the previously created table location (or use the 
> attached parquet file, in this case the column should be named as 'd', to 
> match the Hive schema with the Parquet schema in the file)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22033) HiveServer2: fix delegation token renewal

2019-07-23 Thread Ion Alberdi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ion Alberdi updated HIVE-22033:
---
Description: 
Hello, the issue we faced in our Hive instances (and a proposal for a fix) is 
described at
 [https://github.com/criteo-forks/hive/pull/24]

Reading the master branch of the project
 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/security/TokenStoreDelegationTokenSecretManager.java#L147]
 I think the same behavior is replicated there.

Long story short, *TokenStoreDelegationTokenSecretManager.renewToken* does not 
update the expiry date of a given token (as it does not get the updated 
DelegationTokenInformation from *super.currentTokens*).

This makes any call to renewToken ineffective (the expiry date of the token is 
not postponed).

  was:
Hello, the issue we faced (and a proposal for a fix) in our hive instances is 
depicted at
[https://github.com/criteo-forks/hive/pull/24]

Reading the master branch of the project
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/security/TokenStoreDelegationTokenSecretManager.java#L147]
I think the same behavior is replicated there.

Long story short, *TokenStoreDelegationTokenSecretManager.renewToken*, does not 
update the expiry date of a given token, (as it does not get the updated 
DelegationTokenInformation from super.currentTokens).

Which make any call to renewToken ineffective (the expiry date of the token is 
not postponed).


> HiveServer2: fix delegation token renewal
> -
>
> Key: HIVE-22033
> URL: https://issues.apache.org/jira/browse/HIVE-22033
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.5
>Reporter: Ion Alberdi
>Priority: Major
>
> Hello, the issue we faced in our Hive instances (and a proposal for a fix) is 
> described at
>  [https://github.com/criteo-forks/hive/pull/24]
> Reading the master branch of the project
>  
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/security/TokenStoreDelegationTokenSecretManager.java#L147]
>  I think the same behavior is replicated there.
> Long story short, *TokenStoreDelegationTokenSecretManager.renewToken* does 
> not update the expiry date of a given token (as it does not get the updated 
> DelegationTokenInformation from *super.currentTokens*).
> This makes any call to renewToken ineffective (the expiry date of the token 
> is not postponed).
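
For illustration, a self-contained toy of the pitfall and the fix (currentTokens/tokenStore are illustrative stand-ins, not the actual TokenStoreDelegationTokenSecretManager fields): the expiry persisted to the external token store must be re-read from the in-memory map after renewal, not captured before it.

{code}
import java.util.HashMap;
import java.util.Map;

public class RenewTokenSketch {
  private final Map<String, Long> currentTokens = new HashMap<>(); // in-memory: token -> expiry
  private final Map<String, Long> tokenStore = new HashMap<>();    // external, persistent store

  long renewToken(String token, long now, long renewPeriodMs) {
    // Equivalent of super.renewToken(): postpones the expiry in memory.
    currentTokens.put(token, now + renewPeriodMs);
    // The fix: re-read the *updated* entry and persist that; persisting a copy
    // captured before the call keeps the old expiry, making renewal a no-op.
    long refreshed = currentTokens.get(token);
    tokenStore.put(token, refreshed);
    return refreshed;
  }

  public static void main(String[] args) {
    RenewTokenSketch m = new RenewTokenSketch();
    m.currentTokens.put("t1", 1_000L);
    System.out.println(m.renewToken("t1", 5_000L, 10_000L)); // prints 15000: expiry postponed
  }
}
{code}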



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?focusedWorklogId=281113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281113
 ]

ASF GitHub Bot logged work on HIVE-21960:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 15:48
Start Date: 23/Jul/19 15:48
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #735: 
HIVE-21960 : Avoid running stats updater and partition management task on a 
replicated table.
URL: https://github.com/apache/hive/pull/735
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281113)
Time Spent: 10m
Remaining Estimate: 0h

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, 
> HIVE-21960.03.patch, Replication and House keeping tasks.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # They are required to be performed on the replicated data
>  # Performing them on replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21960:
--
Labels: pull-request-available  (was: )

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, 
> HIVE-21960.03.patch, Replication and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # They are required to be performed on the replicated data
>  # Performing them on replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-23 Thread Ashutosh Bapat (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891161#comment-16891161
 ] 

Ashutosh Bapat commented on HIVE-21960:
---

[~maheshk114], requesting your review.

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, 
> HIVE-21960.03.patch, Replication and House keeping tasks.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # They are required to be performed on the replicated data
>  # Performing them on replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22028) Clean up Add Partition

2019-07-23 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891192#comment-16891192
 ] 

Zoltan Haindrich commented on HIVE-22028:
-

+1 pending tests

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch, 
> HIVE-22028.03.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This can not be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281133
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 16:34
Start Date: 23/Jul/19 16:34
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306418537
 
 

 ##
 File path: kudu-handler/src/test/queries/positive/kudu_queries.q
 ##
 @@ -0,0 +1,148 @@
+--! qt:dataset:src
+
+-- Create table specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS kv_table;
+CREATE EXTERNAL TABLE kv_table(key int, value string)
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_kv");
+
+DESCRIBE EXTENDED kv_table;
+
+-- Verify INSERT support.
+INSERT INTO TABLE kv_table VALUES
+(1, "1"), (2, "2");
+
+SELECT * FROM kv_table;
+SELECT count(*) FROM kv_table;
+SELECT count(*) FROM kv_table LIMIT 1;
+SELECT count(1) FROM kv_table;
+
+-- Verify projection and case insensitivity.
+SELECT kEy FROM kv_table;
+
+DROP TABLE kv_table;
+
+-- Create table without specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS all_types_table;
+CREATE EXTERNAL TABLE all_types_table
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_all_types");
+
+DESCRIBE EXTENDED all_types_table;
+
+INSERT INTO TABLE all_types_table VALUES
+(1, 1, 1, 1, true, 1.1, 1.1, "one", 'one', '2011-11-11 11:11:11', 1.111, null, 1),
+(2, 2, 2, 2, false, 2.2, 2.2, "two", 'two', '2012-12-12 12:12:12', 2.222, null, 2);
+
+SELECT * FROM all_types_table;
+SELECT count(*) FROM all_types_table;
+
+-- Verify comparison predicates on byte.
+SELECT key FROM all_types_table WHERE key = 1;
 
 Review comment:
   bq. one per type?
   That may make sense, since we want to verify that we push down filters 
on every different type. However, you can combine filters using e.g. AND 
or OR clauses, so that we do not end up with 20+ explain plans in the q file.
   If the file gets too large, feel free to include the complex filters (and 
explain plans) in a new q file.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281133)
Time Spent: 3.5h  (was: 3h 20m)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22034) HiveStrictManagedMigration updates DB location even with --dryRun setting on

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-22034:
-


> HiveStrictManagedMigration updates DB location even with --dryRun setting on
> 
>
> Key: HIVE-22034
> URL: https://issues.apache.org/jira/browse/HIVE-22034
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> The logic at the end of processDatabase() to update the DB location in the 
> Metastore should only run if runOptions.dryRun == false.
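
For illustration, a self-contained sketch of the intended guard (RunOptions and the printed actions are hypothetical stand-ins for HiveStrictManagedMigration's internals):

{code}
public class DryRunSketch {
  static class RunOptions { boolean dryRun; }

  static void processDatabase(RunOptions runOptions) {
    System.out.println("would move DB location to new path");
    if (!runOptions.dryRun) {
      // Only mutate the Metastore when this is not a dry run.
      System.out.println("updating DB location in the Metastore");
    }
  }

  public static void main(String[] args) {
    RunOptions opts = new RunOptions();
    opts.dryRun = true;
    processDatabase(opts); // prints the plan but performs no update
  }
}
{code}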



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-22035:
-


> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> Currently the --hiveconf arguments get added to the System properties. While 
> this allows official HiveConf variables to be set in the conf that is loaded 
> by the HiveStrictManagedMigration utility, there are utility-specific 
> configuration settings which we would want to be set from the command line. 
> For example, since Ambari knows what the Hive system user name is, it would 
> make sense to be able to set strict.managed.tables.migration.owner on the 
> command line when running this utility.
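
For illustration, a self-contained sketch of the desired behavior (the argument loop is simplified and the names are illustrative; the real utility has its own option parser): each --hiveconf key=value pair should land in a Properties object the utility itself consults, so settings like strict.managed.tables.migration.owner take effect even though they are not official HiveConf variables.

{code}
import java.util.Properties;

public class HiveconfArgsSketch {
  public static void main(String[] args) {
    Properties overrides = new Properties();
    for (int i = 0; i < args.length - 1; i++) {
      if ("--hiveconf".equals(args[i])) {
        // Consume the following key=value pair.
        String[] kv = args[++i].split("=", 2);
        if (kv.length == 2) {
          overrides.setProperty(kv[0], kv[1]);
        }
      }
    }
    // The utility reads its own settings from the same overrides map.
    System.out.println(overrides.getProperty("strict.managed.tables.migration.owner", "<unset>"));
  }
}
{code}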



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891286#comment-16891286
 ] 

Hive QA commented on HIVE-22031:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18143/dev-support/hive-personality.sh
 |
| git revision | master / 11c79af |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18143/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, 
> HIVE-22031.3.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apac

[jira] [Updated] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22035:
--
Attachment: HIVE-22035.1.patch

> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While 
> this allows official HiveConf variables to be set in the conf that is loaded 
> by the HiveStrictManagedMigration utility, there are utility-specific 
> configuration settings which we would want to be set from the command line. 
> For example, since Ambari knows what the Hive system user name is, it would 
> make sense to be able to set strict.managed.tables.migration.owner on the 
> command line when running this utility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22035:
--
Status: Patch Available  (was: Open)

> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While 
> this allows official HiveConf variables to be set in the conf that is loaded 
> by the HiveStrictManagedMigration utility, there are utility-specific 
> configuration settings which we would want to be set from the command line. 
> For example, since Ambari knows what the Hive system user name is, it would 
> make sense to be able to set strict.managed.tables.migration.owner on the 
> command line when running this utility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891292#comment-16891292
 ] 

Jason Dere commented on HIVE-22035:
---

[~ashutoshc], can you review?

> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While 
> this allows official HiveConf variables to be set in the conf that is loaded 
> by the HiveStrictManagedMigration utility, there are utility-specific 
> configuration settings which we would want to be set from the command line. 
> For example, since Ambari knows what the Hive system user name is, it would 
> make sense to be able to set strict.managed.tables.migration.owner on the 
> command line when running this utility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22034) HiveStrictManagedMigration updates DB location even with --dryRun setting on

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22034:
--
Attachment: HIVE-22034.1.patch

> HiveStrictManagedMigration updates DB location even with --dryRun setting on
> 
>
> Key: HIVE-22034
> URL: https://issues.apache.org/jira/browse/HIVE-22034
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22034.1.patch
>
>
> The logic at the end of processDatabase() to update the DB location in the 
> Metastore should only run if runOptions.dryRun == false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22034) HiveStrictManagedMigration updates DB location even with --dryRun setting on

2019-07-23 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22034:
--
Status: Patch Available  (was: Open)

> HiveStrictManagedMigration updates DB location even with --dryRun setting on
> 
>
> Key: HIVE-22034
> URL: https://issues.apache.org/jira/browse/HIVE-22034
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22034.1.patch
>
>
> The logic at the end of processDatabase() to update the DB location in the 
> Metastore should only run if runOptions.dryRun == false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22034) HiveStrictManagedMigration updates DB location even with --dryRun setting on

2019-07-23 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891293#comment-16891293
 ] 

Jason Dere commented on HIVE-22034:
---

[~ashutoshc], can you review?

> HiveStrictManagedMigration updates DB location even with --dryRun setting on
> 
>
> Key: HIVE-22034
> URL: https://issues.apache.org/jira/browse/HIVE-22034
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22034.1.patch
>
>
> The logic at the end of processDatabase() to update the DB location in the 
> Metastore should only run if runOptions.dryRun == false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891312#comment-16891312
 ] 

Ashutosh Chauhan commented on HIVE-22035:
-

+1

> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While 
> this allows official HiveConf variables to be set in the conf that is loaded 
> by the HiveStrictManagedMigration utility, there are utility-specific 
> configuration settings which we would want to be set from the command line. 
> For example, since Ambari knows what the Hive system user name is, it would 
> make sense to be able to set strict.managed.tables.migration.owner on the 
> command line when running this utility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891315#comment-16891315
 ] 

Hive QA commented on HIVE-22031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975505/HIVE-22031.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16684 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18143/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18143/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18143/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975505 - PreCommit-HIVE-Build

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, 
> HIVE-22031.3.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.

[jira] [Commented] (HIVE-22034) HiveStrictManagedMigration updates DB location even with --dryRun setting on

2019-07-23 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891313#comment-16891313
 ] 

Ashutosh Chauhan commented on HIVE-22034:
-

+1

> HiveStrictManagedMigration updates DB location even with --dryRun setting on
> 
>
> Key: HIVE-22034
> URL: https://issues.apache.org/jira/browse/HIVE-22034
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22034.1.patch
>
>
> The logic at the end of processDatabase() to update the DB location in the 
> Metastore should only run if runOptions.dryRun == false.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access

2019-07-23 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-21838:
-
Status: Open  (was: Patch Available)

> Hive Metastore Translation: Add API call to tell client why table has limited 
> access
> 
>
> Key: HIVE-21838
> URL: https://issues.apache.org/jira/browse/HIVE-21838
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Yongzhi Chen
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, 
> HIVE-21838.2.patch, HIVE-21838.3.patch, HIVE-21838.4.patch, 
> HIVE-21838.5.patch, HIVE-21838.6.patch, HIVE-21838.7.patch, 
> HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch
>
>
> When a table access type is Read-only or None, we need a way to tell clients 
> why. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access

2019-07-23 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-21838:
-
Status: Patch Available  (was: Open)

> Hive Metastore Translation: Add API call to tell client why table has limited 
> access
> 
>
> Key: HIVE-21838
> URL: https://issues.apache.org/jira/browse/HIVE-21838
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Yongzhi Chen
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, 
> HIVE-21838.12.patch, HIVE-21838.2.patch, HIVE-21838.3.patch, 
> HIVE-21838.4.patch, HIVE-21838.5.patch, HIVE-21838.6.patch, 
> HIVE-21838.7.patch, HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch
>
>
> When a table access type is Read-only or None, we need a way to tell clients 
> why. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access

2019-07-23 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-21838:
-
Attachment: HIVE-21838.12.patch

> Hive Metastore Translation: Add API call to tell client why table has limited 
> access
> 
>
> Key: HIVE-21838
> URL: https://issues.apache.org/jira/browse/HIVE-21838
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Yongzhi Chen
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, 
> HIVE-21838.12.patch, HIVE-21838.2.patch, HIVE-21838.3.patch, 
> HIVE-21838.4.patch, HIVE-21838.5.patch, HIVE-21838.6.patch, 
> HIVE-21838.7.patch, HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch
>
>
> When a table access type is Read-only or None, we need a way to tell clients 
> why. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-07-23 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891379#comment-16891379
 ] 

Vineet Garg commented on HIVE-21225:


+1 pending minor review comments

> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-21225.1.patch, HIVE-21225.10.patch, 
> HIVE-21225.11.patch, HIVE-21225.12.patch, HIVE-21225.13.patch, 
> HIVE-21225.14.patch, HIVE-21225.15.patch, HIVE-21225.15.patch, 
> HIVE-21225.16.patch, HIVE-21225.2.patch, HIVE-21225.3.patch, 
> HIVE-21225.4.patch, HIVE-21225.4.patch, HIVE-21225.5.patch, 
> HIVE-21225.6.patch, HIVE-21225.7.patch, HIVE-21225.7.patch, 
> HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS API, which could be 
> answered by making a single recursive listDir call and reusing the same data 
> to check for isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed 
> directory snapshot instead of interacting with the NameNode or ObjectStore 
> within the inner loop.
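
For illustration, a sketch of the idea (not the patch itself): take one recursive listing per partition directory via Hadoop's FileSystem.listFiles(path, true) and answer all subsequent file-level questions from the cached snapshot instead of issuing further NameNode calls; the surrounding class and method names are illustrative.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class DirSnapshotSketch {
  static List<LocatedFileStatus> snapshot(FileSystem fs, Path partitionDir) throws IOException {
    List<LocatedFileStatus> files = new ArrayList<>();
    // Single recursive call instead of repeated per-check FS round trips.
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(partitionDir, true);
    while (it.hasNext()) {
      files.add(it.next());
    }
    // Checks like isRawFormat()/isValidBase() can then scan this list locally.
    return files;
  }
}
{code}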



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-23 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891413#comment-16891413
 ] 

Vineet Garg commented on HIVE-19113:


LGTM +1

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, whereas the file layout 
> does not actually cluster the rows at all.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-23 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Status: Open  (was: Patch Available)

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch, HIVE-21991.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-23 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Status: Patch Available  (was: Open)

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch, HIVE-21991.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-23 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Attachment: HIVE-21991.4.patch

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch, HIVE-21991.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22028) Clean up Add Partition

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891479#comment-16891479
 ] 

Hive QA commented on HIVE-22028:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 4 new + 480 unchanged - 17 
fixed = 484 total (was 497) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} hcatalog/streaming: The patch generated 0 new + 95 
unchanged - 6 fixed = 95 total (was 101) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch streaming passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} ql generated 0 new + 2248 unchanged - 2 fixed = 2248 
total (was 2250) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18144/dev-support/hive-personality.sh
 |
| git revision | master / 11c79af |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18144/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql hcatalog/streaming streaming U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18144/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project

[jira] [Commented] (HIVE-22028) Clean up Add Partition

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891502#comment-16891502
 ] 

Hive QA commented on HIVE-22028:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975515/HIVE-22028.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16684 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18144/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18144/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18144/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975515 - PreCommit-HIVE-Build

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch, 
> HIVE-22028.03.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This cannot be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=281426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-281426
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 24/Jul/19 01:32
Start Date: 24/Jul/19 01:32
Worklog Time Spent: 10m 
  Work Description: granthenke commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306596204
 
 

 ##
 File path: kudu-handler/src/test/queries/positive/kudu_queries.q
 ##
 @@ -0,0 +1,148 @@
+--! qt:dataset:src
+
+-- Create table specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS kv_table;
+CREATE EXTERNAL TABLE kv_table(key int, value string)
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_kv");
+
+DESCRIBE EXTENDED kv_table;
+
+-- Verify INSERT support.
+INSERT INTO TABLE kv_table VALUES
+(1, "1"), (2, "2");
+
+SELECT * FROM kv_table;
+SELECT count(*) FROM kv_table;
+SELECT count(*) FROM kv_table LIMIT 1;
+SELECT count(1) FROM kv_table;
+
+-- Verify projection and case insensitivity.
+SELECT kEy FROM kv_table;
+
+DROP TABLE kv_table;
+
+-- Create table without specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS all_types_table;
+CREATE EXTERNAL TABLE all_types_table
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_all_types");
+
+DESCRIBE EXTENDED all_types_table;
+
+INSERT INTO TABLE all_types_table VALUES
+(1, 1, 1, 1, true, 1.1, 1.1, "one", 'one', '2011-11-11 11:11:11', 1.111, null, 1),
+(2, 2, 2, 2, false, 2.2, 2.2, "two", 'two', '2012-12-12 12:12:12', 2.222, null, 2);
+
+SELECT * FROM all_types_table;
+SELECT count(*) FROM all_types_table;
+
+-- Verify comparison predicates on byte.
+SELECT key FROM all_types_table WHERE key = 1;
 
 Review comment:
   I added an explain for each column type.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 281426)
Time Spent: 3h 40m  (was: 3.5h)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891548#comment-16891548
 ] 

Hive QA commented on HIVE-22035:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18145/dev-support/hive-personality.sh
 |
| git revision | master / 11c79af |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18145/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While
> this allows official HiveConf variables to be set in the conf that is loaded
> by the HiveStrictManagedMigration utility, there are utility-specific
> configuration settings that we would want to set from the command line.
> For example, since Ambari knows what the Hive system user name is, it would
> make sense to be able to set strict.managed.tables.migration.owner on the
> command line when running this utility.
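
A minimal sketch of the behavior being requested, with a hypothetical applyHiveconfOverrides helper (names are illustrative; this is not the actual patch):

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public final class HiveconfOverrides {
  private HiveconfOverrides() {}

  // Hypothetical helper: set every --hiveconf key=value on the utility's own
  // Configuration (and on System properties, preserving the current behavior),
  // so non-HiveConf keys such as strict.managed.tables.migration.owner are
  // visible to the migration tool.
  public static void applyHiveconfOverrides(Configuration conf,
      Map<String, String> hiveconfArgs) {
    for (Map.Entry<String, String> e : hiveconfArgs.entrySet()) {
      conf.set(e.getKey(), e.getValue());
      System.setProperty(e.getKey(), e.getValue());
    }
  }
}
{code}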



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-07-23 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-21225:

Attachment: HIVE-21225.17.patch

> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-21225.1.patch, HIVE-21225.10.patch, 
> HIVE-21225.11.patch, HIVE-21225.12.patch, HIVE-21225.13.patch, 
> HIVE-21225.14.patch, HIVE-21225.15.patch, HIVE-21225.15.patch, 
> HIVE-21225.16.patch, HIVE-21225.17.patch, HIVE-21225.2.patch, 
> HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, 
> HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, 
> HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS API, which could be
> answered by making a single recursive listDir call and reusing the same data
> to check for isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed
> directory snapshot instead of interacting with the NameNode or ObjectStore
> within the inner loop.
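
A minimal sketch of the caching idea under assumed names (not the actual patch): list the partition directory recursively once, then answer the per-delta checks from that local snapshot:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// Hypothetical snapshot: one recursive listing per partition directory,
// reused by every subsequent check instead of issuing fresh NameNode calls.
class DirSnapshot {
  final List<FileStatus> files = new ArrayList<>();

  DirSnapshot(FileSystem fs, Path partitionDir) throws IOException {
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(partitionDir, true); // recursive
    while (it.hasNext()) {
      files.add(it.next());
    }
  }

  // Checks in the style of isValidBase()/isRawFormat() become local scans
  // over the cached listing rather than additional filesystem round trips.
  boolean contains(Path p) {
    return files.stream().anyMatch(f -> f.getPath().equals(p));
  }
}
{code}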



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-07-23 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-21225:

Fix Version/s: 4.0.0

> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21225.1.patch, HIVE-21225.10.patch, 
> HIVE-21225.11.patch, HIVE-21225.12.patch, HIVE-21225.13.patch, 
> HIVE-21225.14.patch, HIVE-21225.15.patch, HIVE-21225.15.patch, 
> HIVE-21225.16.patch, HIVE-21225.17.patch, HIVE-21225.2.patch, 
> HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, 
> HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, 
> HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS API, which could be
> answered by making a single recursive listDir call and reusing the same data
> to check for isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed
> directory snapshot instead of interacting with the NameNode or ObjectStore
> within the inner loop.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-07-23 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-21225:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Addressed the comments by adding more documentation in patch 17 and committed
to master. Thanks [~vgarg]

> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-21225.1.patch, HIVE-21225.10.patch, 
> HIVE-21225.11.patch, HIVE-21225.12.patch, HIVE-21225.13.patch, 
> HIVE-21225.14.patch, HIVE-21225.15.patch, HIVE-21225.15.patch, 
> HIVE-21225.16.patch, HIVE-21225.17.patch, HIVE-21225.2.patch, 
> HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, 
> HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, 
> HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS API, which could be
> answered by making a single recursive listDir call and reusing the same data
> to check for isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed
> directory snapshot instead of interacting with the NameNode or ObjectStore
> within the inner loop.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-19624) the method closeSession of SessionManager has a synchronized, is it able to be removed?

2019-07-23 Thread xialu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891558#comment-16891558
 ] 

xialu commented on HIVE-19624:
--

We are running into the same issue. Any update? Have you already removed the
synchronized keyword and run some tests?

> the method closeSession of SessionManager  has a synchronized, is it able to 
> be removed?
> 
>
> Key: HIVE-19624
> URL: https://issues.apache.org/jira/browse/HIVE-19624
> Project: Hive
>  Issue Type: Wish
>  Components: HiveServer2
>Affects Versions: 2.1.1
>Reporter: xulongfetion
>Assignee: Naveen Gangam
>Priority: Minor
>
> The closeSession method of org.apache.hive.service.cli.session.SessionManager
> is marked synchronized for thread safety. I looked at the code and wonder why
> the synchronized keyword is needed. Because closeSession sometimes makes a
> Hadoop IPC call and takes too long to respond, would it be possible to remove
> the synchronized keyword?
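
One possible alternative, sketched under the assumption that the handle-to-session map can be made concurrent (illustrative only; whether dropping the lock is safe depends on what else the synchronized method protects):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative only: remove the session atomically via the concurrent map,
// then run the potentially slow close (HDFS/IPC calls) outside any
// manager-wide lock.
class SessionManagerSketch {
  private final ConcurrentMap<String, AutoCloseable> handleToSession =
      new ConcurrentHashMap<>();

  void closeSession(String handle) throws Exception {
    AutoCloseable session = handleToSession.remove(handle); // atomic removal
    if (session != null) {
      session.close(); // slow I/O happens without blocking other sessions
    }
  }
}
{code}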



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22035) HiveStrictManagedMigration settings do not always get set with --hiveconf arguments

2019-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891579#comment-16891579
 ] 

Hive QA commented on HIVE-22035:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975545/HIVE-22035.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16684 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18145/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18145/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18145/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975545 - PreCommit-HIVE-Build

> HiveStrictManagedMigration settings do not always get set with --hiveconf 
> arguments
> ---
>
> Key: HIVE-22035
> URL: https://issues.apache.org/jira/browse/HIVE-22035
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-22035.1.patch
>
>
> Currently the --hiveconf arguments get added to the System properties. While
> this allows official HiveConf variables to be set in the conf that is loaded
> by the HiveStrictManagedMigration utility, there are utility-specific
> configuration settings that we would want to set from the command line.
> For example, since Ambari knows what the Hive system user name is, it would
> make sense to be able to set strict.managed.tables.migration.owner on the
> command line when running this utility.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22029) Create table as select throws an error for Masked enabled table

2019-07-23 Thread Ankesh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankesh updated HIVE-22029:
--
Description: 
After enabling column-level masking on a table, the CREATE TABLE AS SELECT
command fails during the pre-CBO planning phase.

 

Here is the stack trace for the failure:

Caused by: java.lang.RuntimeException: java.lang.AssertionError:Unexpected type 
UNEXPECTED
 ! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.fixUpAfterCbo(CalcitePlanner.java:949)
 ! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:367)
 ! at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11153)
 ! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
 ! at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
 ! at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:515)
 ! at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1320)
 ! at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1298)
 ! at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204)
 ! 

 

 

As far as I understand, the planner sets the UNEXPECTED type when there is a
masked table and it cannot determine the datatype of the column.
We set the context type again, but my suspicion is that we are not resetting
the context afterwards.

  was:
After enabling column-level masking on a table, the CREATE TABLE AS SELECT
command fails.

 

Here is the stack trace for the failure:

Caused by: java.lang.RuntimeException: java.lang.AssertionError:Unexpected type 
UNEXPECTED
! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.fixUpAfterCbo(CalcitePlanner.java:949)
! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:367)
! at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11153)
! at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
! at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
! at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:515)
! at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1320)
! at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1298)
! at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204)
! 

 

 

As far as I understand, the planner sets the UNEXPECTED type when there is a
masked table and it cannot determine the datatype of the column.


> Create table as select throws an error for Masked enabled table
> ---
>
> Key: HIVE-22029
> URL: https://issues.apache.org/jira/browse/HIVE-22029
> Project: Hive
>  Issue Type: Bug
>Reporter: Ankesh
>Priority: Major
>
> After enabling column-level masking on a table, the CREATE TABLE AS SELECT
> command fails during the pre-CBO planning phase.
>  
> Here is the stack trace for the failure:
> Caused by: java.lang.RuntimeException: java.lang.AssertionError:Unexpected 
> type UNEXPECTED
>  ! at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.fixUpAfterCbo(CalcitePlanner.java:949)
>  ! at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:367)
>  ! at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11153)
>  ! at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>  ! at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>  ! at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:515)
>  ! at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1320)
>  ! at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1298)
>  ! at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204)
>  ! 
>  
>  
> As far as I understand, the planner sets the UNEXPECTED type when there is a
> masked table and it cannot determine the datatype of the column.
> We set the context type again, but my suspicion is that we are not resetting
> the context afterwards.
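
A minimal sketch of the save-and-restore pattern the reporter suspects is missing (all names hypothetical; Hive's actual planner context differs):

{code:java}
// Hypothetical types; Hive's real planner context is more involved.
enum OperationType { QUERY, CTAS, UNEXPECTED }

class PlannerContextSketch {
  private OperationType operationType = OperationType.QUERY;

  // Save the context type before the masking rewrite mutates it, and restore
  // it in a finally block, so later planner phases never observe a stale
  // UNEXPECTED value.
  void rewriteWithMasking(Runnable maskingRewrite) {
    OperationType saved = operationType;
    try {
      operationType = OperationType.UNEXPECTED; // marker for the rewrite phase
      maskingRewrite.run();
    } finally {
      operationType = saved; // restore instead of leaving UNEXPECTED behind
    }
  }
}
{code}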



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)