[jira] [Created] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-14 Thread slim bouguerra (JIRA)
slim bouguerra created HIVE-22115:
-

 Summary: Prevent the creation of query-router logger in HS2 as per 
property
 Key: HIVE-22115
 URL: https://issues.apache.org/jira/browse/HIVE-22115
 Project: Hive
  Issue Type: Improvement
Reporter: slim bouguerra
Assignee: slim bouguerra


Avoid the creation and registration of the query-router logger if the following 
HiveServer2 property is set to false by the user:

{code}
HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
{code}
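
A minimal sketch of the proposed gating, assuming the registration happens 
through a single hook; apart from the HiveConf lookup, the class and method 
names below are illustrative, not the actual HiveServer2 code:

{code}
// Minimal sketch of the proposed gating (illustrative; only the HiveConf lookup
// below refers to real Hive API, the registration hook is a placeholder).
import org.apache.hadoop.hive.conf.HiveConf;

public class QueryRouterSetupSketch {

  /** Create/register the query-router logger only when operation logging is enabled. */
  public static void maybeRegisterQueryRouter(HiveConf conf) {
    boolean operationLoggingEnabled = conf.getBoolVar(
        HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED);
    if (!operationLoggingEnabled) {
      // User disabled operation logging: skip creating and registering the
      // query-router appender entirely.
      return;
    }
    registerQueryRoutingAppender(conf); // hypothetical registration hook
  }

  private static void registerQueryRoutingAppender(HiveConf conf) {
    // Placeholder for the actual log4j2 routing-appender registration in HS2.
  }
}
{code}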





Re: Review Request 71267: HIVE-22087: Transform Database object on getDatabase() to return location based on client capabilities.

2019-08-14 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71267/#review217212
---




standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift
Lines 1938 (patched)


To future-proof that, it would have been better to have a GetDatabaseResponse 
as well, similar to get_catalog.
That can be a smaller follow-up patch.
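
For illustration, a response wrapper keeps the client-facing signature stable as 
fields are added. A minimal sketch of what such a wrapper could look like on the 
Java side, mirroring the get_catalog request/response style; GetDatabaseResponse 
does not exist in this patch, so the class below is an assumption, not generated 
Thrift code:

{code}
// Hypothetical wrapper, hand-written for illustration only (the real class would
// be generated from hive_metastore.thrift in a follow-up patch).
import org.apache.hadoop.hive.metastore.api.Database;

public class GetDatabaseResponseSketch {
  private Database database;

  public Database getDatabase() {
    return database;
  }

  public void setDatabase(Database database) {
    this.database = database;
  }

  // Additional response fields (e.g. which transformations were applied) can be
  // added here later without changing the get_database request/response method
  // signature that clients compile against.
}
{code}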


- Thejas Nair


On Aug. 14, 2019, 6:26 a.m., Naveen Gangam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71267/
> ---
> 
> (Updated Aug. 14, 2019, 6:26 a.m.)
> 
> 
> Review request for hive, Daniel Dai, Jason Dere, and Thejas Nair.
> 
> 
> Bugs: HIVE-22087
> https://issues.apache.org/jira/browse/HIVE-22087
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> 1) getDatabase() calls should be transformed to return a Database object 
> whose location can vary depending on the client's capabilities. If the client 
> has ACID*WRITE* capabilities, the location is unaltered. If the client does 
> not have such capabilities, the returned database location points to the 
> external warehouse directory.
> 2) When a non-ACID MANAGED table is translated to EXTERNAL table, its 
> location should be altered to point to an external warehouse directory and 
> not to the managed warehouse.
> 3) Some new test cases.
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetastoreTransformer.java
>  e50b577ff7 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/AlterPartitionsRequest.java
>  6453c93d79 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/CreateTableRequest.java
>  5d42a80373 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/FindSchemasByColsResp.java
>  4024751ed3 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetDatabaseRequest.java
>  PRE-CREATION 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsFilterSpec.java
>  fcba6ebb4d 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsProjectionSpec.java
>  d94cbb1bcc 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsRequest.java
>  dd4bf8339a 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsResponse.java
>  ddfa59fb1c 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RenamePartitionRequest.java
>  de467c298f 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SchemaVersion.java
>  09fcd476e9 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
>  6b117291a6 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMFullResourcePlan.java
>  080111d85b 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMGetAllResourcePlanResponse.java
>  d0174005ca 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMGetTriggersForResourePlanResponse.java
>  e5425909d4 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMValidateResourcePlanResponse.java
>  b12c2284a2 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
>  4623e9ab5f 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-php/metastore/Types.php
>  0d45371b88 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
>  647c762acd 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
>  5107d0f99a 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ttypes.py
>  08c0730e1c 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-rb/hive_metastore_types.rb
>  8ce2b88fd8 
>   
> standalone-metastore/metastore-common/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
>  7a6a722d9a 
>   
> standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
>  9b64

[jira] [Created] (HIVE-22114) insert query for partitioned table failing when all buckets are empty, s3 storage location

2019-08-14 Thread Aswathy Chellammal Sreekumar (JIRA)
Aswathy Chellammal Sreekumar created HIVE-22114:
---

 Summary: insert query for partitioned table failing when all 
buckets are empty, s3 storage location
 Key: HIVE-22114
 URL: https://issues.apache.org/jira/browse/HIVE-22114
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 3.1.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Vineet Garg


The following insert query fails when all buckets are empty:

{noformat}
create table src_emptybucket_partitioned_1 (name string, age int, gpa 
decimal(3,2))
   partitioned by(year int)
   clustered by (age)
   sorted by (age)
   into 100 buckets
   stored as orc;
insert into table src_emptybucket_partitioned_1
   partition(year=2015)
   select * from studenttab10k limit 0;
{noformat}

Error:

{noformat}
ERROR : Job Commit failed with exception 
'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
 No such file or directory: 
s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
# org.apache.hadoop.hive.ql.metadata.HiveException: 
java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1880)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1922)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getMmDirectoryCandidates(Utilities.java:4185)
at 
org.apache.hadoop.hive.ql.exec.Utilities.handleMmTableFinalPath(Utilities.java:4386)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1397)
... 26 more

ERROR : FAILED: Execution Error, return code
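
The failure comes from listing a partition directory that was never created 
because no bucket files were written. As a standalone illustration (not the 
actual Utilities/FileSinkOperator fix), a guard of the following shape avoids 
the FileNotFoundException; the helper name listIfExists is hypothetical:

{code}
// Standalone illustration: skip directory listing when the partition path was
// never created (e.g. all buckets empty). Not the actual Hive fix.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyPartitionListing {

  /** Returns the children of {@code dir}, or an empty array if it does not exist. */
  public static FileStatus[] listIfExists(Path dir, Configuration conf) throws IOException {
    FileSystem fs = dir.getFileSystem(conf);
    if (!fs.exists(dir)) {
      // Nothing was written for this partition (all buckets empty), so there is
      // nothing to list or commit for it.
      return new FileStatus[0];
    }
    return fs.listStatus(dir);
  }
}
{code}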

[jira] [Created] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Oliver Draese (JIRA)
Oliver Draese created HIVE-22113:


 Summary: Prevent LLAP shutdown on AMReporter related 
RuntimeException
 Key: HIVE-22113
 URL: https://issues.apache.org/jira/browse/HIVE-22113
 Project: Hive
  Issue Type: Bug
  Components: llap
Affects Versions: 3.1.1
Reporter: Oliver Draese
Assignee: Oliver Draese


If a task attempt cannot be removed from the AMReporter (i.e., the task attempt 
was not found), the AMReporter throws a RuntimeException. This exception is not 
caught and trickles up, causing an LLAP daemon shutdown:
{noformat}
2019-08-08T23:34:39,748 ERROR [Wait-Queue-Scheduler-0 ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now...
java.lang.RuntimeException: attempt_1563528877295_18872_3728_01_03_0 was not registered and couldn't be removed
at org.apache.hadoop.hive.llap.daemon.impl.AMReporter$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at org.apache.hadoop.hive.llap.daemon.impl.AMReporter.unregisterTask(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at org.apache.hadoop.hive.llap.daemon.impl.TaskRunnerCallable.killTask(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
{noformat}
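
A minimal sketch of the guard this implies: wrapping the AMReporter 
unregistration so a missing task attempt is logged instead of shutting the 
daemon down. This is standalone, illustrative code with a stand-in interface, 
not the actual TaskRunnerCallable/TaskExecutorService patch:

{code}
// Illustrative only: AmReporterStub stands in for the real
// org.apache.hadoop.hive.llap.daemon.impl.AMReporter to show the guard shape.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SafeUnregisterSketch {
  private static final Logger LOG = LoggerFactory.getLogger(SafeUnregisterSketch.class);

  interface AmReporterStub {
    // Mirrors the idea of AMReporter.unregisterTask(); parameters simplified.
    void unregisterTask(String queryId, String fragmentId);
  }

  /**
   * Unregister the task attempt, but treat "attempt was not registered" as a
   * recoverable condition instead of letting the RuntimeException trickle up
   * the wait-queue scheduler thread and shut down the whole LLAP daemon.
   */
  static void unregisterQuietly(AmReporterStub reporter, String queryId, String fragmentId) {
    try {
      reporter.unregisterTask(queryId, fragmentId);
    } catch (RuntimeException e) {
      LOG.warn("Could not unregister task attempt {}; continuing", fragmentId, e);
    }
  }
}
{code}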





[jira] [Created] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-22112:
---

 Summary: update jackson version in disconnected poms 
 Key: HIVE-22112
 URL: https://issues.apache.org/jira/browse/HIVE-22112
 Project: Hive
  Issue Type: Improvement
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


The jackson version was updated in the top-level pom via HIVE-22089; the 
disconnected poms still need the same update.





[jira] [Created] (HIVE-22111) Materialized view based on replicated table might not get refreshed

2019-08-14 Thread Peter Vary (JIRA)
Peter Vary created HIVE-22111:
-

 Summary: Materialized view based on replicated table might not get 
refreshed
 Key: HIVE-22111
 URL: https://issues.apache.org/jira/browse/HIVE-22111
 Project: Hive
  Issue Type: Bug
  Components: Materialized views, repl
Reporter: Peter Vary
Assignee: Peter Vary


Consider the following scenario:

* create a base table which we replicate
* create a materialized view in the target Hive based on the base table
* modify (delete/update) the base table in the source Hive
* replicate the changes (delete/update) to the target Hive
* query the materialized view in the target Hive

The materialized view data is not refreshed, because when the transaction is 
created by replication we set ctc_update_delete to 'N'.





[jira] [Created] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Ashutosh Bapat (JIRA)
Ashutosh Bapat created HIVE-22110:
-

 Summary: Initialize ReplChangeManager before starting actual dump
 Key: HIVE-22110
 URL: https://issues.apache.org/jira/browse/HIVE-22110
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, repl
Affects Versions: 4.0.0
Reporter: Ashutosh Bapat
Assignee: Ashutosh Bapat
 Fix For: 4.0.0


REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
to the URL. This requires ReplChangeManager to be initialized, so initialize 
ReplChangeManager before starting the actual dump.
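
A minimal sketch of the intended ordering, under the assumption of a 
getInstance(conf)-style initializer; the helper and method names below are 
placeholders, not the actual ReplDumpTask code:

{code}
// Sketch of the intended ordering only; the initializer call is an assumption
// (a getInstance(conf)-style entry point), not a reference to the exact method.
import org.apache.hadoop.hive.conf.HiveConf;

public class ReplDumpInitSketch {

  /** Make sure the change manager (cmroot, checksum support) is ready before dumping. */
  public static void prepareForDump(HiveConf conf) {
    // 1) Initialize ReplChangeManager first, so that encodeFileUri() can append
    //    the cmroot and checksum to file URLs during the dump.
    initReplChangeManager(conf); // hypothetical wrapper around the real initializer

    // 2) Only then start walking tables/partitions and writing the dump metadata.
    runDump(conf);
  }

  private static void initReplChangeManager(HiveConf conf) {
    // e.g. a ReplChangeManager.getInstance(conf)-style call in the real code base (assumed name).
  }

  private static void runDump(HiveConf conf) {
    // Placeholder for the actual REPL DUMP work.
  }
}
{code}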


