[jira] [Created] (HIVE-20515) Empty query results when using results cache and query temp dir, results cache dir in different filesystems

2018-09-06 Thread Jason Dere (JIRA)
Jason Dere created HIVE-20515:
-

 Summary: Empty query results when using results cache and query 
temp dir, results cache dir in different filesystems
 Key: HIVE-20515
 URL: https://issues.apache.org/jira/browse/HIVE-20515
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere


If the scratchdir for temporary query results and the results cache dir are in 
different filesystems, moving the query results from the temp directory to the 
results cache will fail.

Looking at the moveResultsToCacheDirectory() logic in QueryResultsCache.java, I 
see the following issues:
- FileSystem.rename() is used, which only works if the files are on the same 
filesystem. We need to use something like Hive.mvFile, which can move files 
between different filesystems.
- The return value from rename() is not checked, which might have caught the 
error here. This may not be applicable if the proper fix uses a different 
method than FileSystem.rename().

With some filesystems (I noticed this with WASB), FileSystem.rename() returns 
false on failure rather than throwing an exception, so the query silently 
produces empty results because the return value is never checked.
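A fix presumably needs a rename-with-copy-fallback. The sketch below illustrates that pattern using java.nio rather than Hadoop's FileSystem API, so it is not the actual Hive fix; the class and method names are made up for illustration.

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Illustration only: move a results file, falling back to copy + delete. */
public class MoveAcrossFilesystems {

  public static void moveResults(Path src, Path dest) throws IOException {
    try {
      // Fast path: an atomic rename, which only works when src and dest
      // live on the same filesystem.
      Files.move(src, dest, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      // Slow path: copy the data across filesystems, then drop the source.
      Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);
      Files.delete(src);
    }
  }

  public static void main(String[] args) throws IOException {
    Path src = Files.createTempFile("results", ".tmp");
    Files.writeString(src, "row1\nrow2\n");
    Path dest = Files.createTempDirectory("cache").resolve("results");
    moveResults(src, dest);
    System.out.println(Files.exists(dest) && !Files.exists(src)); // prints: true
  }
}
```

The key point mirrors the bug description: the move either succeeds or fails loudly, instead of returning an unchecked false.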



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 68664: HIVE-20306: Implement projection spec for fetching only requested fields from partitions

2018-09-06 Thread Alexander Kolbasov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68664/
---

Review request for hive, Aihua Xu, Peter Vary, Todd Lipcon, and Vihang 
Karajgaonkar.


Bugs: HIVE-20306
https://issues.apache.org/jira/browse/HIVE-20306


Repository: hive-git


Description
---

HIVE-20306: Implement projection spec for fetching only requested fields from 
partitions


Diffs
-

  
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
 0ad2a2469e0330e050fdb8983078b80617afbbf1 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsFilterSpec.java
 PRE-CREATION 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsProjectSpec.java
 PRE-CREATION 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsRequest.java
 PRE-CREATION 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/GetPartitionsResponse.java
 PRE-CREATION 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionFilterMode.java
 PRE-CREATION 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 ae0956870a7d01c24f5fdaa07094c3dc6604ab9a 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
 4574c6a4925ae3df9dd1ee7b8786976ae6fc8397 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-php/metastore/Types.php
 22deffe1d31a64f95c49d7f017dfeb2994233e71 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 a595732f04af4304974186178377192227bb80fb 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
 38074ce79b8a06b3795d00431025240778abb569 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-py/hive_metastore/ttypes.py
 38fac465d73c264f85fc512548ebe1919ee35c17 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-rb/hive_metastore_types.rb
 0192c6da314694c1253b49949bbe749902f49b4b 
  
standalone-metastore/metastore-common/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
 e6a72762bb7b0d36fdf6d20d02cb1da3337a98a0 
  standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift 
85a5c601e03ecd2fb6ac5d30d789193e10bf38c2 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 ba82a9327cf18e8d55ebddcd774786d3d72f753a 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 c962ccc93a14729e110c1c456695f71786d2367e 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
 54e7eda0da796877f1331de137d534126375c6ba 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 571c789eddfd2b1a27c65c48bdc6dccfafaaf676 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetastoreDirectSqlUtils.java
 PRE-CREATION 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 d27224b23580b4662a85c874b657847ed068c9a3 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionProjectionEvaluator.java
 PRE-CREATION 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/RawStore.java
 b61ee81533930c889f23d2551041055cbdd1a6b2 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/StatObjectConverter.java
 7a0b21b2580d8bb9b256dbc698f125ed15ccdcd3 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
 0445cbf9095285bdcde72946f1b6dd9a9a3b9fff 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/model/MSerDeInfo.java
 68f07e2569b6531cf3e18919209aed1a17e88bf7 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/model/MStorageDescriptor.java
 4c6ce008f89469353bfee3175168a518534a42b1 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreServerUtils.java
 10ff9dfbb6d8f61fa75f731f4cd0f006c98e0067 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
 c681a87a1c6b10a4f9494e49a42282cf90027ad7 
  
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
 0934aeb3a7d5413cacde500a5575e4f676306bd0 
  
standalone-metastore/metastore-server/src/test/java

[jira] [Created] (HIVE-20514) Query with outer join filter is failing with dynamic partition join

2018-09-06 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-20514:
--

 Summary: Query with outer join filter is failing with dynamic 
partition join
 Key: HIVE-20514
 URL: https://issues.apache.org/jira/browse/HIVE-20514
 Project: Hive
  Issue Type: Bug
Reporter: Vineet Garg
Assignee: Vineet Garg


*Reproducer*
Copy the following query into {{tez_dynpart_hashjoin_1.q}} and run the test:
{code:sql}
select
  *
from alltypesorc a
left outer join alltypesorc b
  on a.cint = b.cint and a.csmallint != a.cint
where
  a.cint between 100 and 300
order by a.cint;
{code}

*Exception*
{noformat}
Vertex failed, vertexName=Reducer 2, vertexId=vertex_1536275581088_0001_5_02, 
diagnostics=[Task failed, taskId=task_1536275581088_0001_5_02_09, 
diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( 
failure ) : 
attempt_1536275581088_0001_5_02_09_0:java.lang.RuntimeException: 
java.lang.RuntimeException: cannot find field _col1 from [0:key, 1:value]
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: cannot find field _col1 from [0:key, 
1:value]
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:537)
at 
org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:153)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:56)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:140)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:140)
at 
org.apache.hadoop.hive.ql.exec.JoinUtil.getObjectInspectorsFromEvaluators(JoinUtil.java:91)
at 
org.apache.hadoop.hive.ql.exec.CommonJoinOperator.initializeOp(CommonJoinOperator.java:266)
at 
org.apache.hadoop.hive.ql.exec.AbstractMapJoinOperator.initializeOp(AbstractMapJoinOperator.java:78)
at 
org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:155)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.init(ReduceRecordProcessor.java:193)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266)
... 15 more
], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : 
attempt_1536275581088_0001_5_02_09_1:java.lang.RuntimeException: 
java.lang.RuntimeException: cannot find field _col1 from [0:key, 1:value]
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callab

[jira] [Created] (HIVE-20513) Vectorization: Improve Fast Vector MapJoin Bytes Hash Tables

2018-09-06 Thread Matt McCline (JIRA)
Matt McCline created HIVE-20513:
---

 Summary: Vectorization: Improve Fast Vector MapJoin Bytes Hash 
Tables
 Key: HIVE-20513
 URL: https://issues.apache.org/jira/browse/HIVE-20513
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Matt McCline
Assignee: Matt McCline


Based on the HIVE-20491 discussion, improve Fast Vector MapJoin Bytes Hash 
Tables by storing only a one-word slot entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-20512) Improve record and memory usage logging in SparkRecordHandler

2018-09-06 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-20512:
---

 Summary: Improve record and memory usage logging in 
SparkRecordHandler
 Key: HIVE-20512
 URL: https://issues.apache.org/jira/browse/HIVE-20512
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Sahil Takiar


We currently log memory usage and the number of records processed in Spark 
tasks, but we should improve how often we log this info. Currently we use the 
following code:

{code:java}
private long getNextLogThreshold(long currentThreshold) {
  // A very simple counter to keep track of the number of rows processed by
  // the reducer. It dumps every 1 million times, and quickly before that.
  if (currentThreshold >= 1000000) {
    return currentThreshold + 1000000;
  }
  return 10 * currentThreshold;
}
{code}

The issue is that the 10x growth factor means that, after a while, you have to 
process a huge number of records before the next log line gets triggered.

A better approach would be to log this info at a given interval. This would 
help in debugging tasks that are seemingly hung.
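Interval-based logging could be shaped roughly like this (a sketch, not the actual patch; the class and method names are made up for illustration):

```java
/**
 * Sketch of time-interval-based progress logging: instead of logging at
 * exponentially growing record counts, signal a log line whenever a fixed
 * wall-clock interval has elapsed since the last one.
 */
public class IntervalLogger {
  private final long intervalMs;
  private long lastLogTimeMs;
  private long recordCount;

  public IntervalLogger(long intervalMs, long nowMs) {
    this.intervalMs = intervalMs;
    this.lastLogTimeMs = nowMs;
  }

  /** Returns true when a progress line should be emitted for this record. */
  public boolean processRecord(long nowMs) {
    recordCount++;
    if (nowMs - lastLogTimeMs >= intervalMs) {
      lastLogTimeMs = nowMs;
      return true;  // caller logs recordCount and memory usage here
    }
    return false;
  }

  public long getRecordCount() {
    return recordCount;
  }

  public static void main(String[] args) {
    IntervalLogger logger = new IntervalLogger(1000, 0);
    System.out.println(logger.processRecord(500));   // prints: false (interval not elapsed)
    System.out.println(logger.processRecord(1500));  // prints: true (1.5s since last log)
  }
}
```

Taking the current time as a parameter keeps the class testable; the real handler would pass System.currentTimeMillis() per record.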



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hive pull request #429: HIVE-20511 : REPL DUMP is leaking metastore connecti...

2018-09-06 Thread maheshk114
GitHub user maheshk114 opened a pull request:

https://github.com/apache/hive/pull/429

HIVE-20511 : REPL DUMP is leaking metastore connections



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/maheshk114/hive HIVE-20511

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #429


commit 5eb4e4652ed6e0e1326a62383de71c78575802d2
Author: Mahesh Kumar Behera 
Date:   2018-09-06T15:45:57Z

HIVE-20511 : REPL DUMP is leaking metastore connections




---


[jira] [Created] (HIVE-20511) REPL DUMP is leaking metastore connections

2018-09-06 Thread mahesh kumar behera (JIRA)
mahesh kumar behera created HIVE-20511:
--

 Summary: REPL DUMP is leaking metastore connections
 Key: HIVE-20511
 URL: https://issues.apache.org/jira/browse/HIVE-20511
 Project: Hive
  Issue Type: Bug
  Components: repl
Affects Versions: 4.0.0
Reporter: mahesh kumar behera
Assignee: mahesh kumar behera
 Fix For: 4.0.0


With a remote metastore, REPL DUMP is leaking connections. Each repl dump task 
leaks one connection because it uses a stale Hive object.

{code}
18/09/04 16:01:46 INFO ReplState: REPL::EVENT_DUMP: 
{"dbName":"*","eventId":"566","eventType":"EVENT_COMMIT_TXN","eventsDumpProgress":"1/0","dumpTime":1536076906}
18/09/04 16:01:46 INFO events.AbstractEventHandler: Processing#567 OPEN_TXN 
message : 
{"txnIds":null,"timestamp":1536076905,"fromTxnId":269,"toTxnId":269,"server":"thrift://metastore-service.warehouse-1536062326-s74h.svc.cluster.local:9083","servicePrincipal":""}
18/09/04 16:01:46 INFO ReplState: REPL::EVENT_DUMP: 
{"dbName":"*","eventId":"567","eventType":"EVENT_OPEN_TXN","eventsDumpProgress":"2/0","dumpTime":1536076906}
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Trying to connect to 
metastore with URI 
thrift://metastore-service.warehouse-1536062326-s74h.svc.cluster.local:9083
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Opened a connection to 
metastore, current connections: 471
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Connected to metastore.
18/09/04 16:01:46 INFO metastore.RetryingMetaStoreClient: 
RetryingMetaStoreClient proxy=class 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive 
(auth:SIMPLE) retries=24 delay=5 lifetime=0
18/09/04 16:01:46 INFO ReplState: REPL::END: 
{"dbName":"*","dumpType":"INCREMENTAL","actualNumEvents":2,"dumpEndTime":1536076906,"dumpDir":"/user/hive/repl/e45bde27-74dc-45cd-9823-400a8fc1aea3","lastReplId":"567"}
18/09/04 16:01:46 INFO repl.ReplDumpTask: Done dumping events, preparing to 
return /user/hive/repl/e45bde27-74dc-45cd-9823-400a8fc1aea3,567
18/09/04 16:01:46 INFO ql.Driver: Completed executing 
command(queryId=hive_20180904160145_30f9570a-44e0-4f3b-b961-1906d3972fc4); Time 
taken: 0.585 seconds
OK
18/09/04 16:01:46 INFO ql.Driver: OK
18/09/04 16:01:46 INFO lockmgr.DbTxnManager: Stopped heartbeat for query: 
hive_20180904160145_30f9570a-44e0-4f3b-b961-1906d3972fc4
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Trying to connect to 
metastore with URI 
thrift://metastore-service.warehouse-1536062326-s74h.svc.cluster.local:9083
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Opened a connection to 
metastore, current connections: 472
18/09/04 16:01:46 INFO metastore.HiveMetaStoreClient: Connected to metastore.
{code}
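The leak pattern and its usual fix can be sketched with a stand-in client class (FakeMetaStoreClient below is hypothetical, not Hive's HiveMetaStoreClient): a connection opened per task has to be scoped to the task, e.g. with try-with-resources.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Stand-in metastore client; counts open connections like the log above. */
class FakeMetaStoreClient implements AutoCloseable {
  static final AtomicInteger OPEN = new AtomicInteger();
  FakeMetaStoreClient() { OPEN.incrementAndGet(); }
  @Override public void close() { OPEN.decrementAndGet(); }
}

public class ConnectionLeakDemo {
  /** Leaky pattern: open a client per task and never close it. */
  static void leakyTask() {
    FakeMetaStoreClient client = new FakeMetaStoreClient();
    // ... dump events ... (client is abandoned, connection stays open)
  }

  /** Fixed pattern: the client's lifetime is bounded by the task. */
  static void fixedTask() {
    try (FakeMetaStoreClient client = new FakeMetaStoreClient()) {
      // ... dump events ...
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 3; i++) leakyTask();
    System.out.println(FakeMetaStoreClient.OPEN.get()); // prints: 3 (leaked)
    for (int i = 0; i < 3; i++) fixedTask();
    System.out.println(FakeMetaStoreClient.OPEN.get()); // prints: 3 (no new leaks)
  }
}
```

This mirrors the "current connections: 471, 472, ..." counter climbing in the log: each task adds one connection that is never released.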



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 68656: HIVE-20505: upgrade org.openjdk.jmh:jmh-core to 1.21

2018-09-06 Thread Laszlo Pinter via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68656/
---

Review request for hive.


Repository: hive-git


Description
---

HIVE-20505: upgrade org.openjdk.jmh:jmh-core to 1.21


Diffs
-

  itests/hive-jmh/pom.xml 0abefdf791a04593c547119256a755adcd78bda5 


Diff: https://reviews.apache.org/r/68656/diff/1/


Testing
---


Thanks,

Laszlo Pinter



Re: Review Request 68630: HIVE-20420: Provide a fallback authorizer when no other authorizer is in use

2018-09-06 Thread Laszlo Pinter via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68630/#review208398
---




ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 23 (patched)


Unused import.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 41 (patched)


Unused import.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 47 (patched)


Unused import.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 55 (patched)


The conf variable is used only in the constructor. No need to keep it as a 
member variable.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 58 (patched)


The constructor visibility can be changed to package-private, since it is 
instantiated through FallbackHiveAuthorizerFactory.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 94 (patched)


The next few methods are declared to throw HiveAccessControlException, 
though it is never actually thrown in those methods. Let's try to keep the 
code as clean and simple as possible.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 148 (patched)


No need to explicitly declare the type argument. You can use the diamond 
operator (type inference) for generic instance creation.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 159 (patched)


Unnecessary throws clause declaration.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 164-171 (patched)


This section can be replaced with a more elegant implementation, using 
stream api.
```java
if (admins != null &&
    Arrays.stream(admins).parallel().anyMatch(n -> n.equals(userName))) {
  return;
}

```



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 180-187 (patched)


I'm not sure this loop is correct, since it only checks the first 
element of the hiveObjects list and then breaks out of the loop. 
If the intention was to check only the first entry, no need to use a loop.
```
boolean needAdmin = !hiveObjects.isEmpty()
    && hiveObjects.get(0).getType()
        == HivePrivilegeObject.HivePrivilegeObjectType.LOCAL_URI;
```
If you want to check all the entries, then this implementation is incorrect.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 188 (patched)


No need to check for the operation type if needAdmin is already true.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
Lines 203-222 (patched)


Some unthrown exception declarations.



ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizerFactory.java
Lines 33 (patched)


The HiveAuthzPluginException is never thrown by the method. Also, if you 
remove the exception, the ```import 
org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthzPluginException;```
 can be removed.


- Laszlo Pinter


On Sept. 5, 2018, 5:27 p.m., Daniel Dai wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68630/
> ---
> 
> (Updated Sept. 5, 2018, 5:27 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> See HIVE-20420
> 
> 
> Diffs
> -
> 
>   ql/pom.xml a55cbe3 
>   
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/SettableConfigUpdater.java
>  12be41c 
>   
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/fallback/FallbackHiveAuthorizer.java
>  PRE-CREATION 
>   
> ql/src/ja

Re: Review Request 68648: HIVE-20510

2018-09-06 Thread Matt McCline

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68648/#review208396
---




ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
Lines 865 (patched)


In order for EXPLAIN VECTORIZATION to see the proper information on 
BucketNumExpression you need to call 

ve.setInputTypeInfos(inputTypeInfo);
ve.setOutputTypeInfo(outputTypeInfo);

on the new VectorExpression.

Probably in a separate method.


- Matt McCline


On Sept. 6, 2018, 6:47 a.m., Deepak Jaiswal wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68648/
> ---
> 
> (Updated Sept. 6, 2018, 6:47 a.m.)
> 
> 
> Review request for hive, Gopal V and Matt McCline.
> 
> 
> Bugs: HIVE-20510
> https://issues.apache.org/jira/browse/HIVE-20510
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Vectorization : Support loading bucketed tables using sorted dynamic 
> partition optimizer.
> Added a new VectorExpression BucketNumberExpression to evaluate 
> _bucket_number.
> Made the loops as tight as possible.
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
> 57f7c0108e 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/BucketNumExpression.java
>  PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/vector/reducesink/VectorReduceSinkObjectHashOperator.java
>  5ab59c9c61 
>   ql/src/test/queries/clientpositive/dynpart_sort_opt_vectorization.q 
> 435cdaddd0 
>   
> ql/src/test/results/clientpositive/llap/dynpart_sort_opt_vectorization.q.out 
> 22f0a31eb3 
> 
> 
> Diff: https://reviews.apache.org/r/68648/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Deepak Jaiswal
> 
>