Re: Review Request 55194: HIVE-15541: Hive OOM when ATSHook enabled and ATS goes down

2017-01-05 Thread Barna Zsombor Klara

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55194/#review160571
---




ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java (line 208)


Should this be logged at INFO level? I understand that the Hive query can 
still be executed even though we couldn't send the data to YARN, but shouldn't 
it be at least a warning since something went wrong?



ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java (line 213)


Same as line 208. Should this be a warning instead?


Thanks for the patch. LGTM, just minor nits.

- Barna Zsombor Klara


On Jan. 5, 2017, 2:28 a.m., Jason Dere wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/55194/
> ---
> 
> (Updated Jan. 5, 2017, 2:28 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-15541
> https://issues.apache.org/jira/browse/HIVE-15541
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Create the ATSHook executor with a bounded queue capacity
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 3651c9c 
> 
> Diff: https://reviews.apache.org/r/55194/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jason Dere
> 
>



Hive + Kerberos Ticket Issue

2017-01-05 Thread shakun grover
When trying to create a Hive JDBC connection after obtaining a Kerberos ticket
using the kinit -k -t command run through ProcessBuilder, I am getting the
exception below:

java.lang.IllegalStateException: This ticket is no longer valid
at
javax.security.auth.kerberos.KerberosTicket.toString(KerberosTicket.java:638)
at java.lang.String.valueOf(String.java:2847)
at java.lang.StringBuilder.append(StringBuilder.java:128)
at sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:150)
at sun.security.jgss.krb5.SubjectComber.find(SubjectComber.java:59)
at sun.security.jgss.krb5.Krb5Util.getTicket(Krb5Util.java:155)
at
sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:346)
at
sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:344)
at java.security.AccessController.doPrivileged(Native Method)
at
sun.security.jgss.krb5.Krb5InitCredential.getTgt(Krb5InitCredential.java:343)
at
sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:145)
at
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at
sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at
sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
at
org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at
com.dataguise.agent.hive.util.HiveJDBCClient.<init>(HiveJDBCClient.java:34)


After the ticket is generated from the code, running klist on the command line
shows a valid ticket, but I am still not able to get the JDBC connection.

However, if I set the system property
-Djavax.security.auth.useSubjectCredsOnly=false, then I am able to
get the JDBC session.
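
For reference, a common alternative to shelling out to kinit is to log in from
the keytab through Hadoop's UserGroupInformation before opening the connection.
A rough, untested sketch; the principal, keytab path and JDBC URL below are
placeholders, not values taken from this report:

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosHiveJdbcSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // log in directly from the keytab instead of relying on an external kinit
        UserGroupInformation.loginUserFromKeytab(
            "myuser@EXAMPLE.COM", "/path/to/myuser.keytab");

        // the principal in the URL is HiveServer2's service principal, not the client's
        String url = "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM";
        try (Connection conn = DriverManager.getConnection(url)) {
          System.out.println("connected: " + !conn.isClosed());
        }
      }
    }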

Could anyone help me with this?

Thanks in advance!!

-- 
Thanks & Regards,
Shakun Grover


Re: Review Request 55154: HIVE-15366: REPL LOAD & DUMP support for incremental INSERT events

2017-01-05 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55154/
---

(Updated Jan. 5, 2017, 10:47 a.m.)


Review request for hive, Daniel Dai, Sushanth Sowmyan, and Thejas Nair.


Bugs: HIVE-15366
https://issues.apache.org/jira/browse/HIVE-15366


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-15366


Diffs (updated)
-

  
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java
 39356ae 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestReplicationScenarios.java
 e29aa22 
  metastore/if/hive_metastore.thrift 79592ea 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 1311b20 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/InsertEventRequestData.java
 39a607d 
  metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb ebed504 
  metastore/src/java/org/apache/hadoop/hive/metastore/events/InsertEvent.java 
d9a42a7 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/InsertMessage.java
 fe747df 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/MessageFactory.java
 fdb8e80 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/json/JSONInsertMessage.java
 bd9f9ec 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/json/JSONMessageFactory.java
 9954902 
  ql/src/java/org/apache/hadoop/hive/ql/exec/ReplCopyTask.java 4c0f817 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java be5a6a9 
  ql/src/java/org/apache/hadoop/hive/ql/parse/EximUtil.java 6e9602f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
f61274b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
5561e06 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java 
9b83407 

Diff: https://reviews.apache.org/r/55154/diff/


Testing
---


Thanks,

Vaibhav Gumashta



[jira] [Created] (HIVE-15545) JDBC driver aborts when DatabaseMetadata.getFunctions is called

2017-01-05 Thread N Campbell (JIRA)
N Campbell created HIVE-15545:
-

 Summary: JDBC driver aborts when DatabaseMetadata.getFunctions is 
called
 Key: HIVE-15545
 URL: https://issues.apache.org/jira/browse/HIVE-15545
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 2.1.1
Reporter: N Campbell


getDatabaseProductVersion:1.2.1000.2.5.3.0-37
getDriverVersion:1.2.1000.2.5.0.0-1245

ResultSet rs = dbMeta.getFunctions(null, null, null);


Exception in thread "main" java.sql.SQLException: Required field 'functionName' 
is unset! 
Struct:TGetFunctionsReq(sessionHandle:TSessionHandle(sessionId:THandleIdentifier(guid:99
 48 E7 57 6A 77 40 00 8D 49 99 34 81 51 C7 04, secret:F8 64 B2 9C D8 A2 41 7A 
99 E6 F1 34 E9 38 13 1D)), functionName:null)
at 
org.apache.hive.jdbc.HiveDatabaseMetaData.getFunctions(HiveDatabaseMetaData.java:330)
at zBug.getFunctions(zBug.java:679)
at test.main(test.java:87)
Caused by: org.apache.thrift.protocol.TProtocolException: Required field 
'functionName' is unset! 
Struct:TGetFunctionsReq(sessionHandle:TSessionHandle(sessionId:THandleIdentifier(guid:99
 48 E7 57 6A 77 40 00 8D 49 99 34 81 51 C7 04, secret:F8 64 B2 9C D8 A2 41 7A 
99 E6 F1 34 E9 38 13 1D)), functionName:null)
at 
org.apache.hive.service.cli.thrift.TGetFunctionsReq.validate(TGetFunctionsReq.java:542)
at 
org.apache.hive.service.cli.thrift.TCLIService$GetFunctions_args.validate(TCLIService.java:10145)
at 
org.apache.hive.service.cli.thrift.TCLIService$GetFunctions_args$GetFunctions_argsStandardScheme.write(TCLIService.java:10202)
at 
org.apache.hive.service.cli.thrift.TCLIService$GetFunctions_args$GetFunctions_argsStandardScheme.write(TCLIService.java:10171)
at 
org.apache.hive.service.cli.thrift.TCLIService$GetFunctions_args.write(TCLIService.java:10122)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:71)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at 
org.apache.hive.service.cli.thrift.TCLIService$Client.send_GetFunctions(TCLIService.java:384)
at 
org.apache.hive.service.cli.thrift.TCLIService$Client.GetFunctions(TCLIService.java:376)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at 
org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1363)
at com.sun.proxy.$Proxy0.GetFunctions(Unknown Source)
at 
org.apache.hive.jdbc.HiveDatabaseMetaData.getFunctions(HiveDatabaseMetaData.java:328)
... 2 more
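
An untested workaround sketch, assuming the failure is only the null
functionName rejected by the Thrift validation above: passing an explicit
match-all pattern keeps the field set. The URL is a placeholder.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class GetFunctionsWorkaround {
      public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://host:10000/default")) {
          DatabaseMetaData dbMeta = conn.getMetaData();
          // "%" instead of null keeps functionName set in the TGetFunctionsReq struct
          try (ResultSet rs = dbMeta.getFunctions(null, null, "%")) {
            while (rs.next()) {
              System.out.println(rs.getString("FUNCTION_NAME"));
            }
          }
        }
      }
    }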




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] hive pull request #125: HIVE-15419 Separate storage-api so that it can be re...

2017-01-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/hive/pull/125


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] hive pull request #121: HIVE-15375. Backport ORC-115 to Hive.

2017-01-05 Thread omalley
Github user omalley closed the pull request at:

https://github.com/apache/hive/pull/121


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[VOTE] Should we release hive-storage 2.2.0RC0

2017-01-05 Thread Owen O'Malley
All,
   I'd like to make a release of Hive's storage-api module. This will allow
ORC to remove its fork of storage-api and make a release based on it. The
RC is based on Hive's master branch as of this morning.

Artifacts:
   tag: https://github.com/apache/hive/releases/tag/storage-release-2.2.0rc0
   tar ball: http://home.apache.org/~omalley/hive-storage-2.2.0rc0/
   release branch: https://github.com/apache/hive/tree/storage-branch-2.2

If you download the tag or the release branch, you'll need to go into
storage-api to build it, because I disconnected it from the main hive build.

Should we release storage-api 2.2.0RC0?

Thanks,
   Owen


Re: Review Request 55194: HIVE-15541: Hive OOM when ATSHook enabled and ATS goes down

2017-01-05 Thread Jason Dere


> On Jan. 5, 2017, 9:38 a.m., Barna Zsombor Klara wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java, line 208
> > 
> >
> > Should this be logged at INFO level? I understand that the Hive query can 
> > still be executed even though we couldn't send the data to YARN, but 
> > shouldn't it be at least a warning since something went wrong?

Makes sense, will change these to warning-level logs.


- Jason


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55194/#review160571
---


On Jan. 5, 2017, 2:28 a.m., Jason Dere wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/55194/
> ---
> 
> (Updated Jan. 5, 2017, 2:28 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-15541
> https://issues.apache.org/jira/browse/HIVE-15541
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Create the ATSHook executor with a bounded queue capacity
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 3651c9c 
> 
> Diff: https://reviews.apache.org/r/55194/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jason Dere
> 
>



Re: Review Request 55194: HIVE-15541: Hive OOM when ATSHook enabled and ATS goes down

2017-01-05 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55194/
---

(Updated Jan. 5, 2017, 6:45 p.m.)


Review request for hive.


Changes
---

Changing log level to WARN for caught exceptions


Bugs: HIVE-15541
https://issues.apache.org/jira/browse/HIVE-15541


Repository: hive-git


Description
---

Create the ATSHook executor with a bounded queue capacity
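
A minimal, hedged sketch of the idea, not the actual patch: bound the hook's
async queue so a slow or down ATS cannot let pending events pile up in memory
without limit. The capacity value and the rejection handling below are
assumptions.

    import java.util.concurrent.*;

    public class BoundedHookExecutorSketch {
      public static void main(String[] args) {
        int queueCapacity = 64;  // the real patch would read this from a HiveConf setting
        ExecutorService executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(queueCapacity),
            new ThreadPoolExecutor.AbortPolicy());  // reject when full instead of growing without bound
        try {
          executor.submit(() -> System.out.println("build and send the ATS entity here"));
        } catch (RejectedExecutionException e) {
          // ATS is down/slow and the queue is full: drop the event with a WARN rather than OOM
          System.err.println("WARN: dropping ATS event, hook queue is full");
        } finally {
          executor.shutdown();
        }
      }
    }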


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 3651c9c 

Diff: https://reviews.apache.org/r/55194/diff/


Testing
---


Thanks,

Jason Dere



[jira] [Created] (HIVE-15546) Optimize Utilities.getInputPaths()

2017-01-05 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-15546:
---

 Summary: Optimize Utilities.getInputPaths()
 Key: HIVE-15546
 URL: https://issues.apache.org/jira/browse/HIVE-15546
 Project: Hive
  Issue Type: Sub-task
  Components: Hive
Reporter: Sahil Takiar
Assignee: Sahil Takiar


When running on blobstores (like S3) where metadata operations (like 
listStatus) are costly, Utilities.getInputPaths() can add significant overhead 
when setting up the input paths for an MR / Spark / Tez job.

The method performs a listStatus on all input paths in order to check if the 
path is empty. If the path is empty, a dummy file is created for the given 
partition. This is all done sequentially. This can be really slow when there 
are a lot of empty partitions. Even when all partitions have input data, this 
can take a long time.

We should either:

(1) Just remove the logic to check if each input path is empty, and handle any 
edge cases accordingly.

(2) Multi-thread the listStatus calls
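
A rough sketch of option (2), not the actual patch; the class and method names,
thread count and return shape are illustrative only:

    import java.util.*;
    import java.util.concurrent.*;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ParallelEmptyPathCheck {
      static Map<Path, Boolean> checkEmptyPaths(List<Path> inputPaths, Configuration conf,
          int threadCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        try {
          Map<Path, Future<Boolean>> futures = new LinkedHashMap<>();
          for (Path p : inputPaths) {
            // each listStatus may hit a slow blobstore, so issue them concurrently
            futures.put(p, pool.submit(() -> p.getFileSystem(conf).listStatus(p).length == 0));
          }
          Map<Path, Boolean> isEmpty = new LinkedHashMap<>();
          for (Map.Entry<Path, Future<Boolean>> e : futures.entrySet()) {
            isEmpty.put(e.getKey(), e.getValue().get());
          }
          return isEmpty;
        } finally {
          pool.shutdown();
        }
      }
    }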



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15547) nulls not sorted last on cursor specification

2017-01-05 Thread N Campbell (JIRA)
N Campbell created HIVE-15547:
-

 Summary: nulls not sorted last on cursor specification
 Key: HIVE-15547
 URL: https://issues.apache.org/jira/browse/HIVE-15547
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.0
Reporter: N Campbell


A query that attempts to sort nulls last produces an incorrect result set:
(1) the first column comes back with all null values, which is wrong;
(2) the only null value in the second column is not sorted as the last row.

Hive server version: 2.1.0.2.5.3.0-37

Query:
SELECT `tint`.`rnum`, `tint`.`cint` FROM `tint` ORDER BY `tint`.`rnum` ASC 
NULLS LAST

Results:
tint.rnum   tint.cint
NULL        NULL
NULL        -1
NULL        0
NULL        1
NULL        10


Source data:
rnum    cint
0       NULL
1       -1
2       0
3       1
4       10

Table
create table  if not exists TINT ( RNUM int , CINT int   )
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
 STORED AS textfile  ;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15548) TEZ exception error when NULL ordering specification used on cursor or window agg

2017-01-05 Thread N Campbell (JIRA)
N Campbell created HIVE-15548:
-

 Summary: TEZ exception error when NULL ordering specification used 
on cursor or window agg
 Key: HIVE-15548
 URL: https://issues.apache.org/jira/browse/HIVE-15548
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.0
Reporter: N Campbell


select c1, c2 from tset1 order by c1 asc nulls last,  c2 asc nulls first

select rnum , c1 , c2 , sum( c3 ) over (partition by sum( c3 ) over (partition by c1 order by c1 )) from tolap



For example:

Error: Error while processing statement: FAILED: Execution Error, return code 2 
from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, 
vertexName=Reducer 2, vertexId=vertex_1483461312952_0011_11_01, 
diagnostics=[Task failed, taskId=task_1483461312952_0011_11_01_00, 
diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( 
failure ) : 
attempt_1483461312952_0011_11_01_00_0:java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.EOFException: Detail: 
"java.io.EOFException" occured for field 0 of 2 fields (INT, CHAR)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


table definitions:


create table  if not exists TSET1 (RNUM int , C1 int, C2 char(3))
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
 STORED AS textfile ;

create table  if not exists TOLAP (RNUM int , C1 char(3), C2 char(2), C3 int, 
C4 int)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
 STORED AS textfile ;





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15549) Better naming of Tez edges

2017-01-05 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-15549:
-

 Summary: Better naming of Tez edges
 Key: HIVE-15549
 URL: https://issues.apache.org/jira/browse/HIVE-15549
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


Do the following renames:

CUSTOM_EDGE -> CO_PARTITION_EDGE
CUSTOM_SIMPLE_EDGE -> PARTITION_EDGE
SIMPLE_EDGE -> SORT_PARTITION_EDGE

Because that's what those edges actually do.

Also rename the Map/Reduce vertex names to just Vertex. These vertices haven't
mapped or reduced in a long time; the names are leftovers from MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 55247: LLAP: use LLAP cache for non-columnar formats in a somewhat general way

2017-01-05 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55247/
---

Review request for hive, Gopal V, Prasanth_J, and Siddharth Seth.


Repository: hive-git


Description
---

see JIRA


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
  data/files/over10k.gz PRE-CREATION 
  llap-client/src/java/org/apache/hadoop/hive/llap/io/api/LlapIo.java d82757f 
  
llap-server/src/java/org/apache/hadoop/hive/llap/IncrementalObjectSizeEstimator.java
 3efbcc2 
  llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java 
d9d407d 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cache/EvictionDispatcher.java 
b6fd3e3 
  llap-server/src/java/org/apache/hadoop/hive/llap/cache/FileCache.java 
PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cache/FileCacheCleanupThread.java
 PRE-CREATION 
  llap-server/src/java/org/apache/hadoop/hive/llap/cache/LlapDataBuffer.java 
d1a961c 
  llap-server/src/java/org/apache/hadoop/hive/llap/cache/LowLevelCacheImpl.java 
ea458ca 
  
llap-server/src/java/org/apache/hadoop/hive/llap/cache/SerDeLowLevelCacheImpl.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapInputFormat.java
 290624d 
  llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapIoImpl.java 
8048624 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/ColumnVectorProducer.java
 db86296 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/EncodedDataConsumer.java
 6b54b30 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/GenericColumnVectorProducer.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcColumnVectorProducer.java
 2e9b9c3 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
 29f1ba8 
  llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/ReadPipeline.java 
4e1b851 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/LineRrOffsetReader.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/PassThruOffsetReader.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/ConsumerFileMetadata.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/ConsumerStripeMetadata.java
 PRE-CREATION 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcFileMetadata.java
 70cba05 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcMetadataCache.java
 3f4f43b 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcStripeMetadata.java
 6f0b9ff 
  orc/src/java/org/apache/orc/OrcUtils.java dc83b9c 
  orc/src/java/org/apache/orc/impl/PhysicalWriter.java 83742e4 
  orc/src/java/org/apache/orc/impl/RecordReaderImpl.java 975804b 
  orc/src/java/org/apache/orc/impl/TreeReaderFactory.java 3ddafba 
  orc/src/java/org/apache/orc/impl/WriterImpl.java 518a5f7 
  ql/src/java/org/apache/hadoop/hive/llap/DebugUtils.java 3d81e43 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java 601ad08 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 0161c20 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapOperator.java 
323419c 
  ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveRecordReader.java ba25573 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 94fcd60 
  
ql/src/java/org/apache/hadoop/hive/ql/io/LlapWrappableInputFormatInterface.java 
66e1f90 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 361901e 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 075c3b4 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/CacheChunk.java 2325140 
  
ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedTreeReaderFactory.java
 d5f5f9d 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/StreamUtils.java cef765c 
  ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanMapper.java 
09e4a47 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/LlapDecider.java 
e1fb8fb 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
2a99274 
  ql/src/java/org/apache/hadoop/hive/ql/plan/VectorPartitionDesc.java 7bf70c6 
  ql/src/test/queries/clientpositive/llap_text.q PRE-CREATION 
  ql/src/test/results/clientpositive/llap_text.q.out PRE-CREATION 
  storage-api/src/java/org/apache/hadoop/hive/common/io/DiskRangeList.java 
62f9d8e 
  
storage-api/src/java/org/apache/hadoop/hive/common/io/encoded/EncodedColumnBatch.java
 13772c9 

Diff: https://reviews.apache.org/r/55247/diff/


Testing
---


Thanks,

Sergey Shelukhin



Re: Review Request 55247: LLAP: use LLAP cache for non-columnar formats in a somewhat general way

2017-01-05 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55247/#review160668
---




llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/LineRrOffsetReader.java
 (line 30)


wrong


- Sergey Shelukhin


On Jan. 6, 2017, 2:29 a.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/55247/
> ---
> 
> (Updated Jan. 6, 2017, 2:29 a.m.)
> 
> 
> Review request for hive, Gopal V, Prasanth_J, and Siddharth Seth.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see JIRA
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
>   data/files/over10k.gz PRE-CREATION 
>   llap-client/src/java/org/apache/hadoop/hive/llap/io/api/LlapIo.java d82757f 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/IncrementalObjectSizeEstimator.java
>  3efbcc2 
>   llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java 
> d9d407d 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cache/EvictionDispatcher.java
>  b6fd3e3 
>   llap-server/src/java/org/apache/hadoop/hive/llap/cache/FileCache.java 
> PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cache/FileCacheCleanupThread.java
>  PRE-CREATION 
>   llap-server/src/java/org/apache/hadoop/hive/llap/cache/LlapDataBuffer.java 
> d1a961c 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cache/LowLevelCacheImpl.java 
> ea458ca 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/cache/SerDeLowLevelCacheImpl.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapInputFormat.java
>  290624d 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapIoImpl.java 
> 8048624 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/ColumnVectorProducer.java
>  db86296 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/EncodedDataConsumer.java
>  6b54b30 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/GenericColumnVectorProducer.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcColumnVectorProducer.java
>  2e9b9c3 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
>  29f1ba8 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/ReadPipeline.java 
> 4e1b851 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/LineRrOffsetReader.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/PassThruOffsetReader.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/ConsumerFileMetadata.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/ConsumerStripeMetadata.java
>  PRE-CREATION 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcFileMetadata.java
>  70cba05 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcMetadataCache.java
>  3f4f43b 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcStripeMetadata.java
>  6f0b9ff 
>   orc/src/java/org/apache/orc/OrcUtils.java dc83b9c 
>   orc/src/java/org/apache/orc/impl/PhysicalWriter.java 83742e4 
>   orc/src/java/org/apache/orc/impl/RecordReaderImpl.java 975804b 
>   orc/src/java/org/apache/orc/impl/TreeReaderFactory.java 3ddafba 
>   orc/src/java/org/apache/orc/impl/WriterImpl.java 518a5f7 
>   ql/src/java/org/apache/hadoop/hive/llap/DebugUtils.java 3d81e43 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java 601ad08 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 0161c20 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapOperator.java 
> 323419c 
>   ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveRecordReader.java 
> ba25573 
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 94fcd60 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/LlapWrappableInputFormatInterface.java
>  66e1f90 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 361901e 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 075c3b4 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/CacheChunk.java 
> 2325140 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedTreeReaderFactory.java
>  d5f5f9d 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/StreamUtils.java 
> cef765c 
>   
> ql/s

Re: Review Request 55156: Min-max runtime filtering

2017-01-05 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55156/#review160516
---



Looked over the first couple pages of the first patch. I'll take another look 
later.


common/src/java/org/apache/hadoop/hive/conf/HiveConf.java (line 2840)


This setting should probably default to false for the time being



orc/src/java/org/apache/orc/impl/RecordReaderImpl.java (line 30)


Looks like this is an unnecessary change - remove this.



ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeDynamicValueEvaluator.java 
(line 30)


Looks left over from cut and paste - can you change to 
"ExprNodeDynamicValueEvaluator"



ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeDynamicValueEvaluator.java 
(line 50)


Can you move the call to setConf() up to the constructor so it is just done 
once?



ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java (line 143)


This method takes in the extra conf parameter but then doesn't do anything 
with it; it should be passed to ExprNodeEvaluatorFactory.get().



ql/src/java/org/apache/hadoop/hive/ql/exec/ObjectCacheWrapper.java (line 38)


leave this comment out



ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ObjectCache.java (line 49)


leave this comment out



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DynamicValueRegistryTez.java 
(line 73)


Nit: add @Override



ql/src/java/org/apache/hadoop/hive/ql/exec/tez/ReduceRecordProcessor.java (line 
186)


ReduceRecordProcessor is missing the dynamicValueCache.release() call that 
MapRecordProcessor has.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/stats/annotation/StatsRulesProcFactory.java
 (line 1185)


I think this change can be removed if we use the appropriate call to 
OperatorFactory.getAndMakeChild() during DynamicPartitionPruningOptimization. 
I'll try to play with this.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/stats/annotation/StatsRulesProcFactory.java
 (line 2240)


same as above


- Jason Dere


On Jan. 4, 2017, 10:12 p.m., Deepak Jaiswal wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/55156/
> ---
> 
> (Updated Jan. 4, 2017, 10:12 p.m.)
> 
> 
> Review request for hive, Gopal V, Gunther Hagleitner, Jason Dere, Prasanth_J, 
> and Rajesh Balamohan.
> 
> 
> Bugs: HIVE-15269
> https://issues.apache.org/jira/browse/HIVE-15269
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-15269 min-max runtime filtering.
> The patch also contains the patch for HIVE-15270.
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 47db0c0 
>   itests/src/test/resources/testconfiguration.properties 1cebc70 
>   orc/src/java/org/apache/orc/impl/RecordReaderImpl.java 975804b 
>   orc/src/test/org/apache/orc/impl/TestRecordReaderImpl.java cdd62ac 
>   pom.xml 376197e 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/AbstractMapJoinOperator.java 
> 69ba4a2 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java 940f2dd 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DynamicValueRegistry.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeColumnEvaluator.java 
> 24c8281 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeConstantDefaultEvaluator.java
>  89a75eb 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeConstantEvaluator.java 
> 4fe72a0 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeDynamicValueEvaluator.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluator.java b8d6ab7 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorFactory.java 
> 0d03d8f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorHead.java 
> 42685fb 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeEvaluatorRef.java 
> 0a6b66a 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeFieldEvaluator.java 
> ff32626 
>   
> ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java 
> 221abd9 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java bd0d28c 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java 46f0ecd 
>   ql/src/java/

Re: Review Request 55154: HIVE-15366: REPL LOAD & DUMP support for incremental INSERT events

2017-01-05 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/55154/
---

(Updated Jan. 6, 2017, 6:43 a.m.)


Review request for hive, Daniel Dai, Sushanth Sowmyan, and Thejas Nair.


Bugs: HIVE-15366
https://issues.apache.org/jira/browse/HIVE-15366


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-15366


Diffs (updated)
-

  
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestDbNotificationListener.java
 39356ae 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestReplicationScenarios.java
 e29aa22 
  metastore/if/hive_metastore.thrift 79592ea 
  metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 1311b20 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/InsertEventRequestData.java
 39a607d 
  metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb ebed504 
  metastore/src/java/org/apache/hadoop/hive/metastore/events/InsertEvent.java 
d9a42a7 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/InsertMessage.java
 fe747df 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/MessageFactory.java
 fdb8e80 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/json/JSONInsertMessage.java
 bd9f9ec 
  
metastore/src/java/org/apache/hadoop/hive/metastore/messaging/json/JSONMessageFactory.java
 9954902 
  ql/src/java/org/apache/hadoop/hive/ql/exec/ReplCopyTask.java 4c0f817 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java be5a6a9 
  ql/src/java/org/apache/hadoop/hive/ql/parse/EximUtil.java 6e9602f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java 
f61274b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java 
5561e06 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java 
9b83407 

Diff: https://reviews.apache.org/r/55154/diff/


Testing
---


Thanks,

Vaibhav Gumashta



[jira] [Created] (HIVE-15550) fix arglist logging in schematool

2017-01-05 Thread anishek (JIRA)
anishek created HIVE-15550:
--

 Summary: fix arglist logging in schematool
 Key: HIVE-15550
 URL: https://issues.apache.org/jira/browse/HIVE-15550
 Project: Hive
  Issue Type: Improvement
  Components: Beeline
Affects Versions: 2.1.1
Reporter: anishek
Assignee: anishek
Priority: Minor


In DEBUG mode, schemaTool prints the password to the log file.
The same happens if the user includes the --verbose option.
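
A hedged sketch of the kind of fix this implies: mask password-style arguments
before the argument list is logged. The flag names below are assumptions, not
the actual schemaTool options.

    import java.util.Arrays;

    public class ArgMaskingSketch {
      // return a copy of the args with any value following a password flag replaced
      static String[] maskPasswords(String[] args) {
        String[] masked = args.clone();
        for (int i = 0; i < masked.length - 1; i++) {
          if (masked[i].equalsIgnoreCase("-passWord") || masked[i].equalsIgnoreCase("--passWord")) {
            masked[i + 1] = "***";  // never write the real value to the log
          }
        }
        return masked;
      }

      public static void main(String[] args) {
        String[] cmd = {"-dbType", "mysql", "-passWord", "secret", "--verbose"};
        System.out.println("DEBUG arglist: " + Arrays.toString(maskPasswords(cmd)));
      }
    }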



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)