https://github.com/apache/hive

2016-12-06 Thread jeyaram A
Hi Team,

I found the below repository on GitHub. Can you please let me know how to use
its master.zip in a Hadoop environment? I am using it for testing purposes.






https://github.com/apache/hive


Thanks & Regards,
Jeyaram A
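
For reference, and assuming the goal is simply to build and try the zipped 
source: Hive builds with Maven, and the binary tarball it produces can then be 
deployed on the Hadoop cluster. A minimal sketch (the unzipped directory name 
may differ):

$ unzip master.zip && cd hive-master
$ mvn clean package -DskipTests -Pdist
$ ls packaging/target/   # the apache-hive-*-bin.tar.gz distribution lands here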


Re: Review Request 54236: HIVE-15296 AM may lose task failures and not reschedule when scheduling to LLAP

2016-12-06 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54236/
---

(Updated Dec. 7, 2016, 2:28 a.m.)


Review request for hive, Prasanth_J and Siddharth Seth.


Repository: hive-git


Description
---

see jira


Diffs (updated)
-

  llap-client/src/java/org/apache/hadoop/hive/llap/ext/LlapTaskUmbilicalExternalClient.java 4933fb3 
  llap-common/src/gen/protobuf/gen-java/org/apache/hadoop/hive/llap/daemon/rpc/LlapDaemonProtocolProtos.java 0581681 
  llap-common/src/java/org/apache/hadoop/hive/llap/DaemonId.java ea47330 
  llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapTaskUmbilicalProtocol.java 9549567 
  llap-common/src/protobuf/LlapDaemonProtocol.proto 2e74c18 
  llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/AMReporter.java 04c28cb 
  llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/ContainerRunnerImpl.java 91a321d 
  llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java 752e6ee 
  llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskCommunicator.java 0deebf9 

Diff: https://reviews.apache.org/r/54236/diff/


Testing
---


Thanks,

Sergey Shelukhin



Re: Review Request 54451: HIVE-15367: CTAS with LOCATION should write temp data under location directory rather than database location

2016-12-06 Thread Sahil Takiar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54451/
---

(Updated Dec. 7, 2016, 1:59 a.m.)


Review request for hive, Sergio Pena and Yongzhi Chen.


Bugs: HIVE-15367
https://issues.apache.org/jira/browse/HIVE-15367


Repository: hive-git


Description
---

CTAS with LOCATION should write temp data under location directory rather than 
database location
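
For concreteness, the kind of statement this change targets looks like the 
following (table names and path are illustrative); per the summary, the temp 
data for such a query should be staged under the given LOCATION rather than 
under the database's location:

    CREATE TABLE dest
    LOCATION 's3a://bucket/warehouse/dest'
    AS SELECT key, value FROM src;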


Diffs (updated)
-

  itests/hive-blobstore/src/test/queries/clientpositive/ctas.q PRE-CREATION 
  itests/hive-blobstore/src/test/results/clientpositive/ctas.q.out PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java c88dbc8 
  ql/src/test/queries/clientpositive/ctas_uses_table_location.q PRE-CREATION 
  ql/src/test/results/clientpositive/ctas_uses_table_location.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/encrypted/encryption_ctas.q.out 5b503ac 

Diff: https://reviews.apache.org/r/54451/diff/


Testing (updated)
---

Added qtests for hive-blobstore and for the regular qtest suite


Thanks,

Sahil Takiar



Re: Review Request 54443: HIVE-15149: Add additional information to ATSHook for Tez UI

2016-12-06 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54443/
---

(Updated Dec. 7, 2016, 1:19 a.m.)


Review request for hive and Sergey Shelukhin.


Changes
---

Remove queryName changes from patch


Bugs: HIVE-15149
https://issues.apache.org/jira/browse/HIVE-15149


Repository: hive-git


Description
---

Add additional fields to ATS event:
- Hive conf
- Address of Hive instance (HS2/CLI)
- Address of HS2 client
- Hive instance type (HS2/CLI)
- Execution Mode (MR/Tez/Spark/LLAP)
- Tables read/written
- PerfLogger times
- Queue


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 757c60c 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 8ee5c04 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 3b4cc2c 

Diff: https://reviews.apache.org/r/54443/diff/


Testing
---


Thanks,

Jason Dere



[jira] [Created] (HIVE-15377) Driver::acquireWriteIds can be expensive trying to get details from MS

2016-12-06 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HIVE-15377:
---

 Summary: Driver::acquireWriteIds can be expensive trying to get 
details from MS
 Key: HIVE-15377
 URL: https://issues.apache.org/jira/browse/HIVE-15377
 Project: Hive
  Issue Type: Sub-task
Affects Versions: hive-14535
Reporter: Rajesh Balamohan


Branch: hive-14535

Populated data in the TPC-DS web_returns table. Select queries take a long time 
trying to acquire writeIds from the metastore (MS).

{noformat}
hive> select * from web_returns_hive_commit limit 10;
select * from web_returns_hive_commit limit 10
...

Time taken: 52.494 seconds, Fetched: 10 row(s)
{noformat}

Without the commit feature, the same query executes in ~6 seconds. 

Attaching the stacktrace for reference:
{noformat}
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:143)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:112)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:70)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:283)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1799)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
- locked <0x000223192988> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:161)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:114)
at com.jolbox.bonecp.PreparedStatementHandle.executeQuery(PreparedStatementHandle.java:174)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeQuery(ParamLoggingPreparedStatement.java:375)
at org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:552)
at org.datanucleus.store.rdbms.scostore.ElementContainerStore.getSize(ElementContainerStore.java:660)
at org.datanucleus.store.rdbms.scostore.ElementContainerStore.size(ElementContainerStore.java:606)
at org.datanucleus.store.types.wrappers.backed.List.size(List.java:542)
- locked <0x00078e8b4f60> (a org.datanucleus.store.types.wrappers.backed.List)
at org.apache.hadoop.hive.metastore.ObjectStore.convertToOrders(ObjectStore.java:1665)
at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1710)
at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1725)
at org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:1578)
at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1274)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy47.getTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_valid_write_ids(HiveMetaStore.java:6874)
at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy50.get_valid_write_ids(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getValidWriteIds(HiveMetaStoreClient.java:2480)
at sun.reflect.GeneratedMethodAccessor81.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:162)
at com.sun.proxy.$Proxy51.getValidWriteIds(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.getValidWriteIdsForTable(Hive.java:4173)
at org.apache.hadoop.hive.ql.Driver.acquireWriteIds(Driver.java:1592)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1517)
...
{noformat}

[jira] [Created] (HIVE-15376) Improve heartbeater scheduling for transactions

2016-12-06 Thread Wei Zheng (JIRA)
Wei Zheng created HIVE-15376:


 Summary: Improve heartbeater scheduling for transactions
 Key: HIVE-15376
 URL: https://issues.apache.org/jira/browse/HIVE-15376
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 2.2.0
Reporter: Wei Zheng
Assignee: Wei Zheng






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] hive pull request #110: HIVE-15124. Fix OrcInputFormat to use reader's schem...

2016-12-06 Thread omalley
Github user omalley closed the pull request at:

https://github.com/apache/hive/pull/110


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Trace Key-Value pairs

2016-12-06 Thread Robert Grandl
Rajesh,

Thanks so much for your answers. However, I am struggling to get the right 
information.

As you mentioned, the keys and values are present in ReduceSinkOperator.java, 
but I am having a hard time printing their content.

For the key: I am trying to print it in ReduceSinkOperator.java -> toHiveKey(), 
after

    BinaryComparable key = (BinaryComparable) keySerializer.serialize(obj, keyObjectInspector);

by doing something like:

    StructObjectInspector soi = (StructObjectInspector) keyObjectInspector;
    for (Object element : soi.getStructFieldsDataAsList(obj)) {
      LOG.info("key is: " + String.valueOf(element));
    }

For the value: in ReduceSinkOperator.java -> process(), after the value is 
computed as

    BytesWritable value = makeValueWritable(row);

I am trying to apply the same mechanism as before:

    StructObjectInspector soi = (StructObjectInspector) valueObjectInspector;
    for (Object element : soi.getStructFieldsDataAsList(value)) {
      LOG.info("value is: " + String.valueOf(element));
    }

but here there is a cast problem, since value is a BytesWritable rather than a 
struct that the StructObjectInspector can inspect. I also tried to print the 
value in the makeValueWritable() method, but there the value content seems to 
be the same as the key content.

Do you have a sense of whether I am doing the right thing, or what else I 
could do to extract the proper content for both key and value from 
ReduceSinkOperator?

Thanks again for your help,
Robert
 

On Sunday, December 4, 2016 8:15 PM, Rajesh Balamohan wrote:

Hi Robert,

Tez deals with bytes and does not understand whether the data is coming from 
Hive/Pig/Cascading etc., so if you print that content at the Tez layer you 
will mostly get binary data. For Hive, the key is 
org.apache.hadoop.hive.ql.io.HiveKey and the value is 
org.apache.hadoop.io.BytesWritable; printing these would just churn out binary 
contents. You can print them from the below locations in Tez.

Writing key/values:
https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/sort/impl/PipelinedSorter.java#L375

Reading key/values:
https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/ValuesIterator.java#L186
https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/ValuesIterator.java#L213

If you are interested in the real key/value details, you may want to print 
them from the Hive side; that may be best answered on the Hive community 
mailing list. At a very high level, in Hive the key gets converted to a 
HiveKey, which is a wrapper around BytesWritable. You may want to print the 
key/value details using the relevant object inspector in Hive, e.g. around 
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java#L526 
where you would get the relevant object inspector and print out the contents. 
This is just an example.
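
To make that concrete, here is a minimal sketch of that object-inspector 
pattern (illustrative only: structToString is not a Hive method, and where you 
hook it in is up to you). It walks the fields of the pre-serialized object, 
which also sidesteps the BytesWritable cast problem above, since it never 
touches the serialized form:

    import java.util.List;
    import org.apache.hadoop.hive.serde2.objectinspector.StructField;
    import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

    // Sketch: render a struct object via its matching inspector. 'obj' must
    // be the pre-serialized object, e.g. the 'obj' passed to toHiveKey().
    static String structToString(Object obj, StructObjectInspector oi) {
      StringBuilder sb = new StringBuilder("{");
      List<? extends StructField> fields = oi.getAllStructFieldRefs();
      for (int i = 0; i < fields.size(); i++) {
        if (i > 0) {
          sb.append(", ");
        }
        StructField f = fields.get(i);
        // getStructFieldData pulls this field's value out of the struct
        sb.append(f.getFieldName()).append('=')
          .append(String.valueOf(oi.getStructFieldData(obj, f)));
      }
      return sb.append('}').toString();
    }

For the value, the same idea applies, but it has to run on the row/value 
object before it is serialized into the BytesWritable, not on the 
BytesWritable itself.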
~Rajesh.B

On Mon, Dec 5, 2016 at 5:43 AM, Robert Grandl wrote:

Hi guys,

I am running Hive atop Tez and running several TPC-DS / TPC-H queries. I am 
trying to print the key/value pairs received as input by each vertex and 
generated as output accordingly.

However, looking at the Hive / Tez code, it seems the keys and values are 
converted to Object types and their serialized forms are used from then on. I 
would like to print the original content of the (key, value) pairs both when 
they are generated and when they are received by a vertex (just for the 
purpose of understanding).

Could you please give me some hints on how I can do that?

Thank you,
Robert





   

Re: Review Request 54236: HIVE-15296 AM may lose task failures and not reschedule when scheduling to LLAP

2016-12-06 Thread Gopal V

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54236/#review158235
---




llap-client/src/java/org/apache/hadoop/hive/llap/ext/LlapTaskUmbilicalExternalClient.java
 (line 191)


This is an async update to an object in the pending events queue.

Possible sync issues?


- Gopal V


On Nov. 30, 2016, 11:39 p.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/54236/
> ---
> 
> (Updated Nov. 30, 2016, 11:39 p.m.)
> 
> 
> Review request for hive, Prasanth_J and Siddharth Seth.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see jira
> 
> 
> Diffs
> -
> 
>   llap-client/src/java/org/apache/hadoop/hive/llap/ext/LlapTaskUmbilicalExternalClient.java 4933fb3 
>   llap-common/src/gen/protobuf/gen-java/org/apache/hadoop/hive/llap/daemon/rpc/LlapDaemonProtocolProtos.java 0581681 
>   llap-common/src/java/org/apache/hadoop/hive/llap/DaemonId.java ea47330 
>   llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapTaskUmbilicalProtocol.java 9549567 
>   llap-common/src/protobuf/LlapDaemonProtocol.proto 2e74c18 
>   llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/AMReporter.java 04c28cb 
>   llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/ContainerRunnerImpl.java 91a321d 
>   llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java 752e6ee 
>   llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskCommunicator.java 0deebf9 
> 
> Diff: https://reviews.apache.org/r/54236/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[GitHub] hive pull request #121: HIVE-15375. Backport ORC-115 to Hive.

2016-12-06 Thread omalley
GitHub user omalley opened a pull request:

https://github.com/apache/hive/pull/121

HIVE-15375. Backport ORC-115 to Hive.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omalley/hive hive-15375

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #121


commit 700820172bc8d1d2134ccbe0f2270ce7b629a011
Author: Owen O'Malley 
Date:   2016-12-06T22:54:28Z

HIVE-15375. Backport ORC-115 to Hive.






[jira] [Created] (HIVE-15375) Port ORC-115 to storage-api

2016-12-06 Thread Owen O'Malley (JIRA)
Owen O'Malley created HIVE-15375:


 Summary: Port ORC-115 to storage-api
 Key: HIVE-15375
 URL: https://issues.apache.org/jira/browse/HIVE-15375
 Project: Hive
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley


Currently, VectorizedRowBatch.toString() assumes that all BytesColumnVectors 
use the internal buffer for all of their values. This leads to incorrect 
strings in many common cases.
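
For illustration, a sketch of the per-row handling a correct rendering needs 
(an assumption about the shape of the fix, not the actual ORC-115 patch): each 
row of a BytesColumnVector carries its own (buffer, start, length) triple, so 
stringification must use all three instead of assuming one shared internal 
buffer.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;

    // Sketch: render one cell of a BytesColumnVector.
    static String cellToString(BytesColumnVector col, int row) {
      if (col.isRepeating) {
        row = 0;  // repeating vectors keep the single value at row 0
      }
      if (!col.noNulls && col.isNull[row]) {
        return "null";
      }
      // vector[row] may point at the shared buffer or a caller-owned byte[];
      // start[row] and length[row] say where this row's bytes actually live.
      return new String(col.vector[row], col.start[row], col.length[row],
          StandardCharsets.UTF_8);
    }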





Review Request 54451: HIVE-15367: CTAS with LOCATION should write temp data under location directory rather than database location

2016-12-06 Thread Sahil Takiar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54451/
---

Review request for hive.


Bugs: HIVE-15367
https://issues.apache.org/jira/browse/HIVE-15367


Repository: hive-git


Description
---

CTAS with LOCATION should write temp data under location directory rather than 
database location


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java c88dbc8 

Diff: https://reviews.apache.org/r/54451/diff/


Testing
---


Thanks,

Sahil Takiar



[jira] [Created] (HIVE-15374) Hive column comments disappearing/being replaced by "from deserializer"

2016-12-06 Thread Naveen Gangam (JIRA)
Naveen Gangam created HIVE-15374:


 Summary: Hive column comments disappearing/being replaced by "from 
deserializer"
 Key: HIVE-15374
 URL: https://issues.apache.org/jira/browse/HIVE-15374
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 2.0.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam


After creating a table in Hive with column comments, running SHOW CREATE TABLE 
or DESCRIBE [FORMATTED] on the same table gives "from deserializer" instead of 
the original comments.

CREATE TABLE `test`(
  `stringid` string COMMENT 'string id', 
  `value` string COMMENT 'description')
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' 
WITH SERDEPROPERTIES ( 
  'input.regex'='(.{1})');

The comments appear to be stored correctly in the HMS backend DB. Just the 
fetching of this metadata seems to be incorrect.
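
For completeness, the symptom can be reproduced with either of:

SHOW CREATE TABLE test;
DESCRIBE FORMATTED test;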





Re: Review Request 54393: HIVE-15361: INSERT dynamic partition on S3 fails with a MoveTask failure

2016-12-06 Thread Illya Yalovyy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54393/#review158221
---


Ship it!




Ship It!

- Illya Yalovyy


On Dec. 6, 2016, 8:13 p.m., Sergio Pena wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/54393/
> ---
> 
> (Updated Dec. 6, 2016, 8:13 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-15361
> https://issues.apache.org/jira/browse/HIVE-15361
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Problem:
> - DynamicPartitionCtx and ListBucketingCtx objects weren't set on the new 
> MoveWork created when merging the two MoveWork objects from the 
> ConditionalTask.
> 
> Solution
> - Set the DynamicPartitionCtx and ListBucketingCtx objects to the new 
> MoveWork created for the S3 optimization.
> 
> Other changes
> - Merge the MoveWork objects inside the createCondTask() method for better 
> error handling.
> - Only merge the MoveWork related to the moveOnlyMoveTask. The MoveWork from 
> the mergeAndMoveMoveTask may cause other issues that are not correctly tested.
> - Two new private methods are added to check and merge the conditional 
> input/output tasks to the linked MoveWork.
> 
> 
> Diffs
> -
> 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_dynamic_partitions.q PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_table.q 25e2e7007ff539223d9244ca9822aa65d1441eb0 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_dynamic_partitions.q PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_table.q 846b2b113f09a74a3f05c13ffb56163e81dc1e8e 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_into_dynamic_partitions.q.out PRE-CREATION 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_into_table.q.out fbb52c132a331aefe870264e035c397078f3c82e 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_directory.q.out 9f575a66ecefc3933b16dff554bdcc1c1f6420ee 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_dynamic_partitions.q.out PRE-CREATION 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_table.q.out c725c96cbb6b0374e67308a54204c7c25e827567 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java adc1188f09c8019a8aa60403d5813d6fa4509ceb 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadDesc.java bcd3125ab4ad20c00fec565e5004ee200c0187d5 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadFileDesc.java 9a868a04ce93d5c2ee75b5c6e96a1401cea93133 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadTableDesc.java 771a919ccd0bd75fe6197299ae057647ece89a7e 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/MoveWork.java 9f498c7fb88a7a9f77b8c6739c097a2b26b0c617 
>   ql/src/test/org/apache/hadoop/hive/ql/optimizer/TestGenMapRedUtilsCreateConditionalTask.java e6ec44504685bd9e53f158cc359b8a7b79fd0166 
> 
> Diff: https://reviews.apache.org/r/54393/diff/
> 
> 
> Testing
> ---
> 
> All itests/hive-blobstore tests run.
> 
> Added new blobstore tests:
> - insert_into_dynamic_partitions.q
> - insert_overwrite_dynamic_partitions.q
> 
> Waiting for HiveQA to run the rest of the q-tests.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>



[jira] [Created] (HIVE-15373) Transaction management isn't thread-safe

2016-12-06 Thread Alexander Kolbasov (JIRA)
Alexander Kolbasov created HIVE-15373:
-

 Summary: Transaction management isn't thread-safe
 Key: HIVE-15373
 URL: https://issues.apache.org/jira/browse/HIVE-15373
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Alexander Kolbasov


ObjectStore.java has several important calls which are not thread-safe:

* openTransaction()
* commitTransaction()
* rollbackTransaction()

These should be made thread-safe.
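
One conventional approach (a sketch only, not necessarily the eventual patch 
for this JIRA) is to confine the per-call transaction nesting state to the 
calling thread, e.g. with a ThreadLocal instead of a shared instance field:

    // Illustrative sketch, not the real ObjectStore code.
    class TxnState {
      private final ThreadLocal<Integer> openTransactionCalls =
          ThreadLocal.withInitial(() -> 0);

      public void openTransaction() {
        int depth = openTransactionCalls.get() + 1;
        openTransactionCalls.set(depth);
        if (depth == 1) {
          // begin the underlying datastore transaction here
        }
      }

      public void commitTransaction() {
        int depth = openTransactionCalls.get() - 1;
        openTransactionCalls.set(depth);
        if (depth == 0) {
          // commit the underlying datastore transaction here
        }
      }
    }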






Re: Review Request 54393: HIVE-15361: INSERT dynamic partition on S3 fails with a MoveTask failure

2016-12-06 Thread Sergio Pena


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java, line 
> > 1760
> > 
> >
> > Could this method be made protected and covered with unit tests?

I added some unit tests to this method. 
I did not add the ones that have a LoadTableWork on the MoveWork, because a 
static method is used to get the table path. Nevertheless, that path is 
covered by the q-tests, where real tables are created on S3.


- Sergio


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54393/#review158069
---


On Dec. 6, 2016, 8:13 p.m., Sergio Pena wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/54393/
> ---
> 
> (Updated Dec. 6, 2016, 8:13 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-15361
> https://issues.apache.org/jira/browse/HIVE-15361
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Problem:
> - DynamicPartitionCtx and ListBucketingCtx objects weren't set on the new 
> MoveWork created when merging the two MoveWork objects from the 
> ConditionalTask.
> 
> Solution
> - Set the DynamicPartitionCtx and ListBucketingCtx objects to the new 
> MoveWork created for the S3 optimization.
> 
> Other changes
> - Merge the MoveWork objects inside the createCondTask() method for better 
> error handling.
> - Only merge the MoveWork related to the moveOnlyMoveTask. The MoveWork from 
> the mergeAndMoveMoveTask may cause other issues that are not correctly tested.
> - Two new private methods are added to check and merge the conditional 
> input/output tasks to the linked MoveWork.
> 
> 
> Diffs
> -
> 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_dynamic_partitions.q PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_table.q 25e2e7007ff539223d9244ca9822aa65d1441eb0 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_dynamic_partitions.q PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_table.q 846b2b113f09a74a3f05c13ffb56163e81dc1e8e 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_into_dynamic_partitions.q.out PRE-CREATION 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_into_table.q.out fbb52c132a331aefe870264e035c397078f3c82e 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_directory.q.out 9f575a66ecefc3933b16dff554bdcc1c1f6420ee 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_dynamic_partitions.q.out PRE-CREATION 
>   itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_table.q.out c725c96cbb6b0374e67308a54204c7c25e827567 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java adc1188f09c8019a8aa60403d5813d6fa4509ceb 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadDesc.java bcd3125ab4ad20c00fec565e5004ee200c0187d5 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadFileDesc.java 9a868a04ce93d5c2ee75b5c6e96a1401cea93133 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/LoadTableDesc.java 771a919ccd0bd75fe6197299ae057647ece89a7e 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/MoveWork.java 9f498c7fb88a7a9f77b8c6739c097a2b26b0c617 
>   ql/src/test/org/apache/hadoop/hive/ql/optimizer/TestGenMapRedUtilsCreateConditionalTask.java e6ec44504685bd9e53f158cc359b8a7b79fd0166 
> 
> Diff: https://reviews.apache.org/r/54393/diff/
> 
> 
> Testing
> ---
> 
> All itests/hive-blobstore tests run.
> 
> Added new blobstore tests:
> - insert_into_dynamic_partitions.q
> - insert_overwrite_dynamic_partitions.q
> 
> Waiting for HiveQA to run the rest of the q-tests.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>



Review Request 54443: HIVE-15149: Add additional information to ATSHook for Tez UI

2016-12-06 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54443/
---

Review request for hive and Sergey Shelukhin.


Bugs: HIVE-15149
https://issues.apache.org/jira/browse/HIVE-15149


Repository: hive-git


Description
---

Add additional fields to ATS event:
- Hive conf
- Address of Hive instance (HS2/CLI)
- Address of HS2 client
- Hive instance type (HS2/CLI)
- Execution Mode (MR/Tez/Spark/LLAP)
- Tables read/written
- PerfLogger times
- Queue


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 757c60c 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java 8ee5c04 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 3b4cc2c 

Diff: https://reviews.apache.org/r/54443/diff/


Testing
---


Thanks,

Jason Dere



Re: Review Request 54393: HIVE-15361: INSERT dynamic partition on S3 fails with a MoveTask failure

2016-12-06 Thread Sergio Pena

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54393/
---

(Updated Dec. 6, 2016, 8:13 p.m.)


Review request for hive.


Changes
---

Addressed all of Illya's comments.


Bugs: HIVE-15361
https://issues.apache.org/jira/browse/HIVE-15361


Repository: hive-git


Description
---

Problem:
- DynamicPartitionCtx and ListBucketingCtx objects weren't set on the new 
MoveWork created when merging the two MoveWork objects from the ConditionalTask.

Solution
- Set the DynamicPartitionCtx and ListBucketingCtx objects to the new MoveWork 
created for the S3 optimization.

Other changes
- Merge the MoveWork objects inside the createCondTask() method for better 
error handling.
- Only merge the MoveWork related to the moveOnlyMoveTask. The MoveWork from 
the mergeAndMoveMoveTask may cause other issues that are not correctly tested.
- Two new private methods are added to check and merge the conditional 
input/output tasks to the linked MoveWork.


Diffs (updated)
-

  itests/hive-blobstore/src/test/queries/clientpositive/insert_into_dynamic_partitions.q PRE-CREATION 
  itests/hive-blobstore/src/test/queries/clientpositive/insert_into_table.q 25e2e7007ff539223d9244ca9822aa65d1441eb0 
  itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_dynamic_partitions.q PRE-CREATION 
  itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_table.q 846b2b113f09a74a3f05c13ffb56163e81dc1e8e 
  itests/hive-blobstore/src/test/results/clientpositive/insert_into_dynamic_partitions.q.out PRE-CREATION 
  itests/hive-blobstore/src/test/results/clientpositive/insert_into_table.q.out fbb52c132a331aefe870264e035c397078f3c82e 
  itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_directory.q.out 9f575a66ecefc3933b16dff554bdcc1c1f6420ee 
  itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_dynamic_partitions.q.out PRE-CREATION 
  itests/hive-blobstore/src/test/results/clientpositive/insert_overwrite_table.q.out c725c96cbb6b0374e67308a54204c7c25e827567 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java adc1188f09c8019a8aa60403d5813d6fa4509ceb 
  ql/src/java/org/apache/hadoop/hive/ql/plan/LoadDesc.java bcd3125ab4ad20c00fec565e5004ee200c0187d5 
  ql/src/java/org/apache/hadoop/hive/ql/plan/LoadFileDesc.java 9a868a04ce93d5c2ee75b5c6e96a1401cea93133 
  ql/src/java/org/apache/hadoop/hive/ql/plan/LoadTableDesc.java 771a919ccd0bd75fe6197299ae057647ece89a7e 
  ql/src/java/org/apache/hadoop/hive/ql/plan/MoveWork.java 9f498c7fb88a7a9f77b8c6739c097a2b26b0c617 
  ql/src/test/org/apache/hadoop/hive/ql/optimizer/TestGenMapRedUtilsCreateConditionalTask.java e6ec44504685bd9e53f158cc359b8a7b79fd0166 

Diff: https://reviews.apache.org/r/54393/diff/


Testing
---

All itests/hive-blobstore tests run.

Added new blobstore tests:
- insert_into_dynamic_partitions.q
- insert_overwrite_dynamic_partitions.q

Waiting for HiveQA to run the rest of the q-tests.


Thanks,

Sergio Pena



Re: Review Request 54393: HIVE-15361: INSERT dynamic partition on S3 fails with a MoveTask failure

2016-12-06 Thread Sergio Pena


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > itests/hive-blobstore/src/test/queries/clientpositive/insert_into_dynamic_partitions.q,
> >  lines 15-16
> > 
> >
> > Why is an explicit ADD PARTITION statement required? I think INSERT INTO 
> > will create the missing partitions.

That was part of the testing I did when I found the issue, but it is not 
needed. I will remove it to keep the type of testing clear.


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_dynamic_partitions.q,
> >  line 25
> > 
> >
> > Will "SHOW PARTITIONS" be a good validation in this case?

That's a good one. I'll add it.


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java, line 
> > 1763
> > 
> >
> > Should we take into account HIVE_BLOBSTORE_USE_BLOBSTORE_AS_SCRATCHDIR?

It's not necessary. This is just an optimization for when the two MoveTasks 
are on a blobstore; the condition in the method verifies that:

return condOutputPath.equals(linkedSourcePath)
&& BlobStorageUtils.isBlobStoragePath(conf, condInputPath)
&& BlobStorageUtils.isBlobStoragePath(conf, linkedTargetPath);

If the user has HIVE_BLOBSTORE_USE_BLOBSTORE_AS_SCRATCHDIR=false, then the 
condInputPath might be on HDFS, and the merge won't happen.


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java, line 
> > 1784
> > 
> >
> > Just note: s3 to s3 copy is *more* efficient than hdfs to s3 copy.

This is only merging two MoveWorks into one, so the HDFS-to-S3 copy does not 
apply here, I think. I changed the comment, though:
* This is an optimization for BlobStore systems to avoid doing two renames or 
copies that are not necessary.


> On Dec. 5, 2016, 11:11 p.m., Illya Yalovyy wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java, lines 
> > 1885-1894
> > 
> >
> > I feel like this logic should be inside "addDependentMoveTasks" method.

It is confusing. The "addDependentMoveTasks" method is used to link a MoveTask 
to the desired task. 
For instance: addDependentMoveTasks(moveTaskToLink, conf, moveOnlyMoveTask, 
dependencyTask);
The above method will link: 
  moveOnlyMoveTask -> moveTaskToLink -> otherChildTasks
  
  
But I don't want to do that with the optimization; I want to copy the 
moveTaskToLink child tasks to moveOnlyMoveTask instead, like:
  moveOnlyMoveTask -> otherChildTasks


On Dec. 5, 2016, 11:11 p.m., Sergio Pena wrote:
> > Could you also add a test with strict dynamic partitioning, like:
> > INSERT OVERWRITE TABLE t2 PARTITION(p1="1", p2) SELECT *, c1 AS p2 FROM t1;

Thanks. I will add it.


- Sergio


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54393/#review158069
---


On Dec. 5, 2016, 9:56 p.m., Sergio Pena wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/54393/
> ---
> 
> (Updated Dec. 5, 2016, 9:56 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-15361
> https://issues.apache.org/jira/browse/HIVE-15361
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Problem:
> - DynamicPartitionCtx and ListBucketingCtx objects weren't set on the new 
> MoveWork created when merging the two MoveWork objects from the 
> ConditionalTask.
> 
> Solution
> - Set the DynamicPartitionCtx and ListBucketingCtx objects to the new 
> MoveWork created for the S3 optimization.
> 
> Other changes
> - Merge the MoveWork objects inside the createCondTask() method for better 
> error handling.
> - Only merge the MoveWork related to the moveOnlyMoveTask. The MoveWork from 
> the mergeAndMoveMoveTask may cause other issues that are not correctly tested.
> - Two new private methods are added to check and merge the conditional 
> input/output tasks to the linked MoveWork.
> 
> 
> Diffs
> -
> 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_dynamic_partitions.q PRE-CREATION 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_into_table.q 25e2e7007ff539223d9244ca9822aa65d1441eb0 
>   itests/hive-blobstore/src/test/queries/clientpositive/insert_overwrite_dynamic_partitions.q PRE-CREATION 
>   
> 

[jira] [Created] (HIVE-15372) Flaky test: org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[stats_based_fetch_decision]

2016-12-06 Thread Jason Dere (JIRA)
Jason Dere created HIVE-15372:
-

 Summary: Flaky test: 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[stats_based_fetch_decision]
 Key: HIVE-15372
 URL: https://issues.apache.org/jira/browse/HIVE-15372
 Project: Hive
  Issue Type: Sub-task
Reporter: Jason Dere


Been failing for a while now: 
https://builds.apache.org/job/PreCommit-HIVE-Build/2446/testReport/





[jira] [Created] (HIVE-15371) Flaky test: org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[stats_based_fetch_decision]

2016-12-06 Thread Jason Dere (JIRA)
Jason Dere created HIVE-15371:
-

 Summary: Flaky test: 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[stats_based_fetch_decision]
 Key: HIVE-15371
 URL: https://issues.apache.org/jira/browse/HIVE-15371
 Project: Hive
  Issue Type: Sub-task
Reporter: Jason Dere


Been failing for a while now: 
https://builds.apache.org/job/PreCommit-HIVE-Build/2446/testReport/





Re: [VOTE] Apache Hive 2.1.1 Release Candidate 1

2016-12-06 Thread Ashutosh Chauhan
+1 Verified md5 on src and binary distribution. Built src distribution. All
looks good.

Thanks,
Ashutosh

On Fri, Dec 2, 2016 at 7:32 AM, Jesus Camacho Rodriguez  wrote:

> PMC Members,
>
> A reminder that we still need two +1 votes to release 2.1.1. Please test
> and vote!
>
> Thanks,
> Jesús
>
>
>
>
> On 12/1/16, 6:45 PM, "Jesus Camacho Rodriguez"  hortonworks.com on behalf of jcama...@apache.org> wrote:
>
> >Sergio,
> >
> >I used OSX 10.11.
> >
> >Maybe it has to do with the version used to verify the md5? Can you just 
> >try to verify manually?
> >
> >$ md5sum apache-hive-2.1.1-bin.tar.gz > apache-hive-2.1.1-bin.tar.gz.md5.self
> >$ diff -q apache-hive-2.1.1-bin.tar.gz.md5 apache-hive-2.1.1-bin.tar.gz.md5.self
> >
> >
> >About the KEYS, my key is not in the file you referred, I should have
> added it before.
> >You can find it here:
> >https://people.apache.org/keys/committer/jcamacho.asc
> >
> >Let me know if that solves your problem.
> >
> >--
> >Jesús
> >
> >
> >
> >On 11/30/16, 9:08 PM, "Sergio Pena"  wrote:
> >
> >>Jesus,
> >>
> >>I tried verifying the md5 and gpg signatures, but I get these errors:
> >>
> >>hive/packaging/target⟫ md5sum -c apache-hive-2.1.1-bin.tar.gz.md5
> >>apache-hive-2.1.1-bin.tar.gz: FAILED
> >>md5sum: WARNING: 1 computed checksum did NOT match
> >>
> >>hive/packaging/target⟫ gpg --verify apache-hive-2.1.1-bin.tar.gz.asc
> >>apache-hive-2.1.1-bin.tar.gz
> >>gpg: Signature made Tue 29 Nov 2016 01:57:04 PM CST
> >>gpg:using RSA key 931E4AB3C516B444
> >>gpg: Can't check signature: No public key
> >>
> >>I'm using Ubuntu, so I think the md5 output format differs between OSX and 
> >>Linux machines. I remember seeing this problem before. What OS did you use?
> >>
> >>for the GPG keys, I imported the KEYS file mentioned in the Wiki, but I
> >>still get that error. Any idea what I'm missing?
> >>
> >>On Tue, Nov 29, 2016 at 6:23 PM, Gary Gregory wrote:
> >>
> >>> FWIW, running 'mvn clean install' has been failing on Git master for a
> >>> long time on Windows. Will that ever be fixed?
> >>>
> >>> Gary
> >>>
> >>> On Tue, Nov 29, 2016 at 12:17 PM, Jesus Camacho Rodriguez <jcama...@apache.org> wrote:
> >>>
> >>> > Apache Hive 2.1.1 Release Candidate 1 is available here:
> >>> > http://people.apache.org/~jcamacho/hive-2.1.1-rc1/
> >>> >
> >>> > Maven artifacts are available here:
> >>> > https://repository.apache.org/content/repositories/orgapachehive-1066/
> >>> >
> >>> > Source tag for RC1 is at:
> >>> > https://github.com/apache/hive/releases/tag/release-2.1.1-rc1/
> >>> >
> >>> > Voting will conclude in 72 hours.
> >>> >
> >>> > Hive PMC Members: Please test and vote.
> >>> >
> >>> > Thanks.
> >>> >
> >>> >
> >>> >
> >>> >
> >>>
> >>>
> >>> --
> >>> E-Mail: garydgreg...@gmail.com | ggreg...@apache.org
> >>> Java Persistence with Hibernate, Second Edition
> >>> JUnit in Action, Second Edition
> >>> Spring Batch in Action
> >>> Blog: http://garygregory.wordpress.com
> >>> Home: http://garygregory.com/
> >>> Tweet! http://twitter.com/GaryGregory
> >>>
> >
> >
>
>
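
For anyone hitting the same "No public key" error above, the usual fix 
(standard GPG usage, not quoted from the thread) is to import the release 
manager's public key before verifying:

$ wget https://people.apache.org/keys/committer/jcamacho.asc
$ gpg --import jcamacho.asc
$ gpg --verify apache-hive-2.1.1-bin.tar.gz.asc apache-hive-2.1.1-bin.tar.gz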


[jira] [Created] (HIVE-15370) Include Join residual filter expressions in user level EXPLAIN

2016-12-06 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-15370:
--

 Summary: Include Join residual filter expressions in user level 
EXPLAIN
 Key: HIVE-15370
 URL: https://issues.apache.org/jira/browse/HIVE-15370
 Project: Hive
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
Priority: Minor








Compaction in hive

2016-12-06 Thread Nishant Aggarwal
Dear Hive Gurus,

I am looking for a practical solution for implementing *compaction* in
Hive (HiveServer2 version 1.1.0).

We have some external Hive tables on which we need to implement
compaction.

Merging the map files is one option, but it was turned down since it is very
CPU intensive.

I need your help on how to implement compaction, and on the pros and cons of
the available options.

Also, is it mandatory to have bucketing to implement compaction?

Request you to please help.

Thanks and Regards
Nishant Aggarwal, PMP
Cell No: +91 99588 94305
http://in.linkedin.com/pub/nishant-aggarwal/53/698/11b
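
For context, the compaction that ships with Hive is the ACID compactor, which 
in this version applies only to managed, bucketed ORC tables created with 
'transactional'='true', so it does not cover external tables directly (and 
bucketing is indeed required for such tables). A minimal illustration, with 
hypothetical table and partition names:

    ALTER TABLE acid_tbl PARTITION (ds = '2016-12-06') COMPACT 'major';
    SHOW COMPACTIONS;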


[jira] [Created] (HIVE-15369) Extend column pruner to account for residual filter expression in Join operator

2016-12-06 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-15369:
--

 Summary: Extend column pruner to account for residual filter 
expression in Join operator
 Key: HIVE-15369
 URL: https://issues.apache.org/jira/browse/HIVE-15369
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Affects Versions: 2.2.0
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez


Introduced by HIVE-15251.

We need to extend the ColumnPruner logic to take into account residual filter 
expressions in the Join operator. Otherwise, the query will fail at execution 
time.

The issue can be reproduced as follows:

{code:sql}
set hive.strict.checks.cartesian.product=false;

CREATE TABLE test1 (key INT, value INT, col_1 STRING);
INSERT INTO test1 VALUES (NULL, NULL, 'None'), (98, NULL, 'None'),
(99, 0, 'Alice'), (99, 2, 'Mat'), (100, 1, 'Bob'), (101, 2, 'Car');

CREATE TABLE test2 (key INT, value INT, col_2 STRING);
INSERT INTO test2 VALUES (102, 2, 'Del'), (103, 2, 'Ema'),
(104, 3, 'Fli'), (105, NULL, 'None');


-- Complex condition and column projection
EXPLAIN
SELECT col_1, col_2
FROM test1 LEFT OUTER JOIN test2
ON (test1.value=test2.value
  OR test1.key=test2.key);

SELECT col_1, col_2
FROM test1 LEFT OUTER JOIN test2
ON (test1.value=test2.value
  OR test1.key=test2.key);
{code}


