[jira] [Created] (HIVE-18667) Materialized views: rewrites should be triggered without checks if the time.window=-1

2018-02-08 Thread Gopal V (JIRA)
Gopal V created HIVE-18667:
--

 Summary: Materialized views: rewrites should be triggered without 
checks if the time.window=-1
 Key: HIVE-18667
 URL: https://issues.apache.org/jira/browse/HIVE-18667
 Project: Hive
  Issue Type: Bug
  Components: Materialized views
Reporter: Gopal V


This is useful for checking Calcite failures to rewrite a query to a view, instead of 
having to work out whether the rewrite was skipped because of a timeout on the view 
freshness window.
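
For illustration, a minimal sketch of how this would be exercised, assuming the existing 
{{hive.materializedview.rewriting.time.window}} property is the one that gains the -1 
semantics (table, column, and view names are made up):

{code}
-- With the freshness window disabled, a missing rewrite points at Calcite, not at staleness
SET hive.materializedview.rewriting.time.window=-1;

EXPLAIN
SELECT col1, count(*) FROM ext_table GROUP BY col1;
-- expected: the plan scans the materialized view rather than ext_table
{code}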



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18666) Materialized view: "create materialized view enable rewrite" should fail if rewriting is not possible

2018-02-08 Thread Gopal V (JIRA)
Gopal V created HIVE-18666:
--

 Summary: Materialized view: "create materialized view enable rewrite" 
should fail if rewriting is not possible
 Key: HIVE-18666
 URL: https://issues.apache.org/jira/browse/HIVE-18666
 Project: Hive
  Issue Type: Bug
Reporter: Gopal V


{code}
CREATE MATERIALIZED VIEW TEST_AGG ENABLE REWRITE AS 
select ... from ext_Table;
{code}

works, but then 

{code}
alter materialized view TEST_AGG enable rewrite;
{code}

fails with {{SemanticException Automatic rewriting for materialized view cannot 
be enabled if the materialized view uses non-transactional tables}}
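
For context, a sketch of the setup where enabling rewrite is accepted, assuming (as the 
error message states) that all source tables must be transactional; table and column 
names are illustrative:

{code}
CREATE TABLE test_fact (k int, v string)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- With a transactional source, both the CREATE ... ENABLE REWRITE and a
-- later ALTER ... ENABLE REWRITE should be accepted
CREATE MATERIALIZED VIEW TEST_AGG ENABLE REWRITE AS
SELECT k, count(*) FROM test_fact GROUP BY k;
{code}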



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18665) LLAP: Ignore cache-affinity if the LLAP IO elevator is disabled

2018-02-08 Thread Gopal V (JIRA)
Gopal V created HIVE-18665:
--

 Summary: LLAP: Ignore cache-affinity if the LLAP IO elevator is 
disabled
 Key: HIVE-18665
 URL: https://issues.apache.org/jira/browse/HIVE-18665
 Project: Hive
  Issue Type: Bug
  Components: llap
Reporter: Gopal V


SplitLocationProvider removes HDFS locality in LLAP, causing more network 
traffic when reading file formats which aren't currently cached.

In the absence of the IO elevator, cache affinity should be disabled.
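
Until that happens, a manual workaround sketch, assuming {{hive.llap.client.consistent.splits}} 
is the setting behind the cache-affinity split locations (both properties exist in HiveConf; 
pairing them this way is the assumption):

{code}
-- IO elevator off ...
SET hive.llap.io.enabled=false;
-- ... so stop generating consistent (cache-affinity) split locations and keep HDFS locality
SET hive.llap.client.consistent.splits=false;
{code}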



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18664) Fix the failing test TestDruidRecordWriter#testWrite

2018-02-08 Thread Saijin Huang (JIRA)
Saijin Huang created HIVE-18664:
---

 Summary: Fix the failing test TestDruidRecordWriter#testWrite
 Key: HIVE-18664
 URL: https://issues.apache.org/jira/browse/HIVE-18664
 Project: Hive
  Issue Type: Bug
Reporter: Saijin Huang
Assignee: Saijin Huang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 65579: HIVE-18658 WM: allow not specifying scheduling policy when creating a pool

2018-02-08 Thread Harish Jaiprakash

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65579/#review197150
---


Fix it, then Ship it!




Comment on unrelated error. Changes are fine.


ql/src/test/results/clientpositive/llap/resourceplan.q.out
Lines 1545 (patched)


Is this expected? I know it's not related to the code changes, but this 
might suppress some other error.


- Harish Jaiprakash


On Feb. 9, 2018, 4:36 a.m., Sergey Shelukhin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/65579/
> ---
> 
> (Updated Feb. 9, 2018, 4:36 a.m.)
> 
> 
> Review request for hive, Harish Jaiprakash and Prasanth_J.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see jira
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
> b766791ebc 
>   ql/src/test/queries/clientpositive/resourceplan.q 7314585415 
>   ql/src/test/results/clientpositive/llap/resourceplan.q.out b23720d1a8 
> 
> 
> Diff: https://reviews.apache.org/r/65579/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Sergey Shelukhin
> 
>



[jira] [Created] (HIVE-18663) Logged Spark Job Id contains a UUID instead of the actual id

2018-02-08 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-18663:
---

 Summary: Logged Spark Job Id contains a UUID instead of the actual 
id
 Key: HIVE-18663
 URL: https://issues.apache.org/jira/browse/HIVE-18663
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Sahil Takiar


We have logs like {{Spark Job[job-id]}} but the {{[job-id]}} is set to a UUID 
that is created by the RSC {{ClientProtocol}}. It should be pretty easy to 
print out the actual job id instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 65413: HIVE-18575 ACID properties usage in jobconf is ambiguous for MM tables

2018-02-08 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65413/
---

(Updated Feb. 9, 2018, 1:52 a.m.)


Review request for hive and Eugene Koifman.


Repository: hive-git


Description
---

.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 67e22f6649 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
 5ee8aadfa7 
  
hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
 3388a34446 
  
hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/lock/Lock.java
 c2728376b2 
  
hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
 4e928121c7 
  
hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/mutate/StreamingAssert.java
 c98d22be2e 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
 a5e6293a3e 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java
 d252279be9 
  
llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
 68bb168bd2 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 20c2c3294a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java 090a18852a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java 270b576199 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java abd42ec651 
  ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java 430e0fc551 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 856b026c91 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java ff2cc0455c 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSplit.java 61565ef030 
  
ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcAcidRowBatchReader.java
 da200049bc 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcInputFormat.java 
7b157e6486 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java 3968b0e899 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java c8d1589f44 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketingSortingReduceSinkOptimizer.java
 0fdff7d853 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 
69447d9d34 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
190771ea6b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
b766791ebc 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 8e587f1cf6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/repl/dump/TableExport.java 
e1cea22005 
  ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java 661446df0b 
  ql/src/java/org/apache/hadoop/hive/ql/stats/Partish.java 78f48b169a 
  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java 
0e456df19c 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestAcidUtils.java 8945fdf1e7 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
92f005d1dc 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcRawRecordMerger.java 
c6a866a164 
  
ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestVectorizedOrcAcidRowBatchReader.java
 65508f4ddd 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/LockComponentBuilder.java
 de6c718ba9 


Diff: https://reviews.apache.org/r/65413/diff/5/

Changes: https://reviews.apache.org/r/65413/diff/4-5/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Created] (HIVE-18662) hive.acid.key.index is missing entries

2018-02-08 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-18662:
-

 Summary: hive.acid.key.index is missing entries
 Key: HIVE-18662
 URL: https://issues.apache.org/jira/browse/HIVE-18662
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Reporter: Eugene Koifman


OrcRecordUpdater.KeyIndexBuilder stores an index in the ORC footer where each entry 
is the last ROW__ID of each stripe.  In Acid 1.0 this is used to filter the events 
from a delta file when merging with part of the base.

As can be seen in {{TestTxnCommands.testVersioning()}} (added in HIVE-18659), the 
{{hive.acid.key.index}} is empty.

This is because very little data is written, so WriterImpl.flushStripe() is not 
called until {{WriterImpl.close()}} is called.  In the latter, 
{{WriterCallback.preFooterWrite()}} is called before {{preStripeWrite()}}, and so 
{{KeyIndexBuilder.preFooterWrite()}} records nothing in {{hive.acid.key.index}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Question on CachedStore cache update

2018-02-08 Thread Vaibhav Gumashta
Hi Alan,

To add to Daniel’s response: as part of 
https://issues.apache.org/jira/browse/HIVE-18264 and 
https://issues.apache.org/jira/browse/HIVE-18661 (I’m actively working on 
these), we plan to remove the current mechanism of updating the cache (which is 
very inefficient anyway) and instead use the NOTIFICATION_LOG table to update 
the cache incrementally. The code that you pointed out was meant to keep the 
background update thread from blocking the metastore client calls for a long time, 
but with the plan to update the cache incrementally we may not need to worry about 
that, as applying the notifications incrementally will not be a long blocking 
operation.

Thanks,
--Vaibhav

On 2/8/18, 11:41 AM, "Daniel Dai"  wrote:

Hi, Alan,

If the database cache is changed locally, we don’t want to bring in the remote copy to 
overwrite it, as the remote copy doesn’t carry the local changes (ideally, we would also 
apply the local changes to the remote copy images we bring in from the db, but we are 
not there yet). That’s why we skip the update if there are local changes and wait for 
the next iteration to sync with the remote. isDatabaseCacheDirty is initially set to 
false unless there’s a local update, and it is reset during the cache swap, thus giving 
the next iteration a chance to update the cache if there are no local changes.

Thanks,
Daniel

On 2/6/18, 11:57 AM, "Alan Gates"  wrote:

I’m confused by the following code in the CachedStore.  This is in the
CacheUpdateMasterWork thread, in the updateDatabases method (which is
called by update()):

// Skip background updates if we detect change
if (isDatabaseCacheDirty.compareAndSet(true, false)) {
  LOG.debug("Skipping database cache update; the database list we have is dirty.");
  return;
}

Why are we not updating the cache if we’ve dirtied it?  Also, AFAICT no 
one
ever sets isDatabaseCacheDirty to false, meaning once one database is
created the cache will never be updated.  Am I missing something?

Alan.






[jira] [Created] (HIVE-18661) CachedStore: Use metastore notification log events to update cache

2018-02-08 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-18661:
---

 Summary: CachedStore: Use metastore notification log events to 
update cache
 Key: HIVE-18661
 URL: https://issues.apache.org/jira/browse/HIVE-18661
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Vaibhav Gumashta


Currently, a background thread updates the entire cache, which is pretty 
inefficient. We already capture metadata updates in the NOTIFICATION_LOG table, which 
is used by the replication work. We should have the background thread 
apply these notifications to incrementally update the cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18660) PCR doesn't distinguish between partition and virtual columns

2018-02-08 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-18660:
---

 Summary: PCR doesn't distinguish between partition and virtual 
columns
 Key: HIVE-18660
 URL: https://issues.apache.org/jira/browse/HIVE-18660
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


As a result, PCR (the partition condition remover) transforms a filter 
{{INPUT__FILE__NAME is not null}} to {{false}}, causing wrong results.
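
A hypothetical repro sketch (table name invented); the predicate below should be left 
alone rather than constant-folded away:

{code}
-- Expected: same count as without the predicate, since INPUT__FILE__NAME is never null.
-- With the bug, PCR rewrites the filter to false and the query returns 0 rows.
SELECT count(*) FROM part_tbl WHERE INPUT__FILE__NAME IS NOT NULL;
{code}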



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Review Request 65579: HIVE-18658 WM: allow not specifying scheduling policy when creating a pool

2018-02-08 Thread Sergey Shelukhin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65579/
---

Review request for hive, Harish Jaiprakash and Prasanth_J.


Repository: hive-git


Description
---

see jira


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
b766791ebc 
  ql/src/test/queries/clientpositive/resourceplan.q 7314585415 
  ql/src/test/results/clientpositive/llap/resourceplan.q.out b23720d1a8 


Diff: https://reviews.apache.org/r/65579/diff/1/


Testing
---


Thanks,

Sergey Shelukhin



[jira] [Created] (HIVE-18659) add acid version marker to acid files

2018-02-08 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-18659:
-

 Summary: add acid version marker to acid files
 Key: HIVE-18659
 URL: https://issues.apache.org/jira/browse/HIVE-18659
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Reporter: Eugene Koifman
Assignee: Eugene Koifman


add acid version marker to acid files so that we know which version of acid 
wrote the file



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 65250: HIVE-18387

2018-02-08 Thread Jesús Camacho Rodríguez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/65250/
---

(Updated Feb. 8, 2018, 10:34 p.m.)


Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-18387
https://issues.apache.org/jira/browse/HIVE-18387


Repository: hive-git


Description (updated)
---

HIVE-18387


Diffs (updated)
-

  
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java
 78b26374f21a914d1b5681788b7b936b0d9c9296 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 d763666ab308a48456a1aebe2c94434ba3bc3fcd 
  ql/src/java/org/apache/hadoop/hive/ql/QueryLifeTimeHookRunner.java 
53d716bceb98c2ced3a3ba3f0cd607766447dfd9 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
20c2c3294ab638b0d5284b9d24865f901ab6d033 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/MaterializedViewUpdateRegistryTask.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/MaterializedViewUpdateRegistryWork.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/TaskFactory.java 
85cef8664674db72cd69929d4ad96f1bd85279da 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
c8d1589f44c4443a64d0701260bb4850eeeab233 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 
69447d9d3412fefc37d0495dd4c96df974f08927 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
8a1bfd21b45671e8fc183bcce5b028e8ece3e21b 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/MaterializedViewRebuildSemanticAnalyzer.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java 
4c41920cba2b8b03871a73e8ae6c006853a657b6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
8e587f1cf6d1d224fe001df9ec89201573716033 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java 
2e1f50e641c297914831ec1e4de2c6304408cca1 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java 
92d29e3a576ad95874f312d3371b494976b59399 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ImportTableDesc.java 
aef83b83e19bdba90edabde8534b5ef8f7bd40bd 
  ql/src/java/org/apache/hadoop/hive/ql/stats/BasicStatsTask.java 
b48379013d74c56df245bb9e292e45bb298367da 
  ql/src/test/queries/clientpositive/druidmini_mv.q 
e0593576020af7dd5cc26dd613395d6cde72496e 
  ql/src/test/queries/clientpositive/materialized_view_create_rewrite_4.q 
efc65c4061c608fac5ba308a9a8238aca443555f 
  ql/src/test/results/clientpositive/druid/druidmini_mv.q.out 
5a0b885f7759d013eb6b1e5d411f02e9c4d4468b 
  ql/src/test/results/clientpositive/materialized_view_create_rewrite_3.q.out 
0d8d238e8b43fdf453f3c11cd1cdd0f1aa8764bc 
  ql/src/test/results/clientpositive/materialized_view_create_rewrite_4.q.out 
8ab151718668daa8915575ffed5845d62e15b775 
  standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
bfa17eb3e64ab69de57c65b8759dca816256677f 
  standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
af0fd6b0e06694f9a9cb5a94edb1c2a91c819650 
  
standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
 cf9a1713aa9ba63b0b7f813fe68d7f75b5ee7c47 
  standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h 
4c09bc8fe642f6c43d21d5e373795001bc34189a 
  standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 
aadf8f17c452cafccd85133364af6edb8a5587a5 
  
standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Materialization.java
 b399d664229c9532f4cfbaeb680c4753902bdf36 
  
standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 d5e3527d09ced377141b0585f55a1df9647aa4ac 
  standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php 
9382c60120cecfe067ff6e9ac495eb23ac73168f 
  standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php 
a5b578ef37acc834cc96212de66b5217655a2e49 
  
standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 9b2aaffd0fa5b63adcbebb37037452f1de9a378f 
  
standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
 2e1910568a30f6aa74ab2054cb27a573fb1a8a59 
  standalone-metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py 
5598859042547daa553120fc92502a2dbe0f5f95 
  standalone-metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb 
bc58cfe0efb767b5cafb2a2f946c688045d405e8 
  standalone-metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb 
ec8813130851181e36a1a840862415c9a28b00b5 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 8dc9b6af92359460b732fb47d8e590329dbc91c0 
  
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 23cef8d556ed299612d7cbb66074f9975751e034 
  

Re: Review Request 64688: HIVE-18218

2018-02-08 Thread Deepak Jaiswal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/64688/
---

(Updated Feb. 8, 2018, 10:09 p.m.)


Review request for hive, Ashutosh Chauhan and Jason Dere.


Repository: hive-git


Description (updated)
---

Bucket-based join: handle buckets with no splits.

The current logic in CustomPartitionVertex assumes that there is a split for 
each bucket, whereas in Tez we can have no splits for empty buckets.
It also falls back to a reduce-side join if the small table has more buckets than the 
big table.

Disallow loading files into bucketed tables if the file name format is not like 
00_0, 01_0_copy_1 etc.
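
To illustrate the last point, a hypothetical example of the new restriction (paths and 
table name invented):

    -- Accepted: file name follows the bucket naming scheme described above
    LOAD DATA LOCAL INPATH '/tmp/staging/00_0' INTO TABLE bucketed_tbl;

    -- Rejected with this change: arbitrary file name loaded into a bucketed table
    LOAD DATA LOCAL INPATH '/tmp/staging/part-r-00000' INTO TABLE bucketed_tbl;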


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/CustomPartitionVertex.java 
26afe90faa 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/CustomVertexConfiguration.java 
ef5e7edcd6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 9885038588 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java 
dc698c8de8 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
54f5bab6de 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_2.q e5fdcb57e4 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_4.q abf09e5534 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_5.q b85c4a7aa3 
  ql/src/test/queries/clientpositive/auto_sortmerge_join_7.q bd780861e3 
  ql/src/test/results/clientnegative/bucket_mapjoin_mismatch1.q.out b9c2e6f827 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 5cfc35aa73 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 0d586fd26b 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out 45704d1253 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out 1959075912 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_2.q.out 
054b0d00be 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_4.q.out 
95d329862c 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_5.q.out 
e711715aa5 
  ql/src/test/results/clientpositive/llap/auto_sortmerge_join_7.q.out 
53c685cb11 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_2.q.out 
8cfa113794 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_4.q.out 
fce5e0cfc4 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_5.q.out 
8250eca099 
  ql/src/test/results/clientpositive/spark/auto_sortmerge_join_7.q.out 
eb813c1734 


Diff: https://reviews.apache.org/r/64688/diff/2/

Changes: https://reviews.apache.org/r/64688/diff/1-2/


Testing
---


Thanks,

Deepak Jaiswal



[jira] [Created] (HIVE-18658) WM: allow not specifying scheduling policy when creating a pool

2018-02-08 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-18658:
---

 Summary: WM: allow not specifying scheduling policy when creating 
a pool
 Key: HIVE-18658
 URL: https://issues.apache.org/jira/browse/HIVE-18658
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18657) Fix checkstyle violations for Semantic Analyzer

2018-02-08 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-18657:
--

 Summary: Fix checkstyle violations for Semantic Analyzer
 Key: HIVE-18657
 URL: https://issues.apache.org/jira/browse/HIVE-18657
 Project: Hive
  Issue Type: Task
Reporter: Vineet Garg
Assignee: Vineet Garg


SemanticAnalyzer.java has quite a few checkstyle violations which should be 
fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Question on CachedStore cache update

2018-02-08 Thread Daniel Dai
Hi, Alan,

If the database cache is changed locally, we don’t want to bring in the remote copy to 
overwrite it, as the remote copy doesn’t carry the local changes (ideally, we would also 
apply the local changes to the remote copy images we bring in from the db, but we are 
not there yet). That’s why we skip the update if there are local changes and wait for 
the next iteration to sync with the remote. isDatabaseCacheDirty is initially set to 
false unless there’s a local update, and it is reset during the cache swap, thus giving 
the next iteration a chance to update the cache if there are no local changes.

Thanks,
Daniel

On 2/6/18, 11:57 AM, "Alan Gates"  wrote:

I’m confused by the following code in the CachedStore.  This is in the
CacheUpdateMasterWork thread, in the updateDatabases method (which is
called by update()):

// Skip background updates if we detect change
if (isDatabaseCacheDirty.compareAndSet(true, false)) {
  LOG.debug("Skipping database cache update; the database list we have is dirty.");
  return;
}

Why are we not updating the cache if we’ve dirtied it?  Also, AFAICT no one
ever sets isDatabaseCacheDirty to false, meaning once one database is
created the cache will never be updated.  Am I missing something?

Alan.




[jira] [Created] (HIVE-18656) Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met

2018-02-08 Thread Aswathy Chellammal Sreekumar (JIRA)
Aswathy Chellammal Sreekumar created HIVE-18656:
---

 Summary: Trigger with counter TOTAL_TASKS fails to result in an 
event even when condition is met
 Key: HIVE-18656
 URL: https://issues.apache.org/jira/browse/HIVE-18656
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 3.0.0
 Environment: A trigger involving the counter TOTAL_TASKS seems to fail to 
fire the action in its definition even when the trigger condition is met.

Trigger definition:
{noformat}
++
|line|
++
| plan_1[status=ACTIVE,parallelism=null,defaultPool=default] |
|  +  default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
|  |  mapped for default |
|  +  |
|  |  trigger limit_task_per_vertex_trigger: if (TOTAL_TASKS > 5) { KILL } |
++
{noformat}
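
For reference, roughly the DDL that yields a trigger like the one above (reconstructed 
from the plan output; not necessarily the exact statements that were run):

{code}
CREATE TRIGGER plan_1.limit_task_per_vertex_trigger WHEN TOTAL_TASKS > 5 DO KILL;
ALTER POOL plan_1.default ADD TRIGGER limit_task_per_vertex_trigger;
{code}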

The query finishes fine even though one vertex has 29 tasks:
{noformat}
INFO  : Query ID = hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Subscribed to counters: [TOTAL_TASKS] for queryId: 
hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
INFO  : Tez session hasn't been created yet. Opening session
INFO  : Dag name: with ssales as
(select c_last_name...ssales) (Stage-1)
INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO  : Status: Running (Executing on YARN cluster with App id 
application_151782410_0199)

----------------------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 6 .......... container     SUCCEEDED      1          1        0        0       0       0
Map 8 .......... container     SUCCEEDED      1          1        0        0       0       0
Map 7 .......... container     SUCCEEDED      1          1        0        0       0       0
Map 9 .......... container     SUCCEEDED      1          1        0        0       0       0
Map 10 ......... container     SUCCEEDED      3          3        0        0       0       0
Map 11 ......... container     SUCCEEDED      1          1        0        0       0       0
Map 12 ......... container     SUCCEEDED      1          1        0        0       0       0
Map 13 ......... container     SUCCEEDED      3          3        0        0       0       0
Map 1 .......... container     SUCCEEDED      9          9        0        0       0       0
Reducer 2 ...... container     SUCCEEDED      2          2        0        0       0       0
Reducer 4 ...... container     SUCCEEDED     29         29        0        0       0       0
Reducer 5 ...... container     SUCCEEDED      1          1        0        0       0       0
Reducer 3        container     SUCCEEDED      0          0        0        0       0       0
----------------------------------------------------------------------------------------------
VERTICES: 12/13  [==>>] 100%  ELAPSED TIME: 21.15 s
----------------------------------------------------------------------------------------------
INFO  : Status: DAG finished successfully in 21.07 seconds
{noformat}


Reporter: Aswathy Chellammal Sreekumar
Assignee: Prasanth Jayachandran






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-18655) Apache hive 2.1.1 on Apache Spark 2.0

2018-02-08 Thread AbdulMateen (JIRA)
AbdulMateen created HIVE-18655:
--

 Summary: Apache hive 2.1.1 on Apache Spark 2.0
 Key: HIVE-18655
 URL: https://issues.apache.org/jira/browse/HIVE-18655
 Project: Hive
  Issue Type: Bug
  Components: Hive, HiveServer2, Spark
Affects Versions: 2.1.1
 Environment: Apache Hive 2.1.1

Apache Spark 2.0 - prebuilt version (removed Hive jars)

Apache Hadoop 2.8
Reporter: AbdulMateen


Hi, when connecting through Beeline, Hive is not able to create a Spark client:

{noformat}
select count(*) from student;
Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception
'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
Error: Error while processing statement: FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1)
{noformat}
I installed the prebuilt Spark 2.0 in standalone cluster mode. My hive-site.xml (also 
placed in spark/conf, with the Hive jars removed from the HDFS path) contains:

{noformat}
<property><name>spark.master</name><value>yarn</value><description>Spark Master URL</description></property>
<property><name>spark.eventLog.enabled</name><value>true</value><description>Spark Event Log</description></property>
<property><name>spark.eventLog.dir</name><value>hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging</value><description>Spark event log folder</description></property>
<property><name>spark.executor.memory</name><value>512m</value><description>Spark executor memory</description></property>
<property><name>spark.serializer</name><value>org.apache.spark.serializer.KryoSerializer</value><description>Spark serializer</description></property>
<property><name>spark.yarn.jars</name><value>hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*</value></property>
<property><name>spark.submit.deployMode</name><value>cluster</value><description>Spark Master URL</description></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>40960</value></property>
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>2048</value></property>
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>8192</value></property>
{noformat}
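
Not shown above: for Hive on Spark, the session (or hive-site.xml) also has to point 
Hive at the Spark engine; a minimal sketch of the client-side settings:

{code}
SET hive.execution.engine=spark;
SET spark.master=yarn;
{code}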


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)