[jira] [Commented] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441916#comment-16441916
 ] 

Hive QA commented on HIVE-19001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
48s{color} | {color:red} ql: The patch generated 12 new + 428 unchanged - 0 
fixed = 440 total (was 428) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10286/dev-support/hive-personality.sh
 |
| git revision | master / 878d6ee |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10286/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10286/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10286/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch, HIVE-19001.4.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add a CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18827) useless dynamic value exceptions strike back

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441885#comment-16441885
 ] 

Hive QA commented on HIVE-18827:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919303/HIVE-18827.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 54 failed/errored test(s), 14285 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=243)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_non_nullable_union]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[cachingprintstream]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_violation]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[compute_stats_long]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part3] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part_max_per_node]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dynamic_partitions_with_whitelist]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[merge_constraint_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe3]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_error] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[serde_regex2] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error_reduce]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)

[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441865#comment-16441865
 ] 

Sankar Hariappan commented on HIVE-18739:
-

[~ekoifman],

I posted a couple of comments in the RR. Please take a look.

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, 
> HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, 
> HIVE-18739.19.patch, HIVE-18739.20.patch, HIVE-18739.21.patch, 
> HIVE-18739.23.patch, HIVE-18739.24.patch, HIVE-18739.25.patch, 
> HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18827) useless dynamic value exceptions strike back

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441855#comment-16441855
 ] 

Hive QA commented on HIVE-18827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} storage-api: The patch generated 1 new + 34 unchanged 
- 1 fixed = 35 total (was 35) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10285/dev-support/hive-personality.sh
 |
| git revision | master / cacb1c0 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10285/yetus/diff-checkstyle-storage-api.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10285/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10285/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> useless dynamic value exceptions strike back
> 
>
> Key: HIVE-18827
> URL: https://issues.apache.org/jira/browse/HIVE-18827
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18827.1.patch, HIVE-18827.2.patch
>
>
> Looking at ~master, I can see tons of exceptions like this in LLAP log:
> {noformat}
> 2018-02-27T14:07:51,989  WARN [IO-Elevator-Thread-12 
> (1515669035295_0909_1_08_000117_0)] impl.RecordReaderImpl: 
> NoDynamicValuesException when evaluating predicate. Skipping ORC PPD. Stats: 
> numberOfValues: 9750
> intStatistics {
>   minimum: 11335
>   maximum: 560
>   sum: 27648854404
> }
> hasNull: true
>  Predicate: (BETWEEN ss_addr_sk 
> DynamicValue(RS_27_customer_address_ca_address_sk_min) 
> DynamicValue(RS_27_customer_address_ca_address_sk_max))
> org.apache.hadoop.hive.ql.plan.DynamicValue$NoDynamicValuesException: Value 
> does not exist in registry: RS_27_customer_address_ca_address_sk_min
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DynamicValueRegistryTez.getValue(DynamicValueRegistryTez.java:77)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> 

[jira] [Updated] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19224:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branches. Thanks for the review!

> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.getTokenUser(LlapPluginEndpointClientImpl.java:77)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.createProxy(AsyncPbRpcProxy.java:447)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.access$100(AsyncPbRpcProxy.java:66)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy$3.call(AsyncPbRpcProxy.java:429) 
> ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
> ~[guava-19.0.jar:?]
> ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441837#comment-16441837
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

Rebased and fixed the tests.

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.03.patch, HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> This should throw if there is no open transaction; it should never open one.
> In general the logic seems suspect.  The intent appears to be to move all 
> existing files into a delta_x_x/ directory when a plain table is converted to 
> an MM table.  That seems like something that needs to be done under an 
> Exclusive lock to prevent concurrent Insert operations from writing data 
> under the table/partition root.  But this point is too late to acquire locks, 
> which should be done from Driver.acquireLocks() (or else a deadlock detector 
> is needed, since acquiring locks here would break the all-or-nothing lock 
> acquisition semantics currently required w/o a deadlock detector).
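
A minimal illustrative sketch (not taken from any attached patch) of the behavior 
the description asks for, reusing the txnManager/mmWriteId variables from the 
snippet above; the exception type is only an example:
{code:java}
// Sketch: require an already-open transaction instead of silently opening one.
// Opening (and immediately committing) a transaction here belongs to the Driver's
// lock/txn acquisition, not to task-generation code.
if (!txnManager.isTxnOpen()) {
  throw new IllegalStateException("Expected an open transaction before generating MM tasks");
}
mmWriteId = txnManager.getCurrentTxnId();
{code}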



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Attachment: HIVE-17647.03.patch

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.03.patch, HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> This should throw if there is no open transaction; it should never open one.
> In general the logic seems suspect.  The intent appears to be to move all 
> existing files into a delta_x_x/ directory when a plain table is converted to 
> an MM table.  That seems like something that needs to be done under an 
> Exclusive lock to prevent concurrent Insert operations from writing data 
> under the table/partition root.  But this point is too late to acquire locks, 
> which should be done from Driver.acquireLocks() (or else a deadlock detector 
> is needed, since acquiring locks here would break the all-or-nothing lock 
> acquisition semantics currently required w/o a deadlock detector).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19215) JavaUtils.AnyIdDirFilter ignores base_n directories

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441833#comment-16441833
 ] 

Hive QA commented on HIVE-19215:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919302/HIVE-19215.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10284/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10284/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10284/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-18 03:16:21.243
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10284/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-18 03:16:21.258
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   5b4c29d..cacb1c0  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 5b4c29d HIVE-19194 : TestDruidStorageHandler fails (Slim B via 
Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at cacb1c0 HIVE-18410 : [Performance][Avro] Reading flat Avro 
tables is very expensive in Hive (Ratandeep Ratti via Anthony Hsu, Ashutosh 
Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-18 03:16:30.779
+ rm -rf ../yetus_PreCommit-HIVE-Build-10284
+ mkdir ../yetus_PreCommit-HIVE-Build-10284
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10284
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10284/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
common/src/java/org/apache/hadoop/hive/common/JavaUtils.java:228
Falling back to three-way merge...
Applied patch to 'common/src/java/org/apache/hadoop/hive/common/JavaUtils.java' 
with conflicts.
Going to apply patch with: git apply -p0
error: patch failed: 
common/src/java/org/apache/hadoop/hive/common/JavaUtils.java:228
Falling back to three-way merge...
Applied patch to 'common/src/java/org/apache/hadoop/hive/common/JavaUtils.java' 
with conflicts.
U common/src/java/org/apache/hadoop/hive/common/JavaUtils.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919302 - PreCommit-HIVE-Build

> JavaUtils.AnyIdDirFilter ignores base_n directories
> ---
>
> Key: HIVE-19215
> URL: https://issues.apache.org/jira/browse/HIVE-19215
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19215.patch
>
>
> cc [~sershe], [~steveyeom2017]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441828#comment-16441828
 ] 

Hive QA commented on HIVE-19224:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919300/HIVE-19224.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 54 failed/errored test(s), 14284 tests 
executed
*Failed tests:*
{noformat}
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_non_nullable_union]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[cachingprintstream]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_violation]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[compute_stats_long]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part3] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dyn_part_max_per_node]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[dynamic_partitions_with_whitelist]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[merge_constraint_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_broken_pipe3]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[script_error] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[serde_regex2] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error_reduce]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval]
 (batchId=98)

[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Status: Patch Available  (was: Reopened)

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, 
> HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, 
> HIVE-18739.19.patch, HIVE-18739.20.patch, HIVE-18739.21.patch, 
> HIVE-18739.23.patch, HIVE-18739.24.patch, HIVE-18739.25.patch, 
> HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441823#comment-16441823
 ] 

Eugene Koifman commented on HIVE-18739:
---

added branch-3 backport patch

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, 
> HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, 
> HIVE-18739.19.patch, HIVE-18739.20.patch, HIVE-18739.21.patch, 
> HIVE-18739.23.patch, HIVE-18739.24.patch, HIVE-18739.25.patch, 
> HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.01-branch-3.patch

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, 
> HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, 
> HIVE-18739.19.patch, HIVE-18739.20.patch, HIVE-18739.21.patch, 
> HIVE-18739.23.patch, HIVE-18739.24.patch, HIVE-18739.25.patch, 
> HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reopened HIVE-18739:
---

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18410:

   Resolution: Fixed
Fix Version/s: (was: 2.3.2)
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Ratandeep!
[~vgarg] Will be good to have this in branch-3 as well.

> [Performance][Avro] Reading flat Avro tables is very expensive in Hive
> --
>
> Key: HIVE-18410
> URL: https://issues.apache.org/jira/browse/HIVE-18410
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.1, 2.1.0, 3.0.0, 2.3.2
>Reporter: Ratandeep Ratti
>Assignee: Ratandeep Ratti
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18410.patch, HIVE-18410_1.patch, 
> HIVE-18410_2.patch, HIVE-18410_3.patch, profiling_with_patch.nps, 
> profiling_with_patch.png, profiling_without_patch.nps, 
> profiling_without_patch.png
>
>
> There's a performance penalty when reading flat [no nested fields] Avro 
> tables. When reading the same flat dataset in Pig, it takes half the time.  
> On profiling, a lot of time is spent in 
> {{AvroDeserializer.deserializeSingleItemNullableUnion()}}. The bulk of the 
> time is spent in GenericData.get().resolveUnion(), which calls 
> GenericData.getSchemaName(Object datum), which does a lot of instanceof 
> checks.  This could be simplified with performance benefits.  An approach 
> that almost halves the runtime is described in this patch.
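
For illustration only, a minimal sketch of the kind of short-circuit the 
description suggests (this is not the attached patch), using the standard Avro 
Schema/GenericData APIs; the method name is made up:
{code:java}
import java.util.List;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;

// For the common [null, T] union of a nullable column, pick the branch with a null
// check so GenericData.resolveUnion() (and its getSchemaName()/instanceof walk) is
// only hit for genuinely multi-typed unions.
static int resolveNullableUnion(Schema unionSchema, Object datum) {
  List<Schema> branches = unionSchema.getTypes();
  if (branches.size() == 2) {
    boolean firstIsNull = branches.get(0).getType() == Schema.Type.NULL;
    boolean secondIsNull = branches.get(1).getType() == Schema.Type.NULL;
    if (firstIsNull != secondIsNull) {          // exactly one NULL branch
      if (datum == null) {
        return firstIsNull ? 0 : 1;             // the NULL branch
      }
      return firstIsNull ? 1 : 0;               // the non-null branch
    }
  }
  return GenericData.get().resolveUnion(unionSchema, datum);  // generic fallback
}
{code}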



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441798#comment-16441798
 ] 

Sergey Shelukhin commented on HIVE-17970:
-

Addressed the CR feedback, fixed tests

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.03.patch, HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new 
> data into the table.  This logic makes sense for non-acid tables, but for 
> Acid/MM it should work like the INSERT OVERWRITE statement and write new data 
> to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.
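
As a rough illustration of the intended layout (not the attached patch), a sketch 
of choosing the LOAD DATA destination from the loadTable() flags quoted above; the 
helper name and the isOverwrite flag are assumptions:
{code:java}
import org.apache.hadoop.fs.Path;

// Sketch: an MM table never clobbers its root. OVERWRITE lands in a new base_n
// (like INSERT OVERWRITE), a plain LOAD DATA in a new delta_n_n, so the lock manager
// can pick between an X lock and a SemiShared lock as described above.
static Path loadDataDestDir(Path tableRoot, boolean isMmTable, boolean isOverwrite, long txnId) {
  if (!isMmTable) {
    return tableRoot;  // non-ACID: keep replacing files under the table/partition root
  }
  return isOverwrite
      ? new Path(tableRoot, "base_" + txnId)
      : new Path(tableRoot, "delta_" + txnId + "_" + txnId);
}
{code}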



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17970:

Attachment: HIVE-17970.03.patch

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.03.patch, HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new 
> data into the table.  This logic makes sense for non-acid tables, but for 
> Acid/MM it should work like the INSERT OVERWRITE statement and write new data 
> to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19058) add object owner to HivePrivilegeObject

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19058:
--
Status: Open  (was: Patch Available)

> add object owner to HivePrivilegeObject
> ---
>
> Key: HIVE-19058
> URL: https://issues.apache.org/jira/browse/HIVE-19058
> Project: Hive
>  Issue Type: Bug
>  Components: Security
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19058.01.patch, HIVE-19058.02.patch, 
> HIVE-19058.03.patch
>
>
> This can enable HiveAuthorizer to create policies based on the owner of the 
> object; for example, only let the owner of a table read or write it.
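
A minimal sketch of how an authorizer could use the proposed field, assuming 
HivePrivilegeObject gains a getOwnerName() accessor (which is what this issue 
proposes); the method itself is illustrative, not part of any existing interface:
{code:java}
import java.util.List;
import org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException;
import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject;

// Deny access unless the current user owns the object (owner-only policy example).
static void checkOwnerOnly(String currentUser, List<HivePrivilegeObject> objects)
    throws HiveAccessControlException {
  for (HivePrivilegeObject obj : objects) {
    String owner = obj.getOwnerName();   // proposed accessor
    if (owner != null && !owner.equals(currentUser)) {
      throw new HiveAccessControlException(
          "Only the owner (" + owner + ") may read or write " + obj.getObjectName());
    }
  }
}
{code}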



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441779#comment-16441779
 ] 

Hive QA commented on HIVE-19224:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10283/dev-support/hive-personality.sh
 |
| git revision | master / 5b4c29d |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10283/yetus/patch-asflicense-problems.txt
 |
| modules | C: llap-common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10283/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.getTokenUser(LlapPluginEndpointClientImpl.java:77)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.createProxy(AsyncPbRpcProxy.java:447)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.access$100(AsyncPbRpcProxy.java:66)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy$3.call(AsyncPbRpcProxy.java:429) 
> ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
>  ~[guava-19.0.jar:?]
> at 
> 

[jira] [Resolved] (HIVE-19058) add object owner to HivePrivilegeObject

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-19058.
---
Resolution: Won't Fix

> add object owner to HivePrivilegeObject
> ---
>
> Key: HIVE-19058
> URL: https://issues.apache.org/jira/browse/HIVE-19058
> Project: Hive
>  Issue Type: Bug
>  Components: Security
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19058.01.patch, HIVE-19058.02.patch, 
> HIVE-19058.03.patch
>
>
> This can enable HiveAuthorizer to create policies based on the owner of the 
> object; for example, only let the owner of a table read or write it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18741) Add support for Import into Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-18741.
---
   Resolution: Done
Fix Version/s: 3.1.0
 Release Note: n/a

done in HIVE-18739

> Add support for Import into Acid table
> --
>
> Key: HIVE-18741
> URL: https://issues.apache.org/jira/browse/HIVE-18741
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18741.01.patch
>
>
> This should follow the Load Data approach (or use Load Data directly).
> Note that Import supports a partition spec.
> Does Import support loading files not created by Export?  If so, similarly to 
> HIVE-19029, it should check for Acid meta columns and reject such files.
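
A hedged sketch of the "check for Acid meta columns and reject" idea, using the 
standard org.apache.orc reader API; the ACID field names checked below are listed 
from memory and should be treated as assumptions, and the method is illustrative:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.TypeDescription;

// Reject an imported ORC file whose schema already carries ROW__ID metadata columns,
// i.e. a file written by an ACID table rather than produced for a plain table.
static void rejectIfAcidFile(Path file, Configuration conf) throws IOException {
  Reader reader = OrcFile.createReader(file, OrcFile.readerOptions(conf));
  TypeDescription schema = reader.getSchema();
  if (schema.getFieldNames().contains("originalTransaction")
      && schema.getFieldNames().contains("rowId")) {
    throw new IOException("Cannot IMPORT a file that already has ACID meta columns: " + file);
  }
}
{code}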



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18098) Add support for Export/Import for Acid tables

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-18098.
---
   Resolution: Done
Fix Version/s: 3.1.0
 Release Note: n/a

done in HIVE-18739

> Add support for Export/Import for Acid tables
> -
>
> Key: HIVE-18098
> URL: https://issues.apache.org/jira/browse/HIVE-18098
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
>
> How should this work?
> For regular tables export just copies the files under table root to a 
> specified directory.
> This doesn't make sense for Acid tables:
> * Some data may belong to aborted transactions
> * Transaction IDs are embedded into data file names.  You'd have to export 
> delta/ and base/, each of which may have files with the same names, e.g. 
> bucket_0.
> * On import these IDs won't make sense in a different cluster or even a 
> different table (which may have delta_x_x for the same x, but with different 
> data, of course).
> * Export creates a _metadata file with column types, storage format, etc.  
> Perhaps it can include info about aborted IDs (if the whole file can't be 
> skipped).
> * Even importing into the same table on the same cluster may be a problem.  
> For example delta_5_5/ existed at the time of export and was included in the 
> export.  But 2 days later it may not exist because it was compacted and 
> cleaned.
> * If importing back into the same table on the same cluster, the data could 
> be imported into a different transaction (assuming per table writeIDs) w/o 
> having to remap the IDs in the rows themselves.
> * support Import Overwrite?
> * Support Import as a new txn with remapping of ROW_IDs?  The new writeID can 
> be stored in a delta_x_x/_meta_data and ROW__IDs can be remapped at read time 
> (like isOriginal) and made permanent by compaction.
> * It doesn't seem reasonable to import acid data into a non-acid table.
> Perhaps import can work similarly to Load Data: look at the imported file 
> and, if it has Acid columns, leave a note in the delta_x_x/_meta_data to 
> indicate that these columns should be skipped and new ROW_IDs assigned at 
> read time.
> h3. Case I
> Table has delta_7_7 and delta_8_8.  Since both may have bucket_, we could 
> export to export_dir and rename files as bucket_ and bucket__copy_1.  
> Load Data supports an input dir with copy_N files.
> h3. Case II
> What if we have delete_delta_9_9 in the source?  Now you can't just ignore 
> ROW_IDs after import.
> * -Only export the latest base_N?  Or more generally up to the smallest 
> deleted ROW_ID (which may be hard to find w/o scanning all deletes.  The 
> export then would have to be done under X lock to prevent new concurrent 
> deletes)-
> * Stash all deletes in some additional file which on import gets added into 
> the target delta/, so that the Acid reader can apply them to the data in this 
> delta/ but they don't clash with 'normal' deletes that exist in the table.
> ** here we may also have multiple delete_delta/ with identical file names.  
> Does delete delta reader handle copy_N files?
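
For Case I, a rough sketch (illustrative only) of flattening several delta 
directories into one export directory, disambiguating clashing bucket file names 
with the _copy_N suffix that Load Data already accepts; the Hadoop FileSystem calls 
are standard, while the method and layout are assumptions:
{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Copy bucket files from e.g. delta_7_7/ and delta_8_8/ into exportDir, renaming the
// second (and later) occurrence of a name to <name>_copy_1, <name>_copy_2, ...
static void flattenDeltas(FileSystem fs, Path[] deltaDirs, Path exportDir) throws IOException {
  Map<String, Integer> seen = new HashMap<>();
  for (Path delta : deltaDirs) {
    for (FileStatus f : fs.listStatus(delta)) {
      String name = f.getPath().getName();
      int n = seen.merge(name, 1, Integer::sum) - 1;     // how many times seen before
      String target = (n == 0) ? name : name + "_copy_" + n;
      FileUtil.copy(fs, f.getPath(), fs, new Path(exportDir, target),
          false /* deleteSource */, fs.getConf());
    }
  }
}
{code}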



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HIVE-18098)

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Issue Type: New Feature  (was: Bug)

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-18739.
---
   Resolution: Fixed
Fix Version/s: 3.1.0
 Release Note: n/a

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441772#comment-16441772
 ] 

Sergey Shelukhin commented on HIVE-19196:
-

+1. I was looking at that too, but I wonder if it's dumped as all or nothing. 

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19196.1.patch
>
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441770#comment-16441770
 ] 

Eugene Koifman commented on HIVE-18739:
---

committed patch 26 to master
thanks Sergey for the review

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19210) Create separate module for streaming ingest

2018-04-17 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441764#comment-16441764
 ] 

Vineet Garg commented on HIVE-19210:


[~prasanth_j] Can you re-upload the branch-3 patch after renaming the file to 
{{HIVE-19210.01-branch-3.patch}}? Looks like ptest will only pick it up in 
that format.

> Create separate module for streaming ingest
> ---
>
> Key: HIVE-19210
> URL: https://issues.apache.org/jira/browse/HIVE-19210
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19210-branch-3.patch, HIVE-19210.1.patch, 
> HIVE-19210.2.patch, HIVE-19210.3.patch
>
>
> This will retain the old hcat streaming API for old clients. The new 
> streaming ingest API will be separate module under hive. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441761#comment-16441761
 ] 

Prasanth Jayachandran commented on HIVE-19196:
--

[~sershe] can you take a look?

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19196.1.patch
>
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19196:
-
Attachment: HIVE-19196.1.patch

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19196.1.patch
>
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-19196:


Assignee: Prasanth Jayachandran  (was: Sergey Shelukhin)

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19196.1.patch
>
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19196:
-
Status: Patch Available  (was: Open)

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19196.1.patch
>
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441760#comment-16441760
 ] 

Prasanth Jayachandran commented on HIVE-19196:
--

I am guessing there are some timing issues with this code:
{code}
// failure hooks are run after HiveStatement is closed. wait some time for the
// failure hook to execute
String stdErrStr = "";
while (!stdErrStr.contains(errCaptureExpect.get(0))) {
  baos.flush();
  stdErrStr = baos.toString();
  Thread.sleep(500);
}
{code}

I will update this to wait until we see the last event in the error capture.
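
For reference, a minimal sketch of that change (not the actual patch; the helper name and the 60 second cap are assumptions): bound the wait and key it off the last expected event rather than only the first one.

{code}
import java.io.ByteArrayOutputStream;
import java.util.List;

// Sketch only, not the actual fix. Wait, with an upper bound, until the *last*
// expected event string shows up in the captured stderr, so the assertion does
// not race the failure hook that runs after HiveStatement is closed.
class ErrCaptureWait {
  static void waitForLastExpected(ByteArrayOutputStream baos, List<String> errCaptureExpect)
      throws Exception {
    String lastExpected = errCaptureExpect.get(errCaptureExpect.size() - 1);
    long deadlineMs = System.currentTimeMillis() + 60_000; // assumed 60s cap
    String stdErrStr = "";
    while (!stdErrStr.contains(lastExpected)) {
      if (System.currentTimeMillis() > deadlineMs) {
        throw new AssertionError("timed out waiting for '" + lastExpected + "' in STDERR capture");
      }
      baos.flush();
      stdErrStr = baos.toString();
      Thread.sleep(500);
    }
  }
}
{code}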

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Sergey Shelukhin
>Priority: Major
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441754#comment-16441754
 ] 

Hive QA commented on HIVE-18410:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12909083/HIVE-18410_3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 14239 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=225)
org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable
 (batchId=261)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testCancelRenewTokenFlow 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testConnection 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValid (batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValidNeg 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeProxyAuth 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeTokenAuth 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testProxyAuth 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testRenewDelegationToken 
(batchId=254)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testTokenAuth 
(batchId=254)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10282/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10282/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10282/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 39 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12909083 - PreCommit-HIVE-Build

> [Performance][Avro] Reading flat Avro 

[jira] [Updated] (HIVE-19194) TestDruidStorageHandler fails

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19194:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master. 

> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Status: Patch Available  (was: Open)

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184.01-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Attachment: HIVE-19184.01-branch-3.patch

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184.01-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Status: Open  (was: Patch Available)

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184.01-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Attachment: (was: HIVE-19184-branch-3.patch)

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184.01-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19124:

Attachment: (was: HIVE-19124.03.patch)

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.03.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19124:

Attachment: HIVE-19124.03.patch

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.03.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441749#comment-16441749
 ] 

Sergey Shelukhin commented on HIVE-19124:
-

Addressing table type and other characteristics and adding another test.
Also added a config setting.

I think what we eventually need is to add an "insert overwrite directory compact" 
command that will do the right thing for all of this... I left a todo in the insert 
overwrite directory path. Then, it can write the base in place for both ACID 
and non-ACID tables, via a query, as long as (1) it's done inside a txn and (2) the 
insert overwrite bug is fixed.

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.03.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19124:

Attachment: HIVE-19124.03.patch

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.03.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19186) Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used

2018-04-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441747#comment-16441747
 ] 

Ashutosh Chauhan commented on HIVE-19186:
-

Each arm of the insert will result in a QB which should correctly have info set for 
isInsertIntoTable() and isDestToOpTypeInsertOverwrite() individually. There 
should not be a need to check both flags. It seems like while constructing the QB 
we didn't capture info about insert into vs overwrite.
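
As an illustration only (the class and method names below are hypothetical, not Hive's actual QB API), the idea is to record the into-vs-overwrite decision per destination clause while each arm's QB is built, so no combined flag check is needed later.

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Hive's QB implementation: track insert-into vs
// insert-overwrite per destination clause so each INSERT arm can be checked
// on its own.
class QBDestInfo {
  private final Map<String, Boolean> overwriteByDest = new HashMap<>();

  void setOverwrite(String destClause, boolean isOverwrite) {
    overwriteByDest.put(destClause, isOverwrite);
  }

  boolean isInsertOverwrite(String destClause) {
    return overwriteByDest.getOrDefault(destClause, false);
  }

  boolean isInsertInto(String destClause) {
    return !isInsertOverwrite(destClause);
  }
}
{code}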

> Multi Table INSERT statements query has a flaw for partitioned table when 
> INSERT INTO and INSERT OVERWRITE are used
> ---
>
> Key: HIVE-19186
> URL: https://issues.apache.org/jira/browse/HIVE-19186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19186.01.patch, HIVE-19186.02.patch
>
>
> One problem test case is: 
> create table intermediate(key int) partitioned by (p int) stored as orc;
> insert into table intermediate partition(p='455') select distinct key from 
> src where key >= 0 order by key desc limit 2;
> insert into table intermediate partition(p='456') select distinct key from 
> src where key is not null order by key asc limit 2;
> insert into table intermediate partition(p='457') select distinct key from 
> src where key >= 100 order by key asc limit 2;
> create table multi_partitioned (key int, key2 int) partitioned by (p int);
> from intermediate
> insert into table multi_partitioned partition(p=2) select p, key
> insert overwrite table multi_partitioned partition(p=1) select key, p;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441746#comment-16441746
 ] 

Sergey Shelukhin commented on HIVE-19196:
-

Hmm, now I cannot repro it anymore. Looks like it's pretty rare. [~prasanth_j] 
have you seen this before? In the only log I saved from a failed test I see 
that the test proceeds normally and all the WmEvent-s are there, same as in a 
log of a test that passes. However, it still fails with the error above.
I even see the output of the events in the failed log, which has all the events; 
apparently the test doesn't pick them up though.
The only interesting difference I can see is that in the failed log the RM address 
in the minicluster is a real network IP of my machine, while in the successful 
test logs it's 127.0.0.1. Not sure if that's relevant to some part of getting 
the events.

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Sergey Shelukhin
>Priority: Major
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Summary: Add support for Import/Export from Acid table  (was: Add support 
for Export from Acid table)

> Add support for Import/Export from Acid table
> -
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19235) Update golden files for Minimr tests

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19235:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Update golden files for Minimr tests
> 
>
> Key: HIVE-19235
> URL: https://issues.apache.org/jira/browse/HIVE-19235
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19235.patch
>
>
> stats update needed for 3 tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.26.patch

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441733#comment-16441733
 ] 

Hive QA commented on HIVE-18410:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10282/dev-support/hive-personality.sh
 |
| git revision | master / 4cfec3e |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10282/yetus/patch-asflicense-problems.txt
 |
| modules | C: . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10282/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> [Performance][Avro] Reading flat Avro tables is very expensive in Hive
> --
>
> Key: HIVE-18410
> URL: https://issues.apache.org/jira/browse/HIVE-18410
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.1, 2.1.0, 3.0.0, 2.3.2
>Reporter: Ratandeep Ratti
>Assignee: Ratandeep Ratti
>Priority: Major
> Fix For: 2.3.2, 3.1.0
>
> Attachments: HIVE-18410.patch, HIVE-18410_1.patch, 
> HIVE-18410_2.patch, HIVE-18410_3.patch, profiling_with_patch.nps, 
> profiling_with_patch.png, profiling_without_patch.nps, 
> profiling_without_patch.png
>
>
> There's a performance penalty when reading flat [no nested fields] Avro 
> tables. When reading the same flat dataset in Pig, it takes half the time.  
> On profiling, a lot of time is spent in 
> {{AvroDeserializer.deserializeSingleItemNullableUnion()}}. The bulk of the 
> time is spent in GenericData.get().resolveUnion(), which calls 
> GenericData.getSchemaName(Object datum), which does a lot of instanceof 
> checks.  This could be simplified with performance benefits. An approach is 
> described in this patch which almost halves the runtime.
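
For context, a hedged sketch of the general optimization idea only (not the actual HIVE-18410 patch; the class and method names are assumptions): for a two-branch [null, T] union, the branch schema can be chosen with a simple null check instead of resolveUnion().

{code}
import org.apache.avro.Schema;

// Sketch of the idea, not the actual patch: pick the union branch for a
// nullable [null, T] union with a null check instead of calling
// GenericData.resolveUnion(), which walks a chain of instanceof checks
// for every record.
class NullableUnionFastPath {
  static Schema pickBranch(Object datum, Schema unionSchema) {
    for (Schema branch : unionSchema.getTypes()) {
      if (datum == null && branch.getType() == Schema.Type.NULL) {
        return branch;   // null datum -> the null branch
      }
      if (datum != null && branch.getType() != Schema.Type.NULL) {
        return branch;   // non-null datum -> the single non-null branch
      }
    }
    throw new IllegalStateException("No matching branch in union: " + unionSchema);
  }
}
{code}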



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: (was: HIVE-18739.26.patch)

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Status: Open  (was: Patch Available)

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from Acid table

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.26.patch

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch, HIVE-18739.26.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441693#comment-16441693
 ] 

Hive QA commented on HIVE-19124:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919293/HIVE-19124.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 38 failed/errored test(s), 14240 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=225)
org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable
 (batchId=261)
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestAMReporter.testMultipleAM
 (batchId=309)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMmTableCompaction (batchId=287)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testMmTableCompaction
 (batchId=296)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteForMmTable
 (batchId=264)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testMmTableCompaction 
(batchId=264)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable
 (batchId=264)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteForMmTable
 (batchId=284)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testMmTableCompaction 
(batchId=284)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable
 (batchId=284)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10280/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10280/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10280/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 38 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919293 - PreCommit-HIVE-Build

> implement a basic major compactor for MM tables

[jira] [Comment Edited] (HIVE-19186) Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used

2018-04-17 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441692#comment-16441692
 ] 

Steve Yeom edited comment on HIVE-19186 at 4/18/18 12:15 AM:
-

Hi [~ashutoshc], can you review this patch?

In a multi table insert, each insert is designated by the clause name, which is 
"dest" in the getFileSinkPlan() method context.
For our test case of a multi table insert query with INSERT INTO and INSERT 
OVERWRITE, we call getFileSinkPlan() for each INSERT clause.
The issue in this Jira is that for the INSERT OVERWRITE case we set the 
"isInsertInto" flag to true, which results in the WRONG loadType.
The fix is to correct that flag value.

As you can see from the newly added "multi_insert_partitioned.q", the statistics and 
metadata look OK. I have double checked the results and the "DESC FORMATTED" 
statement output by splitting the multi table insert query into single-insert 
queries.

Thank you, 
Steve. 


was (Author: steveyeom2017):
Hi [~ashutoshc] can you review this patch? 

For the case of multi table insert, each insert is designated by the clause 
name which is "dest" in getFileSinkPlan() method context.
For our test case of multi table inserts query with INSERT INTO and INSERT 
OVERWRITE, we call getFileSinkPlan() for each INSERT clause. 
The issue of the Jira is that for the case of INSERT OVERWRITE we just set true 
to the "isInsertInto" flag to have WRONG loadType. 
The fix is to correct that flag value.

As you see from the newly added "multi_insert_partitioned.q" statistics and 
metadata looks OK. I have double checked the results 
and "DESC FORMATTED" statment output by 
partitioning a multi table insert query into queries with single insert 
statement. 

> Multi Table INSERT statements query has a flaw for partitioned table when 
> INSERT INTO and INSERT OVERWRITE are used
> ---
>
> Key: HIVE-19186
> URL: https://issues.apache.org/jira/browse/HIVE-19186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19186.01.patch, HIVE-19186.02.patch
>
>
> One problem test case is: 
> create table intermediate(key int) partitioned by (p int) stored as orc;
> insert into table intermediate partition(p='455') select distinct key from 
> src where key >= 0 order by key desc limit 2;
> insert into table intermediate partition(p='456') select distinct key from 
> src where key is not null order by key asc limit 2;
> insert into table intermediate partition(p='457') select distinct key from 
> src where key >= 100 order by key asc limit 2;
> create table multi_partitioned (key int, key2 int) partitioned by (p int);
> from intermediate
> insert into table multi_partitioned partition(p=2) select p, key
> insert overwrite table multi_partitioned partition(p=1) select key, p;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19186) Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used

2018-04-17 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441692#comment-16441692
 ] 

Steve Yeom commented on HIVE-19186:
---

Hi [~ashutoshc], can you review this patch?

In a multi table insert, each insert is designated by the clause name, which is 
"dest" in the getFileSinkPlan() method context.
For our test case of a multi table insert query with INSERT INTO and INSERT 
OVERWRITE, we call getFileSinkPlan() for each INSERT clause.
The issue in this Jira is that for the INSERT OVERWRITE case we set the 
"isInsertInto" flag to true, which results in the WRONG loadType.
The fix is to correct that flag value.

As you can see from the newly added "multi_insert_partitioned.q", the statistics and 
metadata look OK. I have double checked the results and the "DESC FORMATTED" 
statement output by splitting the multi table insert query into single-insert 
queries.
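
A small illustration of that fix (hypothetical names, not the actual getFileSinkPlan() code): the load type is resolved per "dest" clause instead of from one shared flag.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only; the enum, class, and method names are assumptions.
// The point of the fix: derive the load type from each INSERT clause ("dest")
// on its own, so the INSERT OVERWRITE arm of a multi-insert is never mislabeled
// as INSERT INTO.
class MultiInsertLoadTypes {
  enum LoadType { INSERT_INTO, INSERT_OVERWRITE }

  static Map<String, LoadType> resolve(Map<String, Boolean> overwriteByDestClause) {
    Map<String, LoadType> loadTypes = new LinkedHashMap<>();
    for (Map.Entry<String, Boolean> e : overwriteByDestClause.entrySet()) {
      loadTypes.put(e.getKey(),
          e.getValue() ? LoadType.INSERT_OVERWRITE : LoadType.INSERT_INTO);
    }
    return loadTypes;
  }
}
{code}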

> Multi Table INSERT statements query has a flaw for partitioned table when 
> INSERT INTO and INSERT OVERWRITE are used
> ---
>
> Key: HIVE-19186
> URL: https://issues.apache.org/jira/browse/HIVE-19186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19186.01.patch, HIVE-19186.02.patch
>
>
> One problem test case is: 
> create table intermediate(key int) partitioned by (p int) stored as orc;
> insert into table intermediate partition(p='455') select distinct key from 
> src where key >= 0 order by key desc limit 2;
> insert into table intermediate partition(p='456') select distinct key from 
> src where key is not null order by key asc limit 2;
> insert into table intermediate partition(p='457') select distinct key from 
> src where key >= 100 order by key asc limit 2;
> create table multi_partitioned (key int, key2 int) partitioned by (p int);
> from intermediate
> insert into table multi_partitioned partition(p=2) select p, key
> insert overwrite table multi_partitioned partition(p=1) select key, p;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441688#comment-16441688
 ] 

Sahil Takiar commented on HIVE-19204:
-

So should we run {{LOG.error(Throwable)}} in {{ExplainSQRewriteTask}}, 
{{ExplainTask}}, and {{ImportCommitTask}}?
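
For illustration, a hedged sketch of the pattern being discussed (simplified; not the actual ExplainTask/ImportCommitTask code, and the class name is an assumption): the task records the Throwable via setException() and logs it before returning a failure exit code, so the Driver can surface the detail to the client.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Simplified sketch, not actual Hive task code: on failure, record the
// Throwable on the task and log it, so the exit code is accompanied by a
// detailed error that the task runner / Driver can report to the client.
class SketchTask {
  private static final Logger LOG = LoggerFactory.getLogger(SketchTask.class);
  private Throwable exception;

  void setException(Throwable t) { this.exception = t; }
  Throwable getException() { return exception; }

  int execute() {
    try {
      doWork();             // hypothetical task body
      return 0;
    } catch (Throwable t) {
      setException(t);      // surfaced later by the task runner
      LOG.error("Error in task execution", t);
      return 1;
    }
  }

  private void doWork() { /* placeholder for the real task work */ }
}
{code}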

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set the exception, so the client 
> won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19235) Update golden files for Minimr tests

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19235:

Attachment: HIVE-19235.patch

> Update golden files for Minimr tests
> 
>
> Key: HIVE-19235
> URL: https://issues.apache.org/jira/browse/HIVE-19235
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-19235.patch
>
>
> stats update needed for 3 tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19235) Update golden files for Minimr tests

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-19235:
---


> Update golden files for Minimr tests
> 
>
> Key: HIVE-19235
> URL: https://issues.apache.org/jira/browse/HIVE-19235
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-19235.patch
>
>
> stats update needed for 3 tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19235) Update golden files for Minimr tests

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19235:

Status: Patch Available  (was: Open)

> Update golden files for Minimr tests
> 
>
> Key: HIVE-19235
> URL: https://issues.apache.org/jira/browse/HIVE-19235
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-19235.patch
>
>
> stats update needed for 3 tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Status: Patch Available  (was: Open)

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.:
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19184) Hive 3.0.0 release branch preparation

2018-04-17 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19184:
---
Status: Open  (was: Patch Available)

> Hive 3.0.0 release branch preparation
> -
>
> Key: HIVE-19184
> URL: https://issues.apache.org/jira/browse/HIVE-19184
> Project: Hive
>  Issue Type: Task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19184-branch-3.patch
>
>
> Need to do a bunch of things to prepare branch-3 for release, e.g.:
> * Update pom.xml to delete SNAPSHOT
> * Update .reviewboardrc
> * Remove storage-api module from the build
> * Change storage-api dependency, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441663#comment-16441663
 ] 

Hive QA commented on HIVE-19124:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
1s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} itests/hive-unit: The patch generated 5 new + 76 
unchanged - 0 fixed = 81 total (was 76) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 7 new + 210 unchanged - 1 
fixed = 217 total (was 211) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore: The patch generated 1 new + 19 
unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10280/dev-support/hive-personality.sh
 |
| git revision | master / 4cfec3e |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/whitespace-eol.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api itests/hive-unit ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10280/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: 

[jira] [Updated] (HIVE-19233) Add utility for acid 1.0 to 2.0 migration

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19233:
--
Attachment: HIVE-19233.01.patch

> Add utility for acid 1.0 to 2.0 migration
> -
>
> Key: HIVE-19233
> URL: https://issues.apache.org/jira/browse/HIVE-19233
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-19233.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19131) DecimalColumnStatsMergerTest comparison review

2018-04-17 Thread Laszlo Bodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-19131:

Status: Patch Available  (was: Open)

> DecimalColumnStatsMergerTest comparison review
> --
>
> Key: HIVE-19131
> URL: https://issues.apache.org/jira/browse/HIVE-19131
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19131.01.patch
>
>
> DecimalColumnStatsMergerTest has strange comparison logic, which needs to 
> be reviewed.
> For both the low and high values it uses compareTo in the same direction, 
> which seems to be incorrect: old.compareTo(new) > 0 -> pick the old value in 
> both cases
> {code:java}
> Decimal lowValue = aggregateData.getLowValue() != null && 
> (aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0) ? 
> aggregateData .getLowValue() : newData.getLowValue(); 
> aggregateData.setLowValue(lowValue); 
> Decimal highValue = aggregateData.getHighValue() != null && 
> (aggregateData.getHighValue().compareTo(newData.getHighValue()) > 0) ? 
> aggregateData .getHighValue() : newData.getHighValue();
> {code}
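
For comparison, a hedged sketch of the direction one would expect (assuming 
{{compareTo}} reflects natural numeric ordering; this is not the committed fix): 
the merged low value should be the smaller of the two, the merged high value the 
larger.

{code:java}
// Hedged sketch only: pick the smaller value for lowValue and the larger for highValue.
Decimal lowValue = aggregateData.getLowValue() != null
    && aggregateData.getLowValue().compareTo(newData.getLowValue()) < 0
    ? aggregateData.getLowValue() : newData.getLowValue();
aggregateData.setLowValue(lowValue);

Decimal highValue = aggregateData.getHighValue() != null
    && aggregateData.getHighValue().compareTo(newData.getHighValue()) > 0
    ? aggregateData.getHighValue() : newData.getHighValue();
aggregateData.setHighValue(highValue);
{code}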



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19131) DecimalColumnStatsMergerTest comparison review

2018-04-17 Thread Laszlo Bodor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-19131:

Attachment: HIVE-19131.01.patch

> DecimalColumnStatsMergerTest comparison review
> --
>
> Key: HIVE-19131
> URL: https://issues.apache.org/jira/browse/HIVE-19131
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-19131.01.patch
>
>
> DecimalColumnStatsMergerTest has strange comparison logic, which needs to 
> be reviewed.
> For both the low and high values it uses compareTo in the same direction, 
> which seems to be incorrect: old.compareTo(new) > 0 -> pick the old value in 
> both cases
> {code:java}
> Decimal lowValue = aggregateData.getLowValue() != null && 
> (aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0) ? 
> aggregateData .getLowValue() : newData.getLowValue(); 
> aggregateData.setLowValue(lowValue); 
> Decimal highValue = aggregateData.getHighValue() != null && 
> (aggregateData.getHighValue().compareTo(newData.getHighValue()) > 0) ? 
> aggregateData .getHighValue() : newData.getHighValue();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-19204:

Status: In Progress  (was: Patch Available)

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set such exceptions, so the 
> client won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-19204:

Attachment: (was: HIVE-19204.1.patch)

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set such exceptions, so the 
> client won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-19204:

Attachment: HIVE-19204.1.patch

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set such exceptions, so the 
> client won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-19204:

Status: Patch Available  (was: In Progress)

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set such exceptions, so the 
> client won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19194) TestDruidStorageHandler fails

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441623#comment-16441623
 ] 

Hive QA commented on HIVE-19194:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919296/HIVE-19194.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 14237 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=225)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2 (batchId=242)
org.apache.hive.service.TestDFSErrorHandling.org.apache.hive.service.TestDFSErrorHandling
 (batchId=238)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10279/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10279/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10279/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919296 - PreCommit-HIVE-Build

> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability, since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> 

[jira] [Commented] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-17 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441605#comment-16441605
 ] 

Aihua Xu commented on HIVE-19204:
-

setException doesn't print to the log file, but it propagates the exception 
to the client. LOG.error() is the way to print to the log file. 

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception and Driver.java will get the details and 
> display them to the client. But some tasks don't set such exceptions, so the 
> client won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-17 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441599#comment-16441599
 ] 

Deepak Jaiswal commented on HIVE-19195:
---

[~ashutoshc] thanks for the analysis. The last two should be easy to fix. I 
will take a look at why tez_smb_1 is giving a different plan again.

 

I have opened HIVE-19234 to track this.

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. It seems to make some assumption about 
> available memory. Consider dropping it altogether.
>  Also move tests from the llaplocal.shared list to llaplocal so that they don't 
> run in MR.
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18915:

Status: Patch Available  (was: In Progress)

patch-4: address comments.

> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch, HIVE-18915.4.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18915:

Attachment: HIVE-18915.4.patch

> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch, HIVE-18915.4.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-17 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441593#comment-16441593
 ] 

Aihua Xu commented on HIVE-18915:
-

[~stakiar] Do you know a way to force a failure? I used a unit test to cover 
that. 



> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-17 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18915:

Status: In Progress  (was: Patch Available)

> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19234) Fix flaky tests (see description)

2018-04-17 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal reassigned HIVE-19234:
-


> Fix flaky tests (see description)
> -
>
> Key: HIVE-19234
> URL: https://issues.apache.org/jira/browse/HIVE-19234
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>
> org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1]
>  is still failing with plan differences. Age : 161
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] fails with 
> exception. Going by the name it should not even run on TestCliDriver.
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] : fails 
> with exception. Going by the name it should not even run on TestCliDriver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19233) Add utility for acid 1.0 to 2.0 migration

2018-04-17 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-19233:
-


> Add utility for acid 1.0 to 2.0 migration
> -
>
> Key: HIVE-19233
> URL: https://issues.apache.org/jira/browse/HIVE-19233
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19232) results_cache_invalidation2 is failing

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-19232:
---


> results_cache_invalidation2 is failing
> --
>
> Key: HIVE-19232
> URL: https://issues.apache.org/jira/browse/HIVE-19232
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Jason Dere
>Priority: Major
>
> Fails with plan differences on both cli and minillaplocal. The plan diffs 
> look concerning since it's no longer using the cache.
> Also, it should run only on minillaplocal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reopened HIVE-19195:
-

 org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
is still failing with plan differences. Age : 161

org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] fails with 
exception. Going by the name it should not even run on TestCliDriver.
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] : fails 
with exception. Going by the name it should not even run on TestCliDriver.

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. It seems to make some assumption about 
> available memory. Consider dropping it altogether.
>  Also move tests from the llaplocal.shared list to llaplocal so that they don't 
> run in MR.
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18778) Needs to capture input/output entities in explain

2018-04-17 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441563#comment-16441563
 ] 

Naveen Gangam commented on HIVE-18778:
--

Thanks [~thejas]. I will sync up with him when he is back.

> Needs to capture input/output entities in explain
> -
>
> Key: HIVE-18778
> URL: https://issues.apache.org/jira/browse/HIVE-18778
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, 
> HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778_TestCliDriver.patch, 
> HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch
>
>
> With Sentry enabled, commands like {{explain drop table foo;}} fail with:
> {code}
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in input privileges
>  The required privileges: (state=42000,code=4)
> {code}
> Sentry fails to authorize because the ExplainSemanticAnalyzer uses an 
> instance of DDLSemanticAnalyzer to analyze the explain query.
> {code}
> BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
> sem.analyze(input, ctx);
> sem.validate()
> {code}
> The input/output entities for this query are set in the above code. 
> However, these are never set on the instance of ExplainSemanticAnalyzer 
> itself and thus are not propagated into the HookContext in the calling Driver 
> code.
> {code}
> sem.analyze(tree, ctx); --> this results in calling the above code that uses 
> DDLSA
> hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this 
> code attempts to update the HookContext with the input/output info from ESA 
> which is never set.
> {code}
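
For illustration, a hedged sketch of the kind of propagation the description points 
at (field and method names are taken from BaseSemanticAnalyzer and assumed to apply; 
this is not the actual patch): after the wrapped statement is analyzed, the explain 
analyzer copies the child analyzer's entities onto itself so that 
{{hookCtx.update(sem)}} picks them up.

{code:java}
// Hedged sketch inside ExplainSemanticAnalyzer, not the committed fix.
BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
sem.analyze(input, ctx);
sem.validate();

// Propagate the child analyzer's read/write entities to this analyzer so that
// authorization hooks (e.g. Sentry) see them via the HookContext.
inputs.addAll(sem.getInputs());
outputs.addAll(sem.getOutputs());
{code}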



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19194) TestDruidStorageHandler fails

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441537#comment-16441537
 ] 

Hive QA commented on HIVE-19194:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} druid-handler: The patch generated 13 new + 151 
unchanged - 2 fixed = 164 total (was 153) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} ql: The patch generated 0 new + 52 unchanged - 1 
fixed = 52 total (was 53) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10279/dev-support/hive-personality.sh
 |
| git revision | master / 4cfec3e |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10279/yetus/diff-checkstyle-druid-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10279/yetus/patch-asflicense-problems.txt
 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10279/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability, since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19231) Beeline generates garbled output when using UnsupportedTerminal

2018-04-17 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-19231:



> Beeline generates garbled output when using UnsupportedTerminal
> ---
>
> Key: HIVE-19231
> URL: https://issues.apache.org/jira/browse/HIVE-19231
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>
> We had a customer that was using some sort of front end that would invoke 
> beeline commands with some query files on a node that is remote to the HS2 
> node.
> So beeline runs locally on this edge node but connects to a remote HS2. Since 
> the fix made in HIVE-14342, beeline started producing garbled lines in the 
> output. Something like
> {code:java}
> ^Mnull   ^Mnull^Mnull 
>   ^Mnull00- All Occupations 
> 135185230   42270
> 11-   Management occupations  6152650 100310{code}
>  
> I haven't been able to reproduce the issue locally as I do not have their 
> system, but with some additional instrumentation I have been able to get some 
> info regarding the beeline process.
> Essentially, such an invocation causes the beeline process to run with 
> {{-Djline.terminal=jline.UnsupportedTerminal}} all the time, and that causes 
> the issue. They can run the same beeline command directly in the shell on the 
> same host and it does not cause this issue.
> PID    S   TTY  TIME COMMAND
> 44107  S    S  ?    00:00:00 bash beeline -u ...
> PID  S TTY  TIME COMMAND
> 48453  S+   S pts/4    00:00:00 bash beeline -u ...
> Somehow that process wasn't attached to any local terminal, so the check made 
> for /dev/stdin wouldn't work.
>  
> Instead, an additional check of the TTY session of the process before 
> using UnsupportedTerminal (which really should only be used for 
> backgrounded beeline sessions) seems to resolve the issue.
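
As a rough illustration of that idea (purely an assumption about what such a check 
might look like; this is not the actual fix), the UnsupportedTerminal fallback could 
be gated on whether the JVM actually has a console attached:

{code:java}
// Hedged, self-contained sketch: only force jline's UnsupportedTerminal when the
// process has no console at all (e.g. a backgrounded beeline), rather than
// whenever the /dev/stdin check fails.
public final class TerminalCheck {
  public static void main(String[] args) {
    boolean hasConsole = System.console() != null; // null when stdin/stdout are redirected
    if (!hasConsole) {
      System.setProperty("jline.terminal", "jline.UnsupportedTerminal");
    }
    System.out.println("console attached: " + hasConsole);
  }
}
{code}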



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441513#comment-16441513
 ] 

Hive QA commented on HIVE-19211:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919297/HIVE-19211.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10278/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10278/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10278/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/annotation/WebFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletRequest.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletResponse.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/classification/target/hive-classification-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceAudience$LimitedPrivate.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/classification/target/hive-classification-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability$Unstable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/ByteArrayOutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/OutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Closeable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/AutoCloseable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Flushable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(javax/xml/bind/annotation/XmlRootElement.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/commons/commons-exec/1.1/commons-exec-1.1.jar(org/apache/commons/exec/ExecuteException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/security/PrivilegedExceptionAction.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeoutException.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0/hadoop-common-3.0.0.jar(org/apache/hadoop/fs/FileSystem.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShimsSecure.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/ShimLoader.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.1.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims$WebHCatJTShim.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0/hadoop-common-3.0.0.jar(org/apache/hadoop/util/ToolRunner.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/CancellationException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/RejectedExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/SynchronousQueue.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ThreadPoolExecutor.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeUnit.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/Future.class)]]
[loading 

[jira] [Updated] (HIVE-19230) Schema column width inconsistency in Oracle

2018-04-17 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-19230:
-
Status: Patch Available  (was: Open)

[~aihuaxu] [~vihangk1] Could you please review this schema change fix? Thanks

> Schema column width inconsistency in Oracle 
> 
>
> Key: HIVE-19230
> URL: https://issues.apache.org/jira/browse/HIVE-19230
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Attachments: HIVE-19230.patch
>
>
> This is for Oracle only. Does not appear to be an issue with other DBs. When 
> you upgrade hive schema from 2.1.0 to hive 3.0.0, the width of 
> TXN_COMPONENTS.TC_TABLE is 256 and COMPLETED_TXN_COMPONENTS.CTC_TABLE is 128.
> But if you install hive 3.0 schema directly, their widths are 128 and 256 
> respectively. This is consistent with schemas for other databases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19230) Schema column width inconsistency in Oracle

2018-04-17 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-19230:
-
Attachment: HIVE-19230.patch

> Schema column width inconsistency in Oracle 
> 
>
> Key: HIVE-19230
> URL: https://issues.apache.org/jira/browse/HIVE-19230
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Attachments: HIVE-19230.patch
>
>
> This is for Oracle only. Does not appear to be an issue with other DBs. When 
> you upgrade hive schema from 2.1.0 to hive 3.0.0, the width of 
> TXN_COMPONENTS.TC_TABLE is 256 and COMPLETED_TXN_COMPONENTS.CTC_TABLE is 128.
> But if you install hive 3.0 schema directly, their widths are 128 and 256 
> respectively. This is consistent with schemas for other databases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441505#comment-16441505
 ] 

Hive QA commented on HIVE-19160:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919288/HIVE-19160.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10277/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10277/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10277/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-17 21:07:18.579
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10277/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-17 21:07:18.582
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   3d1bf34..4cfec3e  master -> origin/master
   9db29e9..624e464  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 3d1bf34 HIVE-19227 : Update golden files for negative tests
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 4cfec3e HIVE-19126: CachedStore: Use memory estimation to limit 
cache size during prewarm (Vaibhav Gumashta reviewed by Thejas Nair)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-17 21:07:24.855
+ rm -rf ../yetus_PreCommit-HIVE-Build-10277
+ mkdir ../yetus_PreCommit-HIVE-Build-10277
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10277
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10277/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/merge/DecimalColumnStatsMerger.java:31
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/merge/DecimalColumnStatsMerger.java'
 with conflicts.
Going to apply patch with: git apply -p0
error: patch failed: 
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/merge/DecimalColumnStatsMerger.java:31
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/merge/DecimalColumnStatsMerger.java'
 with conflicts.
U 
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/merge/DecimalColumnStatsMerger.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919288 - PreCommit-HIVE-Build

> Insert data into decimal column fails with Null Pointer Exception
> -
>
> Key: HIVE-19160
> URL: https://issues.apache.org/jira/browse/HIVE-19160
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19160.1.patch, HIVE-19160.2.patch, 
> HIVE-19160.3.patch
>
>
> drop table if exists testDecimal;
> create table testDecimal
> (cId TINYINT,
>  cBigInt DECIMAL,
>  cInt   DECIMAL,
>  cSmallInt  DECIMAL,
>  cTinyint   DECIMAL);
> insert into testDecimal values
> (1,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123);
> insert into testDecimal values
> (2,
>  1,
>  2,
>  3,
>  4);
> The second insert fails with null pointer 

[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441501#comment-16441501
 ] 

Hive QA commented on HIVE-17647:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919281/HIVE-17647.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 47 failed/errored test(s), 14237 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez]
 (batchId=106)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=225)
org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable
 (batchId=261)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.TestTxnCommands.testNonAcidToAcidConversion01 
(batchId=299)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 
(batchId=287)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3 
(batchId=287)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02
 (batchId=296)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion3
 (batchId=296)
org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testNonAcidToAcidConversion01
 (batchId=287)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=264)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversionVectorized
 (batchId=264)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testNonAcidToAcidVectorzied 
(batchId=287)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversion02 (batchId=287)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=287)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testNonAcidToAcidVectorzied
 (batchId=287)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversion02 
(batchId=287)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=287)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMmConversionLocks 

[jira] [Assigned] (HIVE-19230) Schema column width inconsistency in Oracle

2018-04-17 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-19230:



> Schema column width inconsistency in Oracle 
> 
>
> Key: HIVE-19230
> URL: https://issues.apache.org/jira/browse/HIVE-19230
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
>
> This is for Oracle only; it does not appear to be an issue with other DBs. When 
> you upgrade the Hive schema from 2.1.0 to 3.0.0, the width of 
> TXN_COMPONENTS.TC_TABLE is 256 and COMPLETED_TXN_COMPONENTS.CTC_TABLE is 128.
> But if you install the Hive 3.0 schema directly, their widths are 128 and 256 
> respectively, which is consistent with the schemas for other databases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19226) Extend storage-api to print timestamp values in UTC

2018-04-17 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441494#comment-16441494
 ] 

Jesus Camacho Rodriguez commented on HIVE-19226:


[~ashutoshc], Owen suggested in ORC-341 to create a boolean {{useUTC}} in 
{{TimestampColumnVector}}. Then, stringify would print the timestamp with UTC 
or the system time zone depending on that boolean. It seems like a good idea; I 
will explore it and update this issue accordingly.
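
A rough sketch of what such a flag could look like, assuming a simple boolean field on a TimestampColumnVector-like class and a stringify helper; the class and method names below are illustrative, not the actual storage-api change:

{code}
// Illustrative sketch only -- not the actual storage-api code.
public class TimestampColumnVectorSketch {
  private long[] time;            // epoch milliseconds, one entry per row
  private boolean useUTC = false; // print in UTC when true

  public void setUsingUTC(boolean useUTC) {
    this.useUTC = useUTC;
  }

  // Prints the value either in UTC or in the JVM's default time zone.
  public String stringifyValue(int row) {
    java.time.Instant instant = java.time.Instant.ofEpochMilli(time[row]);
    java.time.ZoneId zone = useUTC
        ? java.time.ZoneOffset.UTC
        : java.time.ZoneId.systemDefault();
    return java.time.LocalDateTime.ofInstant(instant, zone).toString();
  }
}
{code}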

> Extend storage-api to print timestamp values in UTC
> ---
>
> Key: HIVE-19226
> URL: https://issues.apache.org/jira/browse/HIVE-19226
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19226.patch
>
>
> Related to HIVE-12192. Create new method that prints values in UTC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18778) Needs to capture input/output entities in explain

2018-04-17 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441492#comment-16441492
 ] 

Thejas M Nair commented on HIVE-18778:
--

[~ngangam]
Daniel is away for a few days, so I doubt he will get back on this soon.


> Needs to capture input/output entities in explain
> -
>
> Key: HIVE-18778
> URL: https://issues.apache.org/jira/browse/HIVE-18778
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, 
> HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778_TestCliDriver.patch, 
> HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch
>
>
> With Sentry enabled, commands like {{explain drop table foo;}} fail with the 
> following error:
> {code}
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in input privileges
>  The required privileges: (state=42000,code=4)
> {code}
> Sentry fails to authorize because the ExplainSemanticAnalyzer uses an 
> instance of DDLSemanticAnalyzer to analyze the explain query.
> {code}
> BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
> sem.analyze(input, ctx);
> sem.validate()
> {code}
> The input/output entities for this query are set in the above code. 
> However, these are never set on the instance of ExplainSemanticAnalyzer 
> itself and thus are not propagated into the HookContext in the calling Driver 
> code.
> {code}
> sem.analyze(tree, ctx); --> this results in calling the above code that uses 
> DDLSA
> hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this 
> code attempts to update the HookContext with the input/output info from ESA 
> which is never set.
> {code}
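
One possible shape of the fix, sketched here as a fragment in the style of the snippets above: after the inner analyzer finishes, ExplainSemanticAnalyzer could copy that analyzer's entities onto itself so the Driver's HookContext picks them up. The getInputs()/getOutputs() accessors are assumed to behave as on BaseSemanticAnalyzer; this is not the actual patch.

{code}
BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
sem.analyze(input, ctx);
sem.validate();

// Sketch: propagate the inner analyzer's entities so that the later
// hookCtx.update(...) call sees the inputs/outputs Sentry needs.
inputs.addAll(sem.getInputs());
outputs.addAll(sem.getOutputs());
{code}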



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19226) Extend storage-api to print timestamp values in UTC

2018-04-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441490#comment-16441490
 ] 

Ashutosh Chauhan commented on HIVE-19226:
-

+1

> Extend storage-api to print timestamp values in UTC
> ---
>
> Key: HIVE-19226
> URL: https://issues.apache.org/jira/browse/HIVE-19226
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19226.patch
>
>
> Related to HIVE-12192. Create new method that prints values in UTC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19186) Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used

2018-04-17 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441491#comment-16441491
 ] 

Steve Yeom commented on HIVE-19186:
---

All of the roughly 30 failed tests with age 1 pass in my environment. 

> Multi Table INSERT statements query has a flaw for partitioned table when 
> INSERT INTO and INSERT OVERWRITE are used
> ---
>
> Key: HIVE-19186
> URL: https://issues.apache.org/jira/browse/HIVE-19186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19186.01.patch, HIVE-19186.02.patch
>
>
> One problem test case is: 
> create table intermediate(key int) partitioned by (p int) stored as orc;
> insert into table intermediate partition(p='455') select distinct key from 
> src where key >= 0 order by key desc limit 2;
> insert into table intermediate partition(p='456') select distinct key from 
> src where key is not null order by key asc limit 2;
> insert into table intermediate partition(p='457') select distinct key from 
> src where key >= 100 order by key asc limit 2;
> create table multi_partitioned (key int, key2 int) partitioned by (p int);
> from intermediate
> insert into table multi_partitioned partition(p=2) select p, key
> insert overwrite table multi_partitioned partition(p=1) select key, p;



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime

2018-04-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441480#comment-16441480
 ] 

Ashutosh Chauhan commented on HIVE-19009:
-

+1

> Retain and use runtime statistics during hs2 lifetime
> -
>
> Key: HIVE-19009
> URL: https://issues.apache.org/jira/browse/HIVE-19009
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, 
> HIVE-19009.03.patch, HIVE-19009.04.patch, HIVE-19009.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18690) Integrate with Spark OutputMetrics

2018-04-17 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18690:

Attachment: HIVE-18690.2.patch

> Integrate with Spark OutputMetrics
> --
>
> Key: HIVE-18690
> URL: https://issues.apache.org/jira/browse/HIVE-18690
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18690.1.patch, HIVE-18690.2.patch
>
>
> Spark has an {{OutputMetrics}} it uses to expose records / bytes written. We 
> currently don't integrate with it and the Spark UI shows a blank value for 
> output records / bytes. We have our own custom accumulators instead (like 
> {{HIVE_RECORDS_OUT}}).
> Spark exposes the {{OutputMetrics}} object inside individual tasks via the 
> {{TaskContext.get()}} method. We can use this method to access the 
> {{OutputMetrics}} object and update it.
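
A rough sketch of what that task-side update could look like. It assumes it runs inside a Spark task (so {{TaskContext.get()}} is non-null) and that the {{OutputMetrics}} setters are reachable from Hive's executor-side writer code; the wrapper class and its parameters are illustrative, not part of any actual patch.

{code}
import org.apache.spark.TaskContext;
import org.apache.spark.executor.OutputMetrics;

public final class SparkOutputMetricsSketch {
  private SparkOutputMetricsSketch() {}

  // Report records/bytes written by the current task to Spark's UI metrics.
  public static void recordWrite(long recordsWritten, long bytesWritten) {
    TaskContext context = TaskContext.get();
    if (context == null) {
      return; // not running inside a Spark task
    }
    OutputMetrics metrics = context.taskMetrics().outputMetrics();
    metrics.setRecordsWritten(recordsWritten);
    metrics.setBytesWritten(bytesWritten);
  }
}
{code}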



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18778) Needs to capture input/output entities in explain

2018-04-17 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441444#comment-16441444
 ] 

Naveen Gangam commented on HIVE-18778:
--

[~daijy] Any luck with the tests? Thanks

> Needs to capture input/output entities in explain
> -
>
> Key: HIVE-18778
> URL: https://issues.apache.org/jira/browse/HIVE-18778
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, 
> HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778_TestCliDriver.patch, 
> HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch
>
>
> With Sentry enabled, commands like {{explain drop table foo;}} fail with the 
> following error:
> {code}
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in input privileges
>  The required privileges: (state=42000,code=4)
> {code}
> Sentry fails to authorize because the ExplainSemanticAnalyzer uses an 
> instance of DDLSemanticAnalyzer to analyze the explain query.
> {code}
> BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
> sem.analyze(input, ctx);
> sem.validate()
> {code}
> The input/output entities for this query are set in the above code. 
> However, these are never set on the instance of ExplainSemanticAnalyzer 
> itself and thus are not propagated into the HookContext in the calling Driver 
> code.
> {code}
> sem.analyze(tree, ctx); --> this results in calling the above code that uses 
> DDLSA
> hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this 
> code attempts to update the HookContext with the input/output info from ESA 
> which is never set.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19141) TestNegativeCliDriver insert_into_notnull_constraint, insert_into_acid_notnull failing

2018-04-17 Thread Igor Kryvenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-19141:
-
Attachment: HIVE-19141.03.patch

> TestNegativeCliDriver insert_into_notnull_constraint, 
> insert_into_acid_notnull failing
> --
>
> Key: HIVE-19141
> URL: https://issues.apache.org/jira/browse/HIVE-19141
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vineet Garg
>Assignee: Igor Kryvenko
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19141.01.patch, HIVE-19141.02.patch, 
> HIVE-19141.03.patch
>
>
> These tests have been consistently failing for a while. I suspect HIVE-18727 
> has caused these failures. HIVE-18727 changed the code to throw ERROR instead 
> of EXCEPTION if constraints are violated. I guess Negative cli driver doesn't 
> handle errors.
> Following are full list of related failures:
> TestNegativeCliDriver.alter_notnull_constraint_violation
> TestNegativeCliDriver.insert_into_acid_notnull 
> TestNegativeCliDriver.insert_into_notnull_constraint 
> TestNegativeCliDriver.insert_multi_into_notnull 
> TestNegativeCliDriver.insert_overwrite_notnull_constraint 
> TestNegativeCliDriver.update_notnull_constraint



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-17 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441415#comment-16441415
 ] 

Hive QA commented on HIVE-17647:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
57s{color} | {color:red} ql: The patch generated 12 new + 1548 unchanged - 8 
fixed = 1560 total (was 1556) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10275/dev-support/hive-personality.sh
 |
| git revision | master / 3d1bf34 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10275/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10275/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10275/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too 
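
Returning to the quoted snippet above, a minimal sketch of the behavior the description asks for: reuse the already-open transaction, otherwise fail instead of silently opening one. The exception type and message are illustrative, not the actual change.

{noformat}
if (txnManager.isTxnOpen()) {
  mmWriteId = txnManager.getCurrentTxnId();
} else {
  // Sketch: never open (and immediately commit) a transaction here.
  throw new IllegalStateException(
      "No open transaction while converting table to MM");
}
{noformat}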

[jira] [Updated] (HIVE-19126) CachedStore: Use memory estimation to limit cache size during prewarm

2018-04-17 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-19126:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-3 and master. Thanks [~thejas].

> CachedStore: Use memory estimation to limit cache size during prewarm
> -
>
> Key: HIVE-19126
> URL: https://issues.apache.org/jira/browse/HIVE-19126
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19126.1.patch, HIVE-19126.2.patch, 
> HIVE-19126.3.patch, HIVE-19126.4.patch, HIVE-19126.5.patch, 
> HIVE-19126.5.patch, HIVE-19126.6.patch
>
>
> We can rely on 
> https://github.com/apache/hive/blob/master/llap-server/src/java/org/apache/hadoop/hive/llap/IncrementalObjectSizeEstimator.java
>  to estimate memory of SharedCache. This jira addresses the size estimation 
> during prewarm, so that we can stop when we hit the memory limit. In a 
> follow-up jira, we will work on estimation/eviction after prewarm is 
> complete, so that we can keep the frequently used tables and their partitions 
> in cache.
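
A rough sketch of the prewarm loop shape this describes: estimate each table's footprint before caching it and stop once a configured budget is exhausted. The sizing helper and cache call below are illustrative stand-ins, not the actual SharedCache / IncrementalObjectSizeEstimator API.

{code}
long maxBytes = cacheMemoryLimitBytes;   // hypothetical configured budget
long usedBytes = 0L;

for (Table table : tablesToPrewarm) {
  long tableBytes = estimateSize(table); // hypothetical estimator-based sizing
  if (usedBytes + tableBytes > maxBytes) {
    break;                               // budget exhausted, stop prewarming
  }
  sharedCache.cacheTable(table);         // hypothetical cache population call
  usedBytes += tableBytes;
}
{code}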



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky

2018-04-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441404#comment-16441404
 ] 

Sergey Shelukhin commented on HIVE-19196:
-

I can repro the failures very occasionally when running in a loop; however, I 
can see the WM events are the same in both the failed and the successful log files. 
Trying to add some more logging to see what happens... the test seems to be timing 
dependent.

> TestTriggersMoveWorkloadManager is flaky
> 
>
> Key: HIVE-19196
> URL: https://issues.apache.org/jira/browse/HIVE-19196
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Sergey Shelukhin
>Priority: Major
>
> This is a flaky test which randomly fails. Consider improving its stability.
> {code}
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
> Failing for the past 1 build (Since Failed#10161 )
> Took 2.4 sec.
> Error Message
> '"eventType" : "GET"' expected in STDERR capture, but not found.
> Stacktrace
> java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, 
> but not found.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169)
>   at 
> org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19104) When test MetaStore is started with retry the instances should be independent

2018-04-17 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441401#comment-16441401
 ] 

Sahil Takiar commented on HIVE-19104:
-

+1

> When test MetaStore is started with retry the instances should be independent
> -
>
> Key: HIVE-19104
> URL: https://issues.apache.org/jira/browse/HIVE-19104
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, 
> HIVE-19104.4.patch, HIVE-19104.5.patch, HIVE-19104.6.patch, 
> HIVE-19104.7.patch, HIVE-19104.patch
>
>
> When multiple MetaStore instances are started with 
> {{MetaStoreTestUtils.startMetaStoreWithRetry}}, they currently use the same 
> JDBC URL and warehouse directory. This can cause problems in the tests.
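
One way to make the retried instances independent, sketched with illustrative names: derive a unique Derby JDBC URL and warehouse directory per attempt before starting each instance. The configuration keys are the standard Hive ones; the counter helper and {{conf}} object are hypothetical.

{code}
int attempt = nextAttemptId();  // hypothetical per-attempt counter
String jdbcUrl = "jdbc:derby:memory:metastore_" + attempt + ";create=true";
java.nio.file.Path warehouse =
    java.nio.file.Files.createTempDirectory("hive_warehouse_" + attempt);

// Point this MetaStore instance at its own database and warehouse.
conf.set("javax.jdo.option.ConnectionURL", jdbcUrl);
conf.set("hive.metastore.warehouse.dir", warehouse.toUri().toString());
{code}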



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

