[jira] [Commented] (HIVE-18783) ALTER TABLE post-commit listener does not include the transactional listener responses

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390862#comment-16390862
 ] 

Hive QA commented on HIVE-18783:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913410/HIVE-18783.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 12952 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q
,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_x

[jira] [Resolved] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Мирон resolved HIVE-18849.
--
  Resolution: Workaround
Release Note: The user found a workaround; it is uncertain whether it fits an 
enterprise setup. The findings are limited but enough to address the issue, and 
have been shared. More will be shared later, as there appear to be further 
build failures in other modules.

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Zoltan Haindrich
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt, 
> metastore.pom.xml.working
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both produced with Maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (Maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual name of the file that was being read when the exception 
> was thrown.
>  
> The Git repository was cloned from [https://github.com/apache/hive.git] 
> overnight, yesterday into today.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390844#comment-16390844
 ] 

Мирон commented on HIVE-18849:
--

I will assign this to Zoltan so that it does not get lost.

Zoltan, I will try to create a branch and push it as a pull request. If I don't 
have "write" access, I will try to ping you. Hope it helps.

-- cheers

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Zoltan Haindrich
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt, 
> metastore.pom.xml.working
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both produced with Maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (Maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual name of the file that was being read when the exception 
> was thrown.
>  
> The Git repository was cloned from [https://github.com/apache/hive.git] 
> overnight, yesterday into today.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Мирон reassigned HIVE-18849:


Assignee: Zoltan Haindrich  (was: Мирон)

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Zoltan Haindrich
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt, 
> metastore.pom.xml.working
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both produced with Maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (Maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual name of the file that was being read when the exception 
> was thrown.
>  
> The Git repository was cloned from [https://github.com/apache/hive.git] 
> overnight, yesterday into today.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390841#comment-16390841
 ] 

Мирон commented on HIVE-18849:
--

At the moment I can only make it work by replacing the datanucleus-maven-plugin 
block in the standalone-metastore pom.xml. I have attached the changed pom.xml 
under the name standalone-metastore.pom.xml.working.



The initial thought was to incrementally change a property of the current 
datanucleus-maven-plugin configuration,

${project.build.directory}/classes/org/apache/hadoop/hive/metastore/model/*.class

but at the moment it either doesn't work, or Maven incorrectly reports that no 
work was done; the message reads:

"[WARNING] No files to run DataNucleus tool 
'org.datanucleus.enhancer.DataNucleusEnhancer'"

Though not ideal, it works, so I am unblocked. I will resolve / close the bug 
and leave it to you to decide whether to pick up the change, troubleshoot 
further, or leave it as it is.

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Мирон
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt, 
> metastore.pom.xml.working
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both produced with Maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (Maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual name of the file that was being read when the exception 
> was thrown.
>  
> The Git repository was cloned from [https://github.com/apache/hive.git] 
> overnight, yesterday into today.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Мирон updated HIVE-18849:
-
Attachment: metastore.pom.xml.working

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Мирон
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt, 
> metastore.pom.xml.working
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both produced with Maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (Maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual name of the file that was being read when the exception 
> was thrown.
>  
> The Git repository was cloned from [https://github.com/apache/hive.git] 
> overnight, yesterday into today.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works

2018-03-07 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17178:
--
Attachment: HIVE-17178.6.patch

> Spark Partition Pruning Sink Operator can't target multiple Works
> -
>
> Key: HIVE-17178
> URL: https://issues.apache.org/jira/browse/HIVE-17178
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Rui Li
>Priority: Major
> Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch, 
> HIVE-17178.3.patch, HIVE-17178.4.patch, HIVE-17178.5.patch, HIVE-17178.6.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map 
> Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated 
> if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), 
> (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2) values (1), (2), (3), 
> (4);
> alter table part_table_2 add partition (part_col=1);
> insert into table part_table_2 partition (part_col=1) values (1), (2), (3), 
> (4);
> alter table part_table_2 add partition (part_col=2);
> insert into table part_table_2 partition (part_col=2) values (1), (2), (3), 
> (4);
> explain select * from regular_table, part_table_1, part_table_2 where 
> regular_table.col = part_table_1.part_col and regular_table.col = 
> part_table_2.part_col;
> {code}
> The explain plan is
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-2
> Spark
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: regular_table
>   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE 
> Column stats: NONE
>   Filter Operator
> predicate: col is not null (type: boolean)
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> Select Operator
>   expressions: col (type: int)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
>   Spark HashTable Sink Operator
> keys:
>   0 _col0 (type: int)
>   1 _col1 (type: int)
>   2 _col1 (type: int)
>   Select Operator
> expressions: _col0 (type: int)
> outputColumnNames: _col0
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> Group By Operator
>   keys: _col0 (type: int)
>   mode: hash
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
>   Spark Partition Pruning Sink Operator
> partition key expr: part_col
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> target column name: part_col
> target work: Map 2
>   Select Operator
> expressions: _col0 (type: int)
> outputColumnNames: _col0
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> Group By Operator
>   keys: _col0 (type: int)
>   mode: hash
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
>   Spark Partition Pruning Sink Operator
> partition key expr: part_col
> Statistics: Num rows: 1 Data size: 1 Basic stats: 
> COMPLETE Column stats: NONE
> targe

[jira] [Updated] (HIVE-18832) Support change management for trashing data files from ACID tables.

2018-03-07 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-18832:
---
Attachment: HIVE-18832.1.patch

> Support change management for trashing data files from ACID tables.
> ---
>
> Key: HIVE-18832
> URL: https://issues.apache.org/jira/browse/HIVE-18832
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18832.0.patch, HIVE-18832.1.patch
>
>
> Currently, the cleaner process and DDL drop operations delete the data files. 
> The scope for supporting change management in the source warehouse for ACID 
> table operations is given below.
> 1. The cleaner process deletes older files after compaction, aborted files, 
> etc. These need to be archived to the cmroot path.
> 2. DDL operations such as drop table/partition already archive the deleted 
> files. This needs to be extended to ACID tables as well.
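
To make point 1 concrete, here is a minimal archive-before-delete sketch using 
plain Hadoop FileSystem calls; the cmroot layout and the helper itself are 
illustrative assumptions, not the actual Hive change-management implementation.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CmRecycleSketch {

  /**
   * Instead of deleting a data file outright, move it under the change-management
   * root (cmroot) so a downstream replication consumer can still copy it.
   */
  static void recycleInsteadOfDelete(Configuration conf, Path dataFile, Path cmRoot)
      throws IOException {
    FileSystem fs = dataFile.getFileSystem(conf);
    if (!fs.exists(cmRoot)) {
      fs.mkdirs(cmRoot);
    }
    // A flat layout is assumed here purely for illustration; the real layout differs.
    Path target = new Path(cmRoot, dataFile.getName());
    if (!fs.rename(dataFile, target)) {
      throw new IOException("Failed to archive " + dataFile + " to " + target);
    }
  }
}
{code}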



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390835#comment-16390835
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~anishek] You are actually right, the event is added after the loop. I guess 
what is happening is that some operation combines more than a single 
event-generating action. The first one obtains the lock, which is not released 
immediately because the transaction isn't closed, and the subsequent one 
executes with the lock held. In this particular case the thread holding the 
lock was the one executing alter_table_with_cascade, but it may simply have 
been the lucky one that managed to get the lock.

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
> List<FieldSchema> oldCols = part.getSd().getCols();
> part.getSd().setCols(newt.getSd().getCols());
> String oldPartName = 
> Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
> updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, 
> part.getValues(), oldCols, part);
> msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18783) ALTER TABLE post-commit listener does not include the transactional listener responses

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390818#comment-16390818
 ] 

Hive QA commented on HIVE-18783:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9546/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9546/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests/hcatalog-unit standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9546/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ALTER TABLE post-commit listener does not include the transactional listener 
> responses 
> ---
>
> Key: HIVE-18783
> URL: https://issues.apache.org/jira/browse/HIVE-18783
> Project: Hive
>  Issue Type: Bug
>Reporter: Na Li
>Assignee: Sergio Peña
>Priority: Major
> Attachments: HIVE-18783.1.patch
>
>
>  In HiveMetaStore, alter_table_core does NOT call the transactional listeners, 
> and the notification ID corresponding to the alter table event is NOT set in 
> the event parameters.
> {code}
> + alter_table_core
>   
>   try {
> Table oldt = this.get_table_core(dbname, name);
> this.firePreEvent(new PreAlterTableEvent(oldt, newTable, this));
> this.alterHandler.alterTable(this.getMS(), this.wh, dbname, name, 
> newTable, envContext, this);
> success = true;
> if (!this.listeners.isEmpty()) {
>   MetaStoreListenerNotifier.notifyEvent(this.listeners, 
> EventType.ALTER_TABLE, new AlterTableEvent(oldt, newTable, true, this), 
> envContext);
> }
>   } catch (NoSuchObjectException var12) {
> ex = var12;
> throw new InvalidOperationException(var12.getMessage());
>   } catch (Exception var13) {
> ex = var13;
> if (var13 instanceof MetaException) {
>  
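
The gist of the issue, sketched below with simplified stand-in types rather 
than the real metastore classes: whatever parameters the transactional 
listeners return inside the transaction (for example the generated notification 
ID) have to be handed to the post-commit listeners instead of being dropped. 
This is only an illustration of the hand-off, not the actual patch.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ListenerResponseSketch {

  /** Simplified stand-in for a metastore event listener. */
  interface AlterTableListener {
    /** May return parameters to attach to the event, e.g. a DB_NOTIFICATION_EVENT_ID. */
    Map<String, String> onAlterTable(String db, String table, Map<String, String> parameters);
  }

  static void alterTableSketch(String db, String table,
                               List<AlterTableListener> transactionalListeners,
                               List<AlterTableListener> postCommitListeners) {
    // 1. Inside the metastore transaction: transactional listeners run first and may
    //    produce parameters such as the notification ID they generated.
    Map<String, String> txnResponses = new HashMap<String, String>();
    for (AlterTableListener l : transactionalListeners) {
      txnResponses.putAll(l.onAlterTable(db, table, new HashMap<String, String>()));
    }

    // 2. After the commit: forward those responses to the post-commit listeners.
    //    The bug is that alter_table_core notifies them without this map, so the
    //    notification ID never reaches them.
    for (AlterTableListener l : postCommitListeners) {
      l.onAlterTable(db, table, txnResponses);
    }
  }
}
{code}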

[jira] [Commented] (HIVE-18898) Fix NPEs in HiveMetastore.dropPartition method

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390793#comment-16390793
 ] 

Hive QA commented on HIVE-18898:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913404/HIVE-18898.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 13348 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[udf_invalid.q,authorization_uri_export.q,default_constraint_complex_default_value.q,druid_datasource2.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,default_constraint_invalid_type.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,merge_constraint_notnull.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,udf_instr_wrong_args_len.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,insert_overwrite_notnull_constraint.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,udaf_collect_set_unsupported.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,invalid_select_column.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,merge_negative_3.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,create_external_with_notnull_constraint.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,add_partition_with_whitelist.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,exim_03_nonpart_noncompat_colschema.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,drop_partition_filter_failure.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,authorization_create_macro1.q,archive1.q,subquery_multiple_cols_in_select.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,compare_string_bigint_2.q,udf_greatest_error_2.q,authorization_view_6.q,show_tablestatus.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,char_pad_convert_fail0.q,udf_map_values
_arg_type.q,alter_view_failure6_2.q,alter_partition_change_col_nonexist.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ptf_window_boundaries.q,ctasnullcol.q,input_part0_neg_2.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_loaded.q,msck_repair_1.q,orc_change_fileformat_acid.q,udf_nonexistent_resource.q,exim_19_external_over_existing.q,serde_regex2.q,msck_repair_2.q,exim_06_nonpart_noncompat_storage.q,illegal_partition_type4.q,udf_sort_array_by_wrong1.q,create_or_replace_view5.q,windowing_leadlag_in_udaf.q,avro_decimal.q,materialized_view_updat

[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390780#comment-16390780
 ] 

anishek commented on HIVE-18885:


[~akolb] notifications are not created in the loop by the statement below 
{code}
msdb.alterPartition(dbname, name, part.getValues(), part);
{code}

which is part of *HiveAlterHandler.alterTable*. Notifications are created at 
the end of all the operations, so the lock is taken towards the end, just 
before commit. As you said, for a large query there may be a lot of 
notifications that need to be created; creating them takes a lock (towards the 
end, near commit) on the dbNotification sequence and holds it until that 
transaction commits.

For replication we depend on the following: given a time-ordered sequence of 
transaction commits, the notifications for these transactions have to follow 
the same order in their event sequence. This is achieved by the use of the lock.
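
To make the locking concrete, here is a minimal JDBC sketch of that pattern. 
The table and column names echo the metastore schema (NOTIFICATION_SEQUENCE, 
NEXT_EVENT_ID, NOTIFICATION_LOG), but the schema is simplified and the helper 
is hypothetical; the point is only that the SELECT ... FOR UPDATE row lock 
taken for the first event is held until the enclosing transaction commits.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class NotificationAppendSketch {

  /**
   * Appends one event inside the caller's already-open transaction (autocommit off).
   * The SELECT ... FOR UPDATE row-locks NOTIFICATION_SEQUENCE; the lock is released
   * only when the caller commits, so with a cascaded alter that appends one event per
   * partition, every other writer waits until that whole transaction commits.
   */
  static void appendEvent(Connection conn, String message) throws SQLException {
    long nextId;
    try (PreparedStatement select = conn.prepareStatement(
        "SELECT NEXT_EVENT_ID FROM NOTIFICATION_SEQUENCE FOR UPDATE");
         ResultSet rs = select.executeQuery()) {
      rs.next();
      nextId = rs.getLong(1);
    }
    try (PreparedStatement update = conn.prepareStatement(
        "UPDATE NOTIFICATION_SEQUENCE SET NEXT_EVENT_ID = ?")) {
      update.setLong(1, nextId + 1);
      update.executeUpdate();
    }
    try (PreparedStatement insert = conn.prepareStatement(
        "INSERT INTO NOTIFICATION_LOG (EVENT_ID, MESSAGE) VALUES (?, ?)")) {
      insert.setLong(1, nextId);
      insert.setString(2, message);
      insert.executeUpdate();
    }
    // Deliberately no commit here: EVENT_ID order matches commit order precisely
    // because the sequence row stays locked until the surrounding transaction commits.
  }
}
{code}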


> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
> List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
> List<FieldSchema> oldCols = part.getSd().getCols();
> part.getSd().setCols(newt.getSd().getCols());
> String oldPartName = 
> Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
> updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, 
> part.getValues(), oldCols, part);
> msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18811) Fix desc table, column comments are not displayed

2018-03-07 Thread tartarus (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390779#comment-16390779
 ] 

tartarus commented on HIVE-18811:
-

[~pvary]  Sorry, I overlooked Beeline earlier. I will fix it.

> Fix desc table, column comments are not displayed
> -
>
> Key: HIVE-18811
> URL: https://issues.apache.org/jira/browse/HIVE-18811
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1, 2.3.2
> Environment: CentOS 6.5
> Hive-1.2.1
> Hive-3.0.0
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: patch
> Fix For: 3.0.0
>
> Attachments: HIVE_18811.patch, changes
>
>
> When a column comment contains \t, 
> e.g.: CREATE TABLE `zhangmang_test`(`name` string COMMENT 'name\tzm');
> then executing desc zhangmang_test 
> returns: name                string              name
> Because \t is the column separator, we should translate (escape) it.
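
A small illustration of the escaping idea (a sketch of the approach, not the 
attached patch): the desc output uses a tab as the column separator, so any tab 
or newline inside the comment has to be replaced before the row is printed.

{code:java}
public class CommentEscapeSketch {

  /** Replace separator characters inside a column comment with visible escapes. */
  static String escapeComment(String comment) {
    if (comment == null) {
      return "";
    }
    return comment.replace("\t", "\\t").replace("\n", "\\n");
  }

  public static void main(String[] args) {
    // Raw comment "name\tzm": the embedded tab would split the desc row into an
    // extra column; after escaping, the whole comment stays in the comment column.
    System.out.println("name" + "\t" + "string" + "\t" + escapeComment("name\tzm"));
  }
}
{code}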



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18898) Fix NPEs in HiveMetastore.dropPartition method

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390766#comment-16390766
 ] 

Hive QA commented on HIVE-18898:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} standalone-metastore: The patch generated 1 new + 161 
unchanged - 2 fixed = 162 total (was 163) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9545/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9545/yetus/diff-checkstyle-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9545/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9545/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix NPEs in HiveMetastore.dropPartition method
> --
>
> Key: HIVE-18898
> URL: https://issues.apache.org/jira/browse/HIVE-18898
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Attachments: HIVE-18898.1.patch
>
>
> The TestDropPartitions tests revealed that an NPE is thrown if the 
> dropPartition(String db_name, String tbl_name, List<String> part_vals, 
> PartitionDropOptions options) method is called with null options or with a 
> part_vals list which contains null elements.
> Example: NPE is thrown in the following test cases
>  * testDropPartitionNullPartDropOptions
>  * testDropPartitionNullVal
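
A defensive sketch of the two failing cases, with a simplified stand-in for 
PartitionDropOptions; how the real patch reports these conditions may differ, 
so treat this only as an illustration of the null handling.

{code:java}
import java.util.List;

public class DropPartitionGuardSketch {

  /** Simplified stand-in for PartitionDropOptions. */
  static class DropOptions {
    boolean deleteData = true;
    boolean purgeData = false;
  }

  static void dropPartition(String dbName, String tblName,
                            List<String> partVals, DropOptions options) {
    // Case 1 (testDropPartitionNullPartDropOptions): null options fall back to
    // defaults instead of being dereferenced.
    DropOptions opts = (options != null) ? options : new DropOptions();

    // Case 2 (testDropPartitionNullVal): null partition values are rejected with a
    // clear error instead of surfacing later as a NullPointerException.
    if (partVals == null) {
      throw new IllegalArgumentException("Partition values must not be null");
    }
    for (String val : partVals) {
      if (val == null) {
        throw new IllegalArgumentException(
            "Partition value must not be null for " + dbName + "." + tblName);
      }
    }
    // ... locate and drop the partition, honouring opts.deleteData / opts.purgeData ...
  }
}
{code}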



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390752#comment-16390752
 ] 

Hive QA commented on HIVE-18140:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913402/HIVE-18140.01wip03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 204 failed/errored test(s), 13747 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] 
(batchId=250)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alterColumnStatsPart] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[analyze_table_null_partition]
 (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_part] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_11] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_12] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_1] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_3] 
(batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_4] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_7] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark1] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark2] 
(batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark3] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_2] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_3] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_4] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_6] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_7] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_8] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin10] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin11] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin12] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin5] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin8] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin9] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative2] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl_dp] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_6] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input25] (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert1_overwrite_partitions]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert2_overwrite_partitions]
 (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_1]
 (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_2]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_3]
 (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition2]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition3]
 (batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullgroup3] (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullgroup5] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_10] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats12] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats13]

[jira] [Work started] (HIVE-18751) ACID table scan through get_splits UDF doesn't receive ValidWriteIdList configuration.

2018-03-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18751 started by Sankar Hariappan.
---
> ACID table scan through get_splits UDF doesn't receive ValidWriteIdList 
> configuration.
> --
>
> Key: HIVE-18751
> URL: https://issues.apache.org/jira/browse/HIVE-18751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, UDF
> Fix For: 3.0.0
>
>
> Per-table write IDs (HIVE-18192) have replaced the global transaction ID with 
> a write ID for versioning data files in ACID/MM tables.
> To ensure snapshot isolation, we need to generate a ValidWriteIdList for the 
> given txn/table and use it when scanning the ACID/MM tables.
> In the case of the get_splits UDF, the ACID table scan query it runs does not 
> receive the list properly through the configuration 
> (hive.txn.tables.valid.writeids) and hence throws an exception. 
> TestAcidOnTez.testGetSplitsLocks is the test failing because of this and needs 
> to be fixed.
>  
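
A small sketch of the failure mode described above; the configuration key is 
quoted from the description, while the helper and the exception choice are 
illustrative assumptions.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class WriteIdConfCheckSketch {

  /**
   * The ACID scan expects the table's ValidWriteIdList to be present in the job
   * configuration. When the get_splits UDF builds the configuration without it,
   * the lookup below comes back empty and the scan fails.
   */
  static String requireValidWriteIds(Configuration conf, String fullTableName) {
    String writeIds = conf.get("hive.txn.tables.valid.writeids");
    if (writeIds == null || writeIds.isEmpty()) {
      throw new IllegalStateException(
          "No ValidWriteIdList found in the configuration for table " + fullTableName);
    }
    return writeIds;
  }
}
{code}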



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390702#comment-16390702
 ] 

Hive QA commented on HIVE-18140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 24 new + 77 unchanged - 6 
fixed = 101 total (was 83) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
50s{color} | {color:red} ql generated 1 new + 99 unchanged - 1 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9544/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9544/yetus/diff-checkstyle-ql.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9544/yetus/diff-javadoc-javadoc-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9544/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9544/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01wip01.patch, HIVE-18140.01wip03.patch
>
>
> suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces estimates for the whole 
> partitioned table: {{RC=10,DS=1K}}
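
A tiny worked example of the estimate going wrong (plain Java, not Hive code): 
only the partitions that carry basic stats contribute, so the whole table ends 
up reported with part1's numbers.

{code:java}
import java.util.Arrays;
import java.util.List;

public class PartitionStatsSketch {

  static final class PartStats {
    final long rowCount;  // -1 means "no basic stats"
    final long dataSize;  // -1 means "no basic stats"
    PartStats(long rowCount, long dataSize) {
      this.rowCount = rowCount;
      this.dataSize = dataSize;
    }
  }

  public static void main(String[] args) {
    // part1 has basic stats (RC=10, DS=1K); the other partitions have none.
    List<PartStats> parts = Arrays.asList(
        new PartStats(10, 1024), new PartStats(-1, -1), new PartStats(-1, -1));

    // Summing only the partitions that do have stats reproduces the bad estimate:
    // the table-level numbers collapse to RC=10 / DS=1K even though the partitions
    // without stats may hold many rows.
    long rowCount = 0, dataSize = 0;
    for (PartStats p : parts) {
      if (p.rowCount >= 0) {
        rowCount += p.rowCount;
        dataSize += p.dataSize;
      }
    }
    System.out.println("estimated RC=" + rowCount + ", DS=" + dataSize);  // RC=10, DS=1024
  }
}
{code}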



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390694#comment-16390694
 ] 

Sankar Hariappan commented on HIVE-18864:
-

All test failures are due to a setup issue that hit an OOM error; the tests 
pass locally.

Attaching the same 02.patch again to re-run the tests.

> ValidWriteIdList snapshot seems incorrect if obtained after allocating 
> writeId by current transaction.
> --
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18864.01.patch, HIVE-18864.02.patch
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Consider the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So, the open/aborted list of ValidWriteIdList needs to be 
> rebuilt based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be 
> marked as open. In this example, *ValidWriteIdList(open:6, write_HWM=7)* should 
> be generated.{color}
> cc [~ekoifman], [~thejas]
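
The rule in the last paragraph, as a plain-Java sketch (simplified data 
structures, not the metastore code): any write ID allocated by a transaction 
above the reader's txn high-water mark is put back on the open list, which 
yields ValidWriteIdList(open:6, write_HWM=7) for the scenario above.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WriteIdSnapshotSketch {

  /** Write IDs allocated by transactions above the reader's txn HWM must be treated as open. */
  static List<Long> openWriteIds(Map<Long, Long> writeIdToTxnId, long txnHighWaterMark) {
    List<Long> open = new ArrayList<>();
    for (Map.Entry<Long, Long> e : writeIdToTxnId.entrySet()) {
      if (e.getValue() > txnHighWaterMark) {
        open.add(e.getKey());
      }
    }
    return open;
  }

  public static void main(String[] args) {
    Map<Long, Long> writeIdToTxnId = new TreeMap<>();
    writeIdToTxnId.put(5L, 9L);   // committed before the reader (txn 10) opened
    writeIdToTxnId.put(6L, 11L);  // allocated by txn 11 > HWM 10, so it must stay open
    writeIdToTxnId.put(7L, 10L);  // the reader's own write
    System.out.println(openWriteIds(writeIdToTxnId, 10L));  // prints [6]
  }
}
{code}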



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Status: Patch Available  (was: Open)

> ValidWriteIdList snapshot seems incorrect if obtained after allocating 
> writeId by current transaction.
> --
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18864.01.patch, HIVE-18864.02.patch
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Consider the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So, the open/aborted list of ValidWriteIdList needs to be 
> rebuilt based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be 
> marked as open. In this example, *ValidWriteIdList(open:6, write_HWM=7)* should 
> be generated.{color}
> cc [~ekoifman], [~thejas]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Attachment: (was: HIVE-18864.02.patch)

> ValidWriteIdList snapshot seems incorrect if obtained after allocating 
> writeId by current transaction.
> --
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18864.01.patch, HIVE-18864.02.patch
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Consider the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So, the open/aborted list of ValidWriteIdList needs to be 
> rebuilt based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be 
> marked as open. In this example, *ValidWriteIdList(open:6, write_HWM=7)* should 
> be generated.{color}
> cc [~ekoifman], [~thejas]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Attachment: HIVE-18864.02.patch

> ValidWriteIdList snapshot seems incorrect if obtained after allocating 
> writeId by current transaction.
> --
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18864.01.patch, HIVE-18864.02.patch
>
>
> For multi-statement txns, it is possible that write on a table happens after 
> a read. Let's see the below scenario.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So the open/aborted list of ValidWriteIdList needs to be 
> rebuilt based on txn_HWM: any writeId allocated by a txnId > txn_HWM should be 
> marked as open. In this example, *ValidWriteIdList(open:6, write_HWM=7)* 
> should be generated.{color}
> {color:#33}cc{color} [~ekoifman], [~thejas]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Status: Open  (was: Patch Available)

> ValidWriteIdList snapshot seems incorrect if obtained after allocating 
> writeId by current transaction.
> --
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18864.01.patch, HIVE-18864.02.patch
>
>
> For multi-statement txns, it is possible that write on a table happens after 
> a read. Let's see the below scenario.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So the open/aborted list of ValidWriteIdList needs to be 
> rebuilt based on txn_HWM: any writeId allocated by a txnId > txn_HWM should be 
> marked as open. In this example, *ValidWriteIdList(open:6, write_HWM=7)* 
> should be generated.{color}
> {color:#33}cc{color} [~ekoifman], [~thejas]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390687#comment-16390687
 ] 

Hive QA commented on HIVE-18140:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913402/HIVE-18140.01wip03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 200 failed/errored test(s), 13747 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] 
(batchId=250)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alterColumnStatsPart] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[analyze_table_null_partition]
 (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_part] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_11] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_12] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_1] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_3] 
(batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_4] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_7] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark1] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark2] 
(batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark3] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_2] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_3] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_4] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_6] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_7] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_8] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin10] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin11] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin12] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin5] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin8] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin9] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative2] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl_dp] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_6] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input25] (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert1_overwrite_partitions]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert2_overwrite_partitions]
 (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_1]
 (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_2]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_oneskew_3]
 (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition2]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition3]
 (batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_dynamic_partition] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullgroup3] (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullgroup5] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_10] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats12] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats13]

[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-07 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Attachment: (was: HIVE-18859.patch.2)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.2.patch, HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command just fails on the 
> thrift metastore side when a runtime exception occurs (the exception can be 
> seen in the metastore log), but the hive execution engine keeps waiting for a 
> response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli, passing --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with a null exception message)
>  
> I have uploaded a patch with the fix: it keeps handling the checked 
> MetaException and wraps runtime (unchecked) exceptions in a TException, which 
> fixes the problem (a minimal sketch of the idea follows below). Please review 
> and suggest if there is a better way of handling this issue.
>  
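
A minimal sketch of the wrapping described above, using a hypothetical helper rather than the attached patch: keep the existing MetaException handling and convert runtime exceptions into a TException so the thrift client receives an error instead of hanging.

{code:java}
import java.util.concurrent.Callable;

import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.thrift.TException;

// Hypothetical helper (not the attached patch) illustrating the approach:
// keep throwing MetaException as before, but wrap unexpected runtime
// exceptions in a TException so the client gets a response.
public class HandlerExceptionSketch {

  static <T> T runAndWrap(Callable<T> body) throws TException {
    try {
      return body.call();
    } catch (MetaException e) {
      throw e;                  // checked metastore error: already in the signature
    } catch (RuntimeException e) {
      throw new TException(e);  // previously dropped; now propagated to the client
    } catch (Exception e) {
      throw new TException(e);  // any other checked exception from the body
    }
  }
}
{code}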



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-07 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Attachment: HIVE-18859.2.patch

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.2.patch, HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command just fails on the 
> thrift metastore side when a runtime exception occurs (the exception can be 
> seen in the metastore log), but the hive execution engine keeps waiting for a 
> response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli, passing --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with a null exception message)
>  
> I have uploaded a patch with the fix: it keeps handling the checked 
> MetaException and wraps runtime (unchecked) exceptions in a TException, which 
> fixes the problem. Please review and suggest if there is a better way of 
> handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-07 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Attachment: HIVE-18859.patch.2

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.2.patch, HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command just fails on the 
> thrift metastore side when a runtime exception occurs (the exception can be 
> seen in the metastore log), but the hive execution engine keeps waiting for a 
> response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli, passing --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with a null exception message)
>  
> I have uploaded a patch with the fix: it keeps handling the checked 
> MetaException and wraps runtime (unchecked) exceptions in a TException, which 
> fixes the problem. Please review and suggest if there is a better way of 
> handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-03-07 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390680#comment-16390680
 ] 

Sahil Takiar commented on HIVE-18883:
-

Thanks for the tip [~pvary]! Helped a lot. What do you think of my patch? It 
works when running YETUS locally as described in the wiki you linked above.

Implementation:
* This patch just curls the distribution and sets {{FINDBUGS_HOME}} inside the 
{{yetus-wrapper.sh}} file
* The code to download the FindBugs distribution is based on the commands to 
download the YETUS distribution (downloads FindBugs to the same place as YETUS 
is downloaded), with a few modifications
* One thing I couldn't understand is that when I run {{yetus-wrapper.sh}} 
locally it downloads everything to the {{patchprocess}} directory, but I 
couldn't find the corresponding directory on the ptest server. Any idea where 
it is? Does the YETUS distribution get downloaded for each YETUS run? In which 
case FindBugs will be downloaded each time too. 

Some other notes while working on this:
* It seems most of the other Apache projects that use YETUS use some sort of 
YETUS-Docker integration that downloads the FindBugs dependency
* Ideally, we would do the same thing and we could just do {{apt-get install 
findbugs}} and set the {{FINDBUGS_HOME}} appropriately (looks like HBase 
manually curls the FindBugs installation)
* It looks like each pre-commit run of these Apache projects builds the 
Dockerfile, so it downloads the findbugs distro during each run
* So, overall, I think we are still in line with what other projects are doing, 
we just don't use docker (i.e. we download the distribution during each 
pre-commit run)

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-03-07 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18883:

Attachment: (was: HIVE-18883.1.patch)

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-03-07 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18883:

Attachment: HIVE-18883.1.patch

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.04.patch

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-03-07 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18883:

Status: Patch Available  (was: Open)

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18883) Add findbugs to yetus pre-commit checks

2018-03-07 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18883:

Attachment: HIVE-18883.1.patch

> Add findbugs to yetus pre-commit checks
> ---
>
> Key: HIVE-18883
> URL: https://issues.apache.org/jira/browse/HIVE-18883
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18883.1.patch
>
>
> We should enable FindBugs for our YETUS pre-commit checks, this will help 
> overall code quality and should decrease the overall number of bugs in Hive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.04.patch

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390624#comment-16390624
 ] 

Hive QA commented on HIVE-18140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 24 new + 77 unchanged - 6 
fixed = 101 total (was 83) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} ql generated 1 new + 99 unchanged - 1 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9543/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9543/yetus/diff-checkstyle-ql.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9543/yetus/diff-javadoc-javadoc-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9543/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9543/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01wip01.patch, HIVE-18140.01wip03.patch
>
>
> suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces estimates for the whole 
> partitioned table: {{RC=10,DS=1K}} (see the illustration below)
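
A hedged illustration of the mixed-stats failure mode, not the actual StatsUtils code: if the aggregation sums only the partitions that do have stats without checking coverage, one partition's numbers stand in for the whole table.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.OptionalLong;

// Hedged illustration (not the actual StatsUtils logic): aggregate partition
// row counts, but report "unknown" unless every partition actually has stats.
public class PartitionStatsSketch {

  static class PartStats {
    final long rowCount;
    final boolean hasBasicStats;
    PartStats(long rowCount, boolean hasBasicStats) {
      this.rowCount = rowCount;
      this.hasBasicStats = hasBasicStats;
    }
  }

  static OptionalLong tableRowCount(List<PartStats> parts) {
    long total = 0;
    for (PartStats p : parts) {
      if (!p.hasBasicStats) {
        // Mixed case: at least one partition lacks basic stats, so summing the
        // rest (e.g. just part1's RC=10) would understate the table. Bail out.
        return OptionalLong.empty();
      }
      total += p.rowCount;
    }
    return OptionalLong.of(total);
  }

  public static void main(String[] args) {
    List<PartStats> parts = Arrays.asList(
        new PartStats(10, true),     // part1: RC=10
        new PartStats(0, false));    // another partition with no basic stats
    System.out.println(tableRowCount(parts)); // OptionalLong.empty, not 10
  }
}
{code}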



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390604#comment-16390604
 ] 

Hive QA commented on HIVE-18861:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913465/HIVE-18861.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12952 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q
,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_xpath4

[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-03-07 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-18871:
--
Attachment: HIVE-18871.2.patch

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the 
> error log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli

[jira] [Commented] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.

2018-03-07 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390579#comment-16390579
 ] 

Mithun Radhakrishnan commented on HIVE-14792:
-

I've still not been able to devote time to this. Sorry.

It makes no sense to delay you on this. The latest patch on this JIRA has the 
fix. The only things left to change are the failing tests and disabling the 
feature by default.

I'd be obliged if someone else picked this up. I'll update here if I'm able to 
make any progress before then.

> AvroSerde reads the remote schema-file at least once per mapper, per table 
> reference.
> -
>
> Key: HIVE-14792
> URL: https://issues.apache.org/jira/browse/HIVE-14792
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Major
>  Labels: TODOC2.2, TODOC2.4
> Fix For: 3.0.0, 2.4.0, 2.2.1
>
> Attachments: HIVE-14792.1.patch, HIVE-14792.3.patch
>
>
> Avro tables that use "external" schema files stored on HDFS can cause 
> excessive calls to {{FileSystem::open()}}, especially for queries that spawn 
> large numbers of mappers.
> This is because of the following code in {{AvroSerDe::initialize()}}:
> {code:title=AvroSerDe.java|borderStyle=solid}
> public void initialize(Configuration configuration, Properties properties) 
> throws SerDeException {
> // ...
> if (hasExternalSchema(properties)
> || columnNameProperty == null || columnNameProperty.isEmpty()
> || columnTypeProperty == null || columnTypeProperty.isEmpty()) {
>   schema = determineSchemaOrReturnErrorSchema(configuration, properties);
> } else {
>   // Get column names and sort order
>   columnNames = Arrays.asList(columnNameProperty.split(","));
>   columnTypes = 
> TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
>   schema = getSchemaFromCols(properties, columnNames, columnTypes, 
> columnCommentProperty);
>  
> properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(),
>  schema.toString());
> }
> // ...
> }
> {code}
> For tables using {{avro.schema.url}}, every time the SerDe is initialized 
> (i.e. at least once per mapper), the schema file is read remotely. For 
> queries with thousands of mappers, this leads to a stampede to the handful 
> (3?) datanodes that host the schema-file. In the best case, this causes 
> slowdowns.
> It would be preferable to distribute the Avro-schema to all mappers as part 
> of the job-conf. The alternatives aren't exactly appealing:
> # One can't rely solely on the {{column.list.types}} stored in the Hive 
> metastore. (HIVE-14789).
> # {{avro.schema.literal}} might not always be usable, because of the 
> size-limit on table-parameters. The typical size of the Avro-schema file is 
> between 0.5-3MB, in my limited experience. Bumping the max table-parameter 
> size isn't a great solution.
> If the schema file referenced by {{avro.schema.url}} were read during 
> query-planning, and made available as part of the table-properties (but not 
> serialized into the metastore), the downstream logic would remain largely intact. I have a patch 
> that does this.
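
A hedged sketch of the approach described in the last paragraph, using a hypothetical planner-side helper rather than the attached patch: read the {{avro.schema.url}} file once during query planning and inject the schema text as {{avro.schema.literal}} into the job-side table properties, so mappers no longer open the remote file.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.Scanner;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical planner-side helper (not the attached patch): fetch the schema
// pointed to by avro.schema.url once, during planning, and set it as
// avro.schema.literal in the (non-persisted) table properties.
public class AvroSchemaPropagationSketch {

  static void inlineAvroSchema(Configuration conf, Properties tableProps)
      throws IOException {
    String url = tableProps.getProperty("avro.schema.url");
    if (url == null || tableProps.getProperty("avro.schema.literal") != null) {
      return; // nothing to do, or a literal schema is already present
    }
    Path schemaPath = new Path(url);
    FileSystem fs = schemaPath.getFileSystem(conf);
    try (InputStream in = fs.open(schemaPath);
         Scanner scanner =
             new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
      String schemaJson = scanner.hasNext() ? scanner.next() : "";
      // Only set on the job-side copy; this is not written to the metastore.
      tableProps.setProperty("avro.schema.literal", schemaJson);
    }
  }
}
{code}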



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18264) CachedStore: Store cached partitions/col stats within the table cache and make prewarm non-blocking

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390573#comment-16390573
 ] 

Alexander Kolbasov commented on HIVE-18264:
---

[~vgumashta] Thanks, I will take a look at all the changes. I'll need a few 
days for that.

> CachedStore: Store cached partitions/col stats within the table cache and 
> make prewarm non-blocking
> ---
>
> Key: HIVE-18264
> URL: https://issues.apache.org/jira/browse/HIVE-18264
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18264.1.patch, HIVE-18264.2.patch, 
> HIVE-18264.3.patch, HIVE-18264.4.patch, HIVE-18264.5.patch
>
>
> Currently we have a separate cache for partitions and partition col stats 
> which results in some calls iterating through each of these for 
> retrieving/updating. For example, to modify a partition col stat, currently 
> we need to lock table, partition and partition col stats caches which are all 
> separate hashmaps. We can get better performance by organizing the caches 
> hierarchically. For example, we can have a partition, partition col stats and 
> table col stats cache per table to improve on the previous mechanisms. This 
> will also result in better concurrency, since now instead of locking the 
> whole cache, we can selectively lock the table cache and modify multiple 
> tables in parallel. 
> In addition, currently, the prewarm mechanism populates all the caches 
> initially (it skips tables that do not pass whitelist/blacklist filter) and 
> it is a blocking call. This patch also makes prewarm non-blocking so that the 
> calls for tables that are already cached can be served from the memory and 
> the ones that are not can be served from the rdbms. 
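
A hedged structural sketch of the hierarchy the description proposes, not the actual CachedStore code: one wrapper object per table holding its partitions and column stats behind its own lock, so updates to different tables do not contend.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged structural sketch (not the actual CachedStore implementation): keep a
// per-table cache object with its own lock, so modifying one table's partition
// column stats no longer locks the global partition / col-stats maps.
public class TableCacheSketch {

  static class TableCache {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final Map<String, Object> partitions = new ConcurrentHashMap<>();         // partName -> Partition
    final Map<String, Object> partitionColStats = new ConcurrentHashMap<>();  // partName/col -> stats
    final Map<String, Object> tableColStats = new ConcurrentHashMap<>();      // col -> stats
  }

  // dbName.tableName -> per-table cache; two tables can be updated in parallel.
  private final Map<String, TableCache> tables = new ConcurrentHashMap<>();

  void updatePartitionColStat(String tableKey, String statKey, Object stat) {
    TableCache tc = tables.computeIfAbsent(tableKey, k -> new TableCache());
    tc.lock.writeLock().lock();          // lock only this table's cache
    try {
      tc.partitionColStats.put(statKey, stat);
    } finally {
      tc.lock.writeLock().unlock();
    }
  }
}
{code}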



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390563#comment-16390563
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~vihangk1] +1 for aggregated notifications.

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390562#comment-16390562
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~vihangk1] Non-transactional listeners do not create notifications - they are 
used to do some extra work. For example, in the case of Sentry they are used to 
synchronize events between HMS and Sentry. Even though they may never execute, 
nothing really depends on them. So with the current design we should not have 
missed notifications. Note that before the {{big fat lock}} fix it was possible 
to get multiple notifications with the same ID, and we also observed the stored 
notification value go backwards.

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18888) Replace synchronizedMap with ConcurrentHashMap

2018-03-07 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-18888:
--
Attachment: HIVE-18888.02.patch

> Replace synchronizedMap with ConcurrentHashMap
> --
>
> Key: HIVE-18888
> URL: https://issues.apache.org/jira/browse/HIVE-18888
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0, 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-18888.01.patch, HIVE-18888.02.patch
>
>
> There are a bunch of places that use Collections.synchronizedMap where 
> ConcurrentHashMap would be better. We should search/replace those uses (a 
> small before/after example follows below).
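
A small before/after illustration of the replacement; this is a generic example, not a specific call site in Hive.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic before/after illustration (not a specific Hive call site):
// ConcurrentHashMap avoids the single monitor that synchronizedMap wraps
// around every operation, and offers atomic compute/merge/putIfAbsent helpers.
public class MapReplacementSketch {

  // Before: every get/put serializes on one lock.
  static final Map<String, Long> before =
      Collections.synchronizedMap(new HashMap<>());

  // After: lock-striped map with atomic update operations.
  static final Map<String, Long> after = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    // merge() is atomic on ConcurrentHashMap, so counters need no external lock.
    after.merge("queries", 1L, Long::sum);
    System.out.println(after);
  }
}
{code}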



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390552#comment-16390552
 ] 

Hive QA commented on HIVE-18861:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9542/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9542/yetus/patch-asflicense-problems.txt
 |
| modules | C: druid-handler U: druid-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9542/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating 
> classpath problems on hadoop 3.x
> 
>
> Key: HIVE-18861
> URL: https://issues.apache.org/jira/browse/HIVE-18861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HIVE-18861-001.patch, HIVE-18861-001.patch, 
> HIVE-18861.patch, HIVE-18861.patch
>
>
> druid-hdfs-storage JAR is transitively pulling in hadoop-aws JAR 2.7.3, which 
> creates classpath problems as a set of aws-sdk 1.10.77 JARs get on the CP, 
> even with Hadoop 3 & its move to a full aws-sdk-bundle JAR.
> Two options
> # exclude the dependency
> # force it up to whatever ${hadoop.version} is, so make it consistent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18849) Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: java.lang.NullPointerException

2018-03-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-18849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390548#comment-16390548
 ] 

Мирон commented on HIVE-18849:
--

After adjusting a few details to complete the switch to Java 9, I had to modify 
the parent pom.xml:
 # add a dependency on javax.annotation-api
 # switch to the javac compiler

> Java 9 Building "mvn clean package -Pdist -rf :hive-common" get Caused by: 
> java.lang.NullPointerException
> -
>
> Key: HIVE-18849
> URL: https://issues.apache.org/jira/browse/HIVE-18849
> Project: Hive
>  Issue Type: Bug
>Reporter: Мирон
>Assignee: Мирон
>Priority: Blocker
> Attachments: Issue_Build_Complete_Log.txt, Issue_Stack_Trace.txt
>
>
> Please see the attached stack traces, both the brief and the complete capture, 
> both taken with maven's -X verbose output flag.
> Irrespective of the true cause, it would be very nice if this message from the 
> build tool (maven)
> --
> Caused by: java.lang.NullPointerException
>     at com.sun.tools.javac.main.JavaCompiler.readSourceFile 
> (JavaCompiler.java:825)
>     at 
> com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete
>  (JavacProcessingEnvironment.java:1510)
>     at com.sun.tools.javac.code.Symbol.complete (Symbol.java:633)
> --
> carried the actual file name that was being read when the exception was 
> thrown.
>  
> Git repository cloned from [https://github.com/apache/hive.git] yesterday - 
> today overnight.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18869) Improve SparkTask OOM Error Parsing Logic

2018-03-07 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390546#comment-16390546
 ] 

Sahil Takiar commented on HIVE-18869:
-

While testing this, this is the error Spark throws when an executor dies due 
to OOM issues:

{code}
ERROR : Job failed with org.apache.spark.SparkException: Job aborted due to 
stage failure: Task 452 in stage 18.0 failed 4 times, most recent failure: Lost 
task 452.3 in stage 18.0 (TID 1532, vc0540.halxg.cloudera.com, executor 65): 
ExecutorLostFailure (executor 65 exited caused by one of the running tasks) 
Reason: Container marked as failed: container_1520386678430_0109_01_68 on 
host: vc0540.halxg.cloudera.com. Exit status: 143. Diagnostics: [2018-03-07 
14:36:02.029]Container killed on request. Exit code is 143
java.util.concurrent.ExecutionException: Exception thrown by job
at 
org.apache.spark.JavaFutureActionWrapper.getImpl(FutureAction.scala:272)
at org.apache.spark.JavaFutureActionWrapper.get(FutureAction.scala:277)
at 
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:364)
at 
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:325)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: 
Task 452 in stage 18.0 failed 4 times, most recent failure: Lost task 452.3 in 
stage 18.0 (TID 1532, vc0540.halxg.cloudera.com, executor 65): 
ExecutorLostFailure (executor 65 exited caused by one of the running tasks) 
Reason: Container marked as failed: container_1520386678430_0109_01_68 on 
host: vc0540.halxg.cloudera.com. Exit status: 143. Diagnostics: [2018-03-07 
14:36:02.029]Container killed on request. Exit code is 143
[2018-03-07 14:36:02.029]Container exited with a non-zero exit code 143.
[2018-03-07 14:36:02.029]Killed by external signal
{code}

Might be good to differentiate {{ExecutorLost}} failures too.

> Improve SparkTask OOM Error Parsing Logic
> -
>
> Key: HIVE-18869
> URL: https://issues.apache.org/jira/browse/HIVE-18869
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
>
> The method {{SparkTask#isOOMError}} parses a stack-trace to check if it is 
> due to an OOM error. A few improvements could be made:
> * Differentiate between driver OOM and task OOM
> * The string {{Container killed by YARN for exceeding memory limits}} is 
> printed if a container exceeds its memory limits, but Spark tasks can OOM for 
> other reasons, such as {{GC overhead limit exceeded}}
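
A hedged sketch of the kind of classification the description suggests, using a hypothetical enum and match strings rather than the actual {{SparkTask#isOOMError}} code.

{code:java}
// Hypothetical classifier (not the actual SparkTask#isOOMError code) showing
// how the stack-trace text could be split into container-kill, GC-overhead and
// generic OOM cases instead of a single boolean.
public class SparkOomClassifierSketch {

  enum OomKind { CONTAINER_KILLED, GC_OVERHEAD, GENERIC_OOM, NOT_OOM }

  static OomKind classify(String stackTrace) {
    if (stackTrace == null) {
      return OomKind.NOT_OOM;
    }
    if (stackTrace.contains("Container killed by YARN for exceeding memory limits")) {
      return OomKind.CONTAINER_KILLED;   // executor exceeded its memory limit
    }
    if (stackTrace.contains("GC overhead limit exceeded")) {
      return OomKind.GC_OVERHEAD;        // task OOM for a different reason
    }
    if (stackTrace.contains("java.lang.OutOfMemoryError")) {
      // Could be refined further into driver vs. task OOM by looking at where
      // the trace originated; kept generic in this sketch.
      return OomKind.GENERIC_OOM;
    }
    return OomKind.NOT_OOM;
  }
}
{code}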



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390545#comment-16390545
 ] 

Vihang Karajgaonkar commented on HIVE-18885:


bq. but it allows for missed notifications since creating notification and 
storing it may fail. This means that there is some chance that certain 
operations succeed but do not have corresponding notifications.

Isn't it possible even now? I see that transactional listeners are called 
within a transaction block but non-transactional listeners are called outside 
the transaction block.

bq. Another issue is that we have bulk operations which create many 
notifications. This means that the code should be restructured to save all 
these notification after transaction completes rather then inline.

Yes, this also gives us an opportunity to generate aggregate notifications 
where they make sense. If a table has thousands of partitions does it make 
sense to generate thousands of (expensive) notifications when cascade is true?
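
A hedged sketch of the aggregation idea, using a hypothetical event shape rather than an agreed design: emit one alter-table-with-cascade notification carrying the partition names instead of one event per partition.

{code:java}
import java.util.List;

// Hypothetical event shape (not an agreed design): instead of adding one
// notification per partition while holding the NOTIFICATION_SEQUENCE row lock,
// a cascade alter could record a single aggregated event listing the partitions.
public class AggregatedAlterEventSketch {

  static class AggregatedAlterTableEvent {
    final String dbName;
    final String tableName;
    final List<String> alteredPartitionNames;   // possibly thousands of entries
    AggregatedAlterTableEvent(String dbName, String tableName,
                              List<String> alteredPartitionNames) {
      this.dbName = dbName;
      this.tableName = tableName;
      this.alteredPartitionNames = alteredPartitionNames;
    }
  }

  // One notification row instead of |partitions| rows; consumers replay the
  // partition list from the single aggregated message.
  static AggregatedAlterTableEvent onCascadeAlter(String db, String table,
                                                  List<String> partNames) {
    return new AggregatedAlterTableEvent(db, table, partNames);
  }
}
{code}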

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction()
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in one single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-03-07 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18831:

Attachment: HIVE-18831.2.patch

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception it's 
> difficult to know what part of the execution threw the exception.
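One possible shape of this, as a sketch only (the exception name is invented, not what the patch does): wrap task-side failures in a dedicated exception type so a client can tell them apart from HS2/RSC errors.

{code:java}
// Hypothetical sketch -- illustrative only, not the actual HIVE-18831 change.
public class SparkTaskFailedExceptionSketch extends RuntimeException {
  public SparkTaskFailedExceptionSketch(String taskDetails, Throwable cause) {
    super("Spark task failed: " + taskDetails, cause);
  }
}
{code}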



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18791) Fix TestJdbcWithMiniHS2#testHttpHeaderSize

2018-03-07 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390540#comment-16390540
 ] 

Andrew Sherman commented on HIVE-18791:
---

Thanks [~pvary]

> Fix TestJdbcWithMiniHS2#testHttpHeaderSize
> --
>
> Key: HIVE-18791
> URL: https://issues.apache.org/jira/browse/HIVE-18791
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18791.1.patch
>
>
> TestJdbcWithMiniHS2#testHttpHeaderSize tests whether the configuration of HTTP 
> header sizes works by using a long username. The local scratch directory for the 
> session uses the username as part of its path. When this name is longer than 
> 255 chars (the limit on most modern file systems), the directory creation will 
> fail. HIVE-18625 made this failure throw an exception, which has caused a 
> regression in testHttpHeaderSize.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18705) Improve HiveMetaStoreClient.dropDatabase

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390536#comment-16390536
 ] 

Hive QA commented on HIVE-18705:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913448/HIVE-18705.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 12952 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q
,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_x

[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390495#comment-16390495
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~arjunmishra13] I think there are multiple cases like this. Some examples come 
to mind:

1) Drop table/database
2) Request to create multiple tables or to add multiple partitions.
3) Request to drop multiple tables or partitions.

The issue with ALTER TABLE CASCADE is that it walks across *all* partitions (as 
does drop table/database).

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction();
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in a single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18885) Cascaded alter table + notifications = disaster

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390493#comment-16390493
 ] 

Alexander Kolbasov commented on HIVE-18885:
---

[~vihangk1] Something like this would work in general, but it allows for missed 
notifications since creating a notification and storing it may fail. This means 
that there is some chance that certain operations succeed but do not have 
corresponding notifications.

Another issue is that we have bulk operations which create many notifications. 
This means that the code should be restructured to save all these notifications 
after the transaction completes rather than inline.

> Cascaded alter table + notifications = disaster
> ---
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> You can see the problem from looking at the code, but it actually created 
> severe problems for a real-life Hive user.
> When {{alter table}} has the {{cascade}} option it does the following:
> {code:java}
>  msdb.openTransaction();
>   ...
>   List<Partition> parts = msdb.getPartitions(dbname, name, -1);
>   for (Partition part : parts) {
>     List<FieldSchema> oldCols = part.getSd().getCols();
>     part.getSd().setCols(newt.getSd().getCols());
>     String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>     updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>     msdb.alterPartition(dbname, name, part.getValues(), part);
>   }
>  {code}
> So it walks all partitions (and this may be a huge list) and does some 
> non-trivial operations in a single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition, 
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result, 
> while this is happening no other write DDL can proceed. This can sometimes 
> cause DB lock timeouts, which cause HMS-level operation retries, which make 
> things even worse.
> In one particular case this pretty much made HMS unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18705) Improve HiveMetaStoreClient.dropDatabase

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390475#comment-16390475
 ] 

Hive QA commented on HIVE-18705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} ql: The patch generated 0 new + 0 unchanged - 12 
fixed = 0 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} The patch service passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} standalone-metastore: The patch generated 14 new + 153 
unchanged - 8 fixed = 167 total (was 161) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9541/dev-support/hive-personality.sh
 |
| git revision | master / 073dc88 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9541/yetus/patch-mvninstall-service.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9541/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9541/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9541/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql service standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9541/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Improve HiveMetaStoreClient.dropDatabase
> 
>
> Key: HIVE-18705
> URL: https://issues.apache.org/jira/browse/HIVE-18705
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18705.0.patch, HIVE-18705.1.patch, 
> HIVE-18705.2.patch
>
>
> {{HiveMetaStoreClient.dropDatabase}} has a strange implementation to handle 
> client-side hooks (for non-native tables, e.g. HBase). Currently 
> it starts by retrieving

[jira] [Commented] (HIVE-18888) Replace synchronizedMap with ConcurrentHashMap

2018-03-07 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390463#comment-16390463
 ] 

Sahil Takiar commented on HIVE-1:
-

+1 pending Peter's comment.

> Replace synchronizedMap with ConcurrentHashMap
> --
>
> Key: HIVE-1
> URL: https://issues.apache.org/jira/browse/HIVE-1
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0, 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-1.01.patch
>
>
> There are a bunch of places that use Collections.synchronizedMap instead of 
> ConcurrentHashMap, which would be better. We should search/replace the uses.
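A minimal sketch of the swap being proposed (the class and field names are illustrative, not taken from the patch):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapSwapSketch {
  // Before: every operation synchronizes on the whole map.
  private final Map<String, Long> before = Collections.synchronizedMap(new HashMap<>());
  // After: ConcurrentHashMap permits concurrent readers/writers and offers
  // atomic helpers such as putIfAbsent/computeIfAbsent.
  private final Map<String, Long> after = new ConcurrentHashMap<>();
}
{code}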



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18899:
---
Affects Version/s: 3.0.0

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>  Labels: performance
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18899:
---
Labels: performance  (was: )

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>  Labels: performance
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18905) HS2: SASL auth loads HiveConf

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18905:
---
Priority: Minor  (was: Major)

> HS2: SASL auth loads HiveConf
> -
>
> Key: HIVE-18905
> URL: https://issues.apache.org/jira/browse/HIVE-18905
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Priority: Minor
>
> SASL authentication filter does a new HiveConf() for no good reason.
> {code}
>   public static PasswdAuthenticationProvider 
> getAuthenticationProvider(AuthMethods authMethod)
> throws AuthenticationException {
> return getAuthenticationProvider(authMethod, new HiveConf());
>   }
> {code}
> The session HiveConf is not needed to do this operation & it can't be changed 
> after the HS2 starts up (today).
> {code}
> org.apache.hadoop.hive.conf.HiveConf.<init>() HiveConf.java:4404
> org.apache.hive.service.auth.AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory$AuthMethods)
>  AuthenticationProviderFactory.java:61
> org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(Callback[])
>  PlainSaslHelper.java:106
> org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(byte[]) 
> PlainSaslServer.java:103
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(byte[])
>  TSaslTransport.java:539
> org.apache.thrift.transport.TSaslTransport.open() TSaslTransport.java:283
> org.apache.thrift.transport.TSaslServerTransport.open() 
> TSaslServerTransport.java:41
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TTransport)
>  TSaslServerTransport.java:216
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() 
> TThreadPoolServer.java:269
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) 
> ThreadPoolExecutor.java:1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run() 
> ThreadPoolExecutor.java:617
> java.lang.Thread.run() Thread.java:745
> {code}
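One possible shape of a fix, as a sketch only (not a committed change): build the HiveConf once inside AuthenticationProviderFactory and reuse it, since it cannot be changed after HS2 starts up anyway.

{code:java}
// Illustrative sketch -- not the actual patch.
private static final HiveConf SERVER_CONF = new HiveConf();

public static PasswdAuthenticationProvider getAuthenticationProvider(AuthMethods authMethod)
    throws AuthenticationException {
  // Reuse the server-wide conf instead of constructing a new HiveConf() per SASL callback.
  return getAuthenticationProvider(authMethod, SERVER_CONF);
}
{code}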



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18905) HS2: SASL auth loads HiveConf for every JDBC call

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18905:
---
Summary: HS2: SASL auth loads HiveConf for every JDBC call  (was: HS2: SASL 
auth loads HiveConf)

> HS2: SASL auth loads HiveConf for every JDBC call
> -
>
> Key: HIVE-18905
> URL: https://issues.apache.org/jira/browse/HIVE-18905
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Priority: Minor
>
> SASL authentication filter does a new HiveConf() for no good reason.
> {code}
>   public static PasswdAuthenticationProvider 
> getAuthenticationProvider(AuthMethods authMethod)
> throws AuthenticationException {
> return getAuthenticationProvider(authMethod, new HiveConf());
>   }
> {code}
> The session HiveConf is not needed to do this operation & it can't be changed 
> after the HS2 starts up (today).
> {code}
> org.apache.hadoop.hive.conf.HiveConf.<init>() HiveConf.java:4404
> org.apache.hive.service.auth.AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory$AuthMethods)
>  AuthenticationProviderFactory.java:61
> org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(Callback[])
>  PlainSaslHelper.java:106
> org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(byte[]) 
> PlainSaslServer.java:103
> org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(byte[])
>  TSaslTransport.java:539
> org.apache.thrift.transport.TSaslTransport.open() TSaslTransport.java:283
> org.apache.thrift.transport.TSaslServerTransport.open() 
> TSaslServerTransport.java:41
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TTransport)
>  TSaslServerTransport.java:216
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() 
> TThreadPoolServer.java:269
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) 
> ThreadPoolExecutor.java:1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run() 
> ThreadPoolExecutor.java:617
> java.lang.Thread.run() Thread.java:745
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17478) Move filesystem stats collection from metastore to ql

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390442#comment-16390442
 ] 

Hive QA commented on HIVE-17478:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913380/HIVE-17478.06.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 49 failed/errored test(s), 12951 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q
,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_

[jira] [Comment Edited] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390420#comment-16390420
 ] 

Gopal V edited comment on HIVE-18899 at 3/7/18 11:36 PM:
-

Tested this with 1000 concurrent users and we're doing really well on cache 
hit-rates.

{code}
summary =  83681 in 00:10:51 =  128.6 qps 
{code}

LGTM - +1


was (Author: gopalv):
Tested this with 1000 concurrent users and we're doing really well on cache 
hit-rates.

{code}
summary =  83681 in 00:10:51 =  128.6/s 
{code}

LGTM - +1

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18904) HS2: Static Loggers in hive-exec classes are being initialized per-thread

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18904:
---
Labels: performance  (was: )

> HS2: Static Loggers in hive-exec classes are being initialized per-thread
> -
>
> Key: HIVE-18904
> URL: https://issues.apache.org/jira/browse/HIVE-18904
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Priority: Major
>  Labels: performance
>
> Thread Class loaders shouldn't apply to any class which is part of the 
> install.
> {code}
> HiveServer2-Handler-Pool: Thread-1001 <--- Frozen for at least 11m 25s
> java.util.zip.ZipFile.open(String, int, long, boolean) ZipFile.java (native)
> java.util.zip.ZipFile.<init>(File, int, Charset) ZipFile.java:219
> java.util.zip.ZipFile.<init>(File, int) ZipFile.java:149
> java.util.jar.JarFile.<init>(File, boolean, int) JarFile.java:166
> java.util.jar.JarFile.<init>(String) JarFile.java:103
> sun.misc.URLClassPath$JarLoader.getJarFile(URL) URLClassPath.java:893
> sun.misc.URLClassPath$JarLoader.access$700(URLClassPath$JarLoader, URL) 
> URLClassPath.java:756
> sun.misc.URLClassPath$JarLoader$1.run() URLClassPath.java:838
> sun.misc.URLClassPath$JarLoader$1.run() URLClassPath.java:831
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction) 
> AccessController.java (native)
> sun.misc.URLClassPath$JarLoader.ensureOpen() URLClassPath.java:830
> sun.misc.URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) 
> URLClassPath.java:803
> sun.misc.URLClassPath$3.run() URLClassPath.java:530
> sun.misc.URLClassPath$3.run() URLClassPath.java:520
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction) 
> AccessController.java (native)
> sun.misc.URLClassPath.getLoader(URL) URLClassPath.java:519
> sun.misc.URLClassPath.getLoader(int) URLClassPath.java:492
> sun.misc.URLClassPath.getNextLoader(int[], int) URLClassPath.java:457
> sun.misc.URLClassPath.getResource(String, boolean) URLClassPath.java:211
> java.net.URLClassLoader$1.run() URLClassLoader.java:365
> java.net.URLClassLoader$1.run() URLClassLoader.java:362
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction, 
> AccessControlContext) AccessController.java (native)
> java.net.URLClassLoader.findClass(String) URLClassLoader.java:361
> java.lang.ClassLoader.loadClass(String, boolean) ClassLoader.java:424
> java.lang.ClassLoader.loadClass(String) ClassLoader.java:357
> org.apache.logging.log4j.util.LoaderUtil.loadClass(String) LoaderUtil.java:163
> org.apache.logging.slf4j.Log4jLogger.createConverter() Log4jLogger.java:416
> org.apache.logging.slf4j.Log4jLogger.<init>(ExtendedLogger, String) 
> Log4jLogger.java:54
> org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(String, LoggerContext) 
> Log4jLoggerFactory.java:37
> org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(String, LoggerContext) 
> Log4jLoggerFactory.java:29
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(String) 
> AbstractLoggerAdapter.java:52
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(String) 
> Log4jLoggerFactory.java:29
> org.slf4j.LoggerFactory.getLogger(String) LoggerFactory.java:281
> org.slf4j.LoggerFactory.getLogger(Class) LoggerFactory.java:301
> org.apache.hadoop.hive.ql.parse.TableMask.<init>(SemanticAnalyzer, HiveConf, 
> boolean) TableMask.java:42
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(ASTNode,
>  SemanticAnalyzer$PlannerContext) SemanticAnalyzer.java:11558
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(ASTNode, 
> SemanticAnalyzer$PlannerContextFactory) SemanticAnalyzer.java:11665
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(ASTNode) 
> CalcitePlanner.java:304
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(ASTNode, 
> Context) BaseSemanticAnalyzer.java:273
> org.apache.hadoop.hive.ql.Driver.compile(String, boolean, boolean) 
> Driver.java:614
> org.apache.hadoop.hive.ql.Driver.compileInternal(String, boolean) 
> Driver.java:1545
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(String) Driver.java:1498
> org.apache.hive.service.cli.operation.SQLOperation.prepare(QueryState) 
> SQLOperation.java:198
> org.apache.hive.service.cli.operation.SQLOperation.runInternal() 
> SQLOperation.java:284
> org.apache.hive.service.cli.operation.Operation.run() Operation.java:243
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(String,
>  Map, boolean, long) HiveSessionImpl.java:541
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(String,
>  Map, long) HiveSessionImpl.java:527
> org.apache.hive.service.cli.CLIService.executeStatementAsync(SessionHandle, 
> String, Map, long) CLIService.java:311
> org.apache.hive.service.cli.t

[jira] [Updated] (HIVE-18904) HS2: Static Loggers in hive-exec classes are being initialized per-thread

2018-03-07 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18904:
---
Component/s: HiveServer2

> HS2: Static Loggers in hive-exec classes are being initialized per-thread
> -
>
> Key: HIVE-18904
> URL: https://issues.apache.org/jira/browse/HIVE-18904
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Priority: Major
>  Labels: performance
>
> Thread Class loaders shouldn't apply to any class which is part of the 
> install.
> {code}
> HiveServer2-Handler-Pool: Thread-1001 <--- Frozen for at least 11m 25s
> java.util.zip.ZipFile.open(String, int, long, boolean) ZipFile.java (native)
> java.util.zip.ZipFile.<init>(File, int, Charset) ZipFile.java:219
> java.util.zip.ZipFile.<init>(File, int) ZipFile.java:149
> java.util.jar.JarFile.<init>(File, boolean, int) JarFile.java:166
> java.util.jar.JarFile.<init>(String) JarFile.java:103
> sun.misc.URLClassPath$JarLoader.getJarFile(URL) URLClassPath.java:893
> sun.misc.URLClassPath$JarLoader.access$700(URLClassPath$JarLoader, URL) 
> URLClassPath.java:756
> sun.misc.URLClassPath$JarLoader$1.run() URLClassPath.java:838
> sun.misc.URLClassPath$JarLoader$1.run() URLClassPath.java:831
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction) 
> AccessController.java (native)
> sun.misc.URLClassPath$JarLoader.ensureOpen() URLClassPath.java:830
> sun.misc.URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) 
> URLClassPath.java:803
> sun.misc.URLClassPath$3.run() URLClassPath.java:530
> sun.misc.URLClassPath$3.run() URLClassPath.java:520
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction) 
> AccessController.java (native)
> sun.misc.URLClassPath.getLoader(URL) URLClassPath.java:519
> sun.misc.URLClassPath.getLoader(int) URLClassPath.java:492
> sun.misc.URLClassPath.getNextLoader(int[], int) URLClassPath.java:457
> sun.misc.URLClassPath.getResource(String, boolean) URLClassPath.java:211
> java.net.URLClassLoader$1.run() URLClassLoader.java:365
> java.net.URLClassLoader$1.run() URLClassLoader.java:362
> java.security.AccessController.doPrivileged(PrivilegedExceptionAction, 
> AccessControlContext) AccessController.java (native)
> java.net.URLClassLoader.findClass(String) URLClassLoader.java:361
> java.lang.ClassLoader.loadClass(String, boolean) ClassLoader.java:424
> java.lang.ClassLoader.loadClass(String) ClassLoader.java:357
> org.apache.logging.log4j.util.LoaderUtil.loadClass(String) LoaderUtil.java:163
> org.apache.logging.slf4j.Log4jLogger.createConverter() Log4jLogger.java:416
> org.apache.logging.slf4j.Log4jLogger.<init>(ExtendedLogger, String) 
> Log4jLogger.java:54
> org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(String, LoggerContext) 
> Log4jLoggerFactory.java:37
> org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(String, LoggerContext) 
> Log4jLoggerFactory.java:29
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(String) 
> AbstractLoggerAdapter.java:52
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(String) 
> Log4jLoggerFactory.java:29
> org.slf4j.LoggerFactory.getLogger(String) LoggerFactory.java:281
> org.slf4j.LoggerFactory.getLogger(Class) LoggerFactory.java:301
> org.apache.hadoop.hive.ql.parse.TableMask.<init>(SemanticAnalyzer, HiveConf, 
> boolean) TableMask.java:42
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(ASTNode,
>  SemanticAnalyzer$PlannerContext) SemanticAnalyzer.java:11558
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(ASTNode, 
> SemanticAnalyzer$PlannerContextFactory) SemanticAnalyzer.java:11665
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(ASTNode) 
> CalcitePlanner.java:304
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(ASTNode, 
> Context) BaseSemanticAnalyzer.java:273
> org.apache.hadoop.hive.ql.Driver.compile(String, boolean, boolean) 
> Driver.java:614
> org.apache.hadoop.hive.ql.Driver.compileInternal(String, boolean) 
> Driver.java:1545
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(String) Driver.java:1498
> org.apache.hive.service.cli.operation.SQLOperation.prepare(QueryState) 
> SQLOperation.java:198
> org.apache.hive.service.cli.operation.SQLOperation.runInternal() 
> SQLOperation.java:284
> org.apache.hive.service.cli.operation.Operation.run() Operation.java:243
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(String,
>  Map, boolean, long) HiveSessionImpl.java:541
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(String,
>  Map, long) HiveSessionImpl.java:527
> org.apache.hive.service.cli.CLIService.executeStatementAsync(SessionHandle, 
> String, Map, long) CLIService.java:311
> org.apache.hive.service.cli.thrif

[jira] [Updated] (HIVE-18835) JDBC standalone jar download link in ambari

2018-03-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18835:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Miklos!

> JDBC standalone jar download link in ambari
> ---
>
> Key: HIVE-18835
> URL: https://issues.apache.org/jira/browse/HIVE-18835
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18835.patch
>
>
> Let HS2 offer the file for download, so that Ambari can create a link to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390420#comment-16390420
 ] 

Gopal V commented on HIVE-18899:


Tested this with 1000 concurrent users and we're doing really well on cache 
hit-rates.

{code}
summary =  83681 in 00:10:51 =  128.6/s 
{code}

LGTM - +1

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18903) Lower Logging Level for ObjectStore

2018-03-07 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18903:
---
Labels: noob  (was: )

> Lower Logging Level for ObjectStore
> ---
>
> Key: HIVE-18903
> URL: https://issues.apache.org/jira/browse/HIVE-18903
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>  Labels: noob
>
> [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java]
>  
> {code:java}
> 2018-03-01 06:51:58,051  INFO  org.apache.hadoop.hive.metastore.ObjectStore: 
> [pool-4-thread-13]: ObjectStore, initialize called
> 2018-03-01 06:51:58,052  INFO  org.apache.hadoop.hive.metastore.ObjectStore: 
> [pool-4-thread-13]: Initialized ObjectStore
> {code}
> Nothing actionable or all that useful here.  Please lower these to _debug_ or 
> _trace_ level logging.
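For example, a sketch of the requested change (the real statements live in ObjectStore.java; this standalone class is only for illustration):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ObjectStoreLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(ObjectStoreLoggingSketch.class);

  void initialize() {
    LOG.debug("ObjectStore, initialize called");  // was LOG.info(...)
    // ... actual initialization ...
    LOG.debug("Initialized ObjectStore");         // was LOG.info(...)
  }
}
{code}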



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18888) Replace synchronizedMap with ConcurrentHashMap

2018-03-07 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390404#comment-16390404
 ] 

Peter Vary commented on HIVE-1:
---

Fix the imports. Otherwise looks good to me. (RB has some issues, so posting 
here too)

Thanks,

Peter

> Replace synchronizedMap with ConcurrentHashMap
> --
>
> Key: HIVE-1
> URL: https://issues.apache.org/jira/browse/HIVE-1
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0, 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-1.01.patch
>
>
> There are a bunch of places that use Collections.synchronizedMap instead of 
> ConcurrentHashMap, which would be better. We should search/replace the uses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18814) Support Add Partition For Acid tables

2018-03-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18814:
--
Description: 
[https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]

Add Partition command creates a {{Partition}} metadata object and sets the 
location to the directory containing data files.

In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
at read time the data is decorated with row__id but the original transaction is 
0.  I suspect in earlier Hive versions this will throw or return no data.
Since this new partition didn't have data before, assigning txnid:0 isn't going 
to generate duplicate IDs but it could violate Snapshot Isolation in multi stmt 
txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 adds a partition 
to T.  Now if txnid:7 runs the same query again, it will see the data in the 
new partition.
This can't be released like this, since a delete on this data (added via Add 
Partition) will use row_ids with txnid:0, so a later upgrade that sees 
un-compacted data may generate row_ids with a different txnid (assuming this is 
fixed by then).

 

One option is to follow the Load Data approach and create a new delta_x_x/ and 
move/copy the data there.

 

Another is to allocate a new writeid and save it in Partition metadata.  This 
could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
retains data "outside" of the table tree, which makes it more likely that this 
data will be modified in some way, which can really break things if done after 
an SQL update/delete on this data has happened. 

 

It performs no validations on add (except for the partition spec), so any file 
with any format can be added.  It allows adds to bucketed tables as well.

Seems like a very dangerous command.  Maybe a better option is to block it and 
advise using Load Data.  Alternatively, make this do Add partition metadata op 
followed by Load Data. 

 

 

  was:
[https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]

Add Partition command creates a {{Partition}} metadata object and sets the 
location to the directory containing data files.

In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
at read time the data is decorated with row__id but the original transaction is 
0.  I suspect in earlier Hive versions this will throw or return no data.
Since this new partition didn't have data before, assigning txnid:0 isn't going 
to generate duplicate IDs but it could violate Snapshot Isolation in multi stmt 
txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 adds a partition 
to T.  Now if txnid:7 runs the same query again, it will see the data in the 
new partition.

 

One option is follow Load Data approach and create a new delta_x_x/ and 
move/copy the data there.

 

Another is to allocate a new writeid and save it in Partition metadata.  This 
could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
retains data "outside" of the table tree which make it more likely that this 
data will be modified in some way which can really break things if done after 
and SQL update/delete on this data have happened. 

 

It performs no validations on add (except for partition spec) so any file with 
any format can be added.  It allows add to bucketed tables as well.

Seems like a very dangerous command.  Maybe a better option is to block it and 
advise using Load Data.  Alternatively, make this do Add partition metadata op 
followed by Load Data. 

 

 


> Support Add Partition For Acid tables
> -
>
> Key: HIVE-18814
> URL: https://issues.apache.org/jira/browse/HIVE-18814
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18814.wip.patch
>
>
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]
> Add Partition command creates a {{Partition}} metadata object and sets the 
> location to the directory containing data files.
> In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
> at read time the data is decorated with row__id but the original transaction 
> is 0.  I suspect in earlier Hive versions this will throw or return no data.
> Since this new partition didn't have data before, assigning txnid:0 isn't 
> going to generate duplicate IDs but it could violate Snapshot Isolation in 
> multi stmt txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 
> adds a partition to T.  Now if txnid:7 runs the same query again, it will see 
> the data in the new partition.
> This can't be release like this since a 

[jira] [Updated] (HIVE-18791) Fix TestJdbcWithMiniHS2#testHttpHeaderSize

2018-03-07 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-18791:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks for the patch [~asherman]!

> Fix TestJdbcWithMiniHS2#testHttpHeaderSize
> --
>
> Key: HIVE-18791
> URL: https://issues.apache.org/jira/browse/HIVE-18791
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18791.1.patch
>
>
> TestJdbcWithMiniHS2#testHttpHeaderSize tests whether the configuration of HTTP 
> header sizes works by using a long username. The local scratch directory for the 
> session uses the username as part of its path. When this name is longer than 
> 255 chars (the limit on most modern file systems), the directory creation will 
> fail. HIVE-18625 made this failure throw an exception, which has caused a 
> regression in testHttpHeaderSize.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17478) Move filesystem stats collection from metastore to ql

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390373#comment-16390373
 ] 

Hive QA commented on HIVE-17478:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 2 new + 644 unchanged - 2 
fixed = 646 total (was 646) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
746 unchanged - 18 fixed = 746 total (was 764) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9540/dev-support/hive-personality.sh
 |
| git revision | master / b0a58d2 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9540/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9540/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9540/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Move filesystem stats collection from metastore to ql
> -
>
> Key: HIVE-17478
> URL: https://issues.apache.org/jira/browse/HIVE-17478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-17478.01.patch, HIVE-17478.01wip01.patch, 
> HIVE-17478.02.patch, HIVE-17478.03.patch, HIVE-17478.04.patch, 
> HIVE-17478.05.patch, HIVE-17478.06.patch
>
>
> Filesystem-level stats are collected automatically at the metastore server 
> side... however, computing these stats earlier, during planning or query 
> execution, may make it possible to launch stat collection on a newly added 
> partition only if needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18814) Support Add Partition For Acid tables

2018-03-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18814:
--
Description: 
[https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]

Add Partition command creates a {{Partition}} metadata object and sets the 
location to the directory containing data files.

In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
at read time the data is decorated with row__id but the original transaction is 
0.  I suspect in earlier Hive versions this will throw or return no data.
Since this new partition didn't have data before, assigning txnid:0 isn't going 
to generate duplicate IDs but it could violate Snapshot Isolation in multi stmt 
txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 adds a partition 
to T.  Now if txnid:7 runs the same query again, it will see the data in the 
new partition.

 

One option is to follow the Load Data approach and create a new delta_x_x/ and 
move/copy the data there.

 

Another is to allocate a new writeid and save it in Partition metadata.  This 
could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
retains data "outside" of the table tree, which makes it more likely that this 
data will be modified in some way, which can really break things if done after 
an SQL update/delete on this data has happened. 

 

It performs no validations on add (except for the partition spec), so a file in 
any format can be added.  It also allows adding to bucketed tables.

This seems like a very dangerous command.  Maybe a better option is to block it 
and advise using Load Data.  Alternatively, make it do the Add Partition 
metadata op followed by Load Data. 

 

 

  was:
[https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]

Add Partition command creates a {{Partition}} metadata object and sets the 
location to the directory containing data files.

In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
at read time the data is decorated with row__id but the original transaction is 
0.  I suspect in earlier Hive versions this will throw or return no data.

 

One option is follow Load Data approach and create a new delta_x_x/ and 
move/copy the data there.

 

Another is to allocate a new writeid and save it in Partition metadata.  This 
could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
retains data "outside" of the table tree which make it more likely that this 
data will be modified in some way which can really break things if done after 
and SQL update/delete on this data have happened. 

 

It performs no validations on add (except for partition spec) so any file with 
any format can be added.  It allows add to bucketed tables as well.

Seems like a very dangerous command.  Maybe a better option is to block it and 
advise using Load Data.  Alternatively, make this do Add partition metadata op 
followed by Load Data. 

 

 


> Support Add Partition For Acid tables
> -
>
> Key: HIVE-18814
> URL: https://issues.apache.org/jira/browse/HIVE-18814
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18814.wip.patch
>
>
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual%2BDDL#LanguageManualDDL-AddPartitions]
> Add Partition command creates a {{Partition}} metadata object and sets the 
> location to the directory containing data files.
> In current master (Hive 3.0), Add partition on an acid table doesn't fail and 
> at read time the data is decorated with row__id but the original transaction 
> is 0.  I suspect in earlier Hive versions this will throw or return no data.
> Since this new partition didn't have data before, assigning txnid:0 isn't 
> going to generate duplicate IDs but it could violate Snapshot Isolation in 
> multi stmt txns.  Suppose txnid:7 runs {{select * from T}}.  Then txnid:8 
> adds a partition to T.  Now if txnid:7 runs the same query again, it will see 
> the data in the new partition.
>  
> One option is follow Load Data approach and create a new delta_x_x/ and 
> move/copy the data there.
>  
> Another is to allocate a new writeid and save it in Partition metadata.  This 
> could then be used to decorate data with ROW__IDs.  This avoids move/copy but 
> retains data "outside" of the table tree which make it more likely that this 
> data will be modified in some way which can really break things if done after 
> and SQL update/delete on this data have happened. 
>  
> It performs no validations on add (except for partition spec) so any file 
> with any format can be added.  It allows add

[jira] [Updated] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HIVE-18861:
--
Status: Patch Available  (was: Open)

> druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating 
> classpath problems on hadoop 3.x
> 
>
> Key: HIVE-18861
> URL: https://issues.apache.org/jira/browse/HIVE-18861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HIVE-18861-001.patch, HIVE-18861-001.patch, 
> HIVE-18861.patch, HIVE-18861.patch
>
>
> druid-hdfs-storage JAR is transitively pulling in hadoop-aws JAR 2.7.3, which 
> creates classpath problems as a set of aws-sdk 1.10.77 JARs get on the CP, 
> even with Hadoop 3 & its move to a full aws-sdk-bundle JAR.
> Two options
> # exclude the dependency
> # force it up to whatever ${hadoop.version} is, so make it consistent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HIVE-18861:
--
Attachment: HIVE-18861.patch

> druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating 
> classpath problems on hadoop 3.x
> 
>
> Key: HIVE-18861
> URL: https://issues.apache.org/jira/browse/HIVE-18861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HIVE-18861-001.patch, HIVE-18861-001.patch, 
> HIVE-18861.patch, HIVE-18861.patch
>
>
> druid-hdfs-storage JAR is transitively pulling in hadoop-aws JAR 2.7.3, which 
> creates classpath problems as a set of aws-sdk 1.10.77 JARs get on the CP, 
> even with Hadoop 3 & its move to a full aws-sdk-bundle JAR.
> Two options
> # exclude the dependency
> # force it up to whatever ${hadoop.version} is, so make it consistent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390366#comment-16390366
 ] 

Steve Loughran commented on HIVE-18861:
---

Not seeing any updates after 9h. Cancelling and reattaching

> druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating 
> classpath problems on hadoop 3.x
> 
>
> Key: HIVE-18861
> URL: https://issues.apache.org/jira/browse/HIVE-18861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HIVE-18861-001.patch, HIVE-18861-001.patch, 
> HIVE-18861.patch, HIVE-18861.patch
>
>
> druid-hdfs-storage JAR is transitively pulling in hadoop-aws JAR 2.7.3, which 
> creates classpath problems as a set of aws-sdk 1.10.77 JARs get on the CP, 
> even with Hadoop 3 & its move to a full aws-sdk-bundle JAR.
> Two options
> # exclude the dependency
> # force it up to whatever ${hadoop.version} is, so make it consistent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18861) druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating classpath problems on hadoop 3.x

2018-03-07 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HIVE-18861:
--
Status: Open  (was: Patch Available)

> druid-hdfs-storage is pulling in hadoop-aws-2.7.x and aws SDK, creating 
> classpath problems on hadoop 3.x
> 
>
> Key: HIVE-18861
> URL: https://issues.apache.org/jira/browse/HIVE-18861
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HIVE-18861-001.patch, HIVE-18861-001.patch, 
> HIVE-18861.patch, HIVE-18861.patch
>
>
> druid-hdfs-storage JAR is transitively pulling in hadoop-aws JAR 2.7.3, which 
> creates classpath problems as a set of aws-sdk 1.10.77 JARs get on the CP, 
> even with Hadoop 3 & its move to a full aws-sdk-bundle JAR.
> Two options
> # exclude the dependency
> # force it up to whatever ${hadoop.version} is, so make it consistent



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18835) JDBC standalone jar download link in ambari

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390337#comment-16390337
 ] 

Hive QA commented on HIVE-18835:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913375/HIVE-18835.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 13346 tests 
executed
*Failed tests:*
{noformat}
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=241)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[udf_invalid.q,authorization_uri_export.q,default_constraint_complex_default_value.q,druid_datasource2.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,default_constraint_invalid_type.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,merge_constraint_notnull.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,udf_instr_wrong_args_len.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,insert_overwrite_notnull_constraint.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,udaf_collect_set_unsupported.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,invalid_select_column.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,merge_negative_3.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,create_external_with_notnull_constraint.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,add_partition_with_whitelist.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,exim_03_nonpart_noncompat_colschema.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,drop_partition_filter_failure.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,authorization_create_macro1.q,archive1.q,subquery_multiple_cols_in_select.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,compare_string_bigint_2.q,udf_greatest_error_2.q,authorization_view_6.q,show_tablestatus.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,char_pad_convert_fail0.q,udf_map_values
_arg_type.q,alter_view_failure6_2.q,alter_partition_change_col_nonexist.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ptf_window_boundaries.q,ctasnullcol.q,input_part0_neg_2.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_loaded.q,msck_repair_1.q,orc_change_fileformat_acid.q,udf_nonexistent_resource.q,exim_19_external_over_existing.q,serde_regex2.q,msck_repair_2.q,exim_06_nonpart_noncompat_storage.q,illegal_partition_type4.q,udf_sort_array_by_wro

[jira] [Commented] (HIVE-18811) Fix desc table, column comments are not displayed

2018-03-07 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390320#comment-16390320
 ] 

Peter Vary commented on HIVE-18811:
---

[~tartarus]: The following two outputs look incorrect to me:
{code:java}
describe escape_column_comments;
Beeline patch
[..]
| col1                     | string     | test\\tcol1   |
[..]
| part1                    | string     | test\\tpart1  |
[..]
| part1                    | string     | test\\tpart1  |
describe formatted escape_column_comments;
Beeline patch
[..]
| col1 | string | test\\tcol1 |
[..]
| part1 | string | test\\tpart1 |
[..]{code}
What do you think?
Thanks,
Peter

> Fix desc table, column comments are not displayed
> -
>
> Key: HIVE-18811
> URL: https://issues.apache.org/jira/browse/HIVE-18811
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1, 2.3.2
> Environment: CentOS 6.5
> Hive-1.2.1
> Hive-3.0.0
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: patch
> Fix For: 3.0.0
>
> Attachments: HIVE_18811.patch, changes
>
>
> when the column comment contains \t,
> e.g.: CREATE TABLE `zhangmang_test`(`name` string COMMENT 'name\tzm');
> then executing desc zhangmang_test
> returns: name                string              name
> because \t is the separator, so we should escape it
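
A minimal sketch of the escaping idea, in plain Java; the class and method names 
here are illustrative only and are not taken from the attached patch:
{code:java}
public final class CommentEscaper {

  /** Replace the characters that the describe output uses as separators. */
  public static String escapeComment(String comment) {
    if (comment == null) {
      return null;
    }
    return comment.replace("\t", "\\t").replace("\n", "\\n");
  }

  public static void main(String[] args) {
    // Without escaping, a comment such as 'name\tzm' is split by the tab
    // separator and the text after the tab is shifted into the wrong column.
    System.out.println("name" + "\t" + "string" + "\t" + escapeComment("name\tzm"));
  }
}
{code}
Whether the escaped form should show a single or a double backslash is exactly 
the question the quoted Beeline output above raises.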



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18900) Remove WARN for "Hadoop command-line option parsing not performed"

2018-03-07 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18900:
---
Description: 
{code}
2018-03-05 07:23:11,002  WARN  org.apache.hadoop.mapreduce.JobResourceUploader 
[HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
{code}

Please remove this WARN message from appearing in the logs.  It's not clear to 
me, as an admin, what I should be doing in response to this message.

  was:
{code}
2018-03-05 07:23:11,002  WARN  [org.apache.hadoop.mapreduce.JobResourceUploader 
[HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
{code}

Please remove this WARN message from appearing in the logs.  It's not clear to 
me, as an admin, what I should be doing in response to this message.


> Remove WARN for "Hadoop command-line option parsing not performed"
> --
>
> Key: HIVE-18900
> URL: https://issues.apache.org/jira/browse/HIVE-18900
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> {code}
> 2018-03-05 07:23:11,002  WARN  
> org.apache.hadoop.mapreduce.JobResourceUploader [HiveServer2-Background-Pool: 
> Thread-426416]: Hadoop command-line option parsing not performed. Implement 
> the Tool interface and execute your application with ToolRunner to remedy 
> this.
> {code}
> Please remove this WARN message from appearing in the logs.  It's not clear 
> to me, as an admin, what I should be doing in response to this message.
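
For context, the Hadoop warning is asking the submitting application to use the 
standard {{Tool}}/{{ToolRunner}} pattern, roughly as below; this is only an 
illustration of what the message refers to, not a proposal to change HiveServer2:
{code:java}
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Implementing Tool lets GenericOptionsParser consume -D/-conf/-files style
// arguments before run() is invoked, which is what the warning asks for.
public class MyMapReduceDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // Job setup would go here, using getConf() for the parsed Configuration.
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MyMapReduceDriver(), args));
  }
}
{code}
Since the job here is submitted internally by HiveServer2 rather than by a user 
driver, an admin cannot act on that advice, which appears to be why this ticket 
asks for the message to be removed from the logs.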



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18900) Remove WARN for "Hadoop command-line option parsing not performed"

2018-03-07 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18900:
---
Description: 
{code}
2018-03-05 07:23:11,002  WARN  [org.apache.hadoop.mapreduce.JobResourceUploader 
[HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
{code}

Please remove this WARN message from appearing in the logs.  It's not clear to 
me, as an admin, what I should be doing in response to this message.

  was:
{code}
2018-03-05 07:23:11,002  WARN  
[org.apache.hadoop.mapreduce.JobResourceUploader|https://csi.infra.cloudera.com/synergy/cluster_stats/services/hive/roles/hive-HIVESERVER2-40fe7dfa9435f6d3681cdaa7a7113014/roleType/HIVESERVER2/browse?cluster=Nielsen-Watch-Digital-Production×tamp=1520256759000&ckey=00a921de-7668-4149-896a-1fd08d33e4c7&source=org.apache.hadoop.mapreduce.JobResourceUploader]:
 [HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
{code}

Please remove this WARN message from appearing in the logs.  It's not clear to 
me, as an admin, what I should be doing in response to this message.


> Remove WARN for "Hadoop command-line option parsing not performed"
> --
>
> Key: HIVE-18900
> URL: https://issues.apache.org/jira/browse/HIVE-18900
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> {code}
> 2018-03-05 07:23:11,002  WARN  
> [org.apache.hadoop.mapreduce.JobResourceUploader 
> [HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
> parsing not performed. Implement the Tool interface and execute your 
> application with ToolRunner to remedy this.
> {code}
> Please remove this WARN message from appearing in the logs.  It's not clear 
> to me, as an admin, what I should be doing in response to this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18900) Remove WARN for "Hadoop command-line option parsing not performed"

2018-03-07 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18900:
---
Description: 
{code}
2018-03-05 07:23:11,002  WARN  
[org.apache.hadoop.mapreduce.JobResourceUploader|https://csi.infra.cloudera.com/synergy/cluster_stats/services/hive/roles/hive-HIVESERVER2-40fe7dfa9435f6d3681cdaa7a7113014/roleType/HIVESERVER2/browse?cluster=Nielsen-Watch-Digital-Production×tamp=1520256759000&ckey=00a921de-7668-4149-896a-1fd08d33e4c7&source=org.apache.hadoop.mapreduce.JobResourceUploader]:
 [HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
{code}

Please remove this WARN message from appearing in the logs.  It's not clear to 
me, as an admin, what I should be doing in response to this message.

  was:
2018-03-05 07:23:11,002  WARN  
[org.apache.hadoop.mapreduce.JobResourceUploader|https://csi.infra.cloudera.com/synergy/cluster_stats/services/hive/roles/hive-HIVESERVER2-40fe7dfa9435f6d3681cdaa7a7113014/roleType/HIVESERVER2/browse?cluster=Nielsen-Watch-Digital-Production×tamp=1520256759000&ckey=00a921de-7668-4149-896a-1fd08d33e4c7&source=org.apache.hadoop.mapreduce.JobResourceUploader]:
 [HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
How do we remove this WARN message from appearing in the logs?  It's not clear 
to me, as an admin, what I should be doing in response to this message.


> Remove WARN for "Hadoop command-line option parsing not performed"
> --
>
> Key: HIVE-18900
> URL: https://issues.apache.org/jira/browse/HIVE-18900
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> {code}
> 2018-03-05 07:23:11,002  WARN  
> [org.apache.hadoop.mapreduce.JobResourceUploader|https://csi.infra.cloudera.com/synergy/cluster_stats/services/hive/roles/hive-HIVESERVER2-40fe7dfa9435f6d3681cdaa7a7113014/roleType/HIVESERVER2/browse?cluster=Nielsen-Watch-Digital-Production×tamp=1520256759000&ckey=00a921de-7668-4149-896a-1fd08d33e4c7&source=org.apache.hadoop.mapreduce.JobResourceUploader]:
>  [HiveServer2-Background-Pool: Thread-426416]: Hadoop command-line option 
> parsing not performed. Implement the Tool interface and execute your 
> application with ToolRunner to remedy this.
> {code}
> Please remove this WARN message from appearing in the logs.  It's not clear 
> to me, as an admin, what I should be doing in response to this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18526) Backport HIVE-16886 to Hive 2

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390314#comment-16390314
 ] 

Alexander Kolbasov edited comment on HIVE-18526 at 3/7/18 10:00 PM:


[~anishek] [~thejas] It turns out that the SELECT FOR UPDATE fix is causing 
very serious troubles in practice. For example, HIVE-18885.


was (Author: akolb):
[~anishek] It turns out that the SELECT FOR UPDATE fix is causing very serious 
troubles in practice. For example, HIVE-18885.

> Backport HIVE-16886 to Hive 2
> -
>
> Key: HIVE-18526
> URL: https://issues.apache.org/jira/browse/HIVE-18526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-18526.01-branch-2.patch, 
> HIVE-18526.02-branch-2.patch
>
>
> The fix for HIVE-16886 isn't in Hive 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18526) Backport HIVE-16886 to Hive 2

2018-03-07 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390314#comment-16390314
 ] 

Alexander Kolbasov commented on HIVE-18526:
---

[~anishek] It turns out that the SELECT FOR UPDATE fix is causing very serious 
troubles in practice. For example, HIVE-18885.

> Backport HIVE-16886 to Hive 2
> -
>
> Key: HIVE-18526
> URL: https://issues.apache.org/jira/browse/HIVE-18526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 2.3.3
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-18526.01-branch-2.patch, 
> HIVE-18526.02-branch-2.patch
>
>
> The fix for HIVE-16886 isn't in Hive 2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18778) Needs to capture input/output entities in explain

2018-03-07 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390299#comment-16390299
 ] 

Daniel Dai commented on HIVE-18778:
---

I am struggling to run tests locally, but hopefully I can get some sections 
done in 1 or 2 days. Will keep you updated.

> Needs to capture input/output entities in explain
> -
>
> Key: HIVE-18778
> URL: https://issues.apache.org/jira/browse/HIVE-18778
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18778.1.patch, HIVE-18778.2.patch
>
>
> With Sentry enabled, commands like {{explain drop table foo;}} fail with:
> {code}
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in input privileges
>  The required privileges: (state=42000,code=4)
> {code}
> Sentry fails to authorize because the ExplainSemanticAnalyzer uses an 
> instance of DDLSemanticAnalyzer to analyze the explain query.
> {code}
> BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
> sem.analyze(input, ctx);
> sem.validate()
> {code}
> The inputs/outputs entities for this query are set in the above code. 
> However, these are never set on the instance of ExplainSemanticAnalyzer 
> itself and thus is not propagated into the HookContext in the calling Driver 
> code.
> {code}
> sem.analyze(tree, ctx); --> this results in calling the above code that uses 
> DDLSA
> hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this 
> code attempts to update the HookContext with the input/output info from ESA 
> which is never set.
> {code}
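
One possible shape of a fix, sketched here for illustration only (the attached 
patches may do this differently): have ExplainSemanticAnalyzer copy the entities 
collected by the nested analyzer into its own sets before the Driver reads them.
{code:java}
// Inside ExplainSemanticAnalyzer, after running the child analyzer;
// 'inputs' and 'outputs' are the entity sets inherited from BaseSemanticAnalyzer.
BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
sem.analyze(input, ctx);
sem.validate();

// Propagate what the child analyzer gathered so that hookCtx.update(sem)
// in the Driver sees the real read/write entities of the explained statement.
inputs.addAll(sem.getInputs());
outputs.addAll(sem.getOutputs());
{code}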



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18879) Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath

2018-03-07 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390288#comment-16390288
 ] 

Daniel Dai commented on HIVE-18879:
---

Patch pushed to master. Kicking off 2.3 branch ptest.

> Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in 
> classpath
> --
>
> Key: HIVE-18879
> URL: https://issues.apache.org/jira/browse/HIVE-18879
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18879.1-branch-2.3.patch, HIVE-18879.1.patch
>
>
> This is a follow up of HIVE-18789.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18879) Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath

2018-03-07 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-18879:
--
Attachment: HIVE-18879.1-branch-2.3.patch

> Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in 
> classpath
> --
>
> Key: HIVE-18879
> URL: https://issues.apache.org/jira/browse/HIVE-18879
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18879.1-branch-2.3.patch, HIVE-18879.1.patch
>
>
> This is a follow up of HIVE-18789.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18899:
--
Attachment: HIVE-18899.1.patch

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.
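
A rough sketch of the direction the description implies, with a deliberately 
hypothetical {{copyFetchWork}} helper and cache map (the actual patch may take a 
different approach): hand each query its own FetchWork, and with it its own 
ListSinkOperator, instead of returning the shared cached instance.
{code:java}
// Hypothetical results-cache lookup: never return the shared FetchWork directly.
public FetchWork getFetchWorkForQuery(String cacheKey) {
  FetchWork cached = cacheEntries.get(cacheKey);   // cacheEntries: Map<String, FetchWork>
  if (cached == null) {
    return null;
  }
  // Each caller gets a private copy, so the ListSinkOperator inside it
  // only ever holds one query's fetched rows.
  return copyFetchWork(cached);                    // hypothetical deep-copy helper
}
{code}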



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18899:
--
Status: Patch Available  (was: Open)

> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18899.1.patch
>
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18899) Separate FetchWork required for each query that uses the results cache

2018-03-07 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-18899:
-


> Separate FetchWork required for each query that uses the results cache
> --
>
> Key: HIVE-18899
> URL: https://issues.apache.org/jira/browse/HIVE-18899
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> [~gopalv] found issues when running lots of concurrent queries against HS2 
> with the query cache. Looks like the FetchWork held by the results cache 
> cannot be shared between multiple queries because it contains a 
> ListSinkOperator that is used to hold the results of a fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18879) Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath

2018-03-07 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390277#comment-16390277
 ] 

Thejas M Nair commented on HIVE-18879:
--

+1

> Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in 
> classpath
> --
>
> Key: HIVE-18879
> URL: https://issues.apache.org/jira/browse/HIVE-18879
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18879.1.patch
>
>
> This is a follow up of HIVE-18789.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18835) JDBC standalone jar download link in ambari

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390236#comment-16390236
 ] 

Hive QA commented on HIVE-18835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} service: The patch generated 9 new + 22 unchanged - 0 
fixed = 31 total (was 22) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9539/dev-support/hive-personality.sh
 |
| git revision | master / 0cfd4fe |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9539/yetus/diff-checkstyle-service.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9539/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9539/yetus/patch-asflicense-problems.txt
 |
| modules | C: service U: service |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9539/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JDBC standalone jar download link in ambari
> ---
>
> Key: HIVE-18835
> URL: https://issues.apache.org/jira/browse/HIVE-18835
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-18835.patch
>
>
> Let HS2 offer the file for download, so that Ambari can create link on it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18705) Improve HiveMetaStoreClient.dropDatabase

2018-03-07 Thread Adam Szita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-18705:
--
Attachment: HIVE-18705.2.patch

> Improve HiveMetaStoreClient.dropDatabase
> 
>
> Key: HIVE-18705
> URL: https://issues.apache.org/jira/browse/HIVE-18705
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18705.0.patch, HIVE-18705.1.patch, 
> HIVE-18705.2.patch
>
>
> {{HiveMetaStoreClient.dropDatabase}} has a strange implementation to ensure 
> that client-side hooks (for non-native tables, e.g. HBase) are handled. 
> Currently it starts by retrieving all the tables from HMS, and then sends 
> {{dropTable}} calls to HMS table-by-table. At the end it issues a 
> {{dropDatabase}}, just to be sure :) I believe this could be refactored so 
> that it speeds up the dropDB in situations where the average table count per 
> DB is very high.
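
A rough sketch of that refactoring direction, assuming the {{storage_handler}} 
table property is what marks the non-native tables that need client-side 
handling (the attached patches may differ in the details):
{code:java}
import java.util.List;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public final class DropDatabaseSketch {

  /** Drop only storage-handler tables client-side, then drop the DB in one call. */
  public static void dropDatabaseFast(IMetaStoreClient client, String dbName)
      throws Exception {
    List<String> tableNames = client.getAllTables(dbName);
    // Fetch the table objects in one batch instead of one RPC per table.
    for (Table t : client.getTableObjectsByName(dbName, tableNames)) {
      boolean hasClientSideHook =
          t.getParameters() != null && t.getParameters().containsKey("storage_handler");
      if (hasClientSideHook) {
        client.dropTable(dbName, t.getTableName(), true /*deleteData*/, true /*ignoreUnknownTab*/);
      }
    }
    // Everything else is removed server-side in a single metastore call.
    client.dropDatabase(dbName, true /*deleteData*/, false /*ignoreUnknownDb*/, true /*cascade*/);
  }
}
{code}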



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18705) Improve HiveMetaStoreClient.dropDatabase

2018-03-07 Thread Adam Szita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-18705:
--
Status: Patch Available  (was: In Progress)

> Improve HiveMetaStoreClient.dropDatabase
> 
>
> Key: HIVE-18705
> URL: https://issues.apache.org/jira/browse/HIVE-18705
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18705.0.patch, HIVE-18705.1.patch, 
> HIVE-18705.2.patch
>
>
> {{HiveMetaStoreClient.dropDatabase}} has a strange implementation to ensure 
> that client-side hooks (for non-native tables, e.g. HBase) are handled. 
> Currently it starts by retrieving all the tables from HMS, and then sends 
> {{dropTable}} calls to HMS table-by-table. At the end it issues a 
> {{dropDatabase}}, just to be sure :) I believe this could be refactored so 
> that it speeds up the dropDB in situations where the average table count per 
> DB is very high.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18705) Improve HiveMetaStoreClient.dropDatabase

2018-03-07 Thread Adam Szita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-18705:
--
Status: In Progress  (was: Patch Available)

> Improve HiveMetaStoreClient.dropDatabase
> 
>
> Key: HIVE-18705
> URL: https://issues.apache.org/jira/browse/HIVE-18705
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18705.0.patch, HIVE-18705.1.patch
>
>
> {{HiveMetaStoreClient.dropDatabase}} has a strange implementation to ensure 
> that client-side hooks (for non-native tables, e.g. HBase) are handled. 
> Currently it starts by retrieving all the tables from HMS, and then sends 
> {{dropTable}} calls to HMS table-by-table. At the end it issues a 
> {{dropDatabase}}, just to be sure :) I believe this could be refactored so 
> that it speeds up the dropDB in situations where the average table count per 
> DB is very high.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18811) Fix desc table, column comments are not displayed

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390220#comment-16390220
 ] 

Hive QA commented on HIVE-18811:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913372/changes

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9538/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9538/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9538/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-03-07 20:56:40.256
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-9538/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-03-07 20:56:40.259
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 0cfd4fe HIVE-17163: Microbenchmark for vector op processing 
(Prasanth Jayachandran, reviewed by Matt McCline)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 0cfd4fe HIVE-17163: Microbenchmark for vector op processing 
(Prasanth Jayachandran, reviewed by Matt McCline)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-03-07 20:56:42.444
+ rm -rf ../yetus_PreCommit-HIVE-Build-9538
+ mkdir ../yetus_PreCommit-HIVE-Build-9538
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-9538
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9538/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: unrecognized input
fatal: unrecognized input
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12913372 - PreCommit-HIVE-Build

> Fix desc table, column comments are not displayed
> -
>
> Key: HIVE-18811
> URL: https://issues.apache.org/jira/browse/HIVE-18811
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1, 2.3.2
> Environment: CentOS 6.5
> Hive-1.2.1
> Hive-3.0.0
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: patch
> Fix For: 3.0.0
>
> Attachments: HIVE_18811.patch, changes
>
>
> when the column comment contains \t,
> e.g.: CREATE TABLE `zhangmang_test`(`name` string COMMENT 'name\tzm');
> then executing desc zhangmang_test
> returns: name                string              name
> because \t is the separator, so we should escape it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18892) Fix NPEs in HiveMetastore.exchange_partitions method

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390211#comment-16390211
 ] 

Hive QA commented on HIVE-18892:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913369/HIVE-18892.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12950 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q
,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_x

[jira] [Commented] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.

2018-03-07 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390199#comment-16390199
 ] 

Aihua Xu commented on HIVE-14792:
-

[~mithun] Can we get this fixed or do you need help on this? If so, I can take 
it. Let me know if you have time to work on this. Thanks.

> AvroSerde reads the remote schema-file at least once per mapper, per table 
> reference.
> -
>
> Key: HIVE-14792
> URL: https://issues.apache.org/jira/browse/HIVE-14792
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Major
>  Labels: TODOC2.2, TODOC2.4
> Fix For: 3.0.0, 2.4.0, 2.2.1
>
> Attachments: HIVE-14792.1.patch, HIVE-14792.3.patch
>
>
> Avro tables that use "external" schema files stored on HDFS can cause 
> excessive calls to {{FileSystem::open()}}, especially for queries that spawn 
> large numbers of mappers.
> This is because of the following code in {{AvroSerDe::initialize()}}:
> {code:title=AvroSerDe.java|borderStyle=solid}
> public void initialize(Configuration configuration, Properties properties) 
> throws SerDeException {
> // ...
> if (hasExternalSchema(properties)
> || columnNameProperty == null || columnNameProperty.isEmpty()
> || columnTypeProperty == null || columnTypeProperty.isEmpty()) {
>   schema = determineSchemaOrReturnErrorSchema(configuration, properties);
> } else {
>   // Get column names and sort order
>   columnNames = Arrays.asList(columnNameProperty.split(","));
>   columnTypes = 
> TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
>   schema = getSchemaFromCols(properties, columnNames, columnTypes, 
> columnCommentProperty);
>  
> properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(),
>  schema.toString());
> }
> // ...
> }
> {code}
> For tables using {{avro.schema.url}}, every time the SerDe is initialized 
> (i.e. at least once per mapper), the schema file is read remotely. For 
> queries with thousands of mappers, this leads to a stampede to the handful 
> (3?) datanodes that host the schema-file. In the best case, this causes 
> slowdowns.
> It would be preferable to distribute the Avro-schema to all mappers as part 
> of the job-conf. The alternatives aren't exactly appealing:
> # One can't rely solely on the {{column.list.types}} stored in the Hive 
> metastore. (HIVE-14789).
> # {{avro.schema.literal}} might not always be usable, because of the 
> size-limit on table-parameters. The typical size of the Avro-schema file is 
> between 0.5-3MB, in my limited experience. Bumping the max table-parameter 
> size isn't a great solution.
> If the {{avro.schema.file}} were read during query-planning, and made 
> available as part of table-properties (but not serialized into the 
> metastore), the downstream logic will remain largely intact. I have a patch 
> that does this.
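
An illustrative sketch of that last idea, assuming a planning-time hook point 
where the table {{Properties}} can still be amended; the class and method below 
are hypothetical, and only the property names ({{avro.schema.url}}, 
{{avro.schema.literal}}) are the standard Avro SerDe keys:
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class AvroSchemaInliner {

  /** Read avro.schema.url once and carry its contents as avro.schema.literal. */
  public static void inlineSchema(Configuration conf, Properties tableProps) throws Exception {
    String url = tableProps.getProperty("avro.schema.url");
    String literal = tableProps.getProperty("avro.schema.literal");
    if (url == null || url.isEmpty() || (literal != null && !literal.isEmpty())) {
      return; // no remote schema, or a literal schema is already present
    }
    Path schemaPath = new Path(url);
    FileSystem fs = schemaPath.getFileSystem(conf);
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try (InputStream in = fs.open(schemaPath)) {
      byte[] chunk = new byte[8192];
      int n;
      while ((n = in.read(chunk)) != -1) {
        buf.write(chunk, 0, n);
      }
    }
    // Mappers then pick the schema up from the table properties / job conf
    // instead of each opening the schema file on HDFS.
    tableProps.setProperty("avro.schema.literal", buf.toString(StandardCharsets.UTF_8.name()));
  }
}
{code}
As the description stresses, the inlined schema would only travel with the job 
configuration; it would not be written back into the metastore.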



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18436) Upgrade to Spark 2.3.0

2018-03-07 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390151#comment-16390151
 ] 

Sahil Takiar commented on HIVE-18436:
-

[Spark 2.3.0|https://spark.apache.org/releases/spark-release-2-3-0.html] has 
now been officially released. Latest patch adds a dependency on the published 
2.3.0 artifacts. Hive QA tests look good. [~lirui], [~xuefuz] could you review?

> Upgrade to Spark 2.3.0
> --
>
> Key: HIVE-18436
> URL: https://issues.apache.org/jira/browse/HIVE-18436
> Project: Hive
>  Issue Type: Task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18436.1.patch, HIVE-18436.2.patch, 
> HIVE-18436.3.patch
>
>
> Branching has been completed. Release candidates should be published soon. 
> Might be a while before the actual release, but at least we get to identify 
> any issues early.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18889) update all parts of Hive to use the same Guava version

2018-03-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18889:

Attachment: HIVE-18889.01.patch

> update all parts of Hive to use the same Guava version
> --
>
> Key: HIVE-18889
> URL: https://issues.apache.org/jira/browse/HIVE-18889
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18889.01.patch, HIVE-18889.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables; ACID doesn't check state for CTAS

2018-03-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390139#comment-16390139
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

Updated one more test result. Negative tests all OOMed

> stats issues for MM tables; ACID doesn't check state for CTAS
> -
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.05.patch, 
> HIVE-18571.06.patch, HIVE-18571.07.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18571) stats issues for MM tables; ACID doesn't check state for CTAS

2018-03-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18571:

Attachment: HIVE-18571.07.patch

> stats issues for MM tables; ACID doesn't check state for CTAS
> -
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.05.patch, 
> HIVE-18571.06.patch, HIVE-18571.07.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18892) Fix NPEs in HiveMetastore.exchange_partitions method

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390099#comment-16390099
 ] 

Hive QA commented on HIVE-18892:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
351 unchanged - 26 fixed = 351 total (was 377) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9537/dev-support/hive-personality.sh
 |
| git revision | master / 0cfd4fe |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9537/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9537/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix NPEs in HiveMetastore.exchange_partitions method
> 
>
> Key: HIVE-18892
> URL: https://issues.apache.org/jira/browse/HIVE-18892
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Attachments: HIVE-18892.1.patch
>
>
> The TestExchangePartitions tests revealed that an NPE is thrown if the 
> exchange_partitions method is called with null, empty, or non-existing DB and 
> table names. These NPEs could be prevented with a simple null check; a 
> MetaException with a proper error message should be thrown instead (see the 
> sketch after the list below).
> Example: an NPE is thrown in the following test cases:
>  * testExchangePartitionsNonExistingSourceTable
>  * testExchangePartitionsNonExistingSourceDB
>  * testExchangePartitionsNonExistingDestTable
>  * testExchangePartitionsNonExistingDestDB
>  * testExchangePartitionsEmptySourceTable
>  * testExchangePartitionsEmptySourceDB
>  * testExchangePartitionsEmptyDestTable
>  * testExchangePartitionsEmptyDestDB
>  * testExchangePartitionsNullSourceTable
>  * testExchangePartitionsNullSourceDB
>  * testExchangePartitionsNullDestTable
>  * testExchangePartitionsNullDestDB
>  * testExchangePartitionsNullPartSpec
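
For illustration only (not the attached patch), here is a minimal sketch of the kind of up-front validation described above: a hypothetical helper that rejects null or empty names with a MetaException before any lookup can dereference them. The helper name and parameter list are assumptions, not the actual exchange_partitions signature.

{code:java}
// Illustrative only -- not the HIVE-18892 patch. Shows the kind of argument
// validation described in the issue: reject null/empty names with a
// MetaException instead of letting a later lookup fail with an NPE.
import org.apache.hadoop.hive.metastore.api.MetaException;

public final class PartitionExchangeValidation {

  // Hypothetical helper; the parameter names are illustrative, not the
  // real exchange_partitions signature.
  static void validateExchangeArgs(String sourceDb, String sourceTable,
                                   String destDb, String destTable,
                                   java.util.Map<String, String> partitionSpecs)
      throws MetaException {
    requireNonEmpty(sourceDb, "source database name");
    requireNonEmpty(sourceTable, "source table name");
    requireNonEmpty(destDb, "destination database name");
    requireNonEmpty(destTable, "destination table name");
    if (partitionSpecs == null || partitionSpecs.isEmpty()) {
      throw new MetaException("The partition specification must not be null or empty.");
    }
  }

  private static void requireNonEmpty(String value, String what) throws MetaException {
    if (value == null || value.trim().isEmpty()) {
      throw new MetaException("The " + what + " must not be null or empty.");
    }
  }

  private PartitionExchangeValidation() {
  }
}
{code}

Calling a helper like this at the top of exchange_partitions would turn the NPEs listed above into MetaExceptions with readable messages.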



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18864) ValidWriteIdList snapshot seems incorrect if obtained after allocating writeId by current transaction.

2018-03-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390083#comment-16390083
 ] 

Hive QA commented on HIVE-18864:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913364/HIVE-18864.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 101 failed/errored test(s), 13744 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez_empty]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0]
 (batchId=169)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[special_character_in_tabnames_1]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[split_sample_out_of_range]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[split_sample_wrong_format]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_join_2] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_orderby] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_orderby_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[strict_pruning_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_grandparent]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_groupby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_multiple_cols_in_select]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_corr_multi_rows]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_scalar_multi_rows]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_select_aggregate]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_select_distinct]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_subquery_chain_exists]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[temp_table_rename]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[touch2] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_seqfile]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_table_failure3]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_table_failure5]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_collect_set_unsupported]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_array_contains_wrong2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_assert_true] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_coalesce] 
(batchId=95)
org.apache.hadoop.h

[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-07 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390054#comment-16390054
 ] 

Vineet Garg commented on HIVE-18727:


[~vbeshka] can you create a review board request for this patch?

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.patch
>
>
> Throwing an error instead of an exception makes TezProcessor stop retrying 
> the task. Since this is a NOT NULL constraint violation, we don't want 
> TezProcessor to keep retrying on failure.
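
For illustration only (not the attached patch), a self-contained sketch of the behaviour the issue asks for: surfacing the violation as an unchecked java.lang.Error so the execution engine fails the task fast instead of retrying it. The ConstraintViolationError class and enforceNotNull helper below are hypothetical names, not Hive APIs.

{code:java}
// Illustrative sketch only -- not the HIVE-18727 patch. A NOT NULL violation
// is unrecoverable, so signalling it as an Error (rather than a checked
// exception) lets the engine fail the query instead of retrying the task.
public final class NotNullConstraintExample {

  /** Hypothetical unchecked error used to signal a constraint violation. */
  static class ConstraintViolationError extends Error {
    ConstraintViolationError(String message) {
      super(message);
    }
  }

  // Stand-in for the per-row check a NOT NULL enforcement UDF performs.
  static Object enforceNotNull(Object value) {
    if (value == null) {
      // Before: throw a checked exception -> the task may be retried.
      // After:  throw an Error            -> the task fails fast.
      throw new ConstraintViolationError("NOT NULL constraint violated!");
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(enforceNotNull("ok"));   // prints: ok
    System.out.println(enforceNotNull(null));   // throws ConstraintViolationError
  }
}
{code}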



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

