[jira] [Commented] (HIVE-18481) Create tests for table related methods (get, list, exists)

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335440#comment-16335440
 ] 

Hive QA commented on HIVE-18481:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8769/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8769/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests for table related methods (get, list, exists)
> --
>
> Key: HIVE-18481
> URL: https://issues.apache.org/jira/browse/HIVE-18481
> Project: Hive
>  Issue Type: Sub-task
> Environment: Create IMetaStoreClient tests to cover the table query 
> methods
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18481.2.patch, HIVE-18481.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17751:
--
Attachment: HIVE-17751.02-standalone-metastore.patch

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.01-standalone-metastore.patch, 
> HIVE-17751.02-standalone-metastore.patch
>
>
> External applications which interface with HMS should ideally include only 
> the HMS client library instead of one big library containing the server as 
> well. We should ideally have a thin client library so that cross-version 
> support for external applications is easier. We should sub-divide the 
> standalone module into possibly 3 modules (one for common classes, one for 
> client classes, and one for the server) or 2 sub-modules (one for client and 
> one for server) so that we can generate separate jars for the HMS client and 
> server.
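
For illustration only, the 3-module variant could be laid out roughly as below. The module names are assumptions made for this sketch, not names decided in the ticket:

{noformat}
standalone-metastore/
  metastore-common/   -> shared Thrift/model classes, consumed by both sides
  metastore-client/   -> thin IMetaStoreClient library for external applications
  metastore-server/   -> HMS server; depends on common (and possibly client)
{noformat}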



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18509) Create tests for table manipulation related methods (create, alter, drop)

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335429#comment-16335429
 ] 

Hive QA commented on HIVE-18509:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907134/HIVE-18509.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11343 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)

[udf_invalid.q,authorization_uri_export.q,druid_datasource2.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,orc_replace_columns2_acid.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,duplicate_alias_in_transform.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,udaf_collect_set_unsupported.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,merge_negative_3.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,alter_concatenate_indexed_table.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,analyze_view.q,exim_14_nonpart_part.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,add_partition_with_whitelist.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,exim_03_nonpart_noncompat_colschema.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,archive1.q,subquery_multiple_cols_in_select.q,drop_index_failure.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,delete_non_acid_table.q,udf_greatest_error_2.q,create_with_constraints_validate.q,authorization_view_6.q,show_tablestatus.q,describe_xpath3.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,authorization_create_tbl.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_size_wrong_type.q,exim_04_nonpart_noncompat_colnumber.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,dyn_part4.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,create_unknown_genericudf.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,udf_ma
p_values_arg_type.q,alter_partition_change_col_nonexist.q,create_with_constraints_enforced.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_uri_index.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ctasnullcol.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_loaded.q,msck_repair_1.q,orc_change_fileformat_acid.q,udf_nonexistent_resource.q,exim_19_external_over_existing.q,serde_regex2.q,msck_repair_2.q,exim_06_nonpart_noncompat_storage.q,illegal_partition_type4.q,udf_sort_array_by_wrong1.q,create_or_replace_view5.q,windowing_leadlag_in_udaf.q,authorization_drop_index.q,truncate_c

[jira] [Commented] (HIVE-15269) Dynamic Min-Max/BloomFilter runtime-filtering for Tez

2018-01-22 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335426#comment-16335426
 ] 

Deepak Jaiswal commented on HIVE-15269:
---

The code you showed is a placeholder for the min-max and bloom filter values.

The 2nd GBY->RS calculates the final min, max, and bloom filter by aggregating 
all the min, max, and bloom filter values from the 1st branch.

Please refer to this test and its result file for the explain plans. The groupby 
has 3 columns, namely min, max, and bloom_filter.

I hope it helps.
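
For illustration only, here is a minimal Java sketch of what that final aggregation does conceptually: fold the partial (min, max, bloom filter) triples produced by the 1st branch into one final triple. The class and the BitSet-based bloom filter below are stand-ins assumed for the example, not Hive's actual runtime-filtering classes.

{noformat}
import java.util.BitSet;
import java.util.List;

// Hypothetical stand-in for one task's partial aggregate from the 1st branch.
class PartialFilter {
  final long min;
  final long max;
  final BitSet bloom; // bloom filters built with identical parameters merge by OR

  PartialFilter(long min, long max, BitSet bloom) {
    this.min = min;
    this.max = max;
    this.bloom = bloom;
  }

  // What the 2nd GBY does conceptually: min of mins, max of maxes, OR of blooms.
  static PartialFilter mergeAll(List<PartialFilter> parts) {
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    BitSet bloom = new BitSet();
    for (PartialFilter p : parts) {
      min = Math.min(min, p.min);
      max = Math.max(max, p.max);
      bloom.or(p.bloom);
    }
    return new PartialFilter(min, max, bloom);
  }
}
{noformat}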

> Dynamic Min-Max/BloomFilter runtime-filtering for Tez
> -
>
> Key: HIVE-15269
> URL: https://issues.apache.org/jira/browse/HIVE-15269
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Deepak Jaiswal
>Priority: Major
>  Labels: TODOC2.2.0
> Fix For: 2.2.0
>
> Attachments: HIVE-15269.1.patch, HIVE-15269.10.patch, 
> HIVE-15269.11.patch, HIVE-15269.12.patch, HIVE-15269.13.patch, 
> HIVE-15269.14.patch, HIVE-15269.15.patch, HIVE-15269.16.patch, 
> HIVE-15269.17.patch, HIVE-15269.18.patch, HIVE-15269.19.patch, 
> HIVE-15269.2.patch, HIVE-15269.3.patch, HIVE-15269.4.patch, 
> HIVE-15269.5.patch, HIVE-15269.6.patch, HIVE-15269.7.patch, 
> HIVE-15269.8.patch, HIVE-15269.9.patch
>
>
> If a dimension table and fact table are joined:
> {noformat}
> select *
> from store join store_sales on (store.id = store_sales.store_id)
> where store.s_store_name = 'My Store'
> {noformat}
> One optimization that can be done is to get the min/max store id values that 
> come out of the scan/filter of the store table, and send these min/max values 
> (via a Tez edge) to the task which is scanning the store_sales table.
> We can add a BETWEEN(min, max) predicate to the store_sales TableScan, where 
> this predicate can be pushed down to the storage handler (for example for ORC 
> formats). Pushing a min/max predicate to the ORC reader would allow us to 
> avoid having to read entire row groups during the table scan.
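
For illustration, the runtime-computed filter would behave as if the query above had been written with an extra range predicate on the fact table; the bounds below are hypothetical placeholders for values computed from the store scan at runtime:

{noformat}
select *
from store join store_sales on (store.id = store_sales.store_id)
where store.s_store_name = 'My Store'
  and store_sales.store_id between <runtime_min_id> and <runtime_max_id>
{noformat}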



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15269) Dynamic Min-Max/BloomFilter runtime-filtering for Tez

2018-01-22 Thread Ke Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335422#comment-16335422
 ] 

Ke Jia commented on HIVE-15269:
---

[~djaiswal] thanks for your reply.

> The 2nd GBY->RS is executed in a Reducer vertex where it aggregates all the 
> min-max and bloom filters.

When aggregating the min-max and bloom filters here, does it calculate the 
final min-max and bloom filters, or does it only combine all of them? If it is 
the former, why are the final min-max and bloom filters calculated again in 
https://github.com/apache/hive/blob/3bbf35f8ecfecc6690832ee43f4e2d2bcdad7660/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DynamicValueRegistryTez.java#L119 ?

I also wonder what the keys are in the two GBY operations, because the explain 
plan does not show them. Thanks for your help!

> Dynamic Min-Max/BloomFilter runtime-filtering for Tez
> -
>
> Key: HIVE-15269
> URL: https://issues.apache.org/jira/browse/HIVE-15269
> Project: Hive
>  Issue Type: New Feature
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Deepak Jaiswal
>Priority: Major
>  Labels: TODOC2.2.0
> Fix For: 2.2.0
>
> Attachments: HIVE-15269.1.patch, HIVE-15269.10.patch, 
> HIVE-15269.11.patch, HIVE-15269.12.patch, HIVE-15269.13.patch, 
> HIVE-15269.14.patch, HIVE-15269.15.patch, HIVE-15269.16.patch, 
> HIVE-15269.17.patch, HIVE-15269.18.patch, HIVE-15269.19.patch, 
> HIVE-15269.2.patch, HIVE-15269.3.patch, HIVE-15269.4.patch, 
> HIVE-15269.5.patch, HIVE-15269.6.patch, HIVE-15269.7.patch, 
> HIVE-15269.8.patch, HIVE-15269.9.patch
>
>
> If a dimension table and fact table are joined:
> {noformat}
> select *
> from store join store_sales on (store.id = store_sales.store_id)
> where store.s_store_name = 'My Store'
> {noformat}
> One optimization that can be done is to get the min/max store id values that 
> come out of the scan/filter of the store table, and send these min/max values 
> (via a Tez edge) to the task which is scanning the store_sales table.
> We can add a BETWEEN(min, max) predicate to the store_sales TableScan, where 
> this predicate can be pushed down to the storage handler (for example for ORC 
> formats). Pushing a min/max predicate to the ORC reader would allow us to 
> avoid having to read entire row groups during the table scan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18509) Create tests for table manipulation related methods (create, alter, drop)

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335403#comment-16335403
 ] 

Hive QA commented on HIVE-18509:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8768/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8768/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests for table manipulation related methods (create, alter, drop)
> -
>
> Key: HIVE-18509
> URL: https://issues.apache.org/jira/browse/HIVE-18509
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18509.2.patch, HIVE-18509.patch
>
>
> Create API tests for table metadata manipulations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335400#comment-16335400
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907175/HIVE-18192.05.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 106 failed/errored test(s), 11633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_case_column_pruning] 
(batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic]
 (batchId=175)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_static]
 (batchId=172)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=246)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335390#comment-16335390
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} storage-api: The patch generated 23 new + 18 unchanged 
- 1 fixed = 41 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 55 new + 1589 unchanged - 33 
fixed = 1644 total (was 1622) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hcatalog/streaming: The patch generated 12 new + 201 
unchanged - 7 fixed = 213 total (was 208) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/whitespace-eol.txt 
|
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-javadoc-javadoc-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api common standalone-metastore metastore ql 
hcatalog/streaming i

[jira] [Commented] (HIVE-18510) Enable running checkstyle on test sources as well

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335376#comment-16335376
 ] 

Hive QA commented on HIVE-18510:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907126/HIVE-18510.0.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 11633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=246)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testCancelRenewTokenFlow 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testConnection 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValid (batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValidNeg 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeProxyAuth 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeTokenAuth 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testProxyAuth 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testRenewDelegationToken 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testTokenAuth 
(batchId=247)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8766/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8766/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8766/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907126 - PreCommit-HIVE-Build

> Enable running checkstyle on test sources as well
> -
>
> Key: HIVE-18510
> URL: https://issues.apache.org/jira/browse/HIVE-18510
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Minor
> Attachments: HIVE-18510.0.patch
>
>
> Currently only source files are in the scope of checkstyle testing. We should 
> expand the scope to include our testing code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18510) Enable running checkstyle on test sources as well

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335365#comment-16335365
 ] 

Hive QA commented on HIVE-18510:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8766/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api standalone-metastore . testutils/ptest2 U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8766/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enable running checkstyle on test sources as well
> -
>
> Key: HIVE-18510
> URL: https://issues.apache.org/jira/browse/HIVE-18510
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Minor
> Attachments: HIVE-18510.0.patch
>
>
> Currently only source files are in the scope of checkstyle testing. We should 
> expand the scope to include our testing code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18459) hive-exec.jar leaks contents fb303.jar into classpath

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18459:
-
Status: Patch Available  (was: In Progress)

> hive-exec.jar leaks contents fb303.jar into classpath
> -
>
> Key: HIVE-18459
> URL: https://issues.apache.org/jira/browse/HIVE-18459
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-18459.patch, HIVE-18459.patch, HIVE-18459.patch
>
>
> Thrift classes are now on the Hive classpath via the hive-exec.jar 
> (HIVE-11553). This makes it hard to test with other versions of this library. 
> The library is already a declared dependency, so it does not need to be 
> bundled in the hive-exec.jar.
> I am proposing that we not include these classes, as in past releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18459) hive-exec.jar leaks contents fb303.jar into classpath

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18459:
-
Attachment: HIVE-18459.patch

> hive-exec.jar leaks contents fb303.jar into classpath
> -
>
> Key: HIVE-18459
> URL: https://issues.apache.org/jira/browse/HIVE-18459
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-18459.patch, HIVE-18459.patch, HIVE-18459.patch
>
>
> Thrift classes are now on the Hive classpath via the hive-exec.jar 
> (HIVE-11553). This makes it hard to test with other versions of this library. 
> The library is already a declared dependency, so it does not need to be 
> bundled in the hive-exec.jar.
> I am proposing that we not include these classes, as in past releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-18459) hive-exec.jar leaks contents fb303.jar into classpath

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18459 started by Naveen Gangam.

> hive-exec.jar leaks contents fb303.jar into classpath
> -
>
> Key: HIVE-18459
> URL: https://issues.apache.org/jira/browse/HIVE-18459
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-18459.patch, HIVE-18459.patch, HIVE-18459.patch
>
>
> Thrift classes are now on the Hive classpath via the hive-exec.jar 
> (HIVE-11553). This makes it hard to test with other versions of this library. 
> The library is already a declared dependency, so it does not need to be 
> bundled in the hive-exec.jar.
> I am proposing that we not include these classes, as in past releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18459) hive-exec.jar leaks contents fb303.jar into classpath

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18459:
-
Status: Open  (was: Patch Available)

> hive-exec.jar leaks contents fb303.jar into classpath
> -
>
> Key: HIVE-18459
> URL: https://issues.apache.org/jira/browse/HIVE-18459
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-18459.patch, HIVE-18459.patch
>
>
> Thrift classes are now on the Hive classpath via the hive-exec.jar 
> (HIVE-11553). This makes it hard to test with other versions of this library. 
> The library is already a declared dependency, so it does not need to be 
> bundled in the hive-exec.jar.
> I am proposing that we not include these classes, as in past releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18501) Typo in beeline code

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18501:
-
Status: Patch Available  (was: In Progress)

> Typo in beeline code
> 
>
> Key: HIVE-18501
> URL: https://issues.apache.org/jira/browse/HIVE-18501
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Trivial
> Attachments: HIVE-18501.patch, HIVE-18501.patch
>
>
> [https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L744]
> The string literal used here should be "silent", not "slient". There is no 
> functional bug here, just a silly typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-18501) Typo in beeline code

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18501 started by Naveen Gangam.

> Typo in beeline code
> 
>
> Key: HIVE-18501
> URL: https://issues.apache.org/jira/browse/HIVE-18501
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Trivial
> Attachments: HIVE-18501.patch, HIVE-18501.patch
>
>
> [https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L744]
> The string literal used here should be "silent", not "slient". There is no 
> functional bug here, just a silly typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18501) Typo in beeline code

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18501:
-
Attachment: HIVE-18501.patch

> Typo in beeline code
> 
>
> Key: HIVE-18501
> URL: https://issues.apache.org/jira/browse/HIVE-18501
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Trivial
> Attachments: HIVE-18501.patch, HIVE-18501.patch
>
>
> [https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L744]
> The string literal used here should be "silent", not "slient". There is no 
> functional bug here, just a silly typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18501) Typo in beeline code

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-18501:
-
Status: Open  (was: Patch Available)

Pre-commit tests did not pick this up.

> Typo in beeline code
> 
>
> Key: HIVE-18501
> URL: https://issues.apache.org/jira/browse/HIVE-18501
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Trivial
> Attachments: HIVE-18501.patch
>
>
> [https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L744]
> The string literal used here should be "silent", not "slient". There is no 
> functional bug here, just a silly typo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18472) Beeline gives log4j warnings

2018-01-22 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18472:
---
Attachment: HIVE-18472.2.patch

> Beeline gives log4j warnings
> 
>
> Key: HIVE-18472
> URL: https://issues.apache.org/jira/browse/HIVE-18472
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch
>
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console. Set system property 
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show 
> Log4j2 internal initialization logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18495) JUnit rule to enable Driver level testing

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335308#comment-16335308
 ] 

Hive QA commented on HIVE-18495:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-8765/patches/PreCommit-HIVE-Build-8765.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8765/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JUnit rule to enable Driver level testing
> -
>
> Key: HIVE-18495
> URL: https://issues.apache.org/jira/browse/HIVE-18495
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18495.01.patch, HIVE-18495.02.patch
>
>
> I've tried to write a test case for a sophisticated check...it worked so well 
> that I've started using it, and eventually created a JUnit rule to make it 
> easier to reuse.
> Currently it takes ~15-25 sec to run a test case with this framework (most of 
> which is the launch time of everything needed to run a driver command).
> * enable writing JUnit tests which have access to the {{IDriver}} level
> * leave out the cli-driver; it sometimes causes problems
> * write tests in the {{ql}} module
> * it should also work from the IDE without changing anything
> Note: JUnit 5 would be great for this task, but unfortunately JUnit 5 needs 
> maven-surefire 2.19.1, which causes all kinds of problems for Hive devs using 
> IDEA...so that's not an option.
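
As a rough, hypothetical sketch of the shape such a JUnit 4 rule could take (the class and method names below are assumptions made for illustration, not the names used in the attached patches):

{noformat}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class ExampleDriverTest {

  // Hypothetical rule: builds whatever a driver command needs before each test
  // (conf, session, metastore) and tears it down afterwards, so the test body
  // can focus on IDriver-level assertions.
  public static class DriverRule extends ExternalResource {
    @Override
    protected void before() throws Throwable {
      // create the HiveConf/session and obtain the IDriver here
    }

    @Override
    protected void after() {
      // close the driver and clean up the session here
    }
  }

  @Rule
  public DriverRule driver = new DriverRule();

  @Test
  public void runsADriverCommand() {
    // issue a driver command through the rule and assert on the result
  }
}
{noformat}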



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18383) Qtests: running all cases from TestNegativeCliDriver results in OOMs

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335289#comment-16335289
 ] 

Hive QA commented on HIVE-18383:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907120/HIVE-18383.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 223 failed/errored test(s), 11238 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[addpart1] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[allow_change_col_type_par_neg]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_partition_coltype_invalidtype]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_partition_partial_spec_dyndisabled]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_partition_with_whitelist]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_wrong_location]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_view_as_select_with_partition]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_view_failure8]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_insert1] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_insert4] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_multi6] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_multi7] 
(batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec3]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec5]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_addpartition]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_cannot_create_all_role]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_cannot_create_default_role]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_cannot_create_none_role]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_create_func1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_create_func2]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_createview]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_dfs]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_drop_admin_role]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_drop_db_cascade]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_drop_role_no_admin]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_droppartition]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_export_ptn]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_fail_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_fail_2]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_fail_7]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_grant_group]
 (batchId=93)
org.apache.hadoop.hi

[jira] [Commented] (HIVE-18368) Improve Spark Debug RDD Graph

2018-01-22 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335253#comment-16335253
 ] 

Sahil Takiar commented on HIVE-18368:
-

Attached updated patch along with new screenshots. 

> Improve Spark Debug RDD Graph
> -
>
> Key: HIVE-18368
> URL: https://issues.apache.org/jira/browse/HIVE-18368
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Completed Stages.png, HIVE-18368.1.patch, 
> HIVE-18368.2.patch, HIVE-18368.3.patch, Job Ids.png, Stage DAG 1.png, Stage 
> DAG 2.png
>
>
> The {{SparkPlan}} class does some logging to show the mapping between 
> different {{SparkTran}} objects, what shuffle types are used, and which trans 
> are cached. However, there is room for improvement.
> When debug logging is enabled the RDD graph is logged, but there isn't much 
> information printed about each RDD.
> We should combine both of the graphs and improve them. We could even make the 
> Spark Plan graph part of the {{EXPLAIN EXTENDED}} output.
> Ideally, the final graph should show a clear relationship between Tran objects, 
> RDDs, and BaseWorks. Edges should include information about the number of 
> partitions, shuffle types, Spark operations used, etc.
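
As a minimal illustration of the kind of per-edge annotation such a combined graph could log (the class name and fields below are assumptions for the sketch, not Hive's actual classes):

{noformat}
// Hypothetical edge label for the combined Tran/RDD/BaseWork graph.
class EdgeInfo {
  final String shuffleType;  // e.g. "SORT" or "GROUP"
  final int numPartitions;
  final boolean cached;

  EdgeInfo(String shuffleType, int numPartitions, boolean cached) {
    this.shuffleType = shuffleType;
    this.numPartitions = numPartitions;
    this.cached = cached;
  }

  @Override
  public String toString() {
    return String.format("%s(partitions=%d%s)",
        shuffleType, numPartitions, cached ? ", cached" : "");
  }
}
{noformat}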



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18368) Improve Spark Debug RDD Graph

2018-01-22 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18368:

Attachment: (was: HIVE-18368.3.patch)

> Improve Spark Debug RDD Graph
> -
>
> Key: HIVE-18368
> URL: https://issues.apache.org/jira/browse/HIVE-18368
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Completed Stages.png, HIVE-18368.1.patch, 
> HIVE-18368.2.patch, HIVE-18368.3.patch, Job Ids.png, Stage DAG 1.png, Stage 
> DAG 2.png
>
>
> The {{SparkPlan}} class does some logging to show the mapping between 
> different {{SparkTran}} objects, what shuffle types are used, and which trans 
> are cached. However, there is room for improvement.
> When debug logging is enabled the RDD graph is logged, but there isn't much 
> information printed about each RDD.
> We should combine both of the graphs and improve them. We could even make the 
> Spark Plan graph part of the {{EXPLAIN EXTENDED}} output.
> Ideally, the final graph should show a clear relationship between Tran objects, 
> RDDs, and BaseWorks. Edges should include information about the number of 
> partitions, shuffle types, Spark operations used, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18368) Improve Spark Debug RDD Graph

2018-01-22 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18368:

Attachment: HIVE-18368.3.patch

> Improve Spark Debug RDD Graph
> -
>
> Key: HIVE-18368
> URL: https://issues.apache.org/jira/browse/HIVE-18368
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Completed Stages.png, HIVE-18368.1.patch, 
> HIVE-18368.2.patch, HIVE-18368.3.patch, Job Ids.png, Stage DAG 1.png, Stage 
> DAG 2.png
>
>
> The {{SparkPlan}} class does some logging to show the mapping between 
> different {{SparkTran}}, what shuffle types are used, and what trans are 
> cached. However, there is room for improvement.
> When debug logging is enabled the RDD graph is logged, but there isn't much 
> information printed about each RDD.
> We should combine both of the graphs and improve them. We could even make the 
> Spark Plan graph part of the {{EXPLAIN EXTENDED}} output.
> Ideally, the final graph shows a clear relationship between Tran objects, 
> RDDs, and BaseWorks. Edge should include information about number of 
> partitions, shuffle types, Spark operations used, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18368) Improve Spark Debug RDD Graph

2018-01-22 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18368:

Attachment: Stage DAG 2.png
Stage DAG 1.png
Job Ids.png
Completed Stages.png

> Improve Spark Debug RDD Graph
> -
>
> Key: HIVE-18368
> URL: https://issues.apache.org/jira/browse/HIVE-18368
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Completed Stages.png, HIVE-18368.1.patch, 
> HIVE-18368.2.patch, Job Ids.png, Stage DAG 1.png, Stage DAG 2.png
>
>
> The {{SparkPlan}} class does some logging to show the mapping between 
> different {{SparkTran}}, what shuffle types are used, and what trans are 
> cached. However, there is room for improvement.
> When debug logging is enabled the RDD graph is logged, but there isn't much 
> information printed about each RDD.
> We should combine both of the graphs and improve them. We could even make the 
> Spark Plan graph part of the {{EXPLAIN EXTENDED}} output.
> Ideally, the final graph shows a clear relationship between Tran objects, 
> RDDs, and BaseWorks. Edge should include information about number of 
> partitions, shuffle types, Spark operations used, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18368) Improve Spark Debug RDD Graph

2018-01-22 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18368:

Attachment: (was: Spark UI - Named RDDs.png)

> Improve Spark Debug RDD Graph
> -
>
> Key: HIVE-18368
> URL: https://issues.apache.org/jira/browse/HIVE-18368
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18368.1.patch, HIVE-18368.2.patch
>
>
> The {{SparkPlan}} class does some logging to show the mapping between 
> different {{SparkTran}}, what shuffle types are used, and what trans are 
> cached. However, there is room for improvement.
> When debug logging is enabled the RDD graph is logged, but there isn't much 
> information printed about each RDD.
> We should combine both of the graphs and improve them. We could even make the 
> Spark Plan graph part of the {{EXPLAIN EXTENDED}} output.
> Ideally, the final graph shows a clear relationship between Tran objects, 
> RDDs, and BaseWorks. Edge should include information about number of 
> partitions, shuffle types, Spark operations used, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18383) Qtests: running all cases from TestNegativeCliDriver results in OOMs

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335242#comment-16335242
 ] 

Hive QA commented on HIVE-18383:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 1 new + 58 unchanged - 6 fixed 
= 59 total (was 64) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8764/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8764/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8764/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Qtests: running all cases from TestNegativeCliDriver results in OOMs
> 
>
> Key: HIVE-18383
> URL: https://issues.apache.org/jira/browse/HIVE-18383
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18383.01.patch
>
>
> I think that it is caused by unclosed SessionState objects which are piling 
> up and cause OOMs.
> Special care has been taken to start a new sessionstate for every qtest; 
> but the old one is not closed up to this 
> [point|https://github.com/apache/hive/blob/20c9a3905f4b1b627c935ad54a53a7a59015587c/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java#L1202]
> this prevents running all {{TestNegativeCliDriver}} tests in one maven 
> call... I keep getting OOMs.
> This issue sometimes appears on the ptest executor as well, and it's reported 
> as a failed batch.
> I've gone back in time a bit... seems like at 
> c925cf8d2bdf646f5c3c57ed7252c01b2ab33eec it was ok to execute the whole 
> batch; but at 1b4baf474c15377cc9f0bacdda317feabeefacaf and probably also at 
> a42314deb07a1c8d9d4daeaa799ad1c1ebb0c6c9 its not possible a
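
For context, a minimal sketch of the per-qtest session lifecycle being
discussed, assuming Hive's SessionState API (the surrounding test class is
illustrative; the close() call is the piece reported missing):

{code}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.session.SessionState;

// Illustrative lifecycle only; not the actual QTestUtil code.
public class QTestSessionLifecycleSketch {
  private SessionState ss;

  public void setUp() {
    // Each qtest starts a fresh SessionState.
    ss = SessionState.start(new HiveConf());
  }

  public void tearDown() throws Exception {
    // Without this close(), SessionState instances pile up across the
    // hundreds of TestNegativeCliDriver cases and eventually cause the OOM.
    if (ss != null) {
      ss.close();
    }
  }
}
{code}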

[jira] [Commented] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-22 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335237#comment-16335237
 ] 

Deepak Jaiswal commented on HIVE-18516:
---

[~jdere] [~ekoifman] can you please review?

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-22 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18516:
--
Attachment: HIVE-18516.1.patch

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-22 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18516:
--
Status: Patch Available  (was: Open)

> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18516.1.patch
>
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-18247:
--
Attachment: HIVE-18247.01.patch

> Use DB auto-increment for indexes
> -
>
> Key: HIVE-18247
> URL: https://issues.apache.org/jira/browse/HIVE-18247
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: datanucleus, perfomance
> Attachments: HIVE-18247.01.patch
>
>
> I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has 
> the same issue. DataNucleus uses SEQUENCE table to allocate IDs which 
> requires raw locks on multiple tables during transactions and this creates 
> scalability problems. 
> Instead DN should rely on DB auto-increment mechanisms which are much more 
> scalable.
> See SENTRY-1960 for extra details.
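
As a hedged illustration of the proposed switch, in JDO annotation form
(Hive's metastore model is actually mapped via package.jdo, so the real
change would live there; the entity below is invented):

{code}
import javax.jdo.annotations.DatastoreIdentity;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;

// Hypothetical entity for illustration only.
// Before: IdGeneratorStrategy.INCREMENT, which allocates IDs from a shared
// SEQUENCE table and takes locks across tables inside the transaction.
// After: IDENTITY lets the database assign IDs via its own auto-increment
// mechanism (AUTO_INCREMENT / SERIAL / IDENTITY depending on the backend).
@PersistenceCapable
@DatastoreIdentity(strategy = IdGeneratorStrategy.IDENTITY)
class ExampleEntity {
  String name;
}
{code}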



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-18247:
--
Status: Patch Available  (was: Open)

> Use DB auto-increment for indexes
> -
>
> Key: HIVE-18247
> URL: https://issues.apache.org/jira/browse/HIVE-18247
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: datanucleus, perfomance
> Attachments: HIVE-18247.01.patch
>
>
> I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has 
> the same issue. DataNucleus uses SEQUENCE table to allocate IDs which 
> requires raw locks on multiple tables during transactions and this creates 
> scalability problems. 
> Instead DN should rely on DB auto-increment mechanisms which are much more 
> scalable.
> See SENTRY-1960 for extra details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-18247:
--
Attachment: (was: HIVE-18247.01.patch)

> Use DB auto-increment for indexes
> -
>
> Key: HIVE-18247
> URL: https://issues.apache.org/jira/browse/HIVE-18247
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: datanucleus, perfomance
>
> I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has 
> the same issue. DataNucleus uses SEQUENCE table to allocate IDs which 
> requires raw locks on multiple tables during transactions and this creates 
> scalability problems. 
> Instead DN should rely on DB auto-increment mechanisms which are much more 
> scalable.
> See SENTRY-1960 for extra details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-18247:
--
Status: Open  (was: Patch Available)

> Use DB auto-increment for indexes
> -
>
> Key: HIVE-18247
> URL: https://issues.apache.org/jira/browse/HIVE-18247
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: datanucleus, perfomance
>
> I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has 
> the same issue. DataNucleus uses SEQUENCE table to allocate IDs which 
> requires raw locks on multiple tables during transactions and this creates 
> scalability problems. 
> Instead DN should rely on DB auto-increment mechanisms which are much more 
> scalable.
> See SENTRY-1960 for extra details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18480) Create tests for function related methods

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335230#comment-16335230
 ] 

Hive QA commented on HIVE-18480:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907119/HIVE-18480.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 11702 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver
 (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.metastore.TestAcidTableSetup.testTransactionalValidation 
(batchId=221)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testCancelRenewTokenFlow 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testConnection (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testIsValid (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testIsValidNeg (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testNegativeProxyAuth (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testNegativeTokenAuth (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testProxyAuth (batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testRenewDelegationToken 
(batchId=247)
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testTokenAuth (batchId=247)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8763/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8763/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8763/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907119 - PreCommit-HIVE-Build

> Create tests for function related methods
> -
>
> Key: HIVE-18480
> URL: https://issues.apache.org/jira/browse/HIVE-18480
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18480.2.patch, HIVE-18480.patch
>
>
> Create IMetaStoreClient tests to cover the function related methods
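
For a sense of what such coverage looks like, a minimal hypothetical test
body (client construction and function setup are assumed to come from the
shared test infra of HIVE-18372; names are invented):

{code}
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Function;

// Sketch only; the real tests live in standalone-metastore.
class FunctionMethodsSketch {
  void checkGetFunction(IMetaStoreClient client) throws Exception {
    Function fn = client.getFunction("testdb", "test_udf");
    assert "testdb".equals(fn.getDbName());
    assert "test_udf".equals(fn.getFunctionName());
  }
}
{code}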



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-16472) Update Hive-on-Spark documentation

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-16472:
--

Assignee: Sahil Takiar  (was: Vihang Karajgaonkar)

> Update Hive-on-Spark documentation
> --
>
> Key: HIVE-16472
> URL: https://issues.apache.org/jira/browse/HIVE-16472
> Project: Hive
>  Issue Type: Task
>Reporter: Vihang Karajgaonkar
>Assignee: Sahil Takiar
>Priority: Major
>
> It has been pointed out on the user list that the Hive-on-Spark wiki is not 
> up to date. This is a task JIRA to track this, fix the wiki problems for 
> Hive-on-Spark, and make any bug fixes if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16472) Update Hive-on-Spark documentation

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335214#comment-16335214
 ] 

Vihang Karajgaonkar commented on HIVE-16472:


Sahil has been working on HoS a lot more than me and he might be a better 
person to update the wiki. He agreed to take this up. Thanks [~stakiar]

> Update Hive-on-Spark documentation
> --
>
> Key: HIVE-16472
> URL: https://issues.apache.org/jira/browse/HIVE-16472
> Project: Hive
>  Issue Type: Task
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> It has been pointed out on the user list that the Hive-on-Spark wiki is not 
> up to date. This is a task JIRA to track this, fix the wiki problems for 
> Hive-on-Spark, and make any bug fixes if necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18422) Vectorized input format should not be used when vectorized input format is excluded and row.serde is enabled

2018-01-22 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335213#comment-16335213
 ] 

Matt McCline commented on HIVE-18422:
-

I had forgotten we have 2 excludes variables, sorry (I should have remembered 
since I reviewed the 2^nd^ variable change!). FULL OUTER MapJoin has made my 
mind mush. Ok, so I see what you are doing with this change and it makes sense.

 

hive.vectorized.use.vectorized.input.format

hive.vectorized.input.format.excludes

 

hive.vectorized.use.vector.serde.deserialize

 

hive.vectorized.use.row.serde.deserialize

hive.vectorized.row.serde.inputformat.excludes

 

+1 LGTM

> Vectorized input format should not be used when vectorized input format is 
> excluded and row.serde is enabled
> 
>
> Key: HIVE-18422
> URL: https://issues.apache.org/jira/browse/HIVE-18422
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-18422.01.patch, HIVE-18422.02.patch
>
>
> HIVE-17534 introduced a config which makes it possible to exclude certain 
> input formats from vectorized execution without affecting other input 
> formats. If an input format is excluded and row.serde is enabled at the same 
> time, the vectorizer still sets {{useVectorizedInputFormat}} to true, which 
> causes vectorized readers to be used in row.serde mode.
> In order to reproduce:
> {noformat}
> set hive.fetch.task.conversion=none;
> set hive.vectorized.use.row.serde.deserialize=true;
> set hive.vectorized.use.vector.serde.deserialize=true;
> set hive.vectorized.execution.enabled=true;
> set hive.vectorized.execution.reduce.enabled=true;
> set hive.vectorized.row.serde.inputformat.excludes=;
> -- SORT_QUERY_RESULTS
> -- exclude MapredParquetInputFormat from vectorization, this should cause 
> mapwork vectorization to be disabled
> set 
> hive.vectorized.input.format.excludes=org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat,org.apache.hadoop.hive.ql.io.orc.OrcInputFormat;
> set hive.vectorized.use.vectorized.input.format=true;
> create table orcTbl (t1 tinyint, t2 tinyint)
> stored as orc;
> insert into orcTbl values (54, 9), (-104, 25), (-112, 24);
> explain vectorization select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> {noformat}
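
A hedged sketch of the gating rule under discussion (invented names, not the
actual Vectorizer code): an input format on the excludes list must not use
the vectorized reader, even when row.serde vectorization is enabled.

{code}
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration of the decision logic only.
final class VectorizedInputFormatGate {
  static boolean useVectorizedInputFormat(String inputFormatClass,
      String excludesConf, boolean rowSerdeEnabled) {
    List<String> excluded = Arrays.asList(excludesConf.split(","));
    // rowSerdeEnabled is deliberately not consulted here: enabling
    // row.serde must not override the excludes list, which was the bug.
    return !excluded.contains(inputFormatClass);
  }
}
{code}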



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18422) Vectorized input format should not be used when vectorized input format is excluded and row.serde is enabled

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335194#comment-16335194
 ] 

Vihang Karajgaonkar commented on HIVE-18422:


ping [~mmccline] [~gopalv]. Please feel free to let me know if my patch is 
not the right way to fix this. I would be happy to fix this any other way as 
well. Also, it would be great if you could let me know if you are busy and 
won't have time to review this patch. I will start bugging other committers 
in that case ;)

> Vectorized input format should not be used when vectorized input format is 
> excluded and row.serde is enabled
> 
>
> Key: HIVE-18422
> URL: https://issues.apache.org/jira/browse/HIVE-18422
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-18422.01.patch, HIVE-18422.02.patch
>
>
> HIVE-17534 introduced a config which makes it possible to exclude certain 
> input formats from vectorized execution without affecting other input 
> formats. If an input format is excluded and row.serde is enabled at the same 
> time, the vectorizer still sets {{useVectorizedInputFormat}} to true, which 
> causes vectorized readers to be used in row.serde mode.
> In order to reproduce:
> {noformat}
> set hive.fetch.task.conversion=none;
> set hive.vectorized.use.row.serde.deserialize=true;
> set hive.vectorized.use.vector.serde.deserialize=true;
> set hive.vectorized.execution.enabled=true;
> set hive.vectorized.execution.reduce.enabled=true;
> set hive.vectorized.row.serde.inputformat.excludes=;
> -- SORT_QUERY_RESULTS
> -- exclude MapredParquetInputFormat from vectorization, this should cause 
> mapwork vectorization to be disabled
> set 
> hive.vectorized.input.format.excludes=org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat,org.apache.hadoop.hive.ql.io.orc.OrcInputFormat;
> set hive.vectorized.use.vectorized.input.format=true;
> create table orcTbl (t1 tinyint, t2 tinyint)
> stored as orc;
> insert into orcTbl values (54, 9), (-104, 25), (-112, 24);
> explain vectorization select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18516) load data should rename files consistent with insert statements for ACID Tables

2018-01-22 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal reassigned HIVE-18516:
-


> load data should rename files consistent with insert statements for ACID 
> Tables
> ---
>
> Key: HIVE-18516
> URL: https://issues.apache.org/jira/browse/HIVE-18516
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>
> h1. load data should rename files consistent with insert statements for ACID 
> Tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-8436) Modify SparkWork to split works with multiple child works [Spark Branch]

2018-01-22 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335184#comment-16335184
 ] 

Chao Sun commented on HIVE-8436:


 Hi [~kellyzly], without the copying function, the RDD cache will cache 
*references*, which will get changed as the tuples get forwarded to downstream 
operators. Therefore, it is not correct. Hope this answers your question.
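
A hedged illustration of that point, assuming Hive-on-Spark's reuse of
Writable instances (a sketch, not the actual copy function): values must be
deep-copied before caching, otherwise the cache holds references that
downstream operators mutate.

{code}
import org.apache.hadoop.io.BytesWritable;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

// Sketch only: deep-copy each tuple before cache() so cached entries are
// immune to later mutation of the reused Writable objects.
final class CopyBeforeCacheSketch {
  static JavaPairRDD<BytesWritable, BytesWritable> cacheSafely(
      JavaPairRDD<BytesWritable, BytesWritable> input) {
    return input
        .mapToPair(t -> new Tuple2<>(
            new BytesWritable(t._1().copyBytes()),
            new BytesWritable(t._2().copyBytes())))
        .cache();
  }
}
{code}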

> Modify SparkWork to split works with multiple child works [Spark Branch]
> 
>
> Key: HIVE-8436
> URL: https://issues.apache.org/jira/browse/HIVE-8436
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Chao Sun
>Priority: Major
> Fix For: 1.1.0
>
> Attachments: HIVE-8436.1-spark.patch, HIVE-8436.10-spark.patch, 
> HIVE-8436.11-spark.patch, HIVE-8436.2-spark.patch, HIVE-8436.3-spark.patch, 
> HIVE-8436.4-spark.patch, HIVE-8436.5-spark.patch, HIVE-8436.6-spark.patch, 
> HIVE-8436.7-spark.patch, HIVE-8436.8-spark.patch, HIVE-8436.9-spark.patch
>
>
> Based on the design doc, we need to split the operator tree of a work in 
> SparkWork if the work is connected to multiple child works. The splitting of 
> the operator tree is performed by cloning the original work and removing 
> unwanted branches in the operator tree. Please refer to the design doc for 
> details.
> This process should be done right before we generate SparkPlan. We should 
> have a utility method that takes the original SparkWork and returns a 
> modified SparkWork.
> This process should also keep the information about the original work and its 
> clones. Such information will be needed during SparkPlan generation 
> (HIVE-8437).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335179#comment-16335179
 ] 

Vihang Karajgaonkar commented on HIVE-17580:


Added the fix for the failing tests. Updated the PR with the fix as well.

> Remove dependency of get_fields_with_environment_context API to serde
> -
>
> Key: HIVE-17580
> URL: https://issues.apache.org/jira/browse/HIVE-17580
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17580.003-standalone-metastore.patch, 
> HIVE-17580.04-standalone-metastore.patch, 
> HIVE-17580.05-standalone-metastore.patch
>
>
> The {{get_fields_with_environment_context}} metastore API uses the 
> {{Deserializer}} class to access the fields metadata for the cases where it 
> is stored along with the data files (avro tables). The problem is that the 
> Deserializer class is defined in the hive-serde module, and in order to make 
> the metastore independent of Hive we will have to remove this dependency (at 
> least we should change it to a runtime dependency instead of a compile-time 
> one).
> The other option is to investigate whether we can use SearchArgument to 
> provide this functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17580:
---
Attachment: HIVE-17580.05-standalone-metastore.patch

> Remove dependency of get_fields_with_environment_context API to serde
> -
>
> Key: HIVE-17580
> URL: https://issues.apache.org/jira/browse/HIVE-17580
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17580.003-standalone-metastore.patch, 
> HIVE-17580.04-standalone-metastore.patch, 
> HIVE-17580.05-standalone-metastore.patch
>
>
> The {{get_fields_with_environment_context}} metastore API uses the 
> {{Deserializer}} class to access the fields metadata for the cases where it 
> is stored along with the data files (avro tables). The problem is that the 
> Deserializer class is defined in the hive-serde module, and in order to make 
> the metastore independent of Hive we will have to remove this dependency (at 
> least we should change it to a runtime dependency instead of a compile-time 
> one).
> The other option is to investigate whether we can use SearchArgument to 
> provide this functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18480) Create tests for function related methods

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335176#comment-16335176
 ] 

Hive QA commented on HIVE-18480:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8763/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8763/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests for function related methods
> -
>
> Key: HIVE-18480
> URL: https://issues.apache.org/jira/browse/HIVE-18480
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18480.2.patch, HIVE-18480.patch
>
>
> Create IMetaStoreClient tests to cover the function related methods



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18514) add service output for ranger to WM DDL operations

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18514:

Status: Patch Available  (was: Open)

[~thejas] can you take a look? It's a simple patch.

> add service output for ranger to WM DDL operations
> --
>
> Key: HIVE-18514
> URL: https://issues.apache.org/jira/browse/HIVE-18514
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18514.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18514) add service output for ranger to WM DDL operations

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18514:

Attachment: HIVE-18514.patch

> add service output for ranger to WM DDL operations
> --
>
> Key: HIVE-18514
> URL: https://issues.apache.org/jira/browse/HIVE-18514
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18514.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18514) add service output for ranger to WM DDL operations

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18514:
---

Assignee: Sergey Shelukhin

> add service output for ranger to WM DDL operations
> --
>
> Key: HIVE-18514
> URL: https://issues.apache.org/jira/browse/HIVE-18514
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18514.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18513) Query results caching

2018-01-22 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-18513:
-


> Query results caching
> -
>
> Key: HIVE-18513
> URL: https://issues.apache.org/jira/browse/HIVE-18513
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> Add a query results cache that can save the results of an executed Hive query 
> for reuse on subsequent queries. This may be useful in cases where the same 
> query is issued many times, since Hive can return the results of a cached 
> query rather than having to execute the full query on the cluster.
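
As a toy illustration of the idea (invented names and key scheme, not the
actual HIVE-18513 design): key the cache by the query text plus a version
stamp of its input tables, so any write to an input naturally misses.

{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy in-memory results cache; a real design must also handle invalidation,
// staleness limits, memory pressure, and spilling results to disk.
final class QueryResultsCacheSketch {
  private final Map<String, List<Object[]>> cache = new ConcurrentHashMap<>();

  private static String key(String query, long inputTablesVersion) {
    return inputTablesVersion + ":" + query;
  }

  List<Object[]> lookup(String query, long inputTablesVersion) {
    return cache.get(key(query, inputTablesVersion));
  }

  void store(String query, long inputTablesVersion, List<Object[]> rows) {
    cache.put(key(query, inputTablesVersion), rows);
  }
}
{code}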



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18479) Create tests to cover dropPartition methods

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335149#comment-16335149
 ] 

Hive QA commented on HIVE-18479:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907110/HIVE-18479.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11697 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=91)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,index_bitmap_auto.q,bucket_num_reducers_acid2.q]
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=238)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_15]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8762/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8762/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8762/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907110 - PreCommit-HIVE-Build

> Create tests to cover dropPartition methods
> ---
>
> Key: HIVE-18479
> URL: https://issues.apache.org/jira/browse/HIVE-18479
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18479.1.patch, HIVE-18479.2.patch
>
>
> The following methods of IMetaStoreClient are covered in this Jira:
> {code}
> - boolean dropPartition(String, String, List, boolean)
> - boolean dropPartition(String, String, List, PartitionDropOptions)
> - boolean dropPartition(String, String, String, boolean){code}
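
For illustration, a hypothetical test body for the first overload (setup and
client wiring are assumed to come from the shared test infra; db and table
names are invented):

{code}
import java.util.Arrays;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;

// Sketch only; the real tests live in standalone-metastore.
class DropPartitionSketch {
  void checkDropPartition(IMetaStoreClient client) throws Exception {
    boolean dropped = client.dropPartition("testdb", "testtable",
        Arrays.asList("2018", "01"), true /* deleteData */);
    assert dropped;
    try {
      client.getPartition("testdb", "testtable", Arrays.asList("2018", "01"));
      throw new AssertionError("partition should no longer exist");
    } catch (NoSuchObjectException expected) {
      // expected: the partition is gone
    }
  }
}
{code}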



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18493) Add display escape for CR/LF to Hive CLI and Beeline

2018-01-22 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335096#comment-16335096
 ] 

Matt McCline commented on HIVE-18493:
-

Hive QA #8772

> Add display escape for CR/LF to Hive CLI and Beeline
> 
>
> Key: HIVE-18493
> URL: https://issues.apache.org/jira/browse/HIVE-18493
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18493.01.patch, HIVE-18493.02.patch, 
> HIVE-18493.03.patch, HIVE-18493.04.patch, HIVE-18493.05.patch
>
>
> Add optional display escaping of carriage return and line feed so row output 
> remains one line.
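
The core of such escaping is small; a minimal sketch (hypothetical helper,
not the patch's actual code):

{code}
final class CrLfEscapeSketch {
  // Replace embedded CR/LF with visible escapes so each row stays on one line.
  static String escapeCrLf(String column) {
    if (column == null) {
      return null;
    }
    return column.replace("\r", "\\r").replace("\n", "\\n");
  }
}
{code}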



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18372) Create testing infra to test different HMS instances

2018-01-22 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-18372:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks [~vihangk1] for the review!

> Create testing infra to test different HMS instances
> 
>
> Key: HIVE-18372
> URL: https://issues.apache.org/jira/browse/HIVE-18372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18372.2.patch, HIVE-18372.3.patch, 
> HIVE-18372.5.patch, HIVE-18372.6.patch, HIVE-18372.patch
>
>
> Since there will be multiple tests, it would be good to have a solid 
> infrastructure that helps create them faster and more easily.
> This patch will also include the test cases for the Database related methods 
> to showcase the infra.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18387) Minimize time that REBUILD locks the materialized view

2018-01-22 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335060#comment-16335060
 ] 

Jesus Camacho Rodriguez commented on HIVE-18387:


[~ashutoshc], I included HIVE-18321 within the same patch, as it only 
required minor changes to make it all work. Could you take a look?
https://reviews.apache.org/r/65250/
Thanks

> Minimize time that REBUILD locks the materialized view
> --
>
> Key: HIVE-18387
> URL: https://issues.apache.org/jira/browse/HIVE-18387
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18387.01.patch, HIVE-18387.patch
>
>
> Currently, REBUILD will block the materialized view while the final move task 
> is being executed. The idea for this improvement is to create the new 
> materialization in a new folder (new version) and then just flip the pointer 
> to the folder in the MV definition in the metastore. REBUILD operations for a 
> given MV should get an exclusive lock though, i.e., they cannot be executed 
> concurrently.
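
A hedged outline of the rebuild-then-flip approach (invented method names,
not the actual implementation): do the long-running write without holding the
MV lock, and take the exclusive lock only for the metadata pointer flip.

{code}
import org.apache.hadoop.fs.Path;

// Abstract sketch; every helper below is hypothetical.
abstract class MvRebuildSketch {
  abstract Path newVersionDirFor(String mvName);
  abstract void writeMaterialization(String mvName, Path dir) throws Exception;
  abstract AutoCloseable acquireExclusiveRebuildLock(String mvName)
      throws Exception;
  abstract void flipMvLocationInMetastore(String mvName, Path dir)
      throws Exception;

  void rebuild(String mvName) throws Exception {
    Path newVersionDir = newVersionDirFor(mvName);
    // Long-running part: materialize into a fresh folder, no MV lock held.
    writeMaterialization(mvName, newVersionDir);
    try (AutoCloseable lock = acquireExclusiveRebuildLock(mvName)) {
      // Readers are blocked only for this brief metadata pointer flip.
      flipMvLocationInMetastore(mvName, newVersionDir);
    }
  }
}
{code}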



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18387) Minimize time that REBUILD locks the materialized view

2018-01-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18387:
---
Attachment: HIVE-18387.01.patch

> Minimize time that REBUILD locks the materialized view
> --
>
> Key: HIVE-18387
> URL: https://issues.apache.org/jira/browse/HIVE-18387
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18387.01.patch, HIVE-18387.patch
>
>
> Currently, REBUILD will block the materialized view while the final move task 
> is being executed. The idea for this improvement is to create the new 
> materialization in a new folder (new version) and then just flip the pointer 
> to the folder in the MV definition in the metastore. REBUILD operations for a 
> given MV should get an exclusive lock though, i.e., they cannot be executed 
> concurrently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18479) Create tests to cover dropPartition methods

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335032#comment-16335032
 ] 

Hive QA commented on HIVE-18479:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8762/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8762/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests to cover dropPartition methods
> ---
>
> Key: HIVE-18479
> URL: https://issues.apache.org/jira/browse/HIVE-18479
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-18479.1.patch, HIVE-18479.2.patch
>
>
> The following methods of IMetaStoreClient are covered in this Jira:
> {code}
> - boolean dropPartition(String, String, List, boolean)
> - boolean dropPartition(String, String, List, PartitionDropOptions)
> - boolean dropPartition(String, String, String, boolean){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18481) Create tests for table related methods (get, list, exists)

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335011#comment-16335011
 ] 

Hive QA commented on HIVE-18481:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907136/HIVE-18481.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11669 tests 
executed
*Failed tests:*
{noformat}
TestTriggersWorkloadManager - did not produce a TEST-*.xml file (likely timed 
out) (batchId=233)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.metastore.TestMarkPartition.testMarkingPartitionSet 
(batchId=212)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8761/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8761/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8761/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907136 - PreCommit-HIVE-Build

> Create tests for table related methods (get, list, exists)
> --
>
> Key: HIVE-18481
> URL: https://issues.apache.org/jira/browse/HIVE-18481
> Project: Hive
>  Issue Type: Sub-task
> Environment: Create IMetaStoreClient tests to cover the table query 
> methods
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18481.2.patch, HIVE-18481.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334998#comment-16334998
 ] 

Vihang Karajgaonkar commented on HIVE-18472:


Let's update the comment to make it clear that we are excluding Hive's log4j 
jars from the CLASSPATH before executing hadoop and hbase commands from the 
script. Hadoop and HBase use their own log4j bindings, and we add Hive's log4j 
binding at the end. This gets rid of the annoying log4j warning before starting 
beeline. Thanks [~janulatha] for fixing this.

+1

> Beeline gives log4j warnings
> 
>
> Key: HIVE-18472
> URL: https://issues.apache.org/jira/browse/HIVE-18472
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18472.1.patch
>
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console. Set system property 
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show 
> Log4j2 internal initialization logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17751:
--
Attachment: HIVE-17751.01-standalone-metastore.patch

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.01-standalone-metastore.patch
>
>
> External applications interfacing with HMS should ideally include only the 
> HMS client library instead of one big library containing the server as 
> well. We should have a thin client library so that cross-version 
> support for external applications is easier. We should sub-divide the 
> standalone module into possibly 3 modules (one for common classes, one for 
> client classes and one for server) or 2 sub-modules (one for client and one 
> for server) so that we can generate separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17751:
--
Status: Patch Available  (was: Open)

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.01-standalone-metastore.patch
>
>
> External applications interfacing with HMS should ideally include only the 
> HMS client library instead of one big library containing the server as 
> well. We should have a thin client library so that cross-version 
> support for external applications is easier. We should sub-divide the 
> standalone module into possibly 3 modules (one for common classes, one for 
> client classes and one for server) or 2 sub-modules (one for client and one 
> for server) so that we can generate separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17751:
--
Status: Open  (was: Patch Available)

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
>
> External applications interfacing with HMS should ideally include only the 
> HMS client library instead of one big library containing the server as 
> well. We should have a thin client library so that cross-version 
> support for external applications is easier. We should sub-divide the 
> standalone module into possibly 3 modules (one for common classes, one for 
> client classes and one for server) or 2 sub-modules (one for client and one 
> for server) so that we can generate separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-01-22 Thread Alexander Kolbasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kolbasov updated HIVE-17751:
--
Attachment: (was: HIVE-17751.01-standalone-metastore.patch)

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
>
> External applications interfacing with HMS should ideally include only the 
> HMS client library instead of one big library containing the server as 
> well. We should have a thin client library so that cross-version 
> support for external applications is easier. We should sub-divide the 
> standalone module into possibly 3 modules (one for common classes, one for 
> client classes and one for server) or 2 sub-modules (one for client and one 
> for server) so that we can generate separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18485) Add more unit tests for hive.strict.checks.* properties

2018-01-22 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334910#comment-16334910
 ] 

Sahil Takiar commented on HIVE-18485:
-

[~vihangk1], [~pvary], could you take a look: 
https://reviews.apache.org/r/65272/

> Add more unit tests for hive.strict.checks.* properties
> ---
>
> Key: HIVE-18485
> URL: https://issues.apache.org/jira/browse/HIVE-18485
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18485.1.patch, HIVE-18485.2.patch, 
> HIVE-18485.3.patch
>
>
> We should add some more negative tests for the {{hive.strict.checks.\*}} 
> properties that exercise them explicitly - right now the existing tests all 
> rely on {{hive.mapred.mode=strict}}, which is deprecated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18393) Error returned when some other type is read as string from parquet tables

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334902#comment-16334902
 ] 

Vihang Karajgaonkar commented on HIVE-18393:


+1 LGTM.

> Error returned when some other type is read as string from parquet tables
> -
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, 
> HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.5.patch
>
>
> TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean, 
> when read as String, Varchar or Char, should return the correct data. Currently 
> this results in an error for Parquet tables.
> Test Case:
> {code}
> drop table if exists testAltCol;
> create table testAltCol
> (cId  TINYINT,
>  cTimeStamp TIMESTAMP,
>  cDecimal   DECIMAL(38,18),
>  cDoubleDOUBLE,
>  cFloat   FLOAT,
>  cBigIntBIGINT,
>  cInt INT,
>  cSmallInt  SMALLINT,
>  cTinyint   TINYINT,
>  cBoolean   BOOLEAN);
> insert into testAltCol values
> (1,
>  '2017-11-07 09:02:49.9',
>  12345678901234567890.123456789012345678,
>  1.79e308,
>  3.4e38,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> insert into testAltCol values
> (2,
>  '1400-01-01 01:01:01.1',
>  1.1,
>  2.2,
>  3.3,
>  1,
>  2,
>  3,
>  4,
>  FALSE);
> insert into testAltCol values
> (3,
>  '1400-01-01 01:01:01.1',
>  10.1,
>  20.2,
>  30.3,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> select cId, cTimeStamp from testAltCol order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltCol order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltCol order by cId;
> select cId, cBoolean from testAltCol order by cId;
> drop table if exists testAltColP;
> create table testAltColP stored as parquet as select * from testAltCol;
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp STRING,
>  cDecimal   STRING,
>  cDoubleSTRING,
>  cFloat   STRING,
>  cBigIntSTRING,
>  cInt STRING,
>  cSmallInt  STRING,
>  cTinyint   STRING,
>  cBoolean   STRING);
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp VARCHAR(100),
>  cDecimal   VARCHAR(100),
>  cDoubleVARCHAR(100),
>  cFloat   VARCHAR(100),
>  cBigIntVARCHAR(100),
>  cInt VARCHAR(100),
>  cSmallInt  VARCHAR(100),
>  cTinyint   VARCHAR(100),
>  cBoolean   VARCHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp CHAR(100),
>  cDecimal   CHAR(100),
>  cDoubleCHAR(100),
>  cFloat   CHAR(100),
>  cBigIntCHAR(100),
>  cInt CHAR(100),
>  cSmallInt  CHAR(100),
>  cTinyint   CHAR(100),
>  cBoolean   CHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> drop table if exists testAltColP;
> {code}
> {code}
> Error:
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Excerpt from the log:
> 2018-01-05T15:54:05,756 ERROR [LocalJobRunner Map Task Executor #0] 
> mr.ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row [Error getting row data with exception 
> java.lang.UnsupportedOperationException: Cannot inspect 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject(ParquetStringInspector.java:77)
> {code}
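
The stack trace above shows ParquetStringInspector.getPrimitiveJavaObject 
rejecting a TimestampWritable outright. A minimal sketch of the kind of 
coercion branch such an inspector needs, assuming only TimestampWritable's 
getTimestamp() accessor; this is illustrative, not necessarily how the 
attached patch implements it:
{code:java}
import org.apache.hadoop.hive.serde2.io.TimestampWritable;
import org.apache.hadoop.io.Text;

public class StringCoercionSketch {
  // Return the value's string form instead of throwing for known writables.
  public static String getPrimitiveJavaObject(Object o) {
    if (o == null) {
      return null;
    }
    if (o instanceof Text) {
      return o.toString();
    }
    if (o instanceof TimestampWritable) {
      // java.sql.Timestamp renders as "yyyy-mm-dd hh:mm:ss[.f...]"
      return ((TimestampWritable) o).getTimestamp().toString();
    }
    throw new UnsupportedOperationException("Cannot inspect " + o.getClass().getName());
  }
}
{code}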



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18481) Create tests for table related methods (get, list, exists)

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334900#comment-16334900
 ] 

Hive QA commented on HIVE-18481:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8761/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8761/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create tests for table related methods (get, list, exists)
> --
>
> Key: HIVE-18481
> URL: https://issues.apache.org/jira/browse/HIVE-18481
> Project: Hive
>  Issue Type: Sub-task
> Environment: Create IMetaStoreClient tests to cover the table query 
> methods
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-18481.2.patch, HIVE-18481.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam resolved HIVE-18504.
--
Resolution: Not A Bug

Thanks for the confirmation. Closing the JIRA.

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive in HDP 2.6.3 is still throwing InvalidObjectException(message:Invalid 
> column type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Jimson K James (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334888#comment-16334888
 ] 

Jimson K James commented on HIVE-18504:
---

Thank you!

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive in HDP 2.6.3 is still throwing InvalidObjectException(message:Invalid 
> column type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Jimson K James (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334887#comment-16334887
 ] 

Jimson K James commented on HIVE-18504:
---

Sure, it looks like the issue is with HDP only.

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive in HDP 2.6.3 is still throwing InvalidObjectException(message:Invalid 
> column type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Jimson K James (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimson K James updated HIVE-18504:
--
Description: 
Hive in HDP 2.6.3 is still throwing InvalidObjectException(message:Invalid 
column type name is too long.

Please find attached the create table query. For more details please refer to 
HIVE-15249
{code:java}

FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. InvalidObjectException(message:Invalid 
column type name length 2980 exceeds max allowed length 2000, type 
struct,entities:struct,text:string>>,symbols:array...
{code}
 
{code:java}
[root@sandbox-hdp hive-json]# hive --version
Hive 1.2.1000.2.6.3.0-235
Subversion 
git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
 -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
[root@sandbox-hdp hive-json]# beeline
Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
beeline> !connect 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
 hive
Enter password for 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
 
Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
{code}

  was:
Hive 2.6.3 is still throwing InvalidObjectException(message:Invalid column type 
name is too long.

Please find attached the create table query. For more details please refer to 
HIVE-15249
{code:java}

FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. InvalidObjectException(message:Invalid 
column type name length 2980 exceeds max allowed length 2000, type 
struct,entities:struct,text:string>>,symbols:array...
{code}
 
{code:java}
[root@sandbox-hdp hive-json]# hive --version
Hive 1.2.1000.2.6.3.0-235
Subversion 
git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
 -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
[root@sandbox-hdp hive-json]# beeline
Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
beeline> !connect 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
 hive
Enter password for 
jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
 
Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
{code}


> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive in HDP 2.6.3 is still throwing InvalidObjectException(message:Invalid 
> column type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# bee

[jira] [Commented] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334874#comment-16334874
 ] 

Hive QA commented on HIVE-17178:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907104/HIVE-17178.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11600 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_6]
 (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
 (batchId=233)
org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs
 (batchId=247)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8759/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8759/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8759/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907104 - PreCommit-HIVE-Build

> Spark Partition Pruning Sink Operator can't target multiple Works
> -
>
> Key: HIVE-17178
> URL: https://issues.apache.org/jira/browse/HIVE-17178
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Rui Li
>Priority: Major
> Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map 
> Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated 
> if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), 
> (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2) values (1), (2), (3), 
> (4);
> alter table part_table_2 add partition (part_col=1);
> insert into table part_table_2 partition (part_col=1) values (1), (2), (3), 
> (4);
> alter table part_table_2 add partition (part_col=2);
> insert into table part_table_2 partition (part_col=2) values (1), (2), (3), 
> (4);
> explain select * from regular_table, part_table_1, part_table_2 where 
> regular_table.col = part_table_1.part_col and regular_table.col = 
> part_table_2.part_col;
> {code}
> The explain plan is
> {code}
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages

[jira] [Comment Edited] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334311#comment-16334311
 ] 

Sankar Hariappan edited comment on HIVE-18192 at 1/22/18 8:24 PM:
--

Added 05.patch with the changes below.

1. Reverted the field name in the ROW__ID struct from "writeid" to "transactionid" to 
fix several test failures.

2. Used lowercase for DB and table names stored in the txn meta tables.

3. Corrected the test code to refer to the write id instead of the txn id.

All these changes should fix several test failures from the previous run.


was (Author: sankarh):
Added 05.patch after reverting field name in ROW__ID struct from "writeid" to 
"transactionid" to fix several test failures.

Also, used lowercase for DB and table names stored in txn meta tables.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via 
> <transaction id, bucket id, row id> 
> which will move to 
> <write id, bucket id, row id>.
> Each table modified by a given transaction will have a table-level 
> write id allocated, and a persisted map of global txn id -> table -> write 
> id has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details.
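
A hedged illustration of the txn-to-write-id mapping described above, using an 
in-memory map with invented table names (the patch itself persists this in the 
txn meta tables, per the comment above):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class TxnToWriteIdSketch {
  public static void main(String[] args) {
    // global txn id -> fully-qualified table name -> per-table write id
    Map<Long, Map<String, Long>> txnToTableWriteId = new HashMap<>();

    // txn 42 wrote to two tables; each table has its own write id sequence
    txnToTableWriteId.computeIfAbsent(42L, t -> new HashMap<>())
        .put("default.acid_tbl", 7L);
    txnToTableWriteId.get(42L).put("default.acid_tbl2", 3L);

    // a reader resolving its snapshot asks which write id txn 42 produced
    System.out.println(txnToTableWriteId.get(42L).get("default.acid_tbl")); // 7
  }
}
{code}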



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18508) Port schema changes from HIVE-14498 to standalone-metastore

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar resolved HIVE-18508.

Resolution: Duplicate

Thanks for the pointer. Closing this as duplicate.

> Port schema changes from HIVE-14498 to standalone-metastore
> ---
>
> Key: HIVE-18508
> URL: https://issues.apache.org/jira/browse/HIVE-18508
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> HIVE-14498 introduced a number of schema changes which are missing from the 
> standalone-metastore *.sql files. Due to this queries are erroring out using 
> standalone-metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-22 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334825#comment-16334825
 ] 

Vihang Karajgaonkar commented on HIVE-17983:


Thanks [~alangates] for the patch. Left some comments on the PR.

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17983.2.patch, HIVE-17983.patch
>
>
> In order to be separately installable, the standalone metastore needs its own 
> tarballs, startup scripts, etc. All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16821) Vectorization: support Explain Analyze in vectorized mode

2018-01-22 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-16821:
---
Attachment: HIVE-16821.10.patch

> Vectorization: support Explain Analyze in vectorized mode
> -
>
> Key: HIVE-16821
> URL: https://issues.apache.org/jira/browse/HIVE-16821
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability, Vectorization
>Affects Versions: 2.1.1, 3.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Attachments: HIVE-16821.1.patch, HIVE-16821.10.patch, 
> HIVE-16821.2.patch, HIVE-16821.2.patch, HIVE-16821.3.patch, 
> HIVE-16821.7.patch, HIVE-16821.8.patch, HIVE-16821.9.patch
>
>
> Currently, to avoid a branch in the operator inner loop, the runtime stats 
> are only available in non-vectorized mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18500) annoying exceptions from LLAP Jmx view in the logs

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18500:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> annoying exceptions from LLAP Jmx view in the logs
> --
>
> Key: HIVE-18500
> URL: https://issues.apache.org/jira/browse/HIVE-18500
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18500.patch
>
>
> This can show up for multiple things; it should be handled better: 
> {noformat}
>  - /jmx ()] org.apache.hive.http.JMXJsonServlet: getting attribute 
> UsageThreshold of java.lang:type=MemoryPool,name=G1 Survivor Space threw an 
> exception
> javax.management.RuntimeMBeanException: 
> java.lang.UnsupportedOperationException: Usage threshold is not supported
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>  ~[?:1.8.0_25]
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>  ~[?:1.8.0_25]
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>  ~[?:1.8.0_25]
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678) 
> ~[?:1.8.0_25]
>   at 
> org.apache.hive.http.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:322) 
> ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235]
>   at 
> org.apache.hive.http.JMXJsonServlet.listBeans(JMXJsonServlet.java:300) 
> ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235]
>   at org.apache.hive.http.JMXJsonServlet.doGet(JMXJsonServlet.java:194) 
> ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) 
> ~[servlet-api-2.5.jar:2.5]
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) 
> ~[servlet-api-2.5.jar:2.5]
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:479) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:521) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:312)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at org.eclipse.jetty.server.Server.handle(Server.java:345) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:910)
>  ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634) 
> ~[jetty-all-7.6.0.v20120127.jar:7.6.0.v20120127]
>   at 
> org.eclipse.jetty.http.HttpParser.parseAv
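
The root cause at the top of the (truncated) trace is a memory pool, G1 
Survivor Space, that does not support usage thresholds. A self-contained probe 
showing the guard a caller can apply before reading the attribute; this 
illustrates the failure mode and is not the committed patch:
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class UsageThresholdProbe {
  public static void main(String[] args) {
    for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
      // G1 Survivor Space reports false here, which is why the JMX view threw
      if (pool.isUsageThresholdSupported()) {
        System.out.println(pool.getName() + ": threshold=" + pool.getUsageThreshold());
      } else {
        System.out.println(pool.getName() + ": usage threshold not supported");
      }
    }
  }
}
{code}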

[jira] [Updated] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Attachment: HIVE-18192.05.patch

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via 
> <transaction id, bucket id, row id> 
> which will move to 
> <write id, bucket id, row id>.
> Each table modified by a given transaction will have a table-level 
> write id allocated, and a persisted map of global txn id -> table -> write 
> id has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Attachment: (was: HIVE-18192.05.patch)

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per-table write id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via 
> <transaction id, bucket id, row id> 
> which will move to 
> <write id, bucket id, row id>.
> Each table modified by a given transaction will have a table-level 
> write id allocated, and a persisted map of global txn id -> table -> write 
> id has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17481) LLAP workload management

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17481:

Summary: LLAP workload management  (was: LLAP workload management 
(umbrella))

> LLAP workload management
> 
>
> Key: HIVE-17481
> URL: https://issues.apache.org/jira/browse/HIVE-17481
> Project: Hive
>  Issue Type: New Feature
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Workload management design doc.pdf
>
>
> This effort is intended to improve various aspects of cluster sharing for 
> LLAP. Some of these are applicable to non-LLAP queries and may later be 
> extended to all queries. Administrators will be able to specify and apply 
> policies for workload management ("resource plans") that apply to the entire 
> cluster, with only one resource plan being active at a time. The policies 
> will be created and modified using new Hive DDL statements. 
> The policies will cover:
> * Dividing the cluster into a set of (optionally, nested) query pools that 
> are each allocated a fraction of the cluster, a set query parallelism, a 
> resource-sharing policy between queries, and potentially other attributes 
> such as priority.
> * Mapping the incoming queries into pools based on the query user, groups, 
> explicit configuration, etc.
> * Specifying rules that perform actions on queries based on counter values 
> (e.g. killing or moving queries).
> One would also be able to switch policies on a live cluster without (usually) 
> affecting running queries, including e.g. to change policies for daytime and 
> nighttime usage patterns, and other similar scenarios. The switches would be 
> safe and atomic; versioning may eventually be supported.
> Some implementation details:
> * WM will only be supported in HS2 (for obvious reasons).
> * All LLAP query AMs will run in "interactive" YARN queue and will be 
> fungible between Hive pools.
> * We will use the concept of "guaranteed tasks" (also known as ducks) to 
> enforce cluster allocation without a central scheduler and without 
> compromising throughput. Guaranteed tasks preempt other (speculative) tasks 
> and are distributed from HS2 to AMs, and from AMs to tasks, in accordance 
> with percentage allocations in the policy. Each "duck" corresponds to a CPU 
> resource on the cluster. The implementation will be isolated so as to allow 
> different ones later.
> * In future, we may consider improved task placement and late binding, 
> similar to the ones described in Sparrow paper, to work around potential 
> hotspots/etc. that are not avoided with the decentralized scheme.
> * Only one HS2 will initially be supported to avoid split-brain workload 
> management. We will also implement (in a tangential set of work items) 
> active-passive HS2 recovery. Eventually, we intend to switch to full 
> active-active HS2 configuration with shared WM and Tez session pool (unlike 
> the current case with 2 separate session pools). 
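
Picking up the "guaranteed tasks" bullet above: a toy illustration of 
distributing ducks according to percentage allocations (the slot count and 
pool fractions are invented; the real distribution flows from HS2 to AMs to 
tasks):
{code:java}
public class DuckAllocationSketch {
  public static void main(String[] args) {
    // assumption for illustration: 100 CPU slots, three pools at 60/30/10%
    int totalDucks = 100;
    double[] poolFractions = {0.60, 0.30, 0.10};
    for (int i = 0; i < poolFractions.length; i++) {
      long ducks = Math.round(poolFractions[i] * totalDucks);
      System.out.println("pool-" + i + " gets " + ducks + " guaranteed tasks");
    }
  }
}
{code}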



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-17481) LLAP workload management (umbrella)

2018-01-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-17481.
-
   Resolution: Fixed
Fix Version/s: 3.0.0

All the initial work on workload management is done. I've moved all phase 
2/3/... items from subtasks to tasks; bug fixes will also be handled in 
separate JIRAs.


> LLAP workload management (umbrella)
> ---
>
> Key: HIVE-17481
> URL: https://issues.apache.org/jira/browse/HIVE-17481
> Project: Hive
>  Issue Type: New Feature
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Workload management design doc.pdf
>
>
> This effort is intended to improve various aspects of cluster sharing for 
> LLAP. Some of these are applicable to non-LLAP queries and may later be 
> extended to all queries. Administrators will be able to specify and apply 
> policies for workload management ("resource plans") that apply to the entire 
> cluster, with only one resource plan being active at a time. The policies 
> will be created and modified using new Hive DDL statements. 
> The policies will cover:
> * Dividing the cluster into a set of (optionally, nested) query pools that 
> are each allocated a fraction of the cluster, a set query parallelism, a 
> resource-sharing policy between queries, and potentially other attributes 
> such as priority.
> * Mapping the incoming queries into pools based on the query user, groups, 
> explicit configuration, etc.
> * Specifying rules that perform actions on queries based on counter values 
> (e.g. killing or moving queries).
> One would also be able to switch policies on a live cluster without (usually) 
> affecting running queries, including e.g. to change policies for daytime and 
> nighttime usage patterns, and other similar scenarios. The switches would be 
> safe and atomic; versioning may eventually be supported.
> Some implementation details:
> * WM will only be supported in HS2 (for obvious reasons).
> * All LLAP query AMs will run in "interactive" YARN queue and will be 
> fungible between Hive pools.
> * We will use the concept of "guaranteed tasks" (also known as ducks) to 
> enforce cluster allocation without a central scheduler and without 
> compromising throughput. Guaranteed tasks preempt other (speculative) tasks 
> and are distributed from HS2 to AMs, and from AMs to tasks, in accordance 
> with percentage allocations in the policy. Each "duck" corresponds to a CPU 
> resource on the cluster. The implementation will be isolated so as to allow 
> different ones later.
> * In future, we may consider improved task placement and late binding, 
> similar to the ones described in Sparrow paper, to work around potential 
> hotspots/etc. that are not avoided with the decentralized scheme.
> * Only one HS2 will initially be supported to avoid split-brain workload 
> management. We will also implement (in a tangential set of work items) 
> active-passive HS2 recovery. Eventually, we intend to switch to full 
> active-active HS2 configuration with shared WM and Tez session pool (unlike 
> the current case with 2 separate session pools). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17178) Spark Partition Pruning Sink Operator can't target multiple Works

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334801#comment-16334801
 ] 

Hive QA commented on HIVE-17178:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} ql: The patch generated 3 new + 70 unchanged - 3 fixed 
= 73 total (was 73) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1281852 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8759/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8759/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8759/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Spark Partition Pruning Sink Operator can't target multiple Works
> -
>
> Key: HIVE-17178
> URL: https://issues.apache.org/jira/browse/HIVE-17178
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Rui Li
>Priority: Major
> Attachments: HIVE-17178.1.patch, HIVE-17178.2.patch
>
>
> A Spark Partition Pruning Sink Operator cannot be used to target multiple Map 
> Work objects. The entire DPP subtree (SEL-GBY-SPARKPRUNINGSINK) is duplicated 
> if a single table needs to be used to target multiple Map Works.
> The following query shows the issue:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=true;
> create table part_table_1 (col int) partitioned by (part_col int);
> create table part_table_2 (col int) partitioned by (part_col int);
> create table regular_table (col int);
> insert into table regular_table values (1);
> alter table part_table_1 add partition (part_col=1);
> insert into table part_table_1 partition (part_col=1) values (1), (2), (3), 
> (4);
> alter table part_table_1 add partition (part_col=2);
> insert into table part_table_1 partition (part_col=2) values (1), (2), (3), 
> (4)

[jira] [Commented] (HIVE-15355) Concurrency issues during parallel moveFile due to HDFSUtils.setFullFileStatus

2018-01-22 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334790#comment-16334790
 ] 

Sergey Shelukhin commented on HIVE-15355:
-

This bugfix seems to imply that aclStatus can change; calling 
sourceStatus.getAclEntries() twice may return lists derived from different 
aclStatus.
So, if ... gets some list, a background thread sets aclStatus to null, and the 
aclEntries = ... line gets the null value.
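
A minimal sketch of the double-read race being described; the field and method 
names here are hypothetical illustrations, not Hive's actual code:

{code:java}
import java.util.Collections;
import java.util.List;

class FileStatusHolder {
  // Written by a background thread, read by the mover threads.
  private volatile List<String> aclEntries;

  // Racy: reads the field twice; a background writer can null it in between.
  List<String> getAclEntriesRacy() {
    if (aclEntries != null) {   // first read sees a non-null list
      return aclEntries;        // second read may already see null
    }
    return Collections.emptyList();
  }

  // Safer: read the volatile field exactly once into a local.
  List<String> getAclEntriesSafe() {
    List<String> snapshot = aclEntries;
    return snapshot != null ? snapshot : Collections.emptyList();
  }
}
{code}

Reading the volatile field once into a local, as in getAclEntriesSafe, removes 
the window in which a concurrent writer can null the field between the check 
and the use.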

> Concurrency issues during parallel moveFile due to HDFSUtils.setFullFileStatus
> --
>
> Key: HIVE-15355
> URL: https://issues.apache.org/jira/browse/HIVE-15355
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: HIVE-15355.01.patch, HIVE-15355.02.patch
>
>
> It is possible to run into concurrency issues during the multi-threaded moveFile 
> calls issued when processing queries like {{INSERT OVERWRITE TABLE ... SELECT ..}} 
> when there are multiple files in the staging directory, which is a 
> subdirectory of the target directory. The issue is hard to reproduce, but the 
> following stacktrace is one such example:
> {noformat}
> INFO  : Loading data to table 
> functional_text_gzip.alltypesaggmultifilesnopart from 
> hdfs://localhost:20500/test-warehouse/alltypesaggmultifilesnopart_text_gzip/.hive-staging_hive_2016-12-01_19-58-21_712_8968735301422943318-1/-ext-1
> ERROR : Failed with exception java.lang.ArrayIndexOutOfBoundsException
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ArrayIndexOutOfBoundsException
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2858)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3124)
> at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1701)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:313)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1976)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1689)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1421)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1205)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1200)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:237)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:88)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:293)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
> Getting log thread is interrupted, since query is done!
> at 
> org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:306)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at java.util.ArrayList.removeRange(ArrayList.java:616)
> at java.util.ArrayList$SubList.removeRange(ArrayList.java:1021)
> at java.util.AbstractList.clear(AbstractList.java:234)
> at 
> com.google.common.collect.Iterables.removeIfFromRandomAccessList(Iterables.java:213)
> at com.google.common.collect.Iterables.removeIf(Iterables.java:184)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.removeBaseAclEntries(Hadoop23Shims.java:865)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.setFullFileStatus(Hadoop23Shims.java:757)
> at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2835)
> at org.apache.hadoop.hive.ql.metadata.Hive$3.call(Hive.java:2828)
> ... 4 more
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> {noformat}
> Quick online search also shows some other instances like the one mentioned in 
> http://stackoverflow.com/questions/38900333/get-concurrentmodificationexception-in-step-2-create-intermediate-flat-hive-

[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334786#comment-16334786
 ] 

Naveen Gangam commented on HIVE-18504:
--

[~tomsmaily] Can I close this Jira then?

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive 2.6.3 is still throwing InvalidObjectException(message:Invalid column 
> type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18458) Workload manager initializes even when interactive queue is not set

2018-01-22 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18458:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Test failures are unrelated and are already happening in master. 

Committed to master. Thanks for the reviews!

> Workload manager initializes even when interactive queue is not set
> ---
>
> Key: HIVE-18458
> URL: https://issues.apache.org/jira/browse/HIVE-18458
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18458.1.patch, HIVE-18458.2.patch
>
>
> Workload manager gets initialized even when interactive queue is not defined 
> (however there is an active resource plan in metastore). Active resource plan 
> is used for tez in this case. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15631) Optimize for hive client logs , you can filter the log for each session itself.

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334779#comment-16334779
 ] 

Hive QA commented on HIVE-15631:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907091/HIVE-15631.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8758/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8758/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8758/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907091 - PreCommit-HIVE-Build

> Optimize for hive client logs , you can filter the log for each session 
> itself.
> ---
>
> Key: HIVE-15631
> URL: https://issues.apache.org/jira/browse/HIVE-15631
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Clients, Hive
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-15631.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We have several hadoop clusters, about 15 thousand nodes. Every day we use 
> hive to submit more than 100 thousand jobs. 
> So we have a large file of hive logs on every client host every day, but I 
> cannot tell which lines belong to the session I submitted. 
> So I hope to print the hive.session.id on every log line; then I could use 
> grep to find the logs of the session I submitted.
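
The request above amounts to tagging each log line with the session id. A 
minimal sketch, assuming the session id is published to the log4j2 MDC; the 
key name "sessionId" and the demo class are illustrative assumptions, not 
Hive's actual logging configuration:

{code:java}
// Sketch only: a value placed in the log4j2 MDC (ThreadContext) can be
// emitted on every log line by the appender's layout pattern, e.g.
//   %d{ISO8601} %p [%X{sessionId}] %c{2}: %m%n
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class SessionLogDemo {
  private static final Logger LOG = LogManager.getLogger(SessionLogDemo.class);

  public static void main(String[] args) {
    ThreadContext.put("sessionId", "hive_session_1234"); // once per session
    LOG.info("query submitted"); // with the pattern above, carries the id
    ThreadContext.remove("sessionId");
  }
}
{code}

With the id on every line, {{grep hive_session_1234 hive.log}} isolates the 
output of a single session.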



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17833) Publish split generation counters

2018-01-22 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334772#comment-16334772
 ] 

Prasanth Jayachandran commented on HIVE-17833:
--

Fixes NPE in test

> Publish split generation counters
> -
>
> Key: HIVE-17833
> URL: https://issues.apache.org/jira/browse/HIVE-17833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-17833.1.patch, HIVE-17833.10.patch, 
> HIVE-17833.11.patch, HIVE-17833.2.patch, HIVE-17833.3.patch, 
> HIVE-17833.4.patch, HIVE-17833.5.patch, HIVE-17833.6.patch, 
> HIVE-17833.7.patch, HIVE-17833.8.patch, HIVE-17833.9.patch
>
>
> With TEZ-3856, tez counters are exposed via input initializers which can be 
> used to publish split generation counters. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17833) Publish split generation counters

2018-01-22 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17833:
-
Attachment: HIVE-17833.11.patch

> Publish split generation counters
> -
>
> Key: HIVE-17833
> URL: https://issues.apache.org/jira/browse/HIVE-17833
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-17833.1.patch, HIVE-17833.10.patch, 
> HIVE-17833.11.patch, HIVE-17833.2.patch, HIVE-17833.3.patch, 
> HIVE-17833.4.patch, HIVE-17833.5.patch, HIVE-17833.6.patch, 
> HIVE-17833.7.patch, HIVE-17833.8.patch, HIVE-17833.9.patch
>
>
> With TEZ-3856, tez counters are exposed via input initializers which can be 
> used to publish split generation counters. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18495) JUnit rule to enable Driver level testing

2018-01-22 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334756#comment-16334756
 ] 

Andrew Sherman commented on HIVE-18495:
---

Thanks [~kgyrtkirk], the example made it much clearer. It seems like this could 
be a useful way to run tests. Can it be made to work with good old mapreduce?

> JUnit rule to enable Driver level testing
> -
>
> Key: HIVE-18495
> URL: https://issues.apache.org/jira/browse/HIVE-18495
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18495.01.patch, HIVE-18495.02.patch
>
>
> I've tried to write a case for a sophisticated check... it worked so well that 
> I've started using it, and eventually created a junit rule to make it easier 
> to reuse.
> Currently it takes ~15-25 sec to run a test case with this framework (most of 
> which is the launch time of all the machinery needed to run a driver 
> command).
> * enable writing JUnit tests which have access to the {{IDriver}} level
> * leave out the cli-driver; it sometimes causes problems
> * write tests in the {{ql}} module
> * it should also work from the IDE without changing anything
> Note: JUnit 5 would be great for this task, but unfortunately junit5 needs 
> maven-surefire 2.19.1, which causes all kinds of problems for hive devs 
> using idea... so that's not an option.
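
A hedged sketch of the idea: a JUnit 4 rule that owns the driver-level state 
so individual tests just issue commands. The names here ({{DriverRule}}, 
{{run}}) are hypothetical; the rule in the attached patches may look different:

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class DriverLevelTest {

  // Owns the expensive setup once per test; closes it even if the test fails.
  static class DriverRule extends ExternalResource {
    @Override protected void before() { /* start the session and IDriver */ }
    @Override protected void after()  { /* close them */ }
    void run(String command) { /* hand the command to the IDriver */ }
  }

  @Rule public DriverRule driver = new DriverRule();

  @Test
  public void createAndQuery() {
    driver.run("create table t (a int)");
    driver.run("select count(*) from t");
  }
}
{code}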



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18508) Port schema changes from HIVE-14498 to standalone-metastore

2018-01-22 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334732#comment-16334732
 ] 

Alan Gates commented on HIVE-18508:
---

I believe I already addressed these in HIVE-17983 patch 2.

> Port schema changes from HIVE-14498 to standalone-metastore
> ---
>
> Key: HIVE-18508
> URL: https://issues.apache.org/jira/browse/HIVE-18508
> Project: Hive
>  Issue Type: Task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> HIVE-14498 introduced a number of schema changes which are missing from the 
> standalone-metastore *.sql files. Due to this, queries error out when using 
> the standalone-metastore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Jimson K James (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334726#comment-16334726
 ] 

Jimson K James commented on HIVE-18504:
---

Yup, sorry. 2.6.3 is the HDP version. I thought HDP 2.6.3 was using Hive2? Can 
you please check the log file?
I tried to connect to Hive2 using beeline, but it still goes to Hive 1.2.

Any idea how I can use Hive2 from HDP?


> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive 2.6.3 is still throwing InvalidObjectException(message:Invalid column 
> type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15631) Optimize for hive client logs , you can filter the log for each session itself.

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334706#comment-16334706
 ] 

Hive QA commented on HIVE-15631:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 15dd294 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8758/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8758/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Optimize for hive client logs , you can filter the log for each session 
> itself.
> ---
>
> Key: HIVE-15631
> URL: https://issues.apache.org/jira/browse/HIVE-15631
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Clients, Hive
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-15631.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We have several hadoop clusters, about 15 thousand nodes. Every day we use 
> hive to submit more than 100 thousand jobs. 
> So we have a large file of hive logs on every client host every day, but I 
> cannot tell which lines belong to the session I submitted. 
> So I hope to print the hive.session.id on every log line; then I could use 
> grep to find the logs of the session I submitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334702#comment-16334702
 ] 

Eugene Koifman commented on HIVE-18504:
---

2.6.3 looks like an HDP version.

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive 2.6.3 is still throwing InvalidObjectException(message:Invalid column 
> type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18383) Qtests: running all cases from TestNegativeCliDriver results in OOMs

2018-01-22 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334656#comment-16334656
 ] 

Ashutosh Chauhan commented on HIVE-18383:
-

+1 pending tests

> Qtests: running all cases from TestNegativeCliDriver results in OOMs
> 
>
> Key: HIVE-18383
> URL: https://issues.apache.org/jira/browse/HIVE-18383
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18383.01.patch
>
>
> I think that it is caused by unclosed SessionState objects which pile up 
> and cause OOMs.
> Special care has been taken to start a new sessionstate for every qtest, 
> but the old one is not closed until this 
> [point|https://github.com/apache/hive/blob/20c9a3905f4b1b627c935ad54a53a7a59015587c/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java#L1202].
> This prevents running all {{TestNegativeCliDriver}} tests in one maven call; 
> I keep getting OOMs.
> This issue sometimes appears on the ptest executor as well, and is reported 
> as a failed batch.
> I've gone back in time a bit; it seems that at 
> c925cf8d2bdf646f5c3c57ed7252c01b2ab33eec it was ok to execute the whole 
> batch, but at 1b4baf474c15377cc9f0bacdda317feabeefacaf and probably also at 
> a42314deb07a1c8d9d4daeaa799ad1c1ebb0c6c9 it is not possible anymore. I suspect 
> that there is possibly another issue, or these are just the consequences of 
> the sessionstate getting heavier by a few hundred bytes, making it easier 
> to fill up the heap.
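
A minimal sketch of the leak pattern described above, using the 
{{SessionState}} start/get/close surface; the usage is illustrative, not the 
QTestUtil code itself:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.session.SessionState;

public class SessionPerTest {
  // Close the previous session before starting the next one; otherwise each
  // qtest leaves a SessionState reachable and the heap slowly fills up.
  void startFreshSession(HiveConf conf) throws Exception {
    SessionState old = SessionState.get();
    if (old != null) {
      old.close();
    }
    SessionState.start(conf);
  }
}
{code}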



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18367) Describe Extended output is truncated on a table with an explicit row format containing tabs or newlines.

2018-01-22 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334657#comment-16334657
 ] 

Andrew Sherman commented on HIVE-18367:
---

Thank you, [~pvary].

> Describe Extended output is truncated on a table with an explicit row format 
> containing tabs or newlines.
> -
>
> Key: HIVE-18367
> URL: https://issues.apache.org/jira/browse/HIVE-18367
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18367.1.patch, HIVE-18367.2.patch, 
> HIVE-18367.3.patch, HIVE-18367.4.patch, HIVE-18367.5.patch, HIVE-18367.6.patch
>
>
> 'Describe Extended' dumps information about a table. The protocol for sending 
> this data relies on tabs and newlines to separate pieces of data. If a table 
> has 'FIELDS terminated by XXX' or 'LINES terminated by XXX' where XXX is a 
> tab or newline, then the output seen by the user is prematurely truncated. Fix 
> this by replacing tabs and newlines in the table description with “\t” and 
> “\n”.
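
The fix described above reduces to a small escaping step; the class and method 
names here are illustrative, not the patch's actual code:

{code:java}
public final class DescribeEscaper {
  // Replace literal control characters with the two-character sequences
  // backslash-t and backslash-n, so a FIELDS/LINES terminator of tab or
  // newline cannot truncate the tab/newline-delimited describe output.
  static String escapeDelimiters(String s) {
    return s.replace("\t", "\\t").replace("\n", "\\n");
  }
}
{code}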



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18484) Create tests to cover listPartition(s) methods

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334659#comment-16334659
 ] 

Hive QA commented on HIVE-18484:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907094/HIVE-18484.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11831 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fp_literal_arithmetic] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8757/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8757/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8757/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907094 - PreCommit-HIVE-Build

> Create tests to cover listPartition(s) methods
> --
>
> Key: HIVE-18484
> URL: https://issues.apache.org/jira/browse/HIVE-18484
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-18484.0.patch, HIVE-18484.1.patch
>
>
> Methods of IMetaStoreClient covered in this task are:
> {code:java}
> listPartitions(String,String,short)
> listPartitions(String,String,List(String),short)
> listPartitionSpecs(String,String,int)
> listPartitionsWithAuthInfo(String,String,short,String,List(String))
> listPartitionsWithAuthInfo(String,String,List(String),short,String,List(String))
> listPartitionsByFilter(String,String,String,short)
> listPartitionSpecsByFilter(String,String,String,int)
> listPartitionNames(String,String,short)
> listPartitionNames(String,String,List(String),short)
> listPartitionValues(PartitionValuesRequest){code}
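
A hedged sketch of what one such test can look like: list all partitions of a 
known table through {{IMetaStoreClient}} and assert on the result. The setup 
(client creation, a test table with three partitions) is assumed rather than 
shown, and the real tests in the patch are more thorough:

{code:java}
import static org.junit.Assert.assertEquals;

import java.util.List;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.junit.Test;

public class TestListPartitionsSketch {
  private IMetaStoreClient client; // assumed initialized in a @Before method

  @Test
  public void testListPartitionsAll() throws Exception {
    // max = -1 asks for all partitions of the table
    List<Partition> parts = client.listPartitions("db", "tbl", (short) -1);
    assertEquals(3, parts.size());
  }
}
{code}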



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16821) Vectorization: support Explain Analyze in vectorized mode

2018-01-22 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-16821:
---
Status: Open  (was: Patch Available)

> Vectorization: support Explain Analyze in vectorized mode
> -
>
> Key: HIVE-16821
> URL: https://issues.apache.org/jira/browse/HIVE-16821
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability, Vectorization
>Affects Versions: 2.1.1, 3.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Attachments: HIVE-16821.1.patch, HIVE-16821.2.patch, 
> HIVE-16821.2.patch, HIVE-16821.3.patch, HIVE-16821.7.patch, 
> HIVE-16821.8.patch, HIVE-16821.9.patch
>
>
> Currently, to avoid a branch in the operator inner loop - the runtime stats 
> are only available in non-vector mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16821) Vectorization: support Explain Analyze in vectorized mode

2018-01-22 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-16821:
---
Status: Patch Available  (was: Open)

> Vectorization: support Explain Analyze in vectorized mode
> -
>
> Key: HIVE-16821
> URL: https://issues.apache.org/jira/browse/HIVE-16821
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability, Vectorization
>Affects Versions: 2.1.1, 3.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Attachments: HIVE-16821.1.patch, HIVE-16821.2.patch, 
> HIVE-16821.2.patch, HIVE-16821.3.patch, HIVE-16821.7.patch, 
> HIVE-16821.8.patch, HIVE-16821.9.patch
>
>
> Currently, to avoid a branch in the operator inner loop - the runtime stats 
> are only available in non-vector mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334650#comment-16334650
 ] 

Naveen Gangam commented on HIVE-18504:
--

[~tomsmaily] Based on this version string, it appears this is a Hive 1.2 
version and not 2.6.3. I am not sure there is a Hive 2.6.3 release; could you 
please double-check? This fix is not available in Hive 1.x releases.

I have had a chance to try this on my local machine on a Hive 2.1 release that 
has this fix, and it appears to be working. What backend DB are you running 
this against? Thanks
{code:java}
0: jdbc:hive2://localhost:1> describe tweets;
INFO  : Compiling 
command(queryId=hive_20180122094040_03a4b910-9872-4b81-8d07-12114a8908ec): 
describe tweets
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name, 
type:string, comment:from deserializer), FieldSchema(name:data_type, 
type:string, comment:from deserializer), FieldSchema(name:comment, type:string, 
comment:from deserializer)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_20180122094040_03a4b910-9872-4b81-8d07-12114a8908ec); Time 
taken: 0.027 seconds
INFO  : Executing 
command(queryId=hive_20180122094040_03a4b910-9872-4b81-8d07-12114a8908ec): 
describe tweets
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing 
command(queryId=hive_20180122094040_03a4b910-9872-4b81-8d07-12114a8908ec); Time 
taken: 0.014 seconds
INFO  : OK
+++--+--+
|  col_name  | data_type
  | comment  |
+++--+--+
| contributors   | string   
  |  |
| coordinates    | string   
  |  |
| created_at | string   
  |  |
| entities   | 
struct,text:string>>,symbols:array,urls:array,user_mentions:array,name:string,screen_name:string>>>
 |  |
| favorite_count | tinyint  
  |  |
| favorited  | boolean  
  |  |
| filter_level   | string   
  |  |
| geo    | string   
  |  |
| id | bigint   
  |  |
| id_str | binary   
  |  |
| in_reply_to_screen_name    | string   
  |  |
| in_reply_to_status_id  | string   
  |  |
| in_reply_to_status_id_str  | string   
  |  |
| in_reply_to_user_id    | string   
  |  |
| in_reply_to_user_id_str    | string   
  |  |
| is_quote_status    | boolean  
  |  |
| lang   | string   
  |  |
| place  | string   
  |  |
| quote_count    | tinyint  
  |  |
| reply_count    | tinyint  
  |  |
| retweet_count  | tinyint  
  |  |
| retweeted  | boolean  
  |  |
| retweeted_status   | 
struct,entities:struct,text:string>>,symbols:array,urls:array,url:string>>,user_mentions:array>,extended_tweet:struct,entities:struct,text:string>>,media:array,media_url:string,media_url_https:string,sizes:struct,medium:struct,small:struct,thumb:struct>,type:string,url:string,video_info:struct,duration_millis:int,variants:array,symbols:array,urls:array,user_mentions:array>,extended_entities:struct,media_url:string,media_url_https:string,sizes:struct,medium:struct,small:struct,thumb:struct>,type:string,url:string,video_info:struct,duration_millis:int,variants:array>,full_text:string>,favorite_count:smallint,favorited:boolean,filter_level:string,geo:string,id:bigint,id_str:binary,in_reply_to_screen_name:string,in_reply_to_status_id:string,in_reply_to_status_id_str:string,in_reply_to_user_id:string,in_reply_to_user_id_str:string,is_quote_status:boolean,lang:string,place:string,possibly_sensitive:boolean,quote_count:smallint,r

[jira] [Updated] (HIVE-18504) Hive is throwing InvalidObjectException(message:Invalid column type name is too long.

2018-01-22 Thread Jimson K James (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimson K James updated HIVE-18504:
--
Attachment: hive2.log

> Hive is throwing InvalidObjectException(message:Invalid column type name is 
> too long.
> -
>
> Key: HIVE-18504
> URL: https://issues.apache.org/jira/browse/HIVE-18504
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Jimson K James
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 2.3.0, 3.0.0
>
> Attachments: hive2.log, tweets.sql
>
>
> Hive 2.6.3 is still throwing InvalidObjectException(message:Invalid column 
> type name is too long.
> Please find attached the create table query. For more details please refer to 
> HIVE-15249
> {code:java}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Invalid column type name length 2980 exceeds 
> max allowed length 2000, type 
> struct,entities:struct,text:string>>,symbols:array...
> {code}
>  
> {code:java}
> [root@sandbox-hdp hive-json]# hive --version
> Hive 1.2.1000.2.6.3.0-235
> Subversion 
> git://ctr-e134-1499953498516-254436-01-04.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive
>  -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
> Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
> From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646
> [root@sandbox-hdp hive-json]# beeline
> Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
> beeline> !connect 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Connecting to 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
> Enter username for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  hive
> Enter password for 
> jdbc:hive2://sandbox-hdp.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
>  
> Connected to: Apache Hive (version 1.2.1000.2.6.3.0-235)
> Driver: Hive JDBC (version 1.2.1000.2.6.3.0-235)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://sandbox-hdp.hortonworks.com:2>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18499) Amend point lookup tests to check for data

2018-01-22 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18499:
---
Attachment: HIVE-18499.1.patch

> Amend point lookup tests to check for data
> --
>
> Key: HIVE-18499
> URL: https://issues.apache.org/jira/browse/HIVE-18499
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-18499.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18499) Amend point lookup tests to check for data

2018-01-22 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18499:
---
Status: Patch Available  (was: Open)

> Amend point lookup tests to check for data
> --
>
> Key: HIVE-18499
> URL: https://issues.apache.org/jira/browse/HIVE-18499
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18393) Error returned when some other type is read as string from parquet tables

2018-01-22 Thread Janaki Lahorani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334627#comment-16334627
 ] 

Janaki Lahorani commented on HIVE-18393:


Hi [~vihangk1].  The test failure is not related to this patch.  I verified 
locally.

> Error returned when some other type is read as string from parquet tables
> -
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, 
> HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.5.patch
>
>
> TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean, 
> when read as String, Varchar or Char, should return the correct data. Currently 
> this results in an error for parquet tables.
> Test Case:
> {code}
> drop table if exists testAltCol;
> create table testAltCol
> (cId  TINYINT,
>  cTimeStamp TIMESTAMP,
>  cDecimal   DECIMAL(38,18),
>  cDoubleDOUBLE,
>  cFloat   FLOAT,
>  cBigIntBIGINT,
>  cInt INT,
>  cSmallInt  SMALLINT,
>  cTinyint   TINYINT,
>  cBoolean   BOOLEAN);
> insert into testAltCol values
> (1,
>  '2017-11-07 09:02:49.9',
>  12345678901234567890.123456789012345678,
>  1.79e308,
>  3.4e38,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> insert into testAltCol values
> (2,
>  '1400-01-01 01:01:01.1',
>  1.1,
>  2.2,
>  3.3,
>  1,
>  2,
>  3,
>  4,
>  FALSE);
> insert into testAltCol values
> (3,
>  '1400-01-01 01:01:01.1',
>  10.1,
>  20.2,
>  30.3,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> select cId, cTimeStamp from testAltCol order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltCol order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltCol order by cId;
> select cId, cBoolean from testAltCol order by cId;
> drop table if exists testAltColP;
> create table testAltColP stored as parquet as select * from testAltCol;
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp STRING,
>  cDecimal   STRING,
>  cDoubleSTRING,
>  cFloat   STRING,
>  cBigIntSTRING,
>  cInt STRING,
>  cSmallInt  STRING,
>  cTinyint   STRING,
>  cBoolean   STRING);
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp VARCHAR(100),
>  cDecimal   VARCHAR(100),
>  cDoubleVARCHAR(100),
>  cFloat   VARCHAR(100),
>  cBigIntVARCHAR(100),
>  cInt VARCHAR(100),
>  cSmallInt  VARCHAR(100),
>  cTinyint   VARCHAR(100),
>  cBoolean   VARCHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp CHAR(100),
>  cDecimal   CHAR(100),
>  cDoubleCHAR(100),
>  cFloat   CHAR(100),
>  cBigIntCHAR(100),
>  cInt CHAR(100),
>  cSmallInt  CHAR(100),
>  cTinyint   CHAR(100),
>  cBoolean   CHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> drop table if exists testAltColP;
> {code}
> {code}
> Error:
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Excerpt from the log:
> 2018-01-05T15:54:05,756 ERROR [LocalJobRunner Map Task Executor #0] 
> mr.ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row [Error getting row data with exception 
> java.lang.UnsupportedOperationException: Cannot inspect 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject(ParquetStringInspector.java:77)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
