[jira] [Commented] (HIVE-8888) Mapjoin with LateralViewJoin generates wrong plan in Tez

2014-11-28 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228138#comment-14228138
 ] 

Gunther Hagleitner commented on HIVE-8888:
--

failures are unrelated.

> Mapjoin with LateralViewJoin generates wrong plan in Tez
> 
>
> Key: HIVE-8888
> URL: https://issues.apache.org/jira/browse/HIVE-8888
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.14.1
>
> Attachments: HIVE-8888.1.patch, HIVE-8888.2.patch, HIVE-8888.3.patch, 
> HIVE-8888.4.patch, HIVE-8888.5.patch
>
>
> Queries like these 
> {code}
> with sub1 as
> (select aid, avalue from expod1 lateral view explode(av) avs as avalue ),
> sub2 as
> (select bid, bvalue from expod2 lateral view explode(bv) bvs as bvalue)
> select sub1.aid, sub1.avalue, sub2.bvalue
> from sub1,sub2
> where sub1.aid=sub2.bid;
> {code}
> generates twice the number of rows in Tez when compared to MR.
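To make the doubling concrete, here is a minimal sketch (tables, keys, and data are made up for illustration; they are not from the report) of the row count the MR plan correctly produces for a query of this shape:

```python
# Hypothetical data: each side explodes its array column, then the two
# sides join on the key. All names and values here are invented.
expod1 = [("k1", ["a1", "a2"])]          # (aid, av)
expod2 = [("k1", ["b1", "b2", "b3"])]    # (bid, bv)

# lateral view explode(av): one output row per array element
sub1 = [(aid, v) for aid, av in expod1 for v in av]   # 2 rows
sub2 = [(bid, v) for bid, bv in expod2 for v in bv]   # 3 rows

# inner join on aid = bid
result = [(a, av, bv) for a, av in sub1 for b, bv in sub2 if a == b]
print(len(result))  # 6 -- the reported bug made the Tez plan emit twice this
```

With 2 exploded rows joined against 3, the correct result is 6 rows; the broken Tez plan would return 12.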



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8964) Some TestMiniTezCliDriver tests taking two hours

2014-11-28 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228139#comment-14228139
 ] 

Gunther Hagleitner commented on HIVE-8964:
--

Alright, with the Tez guys' help I figured out that the planner was producing a 
cyclic graph. I've re-opened HIVE- and added a new patch. [~brocknoland] 
once the new patch in HIVE- goes in, can you re-enable the test on the build 
machine? (Or did you disable it in the source somewhere?)

> Some TestMiniTezCliDriver tests taking two hours
> 
>
> Key: HIVE-8964
> URL: https://issues.apache.org/jira/browse/HIVE-8964
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Gunther Hagleitner
>Priority: Blocker
>
> The test {{TestMiniTezCliDriver}} with the following query files:
> vectorization_16.q,mapjoin_mapjoin.q,groupby2.q,lvj_mapjoin.q,vectorization_5.q,vectorization_pushdown.q,orc_merge_incompat1.q,cbo_gby.q,vectorization_4.q,auto_join0.q,cross_product_check_1.q,vectorization_not.q,update_where_no_match.q,ctas.q,cbo_udf_udaf.q
> is timing out after two hours severely delaying the Hive precommits
> http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1898/failed/TestMiniTezCliDriver-vectorization_16.q-mapjoin_mapjoin.q-groupby2.q-and-12-more/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8979) Merge shims/common-secure into shims/common

2014-11-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-8979:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Merge shims/common-secure into shims/common
> ---
>
> Key: HIVE-8979
> URL: https://issues.apache.org/jira/browse/HIVE-8979
> Project: Hive
>  Issue Type: Task
>  Components: Shims
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.15.0
>
> Attachments: HIVE-8979.patch
>
>
> After HIVE-8828 there is no reason to keep both. HIVE-8828 already migrated 
> many of classes from common-secure into common. We should move rest as well 
> and delete common-secure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8990) mapjoin_mapjoin.q is failing on Tez (missed golden file update)

2014-11-28 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8990:
-
Attachment: HIVE-8990.1.patch

> mapjoin_mapjoin.q is failing on Tez (missed golden file update)
> ---
>
> Key: HIVE-8990
> URL: https://issues.apache.org/jira/browse/HIVE-8990
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8990.1.patch
>
>
> mapjoin_mapjoin.q was updated (SORT_BEFORE_DIFF). However, since the Tez tests 
> were stuck, the accompanying update to the golden file was missed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8990) mapjoin_mapjoin.q is failing on Tez (missed golden file update)

2014-11-28 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-8990:
-
Status: Patch Available  (was: Open)

> mapjoin_mapjoin.q is failing on Tez (missed golden file update)
> ---
>
> Key: HIVE-8990
> URL: https://issues.apache.org/jira/browse/HIVE-8990
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8990.1.patch
>
>
> mapjoin_mapjoin.q was updated (SORT_BEFORE_DIFF). However, since the Tez tests 
> were stuck, the accompanying update to the golden file was missed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8990) mapjoin_mapjoin.q is failing on Tez (missed golden file update)

2014-11-28 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-8990:


 Summary: mapjoin_mapjoin.q is failing on Tez (missed golden file 
update)
 Key: HIVE-8990
 URL: https://issues.apache.org/jira/browse/HIVE-8990
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


mapjoin_mapjoin.q was updated (SORT_BEFORE_DIFF). However, since the Tez tests 
were stuck, the accompanying update to the golden file was missed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8875) hive.optimize.sort.dynamic.partition should be turned off for ACID

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228162#comment-14228162
 ] 

Lefty Leverenz commented on HIVE-8875:
--

No user doc needed?

> hive.optimize.sort.dynamic.partition should be turned off for ACID
> --
>
> Key: HIVE-8875
> URL: https://issues.apache.org/jira/browse/HIVE-8875
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.14.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Fix For: 0.15.0
>
> Attachments: HIVE-8875.2.patch, HIVE-8875.patch
>
>
> Turning this on causes ACID insert, updates, and deletes to produce 
> non-optimal plans with extra reduce phases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8935) Add debug logging around token stores

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228167#comment-14228167
 ] 

Lefty Leverenz commented on HIVE-8935:
--

No user doc needed?

> Add debug logging around token stores
> -
>
> Key: HIVE-8935
> URL: https://issues.apache.org/jira/browse/HIVE-8935
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.15.0
>
> Attachments: HIVE-8935.patch, HIVE-8935.patch
>
>
> It's hard to debug issues related to delegation tokens due to a lack of debug 
> logging. This jira is to add debug logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228173#comment-14228173
 ] 

Lefty Leverenz commented on HIVE-8834:
--

Does this need any documentation?

> enable job progress monitoring of Remote Spark Context [Spark Branch]
> -
>
> Key: HIVE-8834
> URL: https://issues.apache.org/jira/browse/HIVE-8834
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>Assignee: Rui Li
>  Labels: Spark-M3
> Fix For: spark-branch
>
> Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
> HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch, HIVE-8834.5-spark.patch, 
> HIVE-8834.6-spark.patch
>
>
> We should enable job progress monitoring in the Remote Spark Context; the 
> Spark job progress info should fit into SparkJobStatus. SPARK-2321 supplies a 
> new Spark progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8916) Handle user@domain username under LDAP authentication

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228196#comment-14228196
 ] 

Lefty Leverenz commented on HIVE-8916:
--

Does this need documentation?

Possible locations:

* [Configuration Properties -- hive.server2.authentication.ldap.Domain | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.server2.authentication.ldap.Domain]
* [HiveServer2 Clients | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients]
* [Setting Up HiveServer2 -- Authentication/Security Configuration | 
https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-Authentication/SecurityConfiguration]

> Handle user@domain username under LDAP authentication
> -
>
> Key: HIVE-8916
> URL: https://issues.apache.org/jira/browse/HIVE-8916
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Fix For: 0.15.0
>
> Attachments: HIVE-8916.2.patch, HIVE-8916.3.patch, HIVE-8916.patch
>
>
> If LDAP is configured with multiple domains for authentication, users can be 
> in different domains.
> Currently, LdapAuthenticationProviderImpl blindly appends the domain 
> configured in "hive.server2.authentication.ldap.Domain" to the username, which 
> limits users to that domain. However, under multi-domain authentication, the 
> username may already include the domain (ex: u...@domain.foo.com). We should 
> not append a domain if one is already present.
> Also, if the username already includes the domain, the rest of Hive and the 
> authorization providers still expect the "short name" ("user" and not 
> "u...@domain.foo.com") for looking up privilege rules, etc. As such, any 
> domain info in the username should be stripped off.
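A rough sketch of the intended behavior described above (illustrative only; the function names and logic are assumptions, not Hive's actual LdapAuthenticationProviderImpl code):

```python
# Sketch: append the configured domain only when the username is not already
# qualified, and derive the "short name" used for privilege lookups by
# stripping any domain. Both helpers are hypothetical.
def apply_domain(user, configured_domain):
    if "@" in user or not configured_domain:
        return user                        # already qualified; leave as-is
    return user + "@" + configured_domain

def short_name(user):
    return user.split("@", 1)[0]           # strip domain info if present

print(apply_domain("alice", "domain.foo.com"))            # alice@domain.foo.com
print(apply_domain("alice@other.com", "domain.foo.com"))  # alice@other.com
print(short_name("alice@domain.foo.com"))                 # alice
```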



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8374) schematool fails on Postgres versions < 9.2

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228204#comment-14228204
 ] 

Lefty Leverenz commented on HIVE-8374:
--

bq.  Kept the dbOpts option as that is useful when we add any db specific 
options.

Does dbOpts need to be documented, or is it for future use?  Any other 
documentation, or is this just a bug fix?

* [Hive Schema Tool | 
https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool#HiveSchemaTool-TheHiveSchemaTool]

> schematool fails on Postgres versions < 9.2
> ---
>
> Key: HIVE-8374
> URL: https://issues.apache.org/jira/browse/HIVE-8374
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
> Fix For: 0.15.0
>
> Attachments: HIVE-8374.1.patch, HIVE-8374.2.patch, HIVE-8374.3.patch, 
> HIVE-8374.patch
>
>
> The upgrade script for HIVE-5700 creates a UDF with language 'plpgsql',
> which is available by default only for Postgres 9.2+.
> For older Postgres versions, the language must be explicitly created,
> otherwise schematool fails with the error:
> {code}
> Error: ERROR: language "plpgsql" does not exist
>   Hint: Use CREATE LANGUAGE to load the language into the database. 
> (state=42704,code=0)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8988) Support advanced aggregation in Hive to Calcite path

2014-11-28 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-8988:
--
Issue Type: Improvement  (was: Bug)

> Support advanced aggregation in Hive to Calcite path 
> -
>
> Key: HIVE-8988
> URL: https://issues.apache.org/jira/browse/HIVE-8988
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.15.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>  Labels: grouping, logical, optiq
> Fix For: 0.15.0
>
>
> To close the gap between Hive and Calcite, we need to support the translation 
> of GroupingSets into Calcite; currently this is not implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8991) Fix custom_input_output_format

2014-11-28 Thread Rui Li (JIRA)
Rui Li created HIVE-8991:


 Summary: Fix custom_input_output_format
 Key: HIVE-8991
 URL: https://issues.apache.org/jira/browse/HIVE-8991
 Project: Hive
  Issue Type: Bug
Reporter: Rui Li


After HIVE-8836, {{custom_input_output_format}} fails because hive-it-util is 
missing from the remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8991) Fix custom_input_output_format

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8991:
-
Issue Type: Sub-task  (was: Bug)
Parent: HIVE-8548

> Fix custom_input_output_format
> --
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rui Li
>
> After HIVE-8836, {{custom_input_output_format}} fails because hive-it-util is 
> missing from the remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8991:
-
Summary: Fix custom_input_output_format [Spark Branch]  (was: Fix 
custom_input_output_format)

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>
> After HIVE-8836, {{custom_input_output_format}} fails because hive-it-util is 
> missing from the remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8991:
-
Component/s: Spark

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>
> After HIVE-8836, {{custom_input_output_format}} fails because hive-it-util is 
> missing from the remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228268#comment-14228268
 ] 

Xuefu Zhang commented on HIVE-8834:
---

[~leftylev], I don't think this needs documentation, as it only presents progress 
info to the user while requiring no user intervention. Nevertheless, it would 
be nice to have doc for job monitoring of Spark jobs as general info. 
HIVE-7439 is the JIRA for that. Maybe we can set the doc tag on that one.

> enable job progress monitoring of Remote Spark Context [Spark Branch]
> -
>
> Key: HIVE-8834
> URL: https://issues.apache.org/jira/browse/HIVE-8834
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>Assignee: Rui Li
>  Labels: Spark-M3
> Fix For: spark-branch
>
> Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
> HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch, HIVE-8834.5-spark.patch, 
> HIVE-8834.6-spark.patch
>
>
> We should enable job progress monitoring in the Remote Spark Context; the 
> Spark job progress info should fit into SparkJobStatus. SPARK-2321 supplies a 
> new Spark progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8989) Make groupby_multi_single_reducer.q and smb_mapjoin_3.q deterministic

2014-11-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8989:
--
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks, Brock.

> Make groupby_multi_single_reducer.q and smb_mapjoin_3.q deterministic
> -
>
> Key: HIVE-8989
> URL: https://issues.apache.org/jira/browse/HIVE-8989
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.15.0
>
> Attachments: HIVE-8989.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-860) Persistent distributed cache

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228280#comment-14228280
 ] 

Hive QA commented on HIVE-860:
--



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684139/HIVE-860.3.patch

{color:red}ERROR:{color} -1 due to 96 failed/errored test(s), 6620 tests 
executed
*Failed tests:*
{noformat}
TestCliDriver-bucketmapjoin3.q-udf_between.q-union_remove_10.q-and-12-more - 
did not produce a TEST-*.xml file
TestCliDriver-date_udf.q-transform_ppr2.q-union_date.q-and-12-more - did not 
produce a TEST-*.xml file
TestCliDriver-nonblock_op_deduplicate.q-cbo_windowing.q-avro_decimal_native.q-and-12-more
 - did not produce a TEST-*.xml file
TestHWISessionManager - did not produce a TEST-*.xml file
TestParseNegative - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_excludeHadoop20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnarserde_create_shortcut
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_udaf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_numeric
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_gby_star
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby4_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8_map
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_insert_common_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_innerjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input42
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testsequencefile
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_testxpath2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join32_lessSize
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_merging
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_literal_double
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_insert_move_tasks_share_dependencies
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_timestamp
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_wise_fileformat3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_udf_case
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_vc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ptf_general_queries
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_push_or
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_quote1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_serde_opencsv
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats8
org.apache.hadoop.h

[jira] [Updated] (HIVE-8981) Not a directory error in mapjoin_hook.q [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8981:
--
Issue Type: Sub-task  (was: Bug)
Parent: HIVE-8699

> Not a directory error in mapjoin_hook.q [Spark Branch]
> --
>
> Key: HIVE-8981
> URL: https://issues.apache.org/jira/browse/HIVE-8981
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
> Environment: Using remote-spark context with 
> spark-master=local-cluster [2,2,1024]
>Reporter: Szehon Ho
>Assignee: Chao
>
> Hits the following exception:
> {noformat}
> 2014-11-26 15:17:11,728 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - 14/11/26 15:17:11 WARN TaskSetManager: Lost 
> task 0.0 in stage 8.0 (TID 18, 172.16.3.52): java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> 2014-11-26 15:17:11,728 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:160)
> 2014-11-26 15:17:11,728 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:28)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:96)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> scala.collection.Iterator$class.foreach(Iterator.scala:727)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.scheduler.Task.run(Task.scala:56)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at java.lang.Thread.run(Thread.java:744)
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - Caused by: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> 2014-11-26 15:17:11,729 INFO  [stderr-redir-1]: client.SparkClientImpl 
> (SparkClientImpl.java:run(364)) - at 
> org

[jira] [Created] (HIVE-8992) Fix two bucket related test failures, infer_bucket_sort_convert_join.q and parquet_join.q [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-8992:
-

 Summary: Fix two bucket related test failures, 
infer_bucket_sort_convert_join.q and parquet_join.q [Spark Branch]
 Key: HIVE-8992
 URL: https://issues.apache.org/jira/browse/HIVE-8992
 Project: Hive
  Issue Type: Sub-task
  Components: spark-branch
Reporter: Xuefu Zhang


Failures are shown in HIVE-8836. They seem related to wrong reducer numbers for 
the bucket join.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8836) Enable automatic tests with remote spark client [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228289#comment-14228289
 ] 

Xuefu Zhang commented on HIVE-8836:
---

To summarize the above Spark related test failures:

1. custom_input_output_format.q, tracked by HIVE-8991
2. infer_bucket_sort_convert_join.q and  parquet_join.q, tracked by HIVE-8992
3. mapjoin_hook.q, tracked by HIVE-8981

The Tez and MR related failures above should be unrelated. They need to be fixed 
in trunk.

Thanks, everyone, for making this happen.

> Enable automatic tests with remote spark client [Spark Branch]
> --
>
> Key: HIVE-8836
> URL: https://issues.apache.org/jira/browse/HIVE-8836
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Chengxiang Li
>Assignee: Rui Li
>  Labels: Spark-M3
> Fix For: spark-branch
>
> Attachments: HIVE-8836.13-spark.patch, HIVE-8836.14-spark.patch, 
> HIVE-8836.14-spark.patch, HIVE-8836.7-spark.patch, HIVE-8836.8-spark.patch, 
> HIVE-8836.9-spark.patch, additional-enable-spark-log.patch
>
>
> In a real production environment, the remote Spark client will mostly be used 
> to submit Spark jobs for Hive; we should enable automatic tests with the 
> remote Spark client to make sure the Hive features work with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-8795) Switch precommit test from local to local-cluster [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang resolved HIVE-8795.
---
Resolution: Duplicate

Dupe of HIVE-8836. The fix was eventually made in HIVE-8836 as well.

> Switch precommit test from local to local-cluster [Spark Branch]
> 
>
> Key: HIVE-8795
> URL: https://issues.apache.org/jira/browse/HIVE-8795
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Szehon Ho
>
>  It seems unlikely that the Spark community will provide an MRMiniCluster 
> equivalent (SPARK-3691), and Spark local-cluster was the recommendation. 
> Latest research shows that Spark local-cluster works with Hive. Therefore, for 
> now, we use Spark local-cluster (instead of the current local mode) for our 
> precommit tests.
> It was previously believed (HIVE-7382) that a Spark installation is required 
> and the SPARK_HOME env variable needs to be set. Since Spark's assembly jar is 
> already pulled in, it's now believed we only need a few scripts from the Spark 
> installation instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li reassigned HIVE-8991:


Assignee: Rui Li

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
>
> After HIVE-8836, {{custom_input_output_format}} fails because hive-it-util is 
> missing from the remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8991:
-
Status: Patch Available  (was: Open)

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-8991.1-spark.patch
>
>
> After HIVE-8836, {{custom_input_output_format}} fails because of missing 
> hive-it-util in remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8991:
-
Attachment: HIVE-8991.1-spark.patch

This patch can fix the test on my machine.
The strange thing is that if I add hive-it-util to 
{{spark.driver.extraClassPath}}, I have to add hive-exec to it as well, 
which I'd expect to be on the driver's class path by default.
I looked into this a bit. It seems Spark uses 
{{SparkSubmitDriverBootstrapper}} to launch {{SparkSubmit}} when any 
{{spark.driver.extra*}} properties are set, so I suspect 
{{SparkSubmitDriverBootstrapper}} somehow doesn't set the class path properly 
for the driver.
I also tried setting {{--driver-class-path}} in {{SparkClientImpl}}, but it 
overrides {{spark.driver.extraClassPath}}.
Another thing is that {{SparkSubmitDriverBootstrapper}} just hangs after the 
client and driver have shut down. Others may help me verify this.

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-8991.1-spark.patch
>
>
> After HIVE-8836, {{custom_input_output_format}} fails because of missing 
> hive-it-util in remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228335#comment-14228335
 ] 

Rui Li commented on HIVE-8991:
--

[~vanzin] could you help look at this? Thanks!

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-8991.1-spark.patch
>
>
> After HIVE-8836, {{custom_input_output_format}} fails because of missing 
> hive-it-util in remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8993) Make sure Spark + HS2 work [Spark Branch]

2014-11-28 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-8993:
-

 Summary: Make sure Spark + HS2 work [Spark Branch]
 Key: HIVE-8993
 URL: https://issues.apache.org/jira/browse/HIVE-8993
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang


We haven't formally tested this combination yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8991) Fix custom_input_output_format [Spark Branch]

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228395#comment-14228395
 ] 

Hive QA commented on HIVE-8991:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684187/HIVE-8991.1-spark.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 7182 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_infer_bucket_sort_convert_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_mapjoin_hook
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_parquet_join
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/464/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/464/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-464/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684187 - PreCommit-HIVE-SPARK-Build

> Fix custom_input_output_format [Spark Branch]
> -
>
> Key: HIVE-8991
> URL: https://issues.apache.org/jira/browse/HIVE-8991
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-8991.1-spark.patch
>
>
> After HIVE-8836, {{custom_input_output_format}} fails because of missing 
> hive-it-util in remote driver's class path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8990) mapjoin_mapjoin.q is failing on Tez (missed golden file update)

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228403#comment-14228403
 ] 

Hive QA commented on HIVE-8990:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684145/HIVE-8990.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6694 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1932/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1932/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684145 - PreCommit-HIVE-TRUNK-Build

> mapjoin_mapjoin.q is failing on Tez (missed golden file update)
> ---
>
> Key: HIVE-8990
> URL: https://issues.apache.org/jira/browse/HIVE-8990
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-8990.1.patch
>
>
> mapjoin_mapjoin.q was updated (SORT_BEFORE_DIFF). However, since the tez test 
> were stuck the accompanying update to the golden file was missed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8889) JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken

2014-11-28 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-8889:
--
Attachment: HIVE-8889.2.patch

Limited the patch change to JDBC only, so that ResultSet.getXXX can still 
use the short name to get results for select * queries when HS2's 
hive.resultset.use.unique.column.names is set to true.

Supporting getXXX with either the short name or the qualified name for all 
queries needs a larger, more complicated change in Hive, which will be 
tracked in a new JIRA. 

> JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken
> --
>
> Key: HIVE-8889
> URL: https://issues.apache.org/jira/browse/HIVE-8889
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: G Lingle
>Assignee: Chaoyu Tang
>Priority: Critical
> Fix For: 0.15.0, 0.14.1
>
> Attachments: HIVE-8889.1.patch, HIVE-8889.2.patch, HIVE-8889.patch
>
>
> Using hive-jdbc-0.13.1-cdh5.2.0.jar.
> All of the get-by-column-label methods of HiveBaseResultSet are now broken. 
> They don't take just the column label as they should; instead you have to 
> pass in .. This requirement doesn't conform to the 
> Java ResultSet API, which specifies:
> "columnLabel - the label for the column specified with the SQL AS clause. If 
> the SQL AS clause was not specified, then the label is the name of the column"
> Looking at the code, it seems the problem is that the findColumn() method is 
> looking in normalizedColumnNames instead of columnNames.
> By the way, another annoying issue with the code is that the SQLException thrown 
> gives no indication of what the problem is. It should at least say that the 
> column name wasn't found in the description string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28403: HIVE-8889:JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken

2014-11-28 Thread Chaoyu Tang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28403/
---

(Updated Nov. 28, 2014, 6:04 p.m.)


Review request for hive, Ashutosh Chauhan, Prasad Mujumdar, and Szehon Ho.


Changes
---

Update patch to limit the change in Hive JDBC driver.


Repository: hive-git


Description (updated)
---

For JDBC applications using queries like select * , the columnLabel in 
ResultSet.getXXX(String columnLabel) has to be a fully-qualified name (e.g. 
tableName.colName) unless you set hive.resultset.use.unique.column.names to 
false, but that setting will break other cases which require the qualified name 
(see HIVE-6687). 
This patch makes Hive JDBC ResultSet.getXXX(key) work for queries like the 
following when hive.resultset.use.unique.column.names is true:
a) select * from src
b) select key from src
c) select * from srcview
d) select key from srcview

The changes in this patch include:
1. findColumn in HiveBaseResultSet will find a column either by short name or 
qualified name
2. Unit tests
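
The short-name-or-qualified-name lookup described above can be sketched in 
plain Java. This is a hypothetical standalone illustration of the idea, not 
the actual HiveBaseResultSet code (the real driver throws SQLException rather 
than IllegalArgumentException):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the lookup described in the review summary:
// try the (possibly qualified) column name first, then fall back to the
// short name (the part after the last '.').
public class ColumnResolver {
    static int findColumn(List<String> columnNames, String label) {
        // Pass 1: exact match, case-insensitive.
        for (int i = 0; i < columnNames.size(); i++) {
            if (columnNames.get(i).equalsIgnoreCase(label)) {
                return i + 1; // JDBC column indexes are 1-based
            }
        }
        // Pass 2: match on the short name after the last dot.
        for (int i = 0; i < columnNames.size(); i++) {
            String qualified = columnNames.get(i);
            // lastIndexOf returns -1 when there is no dot, so +1 yields
            // the whole string for unqualified names.
            String shortName = qualified.substring(qualified.lastIndexOf('.') + 1);
            if (shortName.equalsIgnoreCase(label)) {
                return i + 1;
            }
        }
        // Include the offending label in the message, addressing the
        // complaint in the issue description about opaque errors.
        throw new IllegalArgumentException("Could not find column: " + label);
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("src.key", "src.value");
        System.out.println(findColumn(cols, "key"));       // short name  -> 1
        System.out.println(findColumn(cols, "src.value")); // qualified   -> 2
    }
}
```

Note the fallback only runs after the exact pass fails, so qualified lookups 
keep working unchanged when unique column names are enabled.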


Diffs (updated)
-

  itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 
f2560e2e4793cca11950519708b1a666eb700e50 
  jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java 
8cbf9e7092489a2adb0bc2ba6b5ee38e41c041f8 

Diff: https://reviews.apache.org/r/28403/diff/


Testing
---

1. New test cases in TestJdbcDriver2.java passed
2. pre-committed tests were submitted


Thanks,

Chaoyu Tang



[jira] [Commented] (HIVE-8964) Some TestMiniTezCliDriver tests taking two hours

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228459#comment-14228459
 ] 

Brock Noland commented on HIVE-8964:


I disabled it in some properties files on the build host. We really need to 
get those into svn; I'll try that today.

I will continue the discussion on how to test the patch in HIVE-8888 over there.

> Some TestMiniTezCliDriver tests taking two hours
> 
>
> Key: HIVE-8964
> URL: https://issues.apache.org/jira/browse/HIVE-8964
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Gunther Hagleitner
>Priority: Blocker
>
> The test {{TestMiniTezCliDriver}} with the following query files:
> vectorization_16.q,mapjoin_mapjoin.q,groupby2.q,lvj_mapjoin.q,vectorization_5.q,vectorization_pushdown.q,orc_merge_incompat1.q,cbo_gby.q,vectorization_4.q,auto_join0.q,cross_product_check_1.q,vectorization_not.q,update_where_no_match.q,ctas.q,cbo_udf_udaf.q
> is timing out after two hours severely delaying the Hive precommits
> http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1898/failed/TestMiniTezCliDriver-vectorization_16.q-mapjoin_mapjoin.q-groupby2.q-and-12-more/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8888) Mapjoin with LateralViewJoin generates wrong plan in Tez

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228460#comment-14228460
 ] 

Brock Noland commented on HIVE-8888:


[~hagleitn] due to HIVE-8964 the latest ptest run won't actually test 
{{lvj_mapjoin.q}}. If you've verified it works locally, I will enable that 
test after the latest patch is committed.

> Mapjoin with LateralViewJoin generates wrong plan in Tez
> 
>
> Key: HIVE-8888
> URL: https://issues.apache.org/jira/browse/HIVE-8888
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.14.1
>
> Attachments: HIVE-8888.1.patch, HIVE-8888.2.patch, HIVE-8888.3.patch, 
> HIVE-8888.4.patch, HIVE-8888.5.patch
>
>
> Queries like these 
> {code}
> with sub1 as
> (select aid, avalue from expod1 lateral view explode(av) avs as avalue ),
> sub2 as
> (select bid, bvalue from expod2 lateral view explode(bv) bvs as bvalue)
> select sub1.aid, sub1.avalue, sub2.bvalue
> from sub1,sub2
> where sub1.aid=sub2.bid;
> {code}
> generates twice the number of rows in Tez when compared to MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8935) Add debug logging around token stores

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228462#comment-14228462
 ] 

Brock Noland commented on HIVE-8935:


Hi Lefty,

I don't expect users to enable this. I expect this to be used by support folks. 
Long story short, I think there is at least one bug around tokens but have not 
been able to track it down.

> Add debug logging around token stores
> -
>
> Key: HIVE-8935
> URL: https://issues.apache.org/jira/browse/HIVE-8935
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.15.0
>
> Attachments: HIVE-8935.patch, HIVE-8935.patch
>
>
> It's hard to debug issues related to delegation tokens due to a lack of debug 
> logging. This jira is to add debug logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-7164) Support non-string partition types in HCatalog

2014-11-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HIVE-7164.
---
Resolution: Duplicate

Resolved via HIVE-2702 for HMS. You will need to set property 
{{hive.metastore.integral.jdo.pushdown}} to {{true}} on the HMS' hive-site.xml 
to enable this ability, however. It is false by default.
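
For reference, enabling it means adding the following to the HMS's 
hive-site.xml (standard Hadoop configuration property format; the property 
name and default are as stated in the comment above):

```xml
<!-- Enables integral (non-string) partition-key filter pushdown in the
     metastore; false by default, as noted above. -->
<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>
```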

> Support non-string partition types in HCatalog
> --
>
> Key: HIVE-7164
> URL: https://issues.apache.org/jira/browse/HIVE-7164
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: bharath v
>
> Currently querying hive tables with non-string partition columns using HCat  
> gives us the following error. 
> Error: Filtering is supported only on partition keys of type string
> Related discussion here : 
> https://www.mail-archive.com/dev@hive.apache.org/msg18011.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HIVE-8900) Create encryption testing framework

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8900:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684129/HIVE-8065.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1928/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1928/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1928/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-1928/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'shims/aggregator/pom.xml'
Reverted 
'shims/common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenSelector.java'
Reverted 'shims/common/src/main/java/org/apache/hadoop/hive/shims/Utils.java'
Reverted 'shims/common/pom.xml'
Reverted 'shims/pom.xml'
Reverted 
'shims/common-secure/src/main/java/org/apache/hadoop/hive/thrift/ZooKeeperTokenStore.java'
Reverted 
'shims/common-secure/src/main/java/org/apache/hadoop/hive/thrift/DBTokenStore.java'
Reverted 
'shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java'
Reverted 'shims/common-secure/pom.xml'
Reverted 
'hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/Security.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/common/src/main/java/org/apache/hadoop/hive/thrift/ZooKeeperTokenStore.java
 shims/common/src/main/java/org/apache/hadoop/hive/thrift/DBTokenStore.java 
shims/common/src/main/java/org/apache/hadoop/hive/thrift/DelegationTokenSelector.java
 shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen contrib/target service/target serde/target 
beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1642257.

At revision 1642257.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684129 - PreCommit-HIVE-TRUNK-Build)

> Create encryption testing framework
> -

[jira] [Created] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-8994:
--

 Summary: Merge from trunk Nov 28 2014
 Key: HIVE-8994
 URL: https://issues.apache.org/jira/browse/HIVE-8994
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8993) Make sure Spark + HS2 work [Spark Branch]

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228476#comment-14228476
 ] 

Brock Noland commented on HIVE-8993:


I think we should create a simple class {{TestSparkViaJdbcWithMiniHS2}} based 
on {{TestJdbcWithMiniHS2}}

> Make sure Spark + HS2 work [Spark Branch]
> -
>
> Key: HIVE-8993
> URL: https://issues.apache.org/jira/browse/HIVE-8993
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>
> We haven't formally tested this combination yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8994:
---
Attachment: HIVE-8994.2-spark.patch

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-8994.2-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8994:
---
Fix Version/s: spark-branch
Affects Version/s: spark-branch
   Status: Patch Available  (was: Open)

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-8994.2-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8888) Mapjoin with LateralViewJoin generates wrong plan in Tez

2014-11-28 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228485#comment-14228485
 ] 

Gunther Hagleitner commented on HIVE-8888:
--

[~brocknoland] I have verified that lvj_mapjoin works locally. Thanks.

> Mapjoin with LateralViewJoin generates wrong plan in Tez
> 
>
> Key: HIVE-8888
> URL: https://issues.apache.org/jira/browse/HIVE-8888
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.14.1
>
> Attachments: HIVE-8888.1.patch, HIVE-8888.2.patch, HIVE-8888.3.patch, 
> HIVE-8888.4.patch, HIVE-8888.5.patch
>
>
> Queries like these 
> {code}
> with sub1 as
> (select aid, avalue from expod1 lateral view explode(av) avs as avalue ),
> sub2 as
> (select bid, bvalue from expod2 lateral view explode(bv) bvs as bvalue)
> select sub1.aid, sub1.avalue, sub2.bvalue
> from sub1,sub2
> where sub1.aid=sub2.bid;
> {code}
> generates twice the number of rows in Tez when compared to MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8889) JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228503#comment-14228503
 ] 

Hive QA commented on HIVE-8889:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684204/HIVE-8889.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6695 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_mapjoin
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1933/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1933/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684204 - PreCommit-HIVE-TRUNK-Build

> JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken
> --
>
> Key: HIVE-8889
> URL: https://issues.apache.org/jira/browse/HIVE-8889
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: G Lingle
>Assignee: Chaoyu Tang
>Priority: Critical
> Fix For: 0.15.0, 0.14.1
>
> Attachments: HIVE-8889.1.patch, HIVE-8889.2.patch, HIVE-8889.patch
>
>
> Using hive-jdbc-0.13.1-cdh5.2.0.jar.
> All of the get-by-column-label methods of HiveBaseResultSet are now broken. 
> They don't take just the column label as they should; instead you have to 
> pass in .. This requirement doesn't conform to the 
> Java ResultSet API, which specifies:
> "columnLabel - the label for the column specified with the SQL AS clause. If 
> the SQL AS clause was not specified, then the label is the name of the column"
> Looking at the code, it seems the problem is that the findColumn() method is 
> looking in normalizedColumnNames instead of columnNames.
> By the way, another annoying issue with the code is that the SQLException thrown 
> gives no indication of what the problem is. It should at least say that the 
> column name wasn't found in the description string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228527#comment-14228527
 ] 

Hive QA commented on HIVE-8994:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684211/HIVE-8994.2-spark.patch

{color:red}ERROR:{color} -1 due to 309 failed/errored test(s), 7215 tests 
executed
*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_13.q-auto_sortmerge_join_13.q-tez_bmj_schema_evolution.q-and-12-more
 - did not produce a TEST-*.xml file
TestParquetDirect - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_vc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_insert_mixed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_multi_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_optional_elements
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_required_elements
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_structs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_unannotated_groups
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_unannotated_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_avro_array_of_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_avro_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_map_of_maps
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_thrift_array_of_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_thrift_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join15
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join16
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join18
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join18_multi_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join23
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join24
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join26
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join27
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join28
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join3
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join9
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join_reordering_values
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_10

[jira] [Commented] (HIVE-8889) JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228537#comment-14228537
 ] 

Brock Noland commented on HIVE-8889:


+1

> JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken
> --
>
> Key: HIVE-8889
> URL: https://issues.apache.org/jira/browse/HIVE-8889
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.1
>Reporter: G Lingle
>Assignee: Chaoyu Tang
>Priority: Critical
> Fix For: 0.15.0, 0.14.1
>
> Attachments: HIVE-8889.1.patch, HIVE-8889.2.patch, HIVE-8889.patch
>
>
> Using hive-jdbc-0.13.1-cdh5.2.0.jar.
> All of the get-by-column-label methods of HiveBaseResultSet are now broken.  
> They don't take just the column label as they should.  Instead you have to 
> pass in <table name>.<column name>.  This requirement doesn't conform to the 
> Java ResultSet API, which specifies:
> "columnLabel - the label for the column specified with the SQL AS clause. If 
> the SQL AS clause was not specified, then the label is the name of the column"
> Looking at the code, it seems that the problem is that the findColumn() method 
> is looking in normalizedColumnNames instead of columnNames.
> BTW, another annoying issue with the code is that the SQLException thrown 
> gives no indication of what the problem is.  It should at least say in the 
> description string that the column name wasn't found.
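For illustration, a minimal self-contained sketch of the lookup bug described above (hypothetical simplification, not the actual HiveBaseResultSet code): when findColumn() searches only the normalized "table.column" names, a plain JDBC column label fails to resolve.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the findColumn() lookup problem: plain labels
// fail when only the table-qualified (normalized) names are searched.
public class ColumnLookupSketch {
    // Labels as the JDBC spec expects callers to pass them.
    static final List<String> columnNames = Arrays.asList("id", "name");
    // Names as the driver may store them internally, prefixed with the table name.
    static final List<String> normalizedColumnNames = Arrays.asList("t1.id", "t1.name");

    // Broken variant: searches only the normalized list, so "name" is not found.
    static int findColumnBroken(String label) {
        int i = normalizedColumnNames.indexOf(label.toLowerCase());
        if (i < 0) throw new IllegalArgumentException("Could not find column: " + label);
        return i + 1; // JDBC column indexes are 1-based
    }

    // Fixed variant: falls back to the plain column names.
    static int findColumnFixed(String label) {
        String l = label.toLowerCase();
        int i = normalizedColumnNames.indexOf(l);
        if (i < 0) i = columnNames.indexOf(l);
        if (i < 0) throw new IllegalArgumentException("Could not find column: " + label);
        return i + 1;
    }

    public static void main(String[] args) {
        boolean brokenThrew = false;
        try {
            findColumnBroken("name"); // plain label, as the ResultSet API allows
        } catch (IllegalArgumentException e) {
            brokenThrew = true;
        }
        System.out.println(brokenThrew);             // true: plain label rejected
        System.out.println(findColumnFixed("name")); // 2: fixed lookup resolves it
    }
}
```

The fixed variant mirrors what the description asks for: the label is tried against both lists before giving up, and the exception message names the missing column.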



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8995) Find thread leak in RSC Tests

2014-11-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-8995:
--

 Summary: Find thread leak in RSC Tests
 Key: HIVE-8995
 URL: https://issues.apache.org/jira/browse/HIVE-8995
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland


I was regenerating output as part of the merge:
{noformat}
mvn test -Dtest=TestSparkCliDriver -Phadoop-2 -Dtest.output.overwrite=true 
-Dqfile=annotate_stats_join.q,auto_join0.q,auto_join1.q,auto_join10.q,auto_join11.q,auto_join12.q,auto_join13.q,auto_join14.q,auto_join15.q,auto_join16.q,auto_join17.q,auto_join18.q,auto_join18_multi_distinct.q,auto_join19.q,auto_join2.q,auto_join20.q,auto_join21.q,auto_join22.q,auto_join23.q,auto_join24.q,auto_join26.q,auto_join27.q,auto_join28.q,auto_join29.q,auto_join3.q,auto_join30.q,auto_join31.q,auto_join32.q,auto_join9.q,auto_join_reordering_values.q
 
auto_join_without_localtask.q,auto_smb_mapjoin_14.q,auto_sortmerge_join_1.q,auto_sortmerge_join_10.q,auto_sortmerge_join_11.q,auto_sortmerge_join_12.q,auto_sortmerge_join_14.q,auto_sortmerge_join_15.q,auto_sortmerge_join_2.q,auto_sortmerge_join_3.q,auto_sortmerge_join_4.q,auto_sortmerge_join_5.q,auto_sortmerge_join_6.q,auto_sortmerge_join_7.q,auto_sortmerge_join_8.q,auto_sortmerge_join_9.q,bucket_map_join_1.q,bucket_map_join_2.q,bucket_map_join_tez1.q,bucket_map_join_tez2.q,bucketmapjoin1.q,bucketmapjoin10.q,bucketmapjoin11.q,bucketmapjoin12.q,bucketmapjoin13.q,bucketmapjoin2.q,bucketmapjoin3.q,bucketmapjoin4.q,bucketmapjoin5.q,bucketmapjoin7.q
 
bucketmapjoin8.q,bucketmapjoin9.q,bucketmapjoin_negative.q,bucketmapjoin_negative2.q,bucketmapjoin_negative3.q,column_access_stats.q,cross_join.q,ctas.q,custom_input_output_format.q,groupby4.q,groupby7_noskew_multi_single_reducer.q,groupby_complex_types.q,groupby_complex_types_multi_single_reducer.q,groupby_multi_single_reducer2.q,groupby_multi_single_reducer3.q,groupby_position.q,groupby_sort_1_23.q,groupby_sort_skew_1_23.q,having.q,index_auto_self_join.q,infer_bucket_sort_convert_join.q,innerjoin.q,input12.q,join0.q,join1.q,join11.q,join12.q,join13.q,join14.q,join15.q
 
join17.q,join18.q,join18_multi_distinct.q,join19.q,join2.q,join20.q,join21.q,join22.q,join23.q,join25.q,join26.q,join27.q,join28.q,join29.q,join3.q,join30.q,join31.q,join32.q,join32_lessSize.q,join33.q,join35.q,join36.q,join37.q,join38.q,join39.q,join40.q,join41.q,join9.q,join_alt_syntax.q,join_cond_pushdown_1.q
 
join_cond_pushdown_2.q,join_cond_pushdown_3.q,join_cond_pushdown_4.q,join_cond_pushdown_unqual1.q,join_cond_pushdown_unqual2.q,join_cond_pushdown_unqual3.q,join_cond_pushdown_unqual4.q,join_filters_overlap.q,join_hive_626.q,join_map_ppr.q,join_merge_multi_expressions.q,join_merging.q,join_nullsafe.q,join_rc.q,join_reorder.q,join_reorder2.q,join_reorder3.q,join_reorder4.q,join_star.q,join_thrift.q,join_vc.q,join_view.q,limit_pushdown.q,load_dyn_part13.q,load_dyn_part14.q,louter_join_ppr.q,mapjoin1.q,mapjoin_decimal.q,mapjoin_distinct.q,mapjoin_filter_on_outerjoin.q
 
mapjoin_hook.q,mapjoin_mapjoin.q,mapjoin_memcheck.q,mapjoin_subquery.q,mapjoin_subquery2.q,mapjoin_test_outer.q,mergejoins.q,mergejoins_mixed.q,multi_insert.q,multi_insert_gby.q,multi_insert_gby2.q,multi_insert_gby3.q,multi_insert_lateral_view.q,multi_insert_mixed.q,multi_insert_move_tasks_share_dependencies.q,multi_join_union.q,optimize_nullscan.q,outer_join_ppr.q,parallel.q,parallel_join0.q,parallel_join1.q,parquet_join.q,pcr.q,ppd_gby_join.q,ppd_join.q,ppd_join2.q,ppd_join3.q,ppd_join4.q,ppd_join5.q,ppd_join_filter.q
 
ppd_multi_insert.q,ppd_outer_join1.q,ppd_outer_join2.q,ppd_outer_join3.q,ppd_outer_join4.q,ppd_outer_join5.q,ppd_transform.q,reduce_deduplicate_exclude_join.q,router_join_ppr.q,sample10.q,sample8.q,script_pipe.q,semijoin.q,skewjoin.q,skewjoin_noskew.q,skewjoin_union_remove_1.q,skewjoin_union_remove_2.q,skewjoinopt1.q,skewjoinopt10.q,skewjoinopt11.q,skewjoinopt12.q,skewjoinopt13.q,skewjoinopt14.q,skewjoinopt15.q,skewjoinopt16.q,skewjoinopt17.q,skewjoinopt18.q,skewjoinopt19.q,skewjoinopt2.q,skewjoinopt20.q
 
skewjoinopt3.q,skewjoinopt4.q,skewjoinopt5.q,skewjoinopt6.q,skewjoinopt7.q,skewjoinopt8.q,skewjoinopt9.q,smb_mapjoin9.q,smb_mapjoin_1.q,smb_mapjoin_10.q,smb_mapjoin_13.q,smb_mapjoin_14.q,smb_mapjoin_15.q,smb_mapjoin_16.q,smb_mapjoin_17.q,smb_mapjoin_2.q,smb_mapjoin_25.q,smb_mapjoin_3.q,smb_mapjoin_4.q,smb_mapjoin_5.q,smb_mapjoin_6.q,smb_mapjoin_7.q,sort_merge_join_desc_1.q,sort_merge_join_desc_2.q,sort_merge_join_desc_3.q,sort_merge_join_desc_4.q,sort_merge_join_desc_5.q,sort_merge_join_desc_6.q,sort_merge_join_desc_7.q,sort_merge_join_desc_8.q
 
stats1.q,subquery_in.q,subquery_multiinsert.q,table_access_keys_stats.q,temp_table.q,temp_table_join1.q,tez_join_tests.q,tez_joins_explain.q,union18.q,union19.q,union23.q,union25.q,union3.q,union30.q,union33.q,union6.q,union_remove_1.q,union_remove_10.q,union_remove_11.q,union_remove_15.q,union_remove_16.q,union_remove_17.q,union_remove_18.q,union_remove_19.q,union

[jira] [Commented] (HIVE-8995) Find thread leak in RSC Tests

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228561#comment-14228561
 ] 

Brock Noland commented on HIVE-8995:


I see three kinds of threads which appear to be leaked:

{noformat}
"9b6aa26e-db45-424d-89d0-3763f04f4b6b-akka.actor.default-dispatcher-3" daemon prio=5 tid=0x7fd646195000 nid=0x11e07 waiting on condition [0x000120afc000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00078331caf8> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{noformat}

{noformat}
"9b6aa26e-db45-424d-89d0-3763f04f4b6b-scheduler-1" daemon prio=5 tid=0x7fd64609d800 nid=0x121e3 sleeping[0x00011ee4]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at akka.actor.LightArrayRevolverScheduler.waitNanos(Scheduler.scala:226)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:405)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
{noformat}


{noformat}
"New I/O server boss #48" daemon prio=5 tid=0x7fd644886000 nid=0x11b03 runnable [0x000122956000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x000782fe25d0> (a sun.nio.ch.Util$2)
- locked <0x000782fe25e0> (a java.util.Collections$UnmodifiableSet)
- locked <0x000782fe2580> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
at org.jboss.netty.channel.socket.nio.NioServerBoss.select(NioServerBoss.java:163)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:206)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}
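A leak like the ones dumped above can be detected by diffing the set of live threads before and after a test. The following is an illustrative sketch (not from the Hive test harness) of that approach; the thread name is invented for the demo.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: snapshot live thread names before and after a test run and
// report any that appeared but were never stopped (a thread leak).
public class ThreadLeakCheck {
    static Set<String> liveThreadNames() {
        Set<String> names = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive()) names.add(t.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        Set<String> before = liveThreadNames();

        // Simulate a test that leaks a thread by never shutting it down,
        // like the akka dispatcher / Netty boss threads in the dumps above.
        Thread leaked = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        }, "simulated-akka.actor.default-dispatcher-3");
        leaked.setDaemon(true);
        leaked.start();

        Set<String> after = liveThreadNames();
        after.removeAll(before);
        System.out.println(after); // only the leaked thread remains in the diff
        leaked.interrupt();        // clean up the demo thread
    }
}
```

In a JUnit suite the same diff would typically run in an @AfterClass hook, failing the build when the diff is non-empty.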

> Find thread leak in RSC Tests
> -
>
> Key: HIVE-8995
> URL: https://issues.apache.org/jira/browse/HIVE-8995
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Brock Noland
>
> I was regenerating output as part of the merge:
> {noformat}
> mvn test -Dtest=TestSparkCliDriver -Phadoop-2 -Dtest.output.overwrite=true 
> -Dqfile=annotate_stats_join.q,auto_join0.q,auto_join1.q,auto_join10.q,auto_join11.q,auto_join12.q,auto_join13.q,auto_join14.q,auto_join15.q,auto_join16.q,auto_join17.q,auto_join18.q,auto_join18_multi_distinct.q,auto_join19.q,auto_join2.q,auto_join20.q,auto_join21.q,auto_join22.q,auto_join23.q,auto_join24.q,auto_join26.q,auto_join27.q,auto_join28.q,auto_join29.q,auto_join3.q,auto_join30.q,auto_join31.q,auto_join32.q,auto_join9.q,auto_join_reordering_values.q
>  
> auto_join_without_localtask.q,auto_smb_mapjoin_14.q,auto_sortmerge_join_1.q,auto_sortmerge_join_10.q,auto_sortmerge_join_11.q,auto_sortmerge_join_12.q,auto_sortmerge_join_14.q,auto_sortmerge_join_15.q,auto_sortmerge_join_2.q,auto_sortmerge_join_3.q,auto_sortmerge_join_4.q,auto_sortmerge_join_5.q,auto_sortmerge_join_6.q,auto_sortmerge_join_7.q,auto_sortmerge_join_8.q,auto_sortmerge_join_9.q,bucket_map_join_1.q,bucket_map_join_2.q,bucket_map_join_tez1.q,bucket_map_join_tez2.q,bucketmapjoin1.q,bucketmapjoin10.q,bucketmapjoin11.q,bucketmapjoin12.q,bucketmapjoin13.q,bucketmapjoin2.q,bucketmapjoin3.q,bucketmapjoin4.q,bucketmapjoin5.q,bucketmapjoin7.q
>  
> bucketmapjoin8.q,bucketmapjoin9.q,bucketmapjoin_negative.q,bucketmapjoin_negative2.q,bucketmapjoin_negative3.q,column_access_stats.q,cross_join.q,ctas.q,custom_input_output_format.q,groupby4.q,groupby7_noskew_multi_single_reducer.q,groupby_complex_types.q,groupby_complex_types_multi_single_reducer.q,groupby_multi_single_reducer2.q,groupby_multi_single_reducer3.q,groupby_position.q,groupby_sort_1_23.q,groupby_s

[jira] [Updated] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8994:
---
Attachment: HIVE-8994.3-spark.patch

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-8994.2-spark.patch, HIVE-8994.3-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7164) Support non-string partition types in HCatalog

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228586#comment-14228586
 ] 

Lefty Leverenz commented on HIVE-7164:
--

bq.  You will need to set property hive.metastore.integral.jdo.pushdown to true 
on the HMS' hive-site.xml to enable this ability

This could be documented in the wiki, but where?

Some candidates:

* [Configuration Properties  -- hive.metastore.integral.jdo.pushdown | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.metastore.integral.jdo.pushdown]
* [WebHCat Reference -- Put Partition | 
https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference+PutPartition]
* [HCatalog CLI -- Create/Drop/Alter Table | 
https://cwiki.apache.org/confluence/display/Hive/HCatalog+CLI#HCatalogCLI-Create/Drop/AlterTable]
* [HCatalog -- Dynamic Partitions | 
https://cwiki.apache.org/confluence/display/Hive/HCatalog+DynamicPartitions]
* [Running MapReduce with HCatalog -- Write Filter | 
https://cwiki.apache.org/confluence/display/Hive/HCatalog+InputOutput#HCatalogInputOutput-WriteFilter]
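
For reference, setting the quoted property on the metastore's hive-site.xml would look roughly like this (an illustrative fragment, not from any shipped configuration):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Enable JDO filter pushdown for integral partition columns so
       HCatalog can filter on non-string partition keys. -->
  <property>
    <name>hive.metastore.integral.jdo.pushdown</name>
    <value>true</value>
  </property>
</configuration>
```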

> Support non-string partition types in HCatalog
> --
>
> Key: HIVE-7164
> URL: https://issues.apache.org/jira/browse/HIVE-7164
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Reporter: bharath v
>
> Currently querying hive tables with non-string partition columns using HCat  
> gives us the following error. 
> Error: Filtering is supported only on partition keys of type string
> Related discussion here : 
> https://www.mail-archive.com/dev@hive.apache.org/msg18011.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8935) Add debug logging around token stores

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228589#comment-14228589
 ] 

Lefty Leverenz commented on HIVE-8935:
--

Okay, thanks Brock.

> Add debug logging around token stores
> -
>
> Key: HIVE-8935
> URL: https://issues.apache.org/jira/browse/HIVE-8935
> Project: Hive
>  Issue Type: Task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.15.0
>
> Attachments: HIVE-8935.patch, HIVE-8935.patch
>
>
> It's hard to debug issues related to delegation tokens due to a lack of debug 
> logging. This jira is to add debug logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228606#comment-14228606
 ] 

Hive QA commented on HIVE-8994:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684219/HIVE-8994.3-spark.patch

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 7229 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_multi_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_optional_elements
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_required_elements
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_structs
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_unannotated_groups
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_array_of_unannotated_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_avro_array_of_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_avro_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_map_of_maps
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_thrift_array_of_primitives
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_thrift_array_of_single_field_struct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_custom_input_output_format
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_parquet_join
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/466/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/466/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-466/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684219 - PreCommit-HIVE-SPARK-Build

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-8994.2-spark.patch, HIVE-8994.3-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228617#comment-14228617
 ] 

Brock Noland commented on HIVE-8994:


The parquet tests fail because svn doesn't handle binary files. Committed merge 
to branch.

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-8994.2-spark.patch, HIVE-8994.3-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8994) Merge from trunk Nov 28 2014

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8994:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Merge from trunk Nov 28 2014
> 
>
> Key: HIVE-8994
> URL: https://issues.apache.org/jira/browse/HIVE-8994
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-8994.2-spark.patch, HIVE-8994.3-spark.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-8996:
--

 Summary: Rename getUGIForConf
 Key: HIVE-8996
 URL: https://issues.apache.org/jira/browse/HIVE-8996
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
 Attachments: HIVE-8996.patch

getUGIForConf doesn't use the argument; let's rename it and remove the argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8996:
---
Assignee: Brock Noland
  Status: Patch Available  (was: Open)

> Rename getUGIForConf
> 
>
> Key: HIVE-8996
> URL: https://issues.apache.org/jira/browse/HIVE-8996
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-8996.patch
>
>
> getUGIForConf doesn't use the argument, let's rename it and remove the 
> argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8996:
---
Attachment: HIVE-8996.patch

> Rename getUGIForConf
> 
>
> Key: HIVE-8996
> URL: https://issues.apache.org/jira/browse/HIVE-8996
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
> Attachments: HIVE-8996.patch
>
>
> getUGIForConf doesn't use the argument, let's rename it and remove the 
> argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228626#comment-14228626
 ] 

Brock Noland commented on HIVE-8996:


FYI [~ashutoshc]

> Rename getUGIForConf
> 
>
> Key: HIVE-8996
> URL: https://issues.apache.org/jira/browse/HIVE-8996
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-8996.patch
>
>
> getUGIForConf doesn't use the argument, let's rename it and remove the 
> argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8995) Find thread leak in RSC Tests

2014-11-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228636#comment-14228636
 ] 

Brock Noland commented on HIVE-8995:


FYI [~vanzin] [~xuefuz]

> Find thread leak in RSC Tests
> -
>
> Key: HIVE-8995
> URL: https://issues.apache.org/jira/browse/HIVE-8995
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Brock Noland
>
> I was regenerating output as part of the merge:
> {noformat}
> mvn test -Dtest=TestSparkCliDriver -Phadoop-2 -Dtest.output.overwrite=true 
> -Dqfile=annotate_stats_join.q,auto_join0.q,auto_join1.q,auto_join10.q,auto_join11.q,auto_join12.q,auto_join13.q,auto_join14.q,auto_join15.q,auto_join16.q,auto_join17.q,auto_join18.q,auto_join18_multi_distinct.q,auto_join19.q,auto_join2.q,auto_join20.q,auto_join21.q,auto_join22.q,auto_join23.q,auto_join24.q,auto_join26.q,auto_join27.q,auto_join28.q,auto_join29.q,auto_join3.q,auto_join30.q,auto_join31.q,auto_join32.q,auto_join9.q,auto_join_reordering_values.q
>  
> auto_join_without_localtask.q,auto_smb_mapjoin_14.q,auto_sortmerge_join_1.q,auto_sortmerge_join_10.q,auto_sortmerge_join_11.q,auto_sortmerge_join_12.q,auto_sortmerge_join_14.q,auto_sortmerge_join_15.q,auto_sortmerge_join_2.q,auto_sortmerge_join_3.q,auto_sortmerge_join_4.q,auto_sortmerge_join_5.q,auto_sortmerge_join_6.q,auto_sortmerge_join_7.q,auto_sortmerge_join_8.q,auto_sortmerge_join_9.q,bucket_map_join_1.q,bucket_map_join_2.q,bucket_map_join_tez1.q,bucket_map_join_tez2.q,bucketmapjoin1.q,bucketmapjoin10.q,bucketmapjoin11.q,bucketmapjoin12.q,bucketmapjoin13.q,bucketmapjoin2.q,bucketmapjoin3.q,bucketmapjoin4.q,bucketmapjoin5.q,bucketmapjoin7.q
>  
> bucketmapjoin8.q,bucketmapjoin9.q,bucketmapjoin_negative.q,bucketmapjoin_negative2.q,bucketmapjoin_negative3.q,column_access_stats.q,cross_join.q,ctas.q,custom_input_output_format.q,groupby4.q,groupby7_noskew_multi_single_reducer.q,groupby_complex_types.q,groupby_complex_types_multi_single_reducer.q,groupby_multi_single_reducer2.q,groupby_multi_single_reducer3.q,groupby_position.q,groupby_sort_1_23.q,groupby_sort_skew_1_23.q,having.q,index_auto_self_join.q,infer_bucket_sort_convert_join.q,innerjoin.q,input12.q,join0.q,join1.q,join11.q,join12.q,join13.q,join14.q,join15.q
>  
> join17.q,join18.q,join18_multi_distinct.q,join19.q,join2.q,join20.q,join21.q,join22.q,join23.q,join25.q,join26.q,join27.q,join28.q,join29.q,join3.q,join30.q,join31.q,join32.q,join32_lessSize.q,join33.q,join35.q,join36.q,join37.q,join38.q,join39.q,join40.q,join41.q,join9.q,join_alt_syntax.q,join_cond_pushdown_1.q
>  
> join_cond_pushdown_2.q,join_cond_pushdown_3.q,join_cond_pushdown_4.q,join_cond_pushdown_unqual1.q,join_cond_pushdown_unqual2.q,join_cond_pushdown_unqual3.q,join_cond_pushdown_unqual4.q,join_filters_overlap.q,join_hive_626.q,join_map_ppr.q,join_merge_multi_expressions.q,join_merging.q,join_nullsafe.q,join_rc.q,join_reorder.q,join_reorder2.q,join_reorder3.q,join_reorder4.q,join_star.q,join_thrift.q,join_vc.q,join_view.q,limit_pushdown.q,load_dyn_part13.q,load_dyn_part14.q,louter_join_ppr.q,mapjoin1.q,mapjoin_decimal.q,mapjoin_distinct.q,mapjoin_filter_on_outerjoin.q
>  
> mapjoin_hook.q,mapjoin_mapjoin.q,mapjoin_memcheck.q,mapjoin_subquery.q,mapjoin_subquery2.q,mapjoin_test_outer.q,mergejoins.q,mergejoins_mixed.q,multi_insert.q,multi_insert_gby.q,multi_insert_gby2.q,multi_insert_gby3.q,multi_insert_lateral_view.q,multi_insert_mixed.q,multi_insert_move_tasks_share_dependencies.q,multi_join_union.q,optimize_nullscan.q,outer_join_ppr.q,parallel.q,parallel_join0.q,parallel_join1.q,parquet_join.q,pcr.q,ppd_gby_join.q,ppd_join.q,ppd_join2.q,ppd_join3.q,ppd_join4.q,ppd_join5.q,ppd_join_filter.q
>  
> ppd_multi_insert.q,ppd_outer_join1.q,ppd_outer_join2.q,ppd_outer_join3.q,ppd_outer_join4.q,ppd_outer_join5.q,ppd_transform.q,reduce_deduplicate_exclude_join.q,router_join_ppr.q,sample10.q,sample8.q,script_pipe.q,semijoin.q,skewjoin.q,skewjoin_noskew.q,skewjoin_union_remove_1.q,skewjoin_union_remove_2.q,skewjoinopt1.q,skewjoinopt10.q,skewjoinopt11.q,skewjoinopt12.q,skewjoinopt13.q,skewjoinopt14.q,skewjoinopt15.q,skewjoinopt16.q,skewjoinopt17.q,skewjoinopt18.q,skewjoinopt19.q,skewjoinopt2.q,skewjoinopt20.q
>  
> skewjoinopt3.q,skewjoinopt4.q,skewjoinopt5.q,skewjoinopt6.q,skewjoinopt7.q,skewjoinopt8.q,skewjoinopt9.q,smb_mapjoin9.q,smb_mapjoin_1.q,smb_mapjoin_10.q,smb_mapjoin_13.q,smb_mapjoin_14.q,smb_mapjoin_15.q,smb_mapjoin_16.q,smb_mapjoin_17.q,smb_mapjoin_2.q,smb_mapjoin_25.q,smb_mapjoin_3.q,smb_mapjoin_4.q,smb_mapjoin_5.q,smb_mapjoin_6.q,smb_mapjoin_7.q,sort_merge_join_desc_1.q,sort_merge_join_desc_2.q,sort_merge_join_desc_3.q,sort_merge_join_desc_4.q,sort_merge_join_desc_5.q,sort_merge_join_desc_6.q,sort_merge_join_desc_7.q,sort_merge_join_desc_8.q
>  
> stats1.q,subquery_in.q,subquery_multiinsert.q,table_access_keys_stats.q,t

[jira] [Commented] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228649#comment-14228649
 ] 

Hive QA commented on HIVE-8996:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12684228/HIVE-8996.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 6694 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1934/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1934/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1934/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12684228 - PreCommit-HIVE-TRUNK-Build

> Rename getUGIForConf
> 
>
> Key: HIVE-8996
> URL: https://issues.apache.org/jira/browse/HIVE-8996
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-8996.patch
>
>
> getUGIForConf doesn't use the argument, let's rename it and remove the 
> argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8996) Rename getUGIForConf

2014-11-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228651#comment-14228651
 ] 

Ashutosh Chauhan commented on HIVE-8996:


+1

> Rename getUGIForConf
> 
>
> Key: HIVE-8996
> URL: https://issues.apache.org/jira/browse/HIVE-8996
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-8996.patch
>
>
> getUGIForConf doesn't use the argument, let's rename it and remove the 
> argument



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-28 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Patch Available  (was: Open)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
> HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch, 
> HIVE-8774.8.patch, HIVE-8774.9.patch
>
>
> Right now, even when a groupby index is built, CBO is not able to use it. In 
> this patch, we are trying to make it use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-28 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Open  (was: Patch Available)

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
> HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch, 
> HIVE-8774.8.patch, HIVE-8774.9.patch
>
>
> Right now, even when a groupby index is built, CBO is not able to use it. In 
> this patch, we are trying to make it use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-28 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Attachment: HIVE-8774.9.patch

Address [~jpullokkaran]'s comments:
(1) study the schema of the group-by index
(2) file a JIRA bug
(3) fix the broken group-by index rewrite

> CBO: enable groupBy index
> -
>
> Key: HIVE-8774
> URL: https://issues.apache.org/jira/browse/HIVE-8774
> Project: Hive
>  Issue Type: Improvement
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
> HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch, 
> HIVE-8774.8.patch, HIVE-8774.9.patch
>
>
> Right now, even when a group-by index is built, CBO is not able to use it. In 
> this patch, we try to make CBO use the group-by index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27713: CBO: enable groupBy index

2014-11-28 Thread pengcheng xiong

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27713/
---

(Updated Nov. 29, 2014, 7:05 a.m.)


Review request for hive and John Pullokkaran.


Changes
---

address john's comments


Repository: hive-git


Description
---

Right now, even when a group-by index is built, CBO is not able to use it. In this 
patch, we try to make CBO use the group-by index that we build. The basic 
problem is that,
for a chain SEL1-SEL2-GBY-...-SEL3,
the previous version only modified SEL2, which immediately precedes the GBY.
Now, with CBO, we have many more SELs, e.g., SEL1.
So, the solution is to modify all of them.
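The strategy above (rewrite every SEL preceding the GBY, not just the immediate one) can be sketched as follows. The string operator names and the flat chain representation are purely illustrative, not Hive's actual operator classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative sketch: apply the index rewrite to EVERY SEL operator that
// precedes the GBY in the chain, instead of only the one immediately before it.
public class RewriteAllSels {
    static List<String> rewriteChain(List<String> ops, UnaryOperator<String> rewrite) {
        List<String> out = new ArrayList<>();
        boolean seenGby = false;
        for (String op : ops) {
            if (op.startsWith("GBY")) {
                seenGby = true;
                out.add(op);
            } else if (op.startsWith("SEL") && !seenGby) {
                out.add(rewrite.apply(op));  // rewrite all SELs before the GBY
            } else {
                out.add(op);                 // SELs after the GBY stay untouched
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // SEL1 and SEL2 both get rewritten; SEL3 (after the GBY) does not.
        List<String> rewritten =
            rewriteChain(List.of("SEL1", "SEL2", "GBY", "SEL3"), s -> s + "'");
        assert rewritten.equals(List.of("SEL1'", "SEL2'", "GBY", "SEL3"));
    }
}
```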


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyCtx.java 
9ffa708 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
 02216de 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java 
0f06ec9 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
 74614f3 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
 d699308 
  ql/src/test/queries/clientpositive/ql_rewrite_gbtoidx_cbo_1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/ql_rewrite_gbtoidx_cbo_2.q PRE-CREATION 
  ql/src/test/results/clientpositive/ql_rewrite_gbtoidx.q.out fdc1dc6 
  ql/src/test/results/clientpositive/ql_rewrite_gbtoidx_cbo_1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/ql_rewrite_gbtoidx_cbo_2.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27713/diff/


Testing
---


Thanks,

pengcheng xiong



[jira] [Created] (HIVE-8997) Groupby index will fail if an indexed group by operator is followed by a non-indexed group by operator

2014-11-28 Thread Pengcheng Xiong (JIRA)
Pengcheng Xiong created HIVE-8997:
-

 Summary: Groupby index will fail if an indexed group by operator 
is followed by a non-indexed group by operator
 Key: HIVE-8997
 URL: https://issues.apache.org/jira/browse/HIVE-8997
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong


Following ql_rewrite_gbtoidx.q, if we run

explain
select ckeysum, count(ckeysum)
from
(select l_shipdate, count(l_shipdate) as ckeysum
from lineitem_ix
group by l_shipdate) tabA
group by ckeysum

We will get an error:

junit.framework.AssertionFailedError: Client Execution failed with error code = 
4 running

The trace is 

MismatchedTokenException(-1!=12)
at 
org.antlr.runtime.BaseRecognizer.recoverFromMismatchedToken(BaseRecognizer.java:617)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.charSetStringLiteral(HiveParser_IdentifiersParser.java:6099)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.constant(HiveParser_IdentifiersParser.java:5891)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.atomExpression(HiveParser_IdentifiersParser.java:6478)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6641)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7026)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnarySuffixExpression(HiveParser_IdentifiersParser.java:7086)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseXorExpression(HiveParser_IdentifiersParser.java:7270)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceStarExpression(HiveParser_IdentifiersParser.java:7430)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedencePlusExpression(HiveParser_IdentifiersParser.java:7590)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAmpersandExpression(HiveParser_IdentifiersParser.java:7750)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseOrExpression(HiveParser_IdentifiersParser.java:7909)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:8439)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceNotExpression(HiveParser_IdentifiersParser.java:9452)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAndExpression(HiveParser_IdentifiersParser.java:9571)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceOrExpression(HiveParser_IdentifiersParser.java:9730)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.expression(HiveParser_IdentifiersParser.java:6363)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.groupByExpression(HiveParser_IdentifiersParser.java:1386)
at 
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.groupByClause(HiveParser_IdentifiersParser.java:774)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.groupByClause(HiveParser.java:44007)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.singleSelectStatement(HiveParser.java:41504)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:41135)
at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41072)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40125)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40001)
at 
org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1519)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1057)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at 
org.apache.hadoop.hive.ql.optimizer.index.RewriteParseContextGenerator.generateOperatorTree(RewriteParseContextGenerator.java:67)
at 
org.apache.hadoop.hive.ql.optimizer.index.RewriteQueryUsingAggregateIndex$NewQueryGroupbySchemaProc.process(RewriteQueryUsingAggregateIndex.java:255)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
at 
org.apache.hadoop.hive.ql.optimizer.index.RewriteQueryUsingAggregateIndexCtx.invokeRewriteQueryProc(RewriteQueryUsingAggregateIndexCtx

[jira] [Commented] (HIVE-8997) Groupby index will fail if an indexed group by operator is followed by a non-indexed group by operator

2014-11-28 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228660#comment-14228660
 ] 

Pengcheng Xiong commented on HIVE-8997:
---

This bug will be addressed by HIVE-8774.

> Groupby index will fail if an indexed group by operator is followed by a 
> non-indexed group by operator
> --
>
> Key: HIVE-8997
> URL: https://issues.apache.org/jira/browse/HIVE-8997
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> following ql_rewrite_gbtoidx.q, if we run
> explain
> select ckeysum, count(ckeysum)
> from
> (select l_shipdate, count(l_shipdate) as ckeysum
> from lineitem_ix
> group by l_shipdate) tabA
> group by ckeysum
> We will get an error:
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = 4 running

[jira] [Commented] (HIVE-8997) Groupby index will fail if an indexed group by operator is followed by a non-indexed group by operator

2014-11-28 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228661#comment-14228661
 ] 

Pengcheng Xiong commented on HIVE-8997:
---

This issue was filed according to [~jpullokkaran]'s comments.

> Groupby index will fail if an indexed group by operator is followed by a 
> non-indexed group by operator
> --
>
> Key: HIVE-8997
> URL: https://issues.apache.org/jira/browse/HIVE-8997
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> following ql_rewrite_gbtoidx.q, if we run
> explain
> select ckeysum, count(ckeysum)
> from
> (select l_shipdate, count(l_shipdate) as ckeysum
> from lineitem_ix
> group by l_shipdate) tabA
> group by ckeysum
> We will get an error:
> junit.framework.AssertionFailedError: Client Execution failed with error code 
> = 4 running

[jira] [Updated] (HIVE-7439) Spark job monitoring and error reporting [Spark Branch]

2014-11-28 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-7439:
-
Labels: Spark-M3 TODOC-SPARK  (was: Spark-M3)

> Spark job monitoring and error reporting [Spark Branch]
> ---
>
> Key: HIVE-7439
> URL: https://issues.apache.org/jira/browse/HIVE-7439
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Chengxiang Li
>  Labels: Spark-M3, TODOC-SPARK
> Fix For: spark-branch
>
> Attachments: HIVE-7439.1-spark.patch, HIVE-7439.2-spark.patch, 
> HIVE-7439.2-spark.patch, HIVE-7439.3-spark.patch, HIVE-7439.3-spark.patch, 
> hive on spark job status.PNG
>
>
> After Hive submits a job to the Spark cluster, we need to report the job 
> progress, such as the percentage done, to the user. This is especially 
> important for long-running queries. Moreover, if there is an error during job 
> submission or execution, it is also crucial for Hive to fetch the error log 
> and/or stack trace and feed it back to the user.
> Please refer to the design doc on the wiki for more information.
> CLEAR LIBRARY CACHE
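The percentage-done reporting described above reduces to a small calculation over task counts. The `JobStatus` interface below is a hypothetical stand-in for illustration only, not Hive's actual SparkJobStatus API:

```java
// Illustrative sketch of progress reporting: compute the percentage of
// completed tasks from a job-status handle. JobStatus is a hypothetical
// interface, not part of Hive or Spark.
public class ProgressMonitor {
    interface JobStatus {
        int completedTasks();
        int totalTasks();
    }

    static int percentDone(JobStatus s) {
        if (s.totalTasks() == 0) {
            return 0;  // no tasks scheduled yet; avoid division by zero
        }
        return (int) (100L * s.completedTasks() / s.totalTasks());
    }

    public static void main(String[] args) {
        JobStatus halfway = new JobStatus() {
            public int completedTasks() { return 5; }
            public int totalTasks() { return 10; }
        };
        assert percentDone(halfway) == 50;
    }
}
```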



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228662#comment-14228662
 ] 

Lefty Leverenz commented on HIVE-8834:
--

Good plan, [~xuefuz], I added a TODOC label to HIVE-7439.

> enable job progress monitoring of Remote Spark Context [Spark Branch]
> -
>
> Key: HIVE-8834
> URL: https://issues.apache.org/jira/browse/HIVE-8834
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Chengxiang Li
>Assignee: Rui Li
>  Labels: Spark-M3
> Fix For: spark-branch
>
> Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
> HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch, HIVE-8834.5-spark.patch, 
> HIVE-8834.6-spark.patch
>
>
> We should enable job progress monitor in Remote Spark Context, the spark job 
> progress info should fit into SparkJobStatus. SPARK-2321 supply new spark 
> progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7439) Spark job monitoring and error reporting [Spark Branch]

2014-11-28 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228664#comment-14228664
 ] 

Lefty Leverenz commented on HIVE-7439:
--

Hm, maybe we want some documentation for this after all.  (See doc comments on 
HIVE-8834:  
https://issues.apache.org/jira/browse/HIVE-8834?focusedCommentId=14228268&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14228268.)

> Spark job monitoring and error reporting [Spark Branch]
> ---
>
> Key: HIVE-7439
> URL: https://issues.apache.org/jira/browse/HIVE-7439
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Xuefu Zhang
>Assignee: Chengxiang Li
>  Labels: Spark-M3, TODOC-SPARK
> Fix For: spark-branch
>
> Attachments: HIVE-7439.1-spark.patch, HIVE-7439.2-spark.patch, 
> HIVE-7439.2-spark.patch, HIVE-7439.3-spark.patch, HIVE-7439.3-spark.patch, 
> hive on spark job status.PNG
>
>
> After Hive submits a job to the Spark cluster, we need to report the job 
> progress, such as the percentage done, to the user. This is especially 
> important for long-running queries. Moreover, if there is an error during job 
> submission or execution, it is also crucial for Hive to fetch the error log 
> and/or stack trace and feed it back to the user.
> Please refer to the design doc on the wiki for more information.
> CLEAR LIBRARY CACHE



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8806) Potential null dereference in Metrics#incrementCounter()

2014-11-28 Thread denny joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

denny joseph updated HIVE-8806:
---
Status: Patch Available  (was: Open)

> Potential null dereference in Metrics#incrementCounter()
> 
>
> Key: HIVE-8806
> URL: https://issues.apache.org/jira/browse/HIVE-8806
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HIVE-8806.1.patch
>
>
> {code}
>   if (!metrics.hasKey(name)) {
> value = Long.valueOf(increment);
> set(name, value);
>   } else {
> value = ((Long)get(name)) + increment;
> set(name, value);
>   }
> {code}
> In the else block, if get(name) returns null, unboxing the null object would 
> lead to a NullPointerException.
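A null-safe variant of the quoted logic can be sketched as follows. `SafeCounter` is an illustrative stand-alone class, not Hive's actual Metrics implementation; the key point is that a missing or null stored value is treated as zero rather than unboxed:

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative null-safe counter: ConcurrentHashMap.merge() handles the
// absent-key case atomically, and since only non-null Longs are ever stored,
// no null value can be unboxed.
public class SafeCounter {
    private final ConcurrentHashMap<String, Long> metrics = new ConcurrentHashMap<>();

    public long incrementCounter(String name, long increment) {
        // If the key is absent, merge() stores `increment`; otherwise it
        // applies Long::sum to the existing value and the increment.
        return metrics.merge(name, increment, Long::sum);
    }

    public static void main(String[] args) {
        SafeCounter c = new SafeCounter();
        assert c.incrementCounter("calls", 1) == 1L;
        assert c.incrementCounter("calls", 2) == 3L;
    }
}
```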



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8806) Potential null dereference in Metrics#incrementCounter()

2014-11-28 Thread denny joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

denny joseph updated HIVE-8806:
---
Attachment: HIVE-8806.1.patch

patch attached

> Potential null dereference in Metrics#incrementCounter()
> 
>
> Key: HIVE-8806
> URL: https://issues.apache.org/jira/browse/HIVE-8806
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HIVE-8806.1.patch
>
>
> {code}
>   if (!metrics.hasKey(name)) {
> value = Long.valueOf(increment);
> set(name, value);
>   } else {
> value = ((Long)get(name)) + increment;
> set(name, value);
>   }
> {code}
> In the else block, if get(name) returns null, unboxing the null object would 
> lead to a NullPointerException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)