[jira] [Commented] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.

2016-06-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320123#comment-15320123
 ] 

Lefty Leverenz commented on HIVE-13760:
---

Doc note:  This adds hive.query.timeout.seconds to HiveConf.java in release 
2.2.0, so it will need to be documented in the wiki.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

Added a TODOC2.2 label.
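
For the wiki entry, a usage sketch (assuming the semantics described in this
issue: value in seconds, -1 = no timeout):

{code}
-- kill any query in this session that runs longer than one hour;
-- -1 (the default) means no timeout
SET hive.query.timeout.seconds=3600;
{code}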

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value.
> 
>
> Key: HIVE-13760
> URL: https://issues.apache.org/jira/browse/HIVE-13760
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.0.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if it has been running 
> for more than the configured timeout value. The default value will be -1, 
> which means no timeout. This will be useful for users to manage queries with 
> SLAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13760) Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running for more than the configured timeout value.

2016-06-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13760:
--
Labels: TODOC2.2  (was: )

> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if a query is running 
> for more than the configured timeout value.
> 
>
> Key: HIVE-13760
> URL: https://issues.apache.org/jira/browse/HIVE-13760
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 2.0.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13760.000.patch, HIVE-13760.001.patch
>
>
> Add a HIVE_QUERY_TIMEOUT configuration to kill a query if it has been running 
> for more than the configured timeout value. The default value will be -1, 
> which means no timeout. This will be useful for users to manage queries with 
> SLAs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-07 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13972:
-
Attachment: HIVE-13972.1.patch

[~ekoifman] Can you take a look?

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13972.1.patch
>
>
> HIVE-13354 moved the helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java.
> This introduced a dependency from the ql package to the metastore package, 
> which is not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to the common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-06-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320109#comment-15320109
 ] 

Lefty Leverenz commented on HIVE-13853:
---

Changed TODOC2.2 to TODOC2.1.

> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in Thrift-over-HTTP mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not we're using the filter, 
> and if we are, we will automatically reject any HTTP requests which do not 
> contain this header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side programs/APIs not using the JDBC driver directly will need to 
> add an X-XSRF-Header header to their requests to HS2/WebHCat if this filter 
> is enabled.
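
As an illustration of the client-side change, a sketch (not from the patch) of
a raw HTTP client adding the header; host, port, and path are illustrative HS2
HTTP-mode defaults:

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class XsrfHeaderExample {
  public static void main(String[] args) throws IOException {
    // sketch: a non-JDBC client must add the header itself once the
    // filter is enabled, mirroring what the JDBC driver does automatically
    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://hs2-host:10001/cliservice").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("X-XSRF-Header", "sketch");
    conn.connect();
  }
}
{code}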



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-06-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13853:
--
Labels: TODOC2.1  (was: TODOC2.2)

> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in Thrift-over-HTTP mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not we're using the filter, 
> and if we are, we will automatically reject any HTTP requests which do not 
> contain this header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side programs/APIs not using the JDBC driver directly will need to 
> add an X-XSRF-Header header to their requests to HS2/WebHCat if this filter 
> is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320087#comment-15320087
 ] 

Lefty Leverenz commented on HIVE-13954:
---

Doc note:  parquet-logging.properties should be documented for releases 2.1.0 
and 1.3.0 in the Parquet wikidoc, with a cross-reference from the Hive Logging 
section in Getting Started.

* [Parquet | https://cwiki.apache.org/confluence/display/Hive/Parquet]
* [Getting Started -- Hive Logging | 
https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-HiveLogging]

Added TODOC2.1 and TODOC1.3 labels.

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC1.3, TODOC2.1
> Fix For: 1.3.0, 2.1.0, 2.2.0
>
> Attachments: HIVE-13954-branch-1.patch, HIVE-13954.1.patch
>
>
> Parquet uses java.util.logging. When Java logging is not configured via the 
> default logging.properties file, Parquet's fallback handler writes to STDOUT 
> at INFO level. Hive writes all logging to STDERR and writes only the query 
> output to STDOUT, so writing logs to STDOUT may cause issues when comparing 
> query results.
> If we provide a default logging.properties for Parquet, then we can 
> configure it to write to a file or STDERR.
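
For reference, a minimal sketch of such a logging.properties; it relies on the
standard java.util.logging behavior that ConsoleHandler publishes to
System.err, keeping STDOUT clean for query results:

{code}
# route all java.util.logging output (including Parquet's) to STDERR
handlers=java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
{code}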



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13662) Set file permission and ACL in file sink operator

2016-06-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320084#comment-15320084
 ] 

Ashutosh Chauhan commented on HIVE-13662:
-

Sounds like you need to AND-apply the reverse umask to get the desired 
behavior: 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Changes_to_the_File_System_API
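
A minimal sketch of one workaround (not the HIVE-13662 patch itself): per the
guide above, create() masks the requested permission with the process umask,
so the full bits can be lost; setPermission() afterwards is not subject to
the umask. Names below are illustrative.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionSketch {
  // sketch: create the file, then set the permission explicitly so the
  // umask applied inside create() cannot strip the requested bits
  static void createWithPermission(FileSystem fs, Path outPath,
      FsPermission requested) throws IOException {
    fs.create(outPath, requested, true,
        fs.getConf().getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(outPath),
        fs.getDefaultBlockSize(outPath), null).close();
    fs.setPermission(outPath, requested); // umask does not apply here
  }
}
{code}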

> Set file permission and ACL in file sink operator
> -
>
> Key: HIVE-13662
> URL: https://issues.apache.org/jira/browse/HIVE-13662
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13662.01.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HIVE-13572?focusedCommentId=15254438&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15254438].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13954:
--
Labels: TODOC1.3 TODOC2.1  (was: )

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC1.3, TODOC2.1
> Fix For: 1.3.0, 2.1.0, 2.2.0
>
> Attachments: HIVE-13954-branch-1.patch, HIVE-13954.1.patch
>
>
> Parquet uses java util logging. When java logging is not configured using 
> default logging.properties file, parquet's default fallback handler writes to 
> STDOUT at INFO level. Hive writes all logging to STDERR and writes only the 
> query output to STDOUT. Writing logs to STDOUT may cause issues when 
> comparing query results. 
> If we provide default logging.properties for parquet then we can configure it 
> to write to file or stderr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13662) Set file permission and ACL in file sink operator

2016-06-07 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320074#comment-15320074
 ] 

Pengcheng Xiong commented on HIVE-13662:


Hi [~ashutoshc], I found a bigger problem. It seems that when we call
{code}
HiveFileFormatUtils.getHiveRecordWriter(jc, conf.getTableInfo(),
    outputClass, conf, fsp.outPaths[filesIdx], permission, reporter);
{code}
in the case of HiveIgnoreKeyTextOutputFormat, for example,
{code}
createOutStream = Utilities.createCompressedStream(
    jc,
    fs.create(outPath, permission, true,
        fs.getConf().getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(outPath),
        fs.getDefaultBlockSize(outPath), progress),
    isCompressed);
{code}
if {{permission}} is rwxrwxrwx, the outPath is still rwxr-xr-x. It seems that 
fs.create() with a permission argument does not take effect.

> Set file permission and ACL in file sink operator
> -
>
> Key: HIVE-13662
> URL: https://issues.apache.org/jira/browse/HIVE-13662
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13662.01.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HIVE-13572?focusedCommentId=15254438&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15254438].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13972) Resolve class dependency issue introduced by HIVE-13354

2016-06-07 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320045#comment-15320045
 ] 

Wei Zheng commented on HIVE-13972:
--

[~jcamachorodriguez] FYI, this needs to make the 2.1 release to avoid breaking ACID.

> Resolve class dependency issue introduced by HIVE-13354
> ---
>
> Key: HIVE-13972
> URL: https://issues.apache.org/jira/browse/HIVE-13972
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
>
> HIVE-13354 moved the helper class StringableMap from 
> ql/txn/compactor/CompactorMR.java to metastore/txn/TxnUtils.java.
> This introduced a dependency from the ql package to the metastore package, 
> which is not allowed and fails in a real cluster.
> Instead of moving it to metastore, it should be moved to the common package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-07 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13563:
-
Attachment: HIVE-13563.3.patch

Patch 3 addresses the review comment.

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch, 
> HIVE-13563.3.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.
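
For reference, the kind of DDL involved (names and values here are
illustrative; Hive Streaming targets bucketed transactional ORC tables):

{code}
CREATE TABLE streaming_target (id INT, msg STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('orc.compress.size'='262144',
               'orc.stripe.size'='67108864',
               'transactional'='true');
{code}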



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13921) Fix spark on yarn tests for HoS

2016-06-07 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320037#comment-15320037
 ] 

Rui Li commented on HIVE-13921:
---

Hey [~ashutoshc], the failure happens only if the target file somehow exists 
before the move. That means we need multiple tests writing to the same 
directory to trigger it. For Spark, we have {{index_bitmap3.q}} and 
{{index_bitmap_auto.q}}, which both write to 
{code}${system:test.tmp.dir}/index_result{code}
If you run either of them alone, it passes. But the latter fails if you run 
both of them.
You can also reproduce the failure for {{TestMinimrCliDriver}} by running the 
two tests together.

We could remove the directory after each test to clean up the environment 
(see the sketch below); I verified that this fixes the failure.
But ideally we shouldn't need to, because the query is an {{INSERT OVERWRITE 
DIRECTORY}}. That's why I think the code change makes sense. What's your 
opinion?
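
The cleanup alternative would be a hypothetical one-line postamble in each of
the two qfiles (sketch only; the code change is the preferred fix):

{code}
-- remove the shared output directory so a later test starts clean
dfs -rmr ${system:test.tmp.dir}/index_result;
{code}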

> Fix spark on yarn tests for HoS
> ---
>
> Key: HIVE-13921
> URL: https://issues.apache.org/jira/browse/HIVE-13921
> Project: Hive
>  Issue Type: Test
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-13921.1.patch
>
>
> {{index_bitmap3}} and {{constprog_partitioner}} have been failing. Let's fix 
> them here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13971) Address testcase failures of acid_globallimit.q and acid_table_stats.q

2016-06-07 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong resolved HIVE-13971.

Resolution: Fixed

> Address testcase failures of acid_globallimit.q and acid_table_stats.q
> --
>
> Key: HIVE-13971
> URL: https://issues.apache.org/jira/browse/HIVE-13971
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13838) Set basic stats as inaccurate for all ACID tables

2016-06-07 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong resolved HIVE-13838.

Resolution: Fixed

update the golden files.

> Set basic stats as inaccurate for all ACID tables
> -
>
> Key: HIVE-13838
> URL: https://issues.apache.org/jira/browse/HIVE-13838
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13838.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13838) Set basic stats as inaccurate for all ACID tables

2016-06-07 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320034#comment-15320034
 ] 

Pengcheng Xiong edited comment on HIVE-13838 at 6/8/16 5:33 AM:


updated the golden files.


was (Author: pxiong):
update the golden files.

> Set basic stats as inaccurate for all ACID tables
> -
>
> Key: HIVE-13838
> URL: https://issues.apache.org/jira/browse/HIVE-13838
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.1.0
>
> Attachments: HIVE-13838.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-07 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Status: Open  (was: Patch Available)

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it to 
> false in some test cases to maintain the original purpose of those tests.
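
A sketch of the per-test override this implies:

{code}
-- when on, simple aggregates (e.g. count(1), min, max) are answered from
-- metastore stats without scanning; tests that need a real scan turn it off
SET hive.compute.query.using.stats=true;
SET hive.compute.query.using.stats=false;
{code}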



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12656) Turn hive.compute.query.using.stats on by default

2016-06-07 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12656:
---
Status: Patch Available  (was: Open)

> Turn hive.compute.query.using.stats on by default
> -
>
> Key: HIVE-12656
> URL: https://issues.apache.org/jira/browse/HIVE-12656
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12656.01.patch
>
>
> We now have hive.compute.query.using.stats=false by default. We plan to turn 
> it on by default so that we get better performance. We can also set it to 
> false in some test cases to maintain the original purpose of those tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319948#comment-15319948
 ] 

Hive QA commented on HIVE-13968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808785/HIVE-13968.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10223 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.ql.TestTxnCommands.testSimpleAcidInsert
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/40/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/40/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-40/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808785 - PreCommit-HIVE-MASTER-Build

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch
>
>
> If I have 100 paths in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> InputFormat returns false for AvoidSplitCombination.shouldSkipCombine() for 
> all the paths.
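
For context, a sketch of an InputFormat that opts out of split combination
via this interface (class name is made up; with the bug described above,
paths beyond the first batch get combined regardless of this answer):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

public class NoCombineTextInputFormat extends TextInputFormat
    implements CombineHiveInputFormat.AvoidSplitCombination {
  @Override
  public boolean shouldSkipCombine(Path path, Configuration conf)
      throws IOException {
    return true; // never combine splits for files read by this format
  }
}
{code}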



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12988) Improve dynamic partition loading IV

2016-06-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319911#comment-15319911
 ] 

Lefty Leverenz commented on HIVE-12988:
---

HIVE-13933 changes the description and possible values of 
*hive.mv.files.thread*, also in 2.1.0.

> Improve dynamic partition loading IV
> 
>
> Key: HIVE-12988
> URL: https://issues.apache.org/jira/browse/HIVE-12988
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-12988.2.patch, HIVE-12988.2.patch, 
> HIVE-12988.3.patch, HIVE-12988.4.patch, HIVE-12988.5.patch, 
> HIVE-12988.6.patch, HIVE-12988.7.patch, HIVE-12988.patch
>
>
> Parallelize copyFiles()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13933) Add an option to turn off parallel file moves

2016-06-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319910#comment-15319910
 ] 

Lefty Leverenz commented on HIVE-13933:
---

Doc note:  This changes the description and possible values of 
*hive.mv.files.thread*, which was introduced by HIVE-12988 for release 2.1.0.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

Added a TODOC2.1 label.

> Add an option to turn off parallel file moves
> -
>
> Key: HIVE-13933
> URL: https://issues.apache.org/jira/browse/HIVE-13933
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13933.patch
>
>
> Since this is a new feature, it makes sense to have the ability to turn it off.
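
A sketch of the off switch, assuming (per the revised property description)
that 0 disables the thread pool and falls back to sequential file moves:

{code}
SET hive.mv.files.thread=0;
{code}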



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13933) Add an option to turn off parallel file moves

2016-06-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13933:
--
Labels: TODOC2.1  (was: )

> Add an option to turn off parallel file moves
> -
>
> Key: HIVE-13933
> URL: https://issues.apache.org/jira/browse/HIVE-13933
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13933.patch
>
>
> Since this is a new feature, it makes sense to have the ability to turn it off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.03.patch

Minor build fixes


> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319854#comment-15319854
 ] 

Hive QA commented on HIVE-13392:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808782/HIVE-13392.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10208 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-cbo_windowing.q-tez_join.q-bucket_map_join_tez1.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/39/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/39/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-39/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808782 - PreCommit-HIVE-MASTER-Build

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.2.patch, HIVE-13392.3.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...
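
The short-term fix amounts to the following in the compactor's job setup (a
sketch using the JobConf API named in the description, not the actual patch):

{code}
import org.apache.hadoop.mapred.JobConf;

public class CompactorJobSketch {
  public static void main(String[] args) {
    // pin speculative execution off for the compactor job so two task
    // attempts never race to create the same bucket file
    JobConf job = new JobConf();
    job.setMapSpeculativeExecution(false);    // mapred.map.tasks.speculative.execution
    job.setReduceSpeculativeExecution(false); // mapred.reduce.tasks.speculative.execution
  }
}
{code}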



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-07 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319842#comment-15319842
 ] 

Rui Li commented on HIVE-13968:
---

Thanks [~prasanna@gmail.com] for fixing it. The patch looks good to me.
Can you add a test to cover this?

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch
>
>
> If I have 100 paths in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> InputFormat returns false for AvoidSplitCombination.shouldSkipCombine() for 
> all the paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13723) Executing join query on type Float using Thrift Serde will result in Float cast to Double error

2016-06-07 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-13723:

Assignee: Ziyang Zhao

> Executing join query on type Float using Thrift Serde will result in Float 
> cast to Double error
> ---
>
> Key: HIVE-13723
> URL: https://issues.apache.org/jira/browse/HIVE-13723
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Ziyang Zhao
>Assignee: Ziyang Zhao
>Priority: Critical
> Attachments: HIVE-13723.1.patch, HIVE-13723.2.patch
>
>
> After enabling the Thrift SerDe, execute the following queries in Beeline:
> >create table test1 (a int);
> >create table test2 (b float);
> >insert into test1 values (1);
> >insert into test2 values (1);
> >select * from test1 join test2 on test1.a=test2.b;
> this will give the error:
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
> ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
> [hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:568) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception from MapJoinOperator : 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: 
> java.lang.Float cannot be cast to java.lang.Double
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:454)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
>  ~[hive-exec-2.1.0-

[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.03.patch

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: (was: HIVE-13443.02.wo.13675.nogen.patch)

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.03.wo.13675.nogen.patch

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.03.patch, 
> HIVE-13443.03.wo.13675.nogen.patch, HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13443) LLAP: signing for the second state of submit (the event)

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13443:

Attachment: HIVE-13443.02.wo.13675.nogen.patch

> LLAP: signing for the second state of submit (the event)
> 
>
> Key: HIVE-13443
> URL: https://issues.apache.org/jira/browse/HIVE-13443
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13443.01.patch, HIVE-13443.02.patch, 
> HIVE-13443.02.wo.13675.nogen.patch, HIVE-13443.02.wo.13675.nogen.patch, 
> HIVE-13443.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13392:
--
Attachment: HIVE-13392.3.patch

patch 3: really move the class to common

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.2.patch, HIVE-13392.3.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-07 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319811#comment-15319811
 ] 

Prasanth Jayachandran commented on HIVE-13563:
--

Looks good, +1. Minor comment: 

{code}
AcidOutputFormat.Options optionsCloneForDelta = null;
{code}

can be moved inside the else branch, since it is not referenced outside the 
condition:

{code}
} else {
  AcidOutputFormat.Options optionsCloneForDelta = options.clone();
}
{code}

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13723) Executing join query on type Float using Thrift Serde will result in Float cast to Double error

2016-06-07 Thread Ziyang Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ziyang Zhao updated HIVE-13723:
---
Attachment: HIVE-13723.2.patch

Fixed some whitespace issues.

> Executing join query on type Float using Thrift Serde will result in Float 
> cast to Double error
> ---
>
> Key: HIVE-13723
> URL: https://issues.apache.org/jira/browse/HIVE-13723
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Ziyang Zhao
>Priority: Critical
> Attachments: HIVE-13723.1.patch, HIVE-13723.2.patch
>
>
> After enabling the Thrift SerDe, execute the following queries in Beeline:
> >create table test1 (a int);
> >create table test2 (b float);
> >insert into test1 values (1);
> >insert into test2 values (1);
> >select * from test1 join test2 on test1.a=test2.b;
> this will give the error:
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
> ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
> [hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:568) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception from MapJoinOperator : 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: 
> java.lang.Float cannot be cast to java.lang.Double
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:454)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
>  ~[hive-exec-2.1.0-SNAPS

[jira] [Updated] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2016-06-07 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13563:
-
Attachment: HIVE-13563.2.patch

Patch 2. [~prasanth_j] Can you take a look?

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13957:

Attachment: HIVE-13957.02.patch

Updated the patch to use the default precision and scale instead of the max 
values; this makes the other UDFs valid in the vast majority of cases.

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.01.patch, HIVE-13957.02.patch, 
> HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list.
> This can cause queries to produce incorrect results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-07 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13961:
-
Attachment: HIVE-13961.3.patch

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch, 
> HIVE-13961.3.patch
>
>
> The issue can be reproduced by steps below:
> 1. Insert a row to Non-ACID table
> 2. Convert Non-ACID to ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction
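
In HiveQL, the repro steps look roughly like this (table and data names are
illustrative):

{code}
-- 1. non-ACID table with one original bucket file
CREATE TABLE t (a INT) CLUSTERED BY (a) INTO 2 BUCKETS STORED AS ORC;
INSERT INTO t VALUES (1);
-- 2. convert to ACID
ALTER TABLE t SET TBLPROPERTIES ('transactional'='true');
-- 3. major compaction fails to pick up the original bucket file
ALTER TABLE t COMPACT 'major';
{code}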



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319778#comment-15319778
 ] 

Xuefu Zhang commented on HIVE-13964:


Got it. Sounds good then.

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a Beeline 
> parameter. It may be useful to be able to pass the file in as a parameter, 
> such as --property-file.
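
Usage would presumably look like the following (the flag name is the one
proposed here, not an existing option):

{code}
beeline --property-file /path/to/beeline.properties
{code}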



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319775#comment-15319775
 ] 

Sergey Shelukhin commented on HIVE-13963:
-

Yeah, we could do that too. Perhaps I should also do it in the other patch as 
part of the quick fix.

> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, 
> i.e. 38,38. Those don't do what the code may assume they do: all values >= 1 
> become invalid, and precision-scale enforcement automatically converts them 
> to null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)
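
A quick illustration of the failure mode, using standard Hive decimal
semantics (values that do not fit the declared precision/scale become NULL):

{code}
-- decimal(38,38) leaves zero digits before the decimal point, so any value
-- >= 1 cannot be represented and is silently turned into NULL
SELECT CAST(1.5 AS DECIMAL(38,38));  -- NULL
SELECT CAST(0.5 AS DECIMAL(38,38));  -- 0.5
{code}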



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319762#comment-15319762
 ] 

Xuefu Zhang commented on HIVE-13963:


Got it. Thanks for the explanation. Yeah, casting based on the input's max 
precision/scale is problematic, as decimal(38, 38) fits practically no 
values. Using the target type is one option. The other option is to use the 
default decimal precision/scale, which is (38, 18). This default is assumed 
as the type for any decimal value that doesn't have an explicit 
precision/scale.

> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, 
> i.e. 38,38. Those don't do what the code may assume they do: all values >= 1 
> become invalid, and precision-scale enforcement automatically converts them 
> to null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13959) MoveTask should only release its query associated locks

2016-06-07 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319761#comment-15319761
 ] 

Chaoyu Tang commented on HIVE-13959:


[~ychena] Thanks for the review. For your analysis and questions, please see 
below:
Yes -- one WriteEntity maps to one List<HiveLockObj>.
Yes -- these lists of HiveLockObj are all created during acquireLocks for the 
query.
Yes -- in the releaseLocks code, lockObj.getObj() returns a HiveLockObject.
The problem is here: List<HiveLock> locks = lockMgr.getLocks(lockObj.getObj(), 
false, true); it returns all locks under the pathName, which might not be 
related to this MoveTask's query:
{code}
// The getLocks method in ZookeeperHiveLockManager:
private static List<HiveLock> getLocks(HiveConf conf,
    HiveLockObject key, String parent, boolean verifyTablePartition, boolean fetchData)
    throws LockException {
  List<HiveLock> locks = new ArrayList<HiveLock>();
  List<String> children;
  boolean recurse = true;
  String commonParent;

  try {
    if (key != null) {
      commonParent = "/" + parent + "/" + key.getName();
      children = curatorFramework.getChildren().forPath(commonParent);
      /* ==> this call returns all locks under commonParent, say
         db/cdhpart/LOCK-SHARE-00 and db/cdhpart/LOCK-SHARE-01 for
         pathNames db/cdhpart */
      recurse = false;
    } else {
      commonParent = "/" + parent;
      children = curatorFramework.getChildren().forPath(commonParent);
    }
  } catch (Exception e) {
    // no locks present
    return locks;
  }
{code}
For example, if we run query1 in one session ("insert overwrite table cdhpart 
partition (level1= 'l1', level2='l2', level3 = 'l3', level4) select key, value, 
level4 from cdhsrc;") and query2 concurrently in another session ("select * 
from cdhpart where level1 = 'l1'"), query1 and query2 each have their own znode 
(lock) under the pathName (db/cdhpart/), say LOCK-SHARE-00 and 
LOCK-SHARE-01 respectively. getLocks for a HiveLockObject key whose 
getName() value is db/cdhpart/ will return both LOCK-SHARE-00 and 
LOCK-SHARE-01. But LOCK-SHARE-01 is not in ctx.getHiveLocks(), the lock 
list for query1, so ctx.getHiveLocks().remove() returns false because 
HiveLockObjectData.equals always returns false due to the different 
queryStr/queryId; therefore lockMgr.unlock(lock) should not be called to unlock 
query2's LOCK-SHARE-01.
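A minimal sketch of the guard this analysis implies (names follow the comment 
above and the public lock-manager interfaces, not the attached patch):

{code}
import java.util.List;
import org.apache.hadoop.hive.ql.Context;
import org.apache.hadoop.hive.ql.lockmgr.HiveLock;
import org.apache.hadoop.hive.ql.lockmgr.HiveLockManager;
import org.apache.hadoop.hive.ql.lockmgr.HiveLockObject;
import org.apache.hadoop.hive.ql.lockmgr.LockException;

public class ReleaseOwnLocksSketch {
  /** Release only the locks that this query's context actually acquired. */
  static void releaseOwnLocks(HiveLockManager lockMgr, Context ctx, HiveLockObject key)
      throws LockException {
    List<HiveLock> candidates = lockMgr.getLocks(key, false, true);
    for (HiveLock lock : candidates) {
      // remove() is true only if the lock is in this query's lock list,
      // i.e. HiveLockObjectData.equals matched on queryStr/queryId
      if (ctx.getHiveLocks().remove(lock)) {
        lockMgr.unlock(lock);
      }
      // locks held by concurrent queries on the same pathName are left alone
    }
  }
}
{code}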



> MoveTask should only release its query associated locks
> ---
>
> Key: HIVE-13959
> URL: https://issues.apache.org/jira/browse/HIVE-13959
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-13959.patch, HIVE-13959.patch
>
>
> releaseLocks in MoveTask releases all locks under a HiveLockObject pathNames. 
> But some of locks under this pathNames might be for other queries and should 
> not be released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319746#comment-15319746
 ] 

Abdullah Yousufi edited comment on HIVE-13964 at 6/8/16 12:14 AM:
--

This patch shouldn't reintroduce that error message as it existed before, 
unless an invalid file is passed to the --property-file parameter. Are you 
suggesting the error message for such a parameter be different? Thanks for the 
feedback!

To clarify, this patch adds a new parameter that will allow passing in the 
property file. It adds a description of the parameter in the command line help 
as well.
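For reference, a hypothetical invocation under this patch (the flag name 
follows the proposal; the property key names shown are illustrative, not 
confirmed from the patch):

{noformat}
$ cat /tmp/beeline.properties
url=jdbc:hive2://localhost:10000/default
driver=org.apache.hive.jdbc.HiveDriver

$ beeline --property-file /tmp/beeline.properties
{noformat}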


was (Author: ayousufi):
This patch shouldn't reintroduce that error message as it existed before, 
unless an invalid file is passed to the --property-file parameter. Are you 
suggesting the error message for such a parameter be different? Thanks for the 
feedback!

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter, such as --property-file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated HIVE-13964:

Description: HIVE-6652 removed the ability to pass in a properties file as 
a beeline parameter. It may be a useful feature to be able to pass the file in 
as a parameter, such as --property-file.  (was: HIVE-6652 removed the ability 
to pass in a properties file as a beeline parameter. It may be a useful feature 
to be able to pass the file in as a parameter.)

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter, such as --property-file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319746#comment-15319746
 ] 

Abdullah Yousufi commented on HIVE-13964:
-

This patch shouldn't reintroduce that error message as it existed before, 
unless an invalid file is passed to the --property-file parameter. Are you 
suggesting the error message for such a parameter be different? Thanks for the 
feedback!

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13963:

Description: 
See HIVE-13957.
The default precision and scale for the implicit decimal cast are max,max, ie 
38,38. Those don't do what the code may assume they do. All the values >=1 
become invalid and precision-scale enforcement automatically converts them to 
null.

We need to 
1) Validate when this happens in/after the conversion code and bail;
2) Or, derive precision and scale from the constants themselves so they all 
fit, instead;
3) Or, derive it from the type of whatever caused the conversion in the first 
place (e.g. IN column decimal); however, this could be function-specific (e.g. 
IN just needs equality, BETWEEN would need at least one extra digit, 
arithmetic, if this ever happens, would need everything, etc.);
4) Something else? :)


  was:
See HIVE-13957.
The default precision and scale for the implicit decimal cast are max,max, ie 
38,38. Those don't do what the code may assume they do. All the values > 0 
become invalid and precision-scale enforcement automatically converts them to 
null.

We need to 
1) Validate when this happens in/after the conversion code and bail;
2) Or, derive precision and scale from the constants themselves so they all 
fit, instead;
3) Or, derive it from the type of whatever caused the conversion in the first 
place (e.g. IN column decimal); however, this could be function-specific (e.g. 
IN just needs equality, BETWEEN would need at least one extra digit, 
arithmetic, if this ever happens, would need everything, etc.);
4) Something else? :)



> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, ie 
> 38,38. Those don't do what the code may assume they do. All the values >=1 
> become invalid and precision-scale enforcement automatically converts them to 
> null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319735#comment-15319735
 ] 

Sergey Shelukhin commented on HIVE-13963:
-

[~xuefuz] see the new q file added in HIVE-13957. Before the fix there (which 
disables vectorization of IN for such cases), the vectorized query returns no 
results.
The problem is the vectorization code that adds casts to the arguments of UDFs 
like IN (before evaluating them) and derives the precision and scale for the 
cast from the type.
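To make the inconsistency concrete, a hypothetical repro along those lines 
(table and column names are illustrative, not taken from the q file):

{code}
-- dec is decimal(10,0) and holds the value 1
select * from t where dec in ('1', '2');
-- non-vectorized: the cast is applied to the column, so the row is returned
-- vectorized (before HIVE-13957): '1' and '2' are cast to decimal(38,38),
-- become null, and the query returns no results
{code}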

> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, ie 
> 38,38. Those don't do what the code may assume they do. All the values > 0 
> become invalid and precision-scale enforcement automatically converts them to 
> null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319733#comment-15319733
 ] 

Hive QA commented on HIVE-7443:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12658595/HIVE-7443.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/38/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/38/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-38/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-38/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   0e975f0..52c7c04  branch-1   -> origin/branch-1
+ git reset --hard HEAD
HEAD is now at ef7dc77 HIVE-13954: Parquet logs should go to STDERR (Prasanth 
Jayachandran reviewed by Gunther Hagleitner)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at ef7dc77 HIVE-13954: Parquet logs should go to STDERR (Prasanth 
Jayachandran reviewed by Gunther Hagleitner)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12658595 - PreCommit-HIVE-MASTER-Build

> Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
> alternative JDKs
> ---
>
> Key: HIVE-7443
> URL: https://issues.apache.org/jira/browse/HIVE-7443
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC, Security
>Affects Versions: 0.12.0, 0.13.1
> Environment: Kerberos
> Run Hive server2 and client with IBM JDK7.1
>Reporter: Yu Gao
>Assignee: Yu Gao
> Attachments: HIVE-7443.patch
>
>
> Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
> initialize the current login user's ticket cache successfully, and then tried 
> to use beeline to connect to Hive Server2, but failed. After I manually added 
> some logging to catch the failure exception, this is what I got that caused 
> the failure:
> beeline>  !connect 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
>  org.apache.hive.jdbc.HiveDriver
> scan complete in 2ms
> Connecting to 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
> Enter password for 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM:
> 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
> javax.security.sasl.SaslException: Failed to open client transport [Caused by 
> java.io.IOException: Could not instantiate SASL transport]
> at 
> org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
> at 
> org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
> at 
> org.apache.hive.jdbc.HiveCo

[jira] [Commented] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319731#comment-15319731
 ] 

Hive QA commented on HIVE-13961:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808755/HIVE-13961.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10225 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/37/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/37/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-37/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808755 - PreCommit-HIVE-MASTER-Build

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch
>
>
> The issue can be reproduced by the steps below:
> 1. Insert a row into a Non-ACID table
> 2. Convert the Non-ACID table to an ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction
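
A hypothetical script for those steps (the table definition is illustrative; 
any bucketed ORC table should do):

{code}
create table t (a int) clustered by (a) into 2 buckets stored as orc;
insert into t values (1);                                   -- step 1: original bucket file, no delta
alter table t set tblproperties ('transactional'='true');   -- step 2: convert to ACID
alter table t compact 'major';                              -- step 3: major compaction
{code}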



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-07 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319722#comment-15319722
 ] 

Xuefu Zhang commented on HIVE-13968:


[~lirui], it seemed you did the original work. Would you mind taking a look at 
the patch? Thanks.

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch
>
>
> If I have 100 paths in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> InputFormat returns false from AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 
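
A self-contained sketch of the intended contract, with the interface modeled 
after CombineHiveInputFormat.AvoidSplitCombination (this is not the attached 
patch): every path must be consulted, not just the first batch.

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class SplitCombineSketch {
  /** Modeled after CombineHiveInputFormat.AvoidSplitCombination. */
  interface AvoidSplitCombination {
    boolean shouldSkipCombine(Path path, Configuration conf) throws IOException;
  }

  /** Intended behavior: consult shouldSkipCombine for every path. */
  static void partitionPaths(Path[] paths, Object inputFormat, Configuration conf,
      List<Path> nonCombinable, List<Path> combinable) throws IOException {
    for (Path p : paths) {
      if (inputFormat instanceof AvoidSplitCombination
          && ((AvoidSplitCombination) inputFormat).shouldSkipCombine(p, conf)) {
        nonCombinable.add(p);
      } else {
        combinable.add(p);
      }
    }
  }
}
{code}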



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319720#comment-15319720
 ] 

Xuefu Zhang commented on HIVE-13964:


We cannot just bring back what was removed by HIVE-6652. We also need to 
address the problems described in that JIRA.

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319715#comment-15319715
 ] 

Xuefu Zhang commented on HIVE-13963:


Re: "The default precision and scale for the implicit decimal cast are 
max,max, ie 38,38."

Can you give an example of this?

> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, ie 
> 38,38. Those don't do what the code may assume they do. All the values > 0 
> become invalid and precision-scale enforcement automatically converts them to 
> null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13675:

Attachment: HIVE-13675.08.patch

The patch has disappeared from the queue. Resubmitting.

> LLAP: add HMAC signatures to LLAPIF splits
> --
>
> Key: HIVE-13675
> URL: https://issues.apache.org/jira/browse/HIVE-13675
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13675.01.patch, HIVE-13675.02.patch, 
> HIVE-13675.03.patch, HIVE-13675.04.patch, HIVE-13675.05.patch, 
> HIVE-13675.06.patch, HIVE-13675.07.patch, HIVE-13675.08.patch, 
> HIVE-13675.WIP.patch, HIVE-13675.wo.13444.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13957:

Attachment: HIVE-13957.01.patch

Updated the out files

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.01.patch, HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list.
> This can cause queries to produce incorrect results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-07 Thread Prasanna Rajaperumal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Rajaperumal updated HIVE-13968:

Attachment: HIVE-13968.1.patch

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
> Attachments: HIVE-13968.1.patch
>
>
> If I have 100 paths in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> InputFormat returns false from AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-07 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319682#comment-15319682
 ] 

Vaibhav Gumashta commented on HIVE-13931:
-

Just verified that the datanucleus version on master is compatible with 
HikariCP. Please ignore comment above.

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.2.patch, HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13968) CombineHiveInputFormat does not honor InputFormat that implements AvoidSplitCombination

2016-06-07 Thread Prasanna Rajaperumal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanna Rajaperumal updated HIVE-13968:

Status: Patch Available  (was: Open)

> CombineHiveInputFormat does not honor InputFormat that implements 
> AvoidSplitCombination
> ---
>
> Key: HIVE-13968
> URL: https://issues.apache.org/jira/browse/HIVE-13968
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanna Rajaperumal
>Assignee: Prasanna Rajaperumal
>
> If I have 100 paths in path[], nonCombinablePaths will contain only 
> paths[0-9] and the rest of the paths will be in combinablePaths, even if the 
> InputFormat returns false from AvoidSplitCombination.shouldSkipCombine() for 
> all the paths. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13960) Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for back-to-back synchronous operations.

2016-06-07 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319671#comment-15319671
 ] 

Jimmy Xiang commented on HIVE-13960:


Cool. +1.

> Session timeout may happen before HIVE_SERVER2_IDLE_SESSION_TIMEOUT for 
> back-to-back synchronous operations.
> 
>
> Key: HIVE-13960
> URL: https://issues.apache.org/jira/browse/HIVE-13960
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-13960.000.patch
>
>
> Session timeout may happen before 
> HIVE_SERVER2_IDLE_SESSION_TIMEOUT(hive.server2.idle.session.timeout) for 
> back-to-back synchronous operations.
> This issue can happen with the following two operations, op1 and op2, where 
> op2 is a synchronous long-running operation that starts right after op1 is 
> closed:
>  
> 1. closeOperation(op1) is called:
> this will set {{lastIdleTime}} with value System.currentTimeMillis() because 
> {{opHandleSet}} becomes empty after {{closeOperation}} remove op1 from 
> {{opHandleSet}}.
> 2. op2 runs for a long time via {{executeStatement}}, called right after 
> closeOperation(op1).
> If op2 runs for more than HIVE_SERVER2_IDLE_SESSION_TIMEOUT, the session will 
> time out even though op2 is still running.
> We hit this issue when using PyHive to execute a non-async operation.
> The following is the exception we see:
> {code}
> File "/usr/local/lib/python2.7/dist-packages/pyhive/hive.py", line 126, in 
> close
> _check_status(response)
>   File "/usr/local/lib/python2.7/dist-packages/pyhive/hive.py", line 362, in 
> _check_status
> raise OperationalError(response)
> OperationalError: TCloseSessionResp(status=TStatus(errorCode=0, 
> errorMessage='Session does not exist!', sqlState=None, 
> infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Session does not 
> exist!:12:11', 
> 'org.apache.hive.service.cli.session.SessionManager:closeSession:SessionManager.java:311',
>  'org.apache.hive.service.cli.CLIService:closeSession:CLIService.java:221', 
> 'org.apache.hive.service.cli.thrift.ThriftCLIService:CloseSession:ThriftCLIService.java:471',
>  
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1273',
>  
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1258',
>  'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 
> 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 
> 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
>  
> 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
>  
> 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145',
>  
> 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615',
>  'java.lang.Thread:run:Thread.java:745'], statusCode=3))
> TCloseSessionResp(status=TStatus(errorCode=0, errorMessage='Session does not 
> exist!', sqlState=None, 
> infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Session does not 
> exist!:12:11', 
> 'org.apache.hive.service.cli.session.SessionManager:closeSession:SessionManager.java:311',
>  'org.apache.hive.service.cli.CLIService:closeSession:CLIService.java:221', 
> 'org.apache.hive.service.cli.thrift.ThriftCLIService:CloseSession:ThriftCLIService.java:471',
>  
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1273',
>  
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession:getResult:TCLIService.java:1258',
>  'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 
> 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 
> 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
>  
> 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
>  
> 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145',
>  
> 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615',
>  'java.lang.Thread:run:Thread.java:745'], statusCode=3))
> {code}
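
An illustrative model of the race (field and method names are made up for the 
sketch, not the actual SessionManager/HiveSessionImpl members): the timeout 
check must ignore any session with an in-flight synchronous call.

{code}
/** Illustrative model only, not the actual HiveSessionImpl code. */
class SessionIdleModel {
  private long lastIdleTime;  // set when opHandleSet becomes empty
  private int activeCalls;    // in-flight synchronous operations

  synchronized void callStarted() { activeCalls++; lastIdleTime = 0; }

  synchronized void callFinished(boolean noOpenOperations) {
    activeCalls--;
    if (noOpenOperations && activeCalls == 0) {
      lastIdleTime = System.currentTimeMillis();
    }
  }

  /** A session with an in-flight call is never idle, however old lastIdleTime is. */
  synchronized boolean isTimedOut(long now, long timeoutMs) {
    return timeoutMs > 0 && activeCalls == 0 && lastIdleTime > 0
        && now - lastIdleTime > timeoutMs;
  }
}
{code}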



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13392:
--
Attachment: HIVE-13392.2.patch

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.2.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...
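
The short-term fix amounts to forcing the two JobConf options off when the 
compactor submits its job; a minimal sketch using the option names from the 
description above:

{code}
import org.apache.hadoop.mapred.JobConf;

public class CompactorJobConfSketch {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // never run compactor map/reduce tasks speculatively, so two attempts
    // can't fight over the same output file lease
    job.setBoolean("mapred.map.tasks.speculative.execution", false);
    job.setBoolean("mapred.reduce.tasks.speculative.execution", false);
  }
}
{code}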



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319654#comment-15319654
 ] 

Eugene Koifman commented on HIVE-13392:
---

Patch 2 moves ValidCompactorTxnList to common. 

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.2.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13392:
--
Status: Patch Available  (was: Open)

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.2.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13392) disable speculative execution for ACID Compactor

2016-06-07 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13392:
--
Status: Open  (was: Patch Available)

This needs a new patch.  The current change introduces a dependency from the ql 
package on the metastore package.  ValidCompactorTxnList needs to move to common.

> disable speculative execution for ACID Compactor
> 
>
> Key: HIVE-13392
> URL: https://issues.apache.org/jira/browse/HIVE-13392
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative 
> execution for the mappers and reducers by setting the 
> mapred.map.tasks.speculative.execution and 
> mapred.reduce.tasks.speculative.execution JobConf options to false, 
> respectively.
> CompactorMR is currently not set up to handle speculative execution and may 
> lead to something like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  Failed to CREATE_FILE 
> /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_4
>  for DFSClient_attempt_1454628390210_27756_m_01_1_131224698_1 on 
> 172.18.129.12 because this file lease is currently owned by 
> DFSClient_attempt_1454628390210_27756_m_01_0_-2027182532_1 on 
> 172.18.129.18
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job
> Longer term perhaps make each task write to dir with UUID...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13967) CREATE table fails when 'values' column name is found on the table spec.

2016-06-07 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi reassigned HIVE-13967:
---

Assignee: Abdullah Yousufi

> CREATE table fails when 'values' column name is found on the table spec.
> 
>
> Key: HIVE-13967
> URL: https://issues.apache.org/jira/browse/HIVE-13967
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergio Peña
>Assignee: Abdullah Yousufi
>
> {noformat}
> hive> create table pkv (key int, values string);
> FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11914)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:51795)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:42051)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFK(HiveParser.java:42308)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFKList(HiveParser.java:37966)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5259)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2763)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1756)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1178)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> FAILED: ParseException line 1:27 Failed to recognize predicate 'values'. 
> Failed rule: 'identifier' in column specification
> {noformat}
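
Until this is fixed, two workarounds should apply (the config was introduced by 
HIVE-6617 and matches the useSQL11ReservedKeywordsForIdentifier predicate in 
the trace above; its behavior on 2.2.0 is stated as an assumption, not 
verified):

{noformat}
hive> set hive.support.sql11.reserved.keywords=false;
hive> create table pkv (key int, values string);

-- or keep the setting and quote the reserved word instead:
hive> create table pkv (key int, `values` string);
{noformat}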



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13967) CREATE table fails when 'values' column name is found on the table spec.

2016-06-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-13967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-13967:
---
Affects Version/s: 2.2.0

> CREATE table fails when 'values' column name is found on the table spec.
> 
>
> Key: HIVE-13967
> URL: https://issues.apache.org/jira/browse/HIVE-13967
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergio Peña
>
> {noformat}
> hive> create table pkv (key int, values string);
> FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:11914)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:51795)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:42051)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFK(HiveParser.java:42308)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeOrPKOrFKList(HiveParser.java:37966)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5259)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2763)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1756)
> at 
> org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1178)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204)
> at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> FAILED: ParseException line 1:27 Failed to recognize predicate 'values'. 
> Failed rule: 'identifier' in column specification
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-07 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319616#comment-15319616
 ] 

Vaibhav Gumashta commented on HIVE-13931:
-

This will need a datanucleus upgrade as well: 
http://www.datanucleus.org/servlet/jira/browse/NUCRDBMS-771

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.2.patch, HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13778) DROP TABLE PURGE on S3A table with too many files does not delete the files

2016-06-07 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil resolved HIVE-13778.
--
Resolution: Duplicate

Duplicated by HADOOP-13230

> DROP TABLE PURGE on S3A table with too many files does not delete the files
> ---
>
> Key: HIVE-13778
> URL: https://issues.apache.org/jira/browse/HIVE-13778
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sailesh Mukil
>Priority: Critical
>  Labels: metastore, s3
>
> I've noticed that when we do a DROP TABLE tablename PURGE on a table on S3A 
> that has many files, the files never get deleted. However, the Hive metastore 
> logs do say that the path was deleted:
> "Not moving [path] to trash"
> "Deleted the diretory [path]"
> I initially thought that this was due to the eventually consistent nature of 
> S3 for deletes, however, a week later, the files still exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-13778) DROP TABLE PURGE on S3A table with too many files does not delete the files

2016-06-07 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil reopened HIVE-13778:
--

> DROP TABLE PURGE on S3A table with too many files does not delete the files
> ---
>
> Key: HIVE-13778
> URL: https://issues.apache.org/jira/browse/HIVE-13778
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sailesh Mukil
>Priority: Critical
>  Labels: metastore, s3
>
> I've noticed that when we do a DROP TABLE tablename PURGE on a table on S3A 
> that has many files, the files never get deleted. However, the Hive metastore 
> logs do say that the path was deleted:
> "Not moving [path] to trash"
> "Deleted the diretory [path]"
> I initially thought that this was due to the eventually consistent nature of 
> S3 for deletes, however, a week later, the files still exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13954:
-
Attachment: HIVE-13954-branch-1.patch

Committed to branch-1 as well

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 1.3.0, 2.1.0, 2.2.0
>
> Attachments: HIVE-13954-branch-1.patch, HIVE-13954.1.patch
>
>
> Parquet uses java util logging. When java logging is not configured via a 
> logging.properties file, parquet's default fallback handler writes to STDOUT 
> at INFO level. Hive writes all logging to STDERR and writes only the query 
> output to STDOUT. Writing logs to STDOUT may cause issues when comparing 
> query results. 
> If we provide a default logging.properties for parquet, we can configure it 
> to write to a file or to stderr.
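
For illustration, the same redirection can be done programmatically with plain 
java.util.logging (a sketch, not the patch; the logger name assumes parquet's 
org.apache.parquet package, and ConsoleHandler publishes to System.err):

{code}
import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class ParquetLogRedirectSketch {
  public static void main(String[] args) {
    Logger parquetLogger = Logger.getLogger("org.apache.parquet");
    // detach from parent handlers so the STDOUT fallback described above is bypassed
    parquetLogger.setUseParentHandlers(false);
    // ConsoleHandler writes to System.err, matching Hive's convention
    parquetLogger.addHandler(new ConsoleHandler());
  }
}
{code}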



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13954:
-
Fix Version/s: 1.3.0

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 1.3.0, 2.1.0, 2.2.0
>
> Attachments: HIVE-13954-branch-1.patch, HIVE-13954.1.patch
>
>
> Parquet uses java util logging. When java logging is not configured via a 
> logging.properties file, parquet's default fallback handler writes to STDOUT 
> at INFO level. Hive writes all logging to STDERR and writes only the query 
> output to STDOUT. Writing logs to STDOUT may cause issues when comparing 
> query results. 
> If we provide a default logging.properties for parquet, we can configure it 
> to write to a file or to stderr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13884) Disallow queries fetching more than a configured number of partitions in PartitionPruner

2016-06-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319592#comment-15319592
 ] 

Sergio Peña commented on HIVE-13884:


[~hagleitn] [~selinazh] I saw you added the partition limit in HIVE-6492. This 
ticket extends that limit to the metastore to avoid OOM exceptions when 
fetching too many partitions. Could you help me review this patch? Or do you 
know anyone else who understands the metastore?

> Disallow queries fetching more than a configured number of partitions in 
> PartitionPruner
> 
>
> Key: HIVE-13884
> URL: https://issues.apache.org/jira/browse/HIVE-13884
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mohit Sabharwal
>Assignee: Sergio Peña
> Attachments: HIVE-13884.1.patch
>
>
> Currently the PartitionPruner requests either all partitions or partitions 
> based on filter expression. In either scenarios, if the number of partitions 
> accessed is large there can be significant memory pressure at the HMS server 
> end.
> We already have a config {{hive.limit.query.max.table.partition}} that 
> enforces limits on number of partitions that may be scanned per operator. But 
> this check happens after the PartitionPruner has already fetched all 
> partitions.
> We should add an option at PartitionPruner level to disallow queries that 
> attempt to access number of partitions beyond a configurable limit.
> Note that {{hive.mapred.mode=strict}} disallows queries without a partition 
> filter in PartitionPruner, but this check accepts any query with a pruning 
> condition, even if partitions fetched are large. In multi-tenant 
> environments, admins could use more control over the number of partitions 
> allowed, based on HMS memory capacity.
> One option is to have PartitionPruner first fetch the partition names 
> (instead of partition specs) and throw an exception if number of partitions 
> exceeds the configured value. Otherwise, fetch the partition specs.
> Looks like the existing {{listPartitionNames}} call could be used if extended 
> to take partition filter expressions like {{getPartitionsByExpr}} call does.
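A minimal sketch of the proposed names-first check; the config variable and the surrounding wiring here are assumptions for illustration, not the actual patch:

{noformat}
// Fetch only the partition names first; bail out before materializing
// the full partition objects if the count exceeds the configured limit.
List<String> partNames = msClient.listPartitionNames(dbName, tblName, (short) -1);
int limit = conf.getIntVar(HiveConf.ConfVars.METASTORE_LIMIT_PARTITION_REQUEST); // hypothetical config
if (limit >= 0 && partNames.size() > limit) {
  throw new SemanticException("Query fetches " + partNames.size()
      + " partitions, which exceeds the configured limit of " + limit);
}
// Otherwise fetch the full partition specs as before.
{noformat}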



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13778) DROP TABLE PURGE on S3A table with too many files does not delete the files

2016-06-07 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319577#comment-15319577
 ] 

Aaron Fabbri commented on HIVE-13778:
-

Thanks. You could also resolve it as "Duplicated By".

> DROP TABLE PURGE on S3A table with too many files does not delete the files
> ---
>
> Key: HIVE-13778
> URL: https://issues.apache.org/jira/browse/HIVE-13778
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sailesh Mukil
>Priority: Critical
>  Labels: metastore, s3
>
> I've noticed that when we do a DROP TABLE tablename PURGE on a table on S3A 
> that has many files, the files never get deleted. However, the Hive metastore 
> logs do say that the path was deleted:
> "Not moving [path] to trash"
> "Deleted the diretory [path]"
> I initially thought that this was due to the eventually consistent nature of 
> S3 for deletes; however, a week later, the files still exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13959) MoveTask should only release its query associated locks

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319545#comment-15319545
 ] 

Hive QA commented on HIVE-13959:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808724/HIVE-13959.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10223 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/36/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/36/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-36/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808724 - PreCommit-HIVE-MASTER-Build

> MoveTask should only release its query associated locks
> ---
>
> Key: HIVE-13959
> URL: https://issues.apache.org/jira/browse/HIVE-13959
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-13959.patch, HIVE-13959.patch
>
>
> releaseLocks in MoveTask releases all locks under a HiveLockObject's pathNames. 
> But some of the locks under these pathNames might be for other queries and should 
> not be released.
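A minimal sketch of the idea, assuming HiveLockObjectData exposes the lock's query id; this is illustrative only, not the committed patch:

{noformat}
// Release only the locks whose lock data carries this query's id.
String queryId = conf.getVar(HiveConf.ConfVars.HIVEQUERYID);
for (HiveLock lock : lockMgr.getLocks(lockObj.getObj(), false, true)) {
  HiveLockObject.HiveLockObjectData data = lock.getHiveLockObject().getData();
  if (data != null && queryId.equals(data.getQueryId())) {
    lockMgr.unlock(lock);  // skip locks acquired by other queries
  }
}
{noformat}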



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319542#comment-15319542
 ] 

Prasanth Jayachandran commented on HIVE-13954:
--

Test failures are unrelated. Committed patch to branch-2.1 and master.

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 2.1.0, 2.2.0
>
> Attachments: HIVE-13954.1.patch
>
>
> Parquet uses java util logging. When java logging is not configured using 
> default logging.properties file, parquet's default fallback handler writes to 
> STDOUT at INFO level. Hive writes all logging to STDERR and writes only the 
> query output to STDOUT. Writing logs to STDOUT may cause issues when 
> comparing query results. 
> If we provide a default logging.properties for parquet, then we can configure it 
> to write to a file or stderr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13954) Parquet logs should go to STDERR

2016-06-07 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13954:
-
   Resolution: Fixed
Fix Version/s: 2.2.0
   2.1.0
   Status: Resolved  (was: Patch Available)

> Parquet logs should go to STDERR
> 
>
> Key: HIVE-13954
> URL: https://issues.apache.org/jira/browse/HIVE-13954
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 2.1.0, 2.2.0
>
> Attachments: HIVE-13954.1.patch
>
>
> Parquet uses java util logging. When java logging is not configured using 
> default logging.properties file, parquet's default fallback handler writes to 
> STDOUT at INFO level. Hive writes all logging to STDERR and writes only the 
> query output to STDOUT. Writing logs to STDOUT may cause issues when 
> comparing query results. 
> If we provide a default logging.properties for parquet, then we can configure it 
> to write to a file or stderr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-13964:
---
Fix Version/s: (was: 2.1.0)
   2.2.0

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated HIVE-13964:

Attachment: HIVE-13964.01.patch

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HIVE-13964.01.patch
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated HIVE-13964:

Affects Version/s: 2.0.1

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Affects Versions: 2.0.1
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.1.0
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13964) Add a parameter to beeline to allow a properties file to be passed in

2016-06-07 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated HIVE-13964:

Fix Version/s: 2.1.0

> Add a parameter to beeline to allow a properties file to be passed in
> -
>
> Key: HIVE-13964
> URL: https://issues.apache.org/jira/browse/HIVE-13964
> Project: Hive
>  Issue Type: New Feature
>  Components: Beeline
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Minor
> Fix For: 2.1.0
>
>
> HIVE-6652 removed the ability to pass in a properties file as a beeline 
> parameter. It may be a useful feature to be able to pass the file in as a 
> parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-13778) DROP TABLE PURGE on S3A table with too many files does not delete the files

2016-06-07 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil resolved HIVE-13778.
--
Resolution: Fixed

[~fabbri], I don't have Assign privileges here. I'll just resolve the issue 
myself. The root cause of the issue was found and is tracked by the following 
JIRA:
HADOOP-13230

> DROP TABLE PURGE on S3A table with too many files does not delete the files
> ---
>
> Key: HIVE-13778
> URL: https://issues.apache.org/jira/browse/HIVE-13778
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sailesh Mukil
>Priority: Critical
>  Labels: metastore, s3
>
> I've noticed that when we do a DROP TABLE tablename PURGE on a table on S3A 
> that has many files, the files never get deleted. However, the Hive metastore 
> logs do say that the path was deleted:
> "Not moving [path] to trash"
> "Deleted the diretory [path]"
> I initially thought that this was due to the eventually consistent nature of 
> S3 for deletes; however, a week later, the files still exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12454) in tez model when i use add jar xxx query will fail

2016-06-07 Thread Nilesh Maheshwari (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319469#comment-15319469
 ] 

Nilesh Maheshwari commented on HIVE-12454:
--

What was the issue, and how did it get resolved?

> in tez model when i use add jar xxx query will fail
> ---
>
> Key: HIVE-12454
> URL: https://issues.apache.org/jira/browse/HIVE-12454
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 0.14.0
>Reporter: Feng Yuan
> Fix For: 1.3.0
>
>
> Whatever the HQL is, as soon as I use {{add jar udf.jar}} the query throws the following:
> Status: Running (Executing on YARN cluster with App id 
> application_1447448264041_0723)
> 
> VERTICES  STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  
> KILLED
> 
> Map 1 FAILED -1  00   -1   0  
>  0
> Map 2 FAILED -1  00   -1   0  
>  0
> 
> VERTICES: 00/02  [>>--] 0%ELAPSED TIME: 0.27 s
>  
> 
> Status: Failed
> Vertex failed, vertexName=Map 2, vertexId=vertex_1447448264041_0723_1_00, 
> diagnostics=[Vertex vertex_1447448264041_0723_1_00 [Map 2] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: mid_bdi_customer_online initializer 
> failed, vertex=vertex_1447448264041_0723_1_00 [Map 2], 
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hive.shims.HadoopShims.getMergedCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:104)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ]
> Vertex failed, vertexName=Map 1, vertexId=vertex_1447448264041_0723_1_01, 
> diagnostics=[Vertex vertex_1447448264041_0723_1_01 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: raw_kafka_event_pageview_dt0 
> initializer failed, vertex=vertex_1447448264041_0723_1_01 [Map 1], 
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.hive.shims.HadoopShims.getMergedCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:104)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:245)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:239)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:239)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:226)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ]
> DAG failed due to vertex failure. failedVertices:2 killedVertices:0
> FAILED: Execution Error, return code 2 from 
> org.apache.hado

[jira] [Updated] (HIVE-13965) Empty resultset run into Exception when using Thrift Binary Serde

2016-06-07 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-13965:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-12427

> Empty resultset run into Exception when using Thrift Binary Serde
> -
>
> Key: HIVE-13965
> URL: https://issues.apache.org/jira/browse/HIVE-13965
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 2.1.0
>Reporter: Ziyang Zhao
>
> This error can be reproduced by enabling the thrift binary serde, using beeline 
> to connect to hiveserver2, and executing the following commands:
> >create table test3(num1 int);
> >create table test4(num1 int);
> >insert into test3 values(1);
> >insert into test4 values(2);
> >select * from test3 join test4 on test3.num1=test4.num1;
> The result should be empty, but it gives an exception:
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:206)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1029)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:641)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:195)
> ... 8 more
> This error is caused in FileSinkOperator.java. 
> If the resultset is empty, function process() will not be called, so variable 
> "fpaths" will not be set. When run into CloseOp(), 
> if (conf.isHiveServerQuery() && HiveConf.getBoolVar(hconf,
>  HiveConf.ConfVars.HIVE_SERVER2_THRIFT_RESULTSET_SERIALIZE_IN_TASKS) 
> &&
>  
> serializer.getClass().getName().equalsIgnoreCase(ThriftJDBCBinarySerDe.class.getName()))
>  {
>  try {
>recordValue = serializer.serialize(null, inputObjInspectors[0]);
>rowOutWriters = fpaths.outWriters;
>rowOutWriters[0].write(recordValue);
>  } catch (SerDeException | IOException e) {
>throw new HiveException(e);
>  }
>  }
> Here fpaths is null.
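One possible guard, sketched under the assumption that skipping the write is acceptable when no sink path was initialized; the real fix may instead need to initialize fpaths so the Thrift metadata row still gets written:

{noformat}
// fpaths stays null when process() never ran (empty resultset),
// so only emit the serialized null row when a sink path exists.
if (fpaths != null) {
  recordValue = serializer.serialize(null, inputObjInspectors[0]);
  rowOutWriters = fpaths.outWriters;
  rowOutWriters[0].write(recordValue);
}
{noformat}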



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13965) Empty resultset run into Exception when using Thrift Binary Serde

2016-06-07 Thread Ziyang Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ziyang Zhao updated HIVE-13965:
---
Description: 
This error can be reproduced by enabling the thrift binary serde, using beeline 
to connect to hiveserver2, and executing the following commands:
>create table test3(num1 int);
>create table test4(num1 int);
>insert into test3 values(1);
>insert into test4 values(2);
>select * from test3 join test4 on test3.num1=test4.num1;

The result should be empty, but it gives an exception:

Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:206)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1029)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:641)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:195)
... 8 more

This error is caused in FileSinkOperator.java. 
If the resultset is empty, function process() will not be called, so variable 
"fpaths" will not be set. When run into CloseOp(), 
if (conf.isHiveServerQuery() && HiveConf.getBoolVar(hconf,
 HiveConf.ConfVars.HIVE_SERVER2_THRIFT_RESULTSET_SERIALIZE_IN_TASKS) &&
 
serializer.getClass().getName().equalsIgnoreCase(ThriftJDBCBinarySerDe.class.getName()))
 {
 try {
   recordValue = serializer.serialize(null, inputObjInspectors[0]);
   rowOutWriters = fpaths.outWriters;
   rowOutWriters[0].write(recordValue);
 } catch (SerDeException | IOException e) {
   throw new HiveException(e);
 }
 }
Here fpaths is null.


  was:
This error can be reproduced by enabling thrift binary serde, and using beeline 
connect to hiveserver2, executing the following commands:
>create table test3(num1 int);
>create table test4(num1 int);
>insert into test3 values(1);
>insert into test4 values(2);
>select * from test3 join test4 on test3.num1=test4.num1;

The result should be empty, but it gives an exception:

Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:206)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1029)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:641)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:195)
... 8 more

This error is caused in FileSinkOperator.java. If the resultset is empty, 
function process() will not be called. So when run into CloseOp(), 
if (conf.isHiveServerQuery() && HiveConf.getBoolVar(hconf,
 HiveConf.ConfVars.HIVE_SERVER2_THRIFT_RESULTSET_SERIALIZE_IN_TASKS) &&
 
serializer.getClass().getName().equalsIgnoreCase(ThriftJDBCBinarySerDe.class.getName()))
 {
 try {
   recordValue = serializer.serialize(null, inputObjInspectors[0]);
   rowOutWriters = fpaths.outWriters;
   rowOutWriters[0].write(recordValue);
 } catch (SerDeException | IOException e) {
   throw new Hi

[jira] [Updated] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-07 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13961:
-
Attachment: HIVE-13961.2.patch

patch 2 addresses Eugene's comments

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch, HIVE-13961.2.patch
>
>
> The issue can be reproduced by steps below:
> 1. Insert a row to Non-ACID table
> 2. Convert Non-ACID to ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction
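A minimal HiveQL sketch of those steps (table and column names are illustrative):

{noformat}
CREATE TABLE t (a INT) CLUSTERED BY (a) INTO 1 BUCKETS STORED AS ORC;
INSERT INTO t VALUES (1);                                  -- step 1: original bucket file, no delta
ALTER TABLE t SET TBLPROPERTIES ('transactional'='true');  -- step 2: convert to ACID
ALTER TABLE t COMPACT 'major';                             -- step 3: compaction misses the original file
{noformat}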



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13723) Executing join query on type Float using Thrift Serde will result in Float cast to Double error

2016-06-07 Thread Ziyang Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ziyang Zhao updated HIVE-13723:
---
Attachment: HIVE-13723.1.patch

In this patch, instead of setting the type of the float ColumnBuffer to 
TYPE_DOUBLE, I set it to TYPE_FLOAT but let it go through the same process as 
the double ColumnBuffer.
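A sketch of that approach; the variable names here are assumed for illustration, not copied from the patch:

{noformat}
// Sketch only: widen the boxed Float explicitly before it enters the
// double ColumnBuffer path, so no Float -> Double cast ever happens.
Object value = resultRowField;                        // assumed to be a java.lang.Float
double widened = value == null ? 0d : ((Number) value).doubleValue();
doubleValues.add(widened);                            // doubleValues: the double buffer's backing list
{noformat}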

> Executing join query on type Float using Thrift Serde will result in Float 
> cast to Double error
> ---
>
> Key: HIVE-13723
> URL: https://issues.apache.org/jira/browse/HIVE-13723
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC, Serializers/Deserializers
>Affects Versions: 2.1.0
>Reporter: Ziyang Zhao
>Priority: Critical
> Attachments: HIVE-13723.1.patch
>
>
> After enabling the thrift SerDe, execute the following queries in beeline:
> >create table test1 (a int);
> >create table test2 (b float);
> >insert into test1 values (1);
> >insert into test2 values (1);
> >select * from test1 join test2 on test1.a=test2.b;
> this will give the error:
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
> ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
> [hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"b":1.0}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:568) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) 
> ~[hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:?]
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>  ~[hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:?]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[?:1.7.0_95]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[?:1.7.0_95]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[?:1.7.0_95]
> at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_95]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception from MapJoinOperator : 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.ClassCastException: 
> java.lang.Float cannot be cast to java.lang.Double
> at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:454)
>  ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) 
> ~[hive-exec-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
> 

[jira] [Commented] (HIVE-13961) ACID: Major compaction fails to include the original bucket files if there's no delta directory

2016-06-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319256#comment-15319256
 ] 

Hive QA commented on HIVE-13961:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12808726/HIVE-13961.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 10225 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testPermFunc
org.apache.hive.jdbc.TestJdbcWithMiniMr.testPermFunc
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/35/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/35/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-35/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12808726 - PreCommit-HIVE-MASTER-Build

> ACID: Major compaction fails to include the original bucket files if there's 
> no delta directory
> ---
>
> Key: HIVE-13961
> URL: https://issues.apache.org/jira/browse/HIVE-13961
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.3.0, 2.1.0, 2.2.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Blocker
> Attachments: HIVE-13961.1.patch
>
>
> The issue can be reproduced by steps below:
> 1. Insert a row to Non-ACID table
> 2. Convert Non-ACID to ACID table (i.e. set transactional=true table property)
> 3. Perform Major compaction



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13959) MoveTask should only release its query associated locks

2016-06-07 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319241#comment-15319241
 ] 

Yongzhi Chen commented on HIVE-13959:
-

[~ctang.ma]
I do not quite understand how MoveTask could release locks under a 
HiveLockObject's pathNames that belong to other queries.
It looks to me as follows:
One LoadTableDesc object maps to one WriteEntity;
one WriteEntity maps to one List<HiveLockObj>.
These HiveLockObj entries are all created during acquireLocks for the query.
In the releaseLocks code, lockObj.getObj() returns a HiveLockObject, and
List<HiveLock> locks = lockMgr.getLocks(lockObj.getObj(), false, true);
HiveLockObject's equals method requires that both the pathNames and the 
HiveLockObjectData are the same, and HiveLockObjectData's equals method 
requires that query_id, lock_mode, lock_time, etc. are all the same. 
So I think the locks returned by getLocks should all be related to the query, 
and therefore MoveTask has no chance to release another query's locks. 
Am I missing something? 


> MoveTask should only release its query associated locks
> ---
>
> Key: HIVE-13959
> URL: https://issues.apache.org/jira/browse/HIVE-13959
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-13959.patch, HIVE-13959.patch
>
>
> releaseLocks in MoveTask releases all locks under a HiveLockObject's pathNames. 
> But some of the locks under these pathNames might be for other queries and should 
> not be released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13957:

Description: 
The cast is applied to the column in regular IN, but vectorized IN applies it 
to the IN() list.
This can cause queries to produce incorrect results.


  was:The cast is applied to the column in regular IN, but vectorized IN 
applies it to the IN() list


> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list.
> This can cause queries to produce incorrect results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319209#comment-15319209
 ] 

Sergey Shelukhin commented on HIVE-13948:
-

Thanks for the review!

> Incorrect timezone handling in Writable results in wrong dates in queries
> -
>
> Key: HIVE-13948
> URL: https://issues.apache.org/jira/browse/HIVE-13948
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 1.3.0, 1.2.2, 2.1.0, 2.0.2
>
> Attachments: HIVE-13948.patch, HIVE-13948.patch
>
>
> Modifying TestDateWritable to cover 200 years,  adding all timezones to the 
> set, and making it accumulate errors, results in the following set (I bet 
> many are duplicates via different names, but there's enough).
> This ONLY logs errors where YMD date mismatches. There are many more where 
> YMD is the same but the time mismatches, omitted for brevity.
> Queries as simple as "select date(...);" reproduce the error (if Java tz is 
> set to a problematic tz)
> I was investigating some case for a specific date and it seems like the 
> conversion from dates to ms, namely offset calculation that takes the offset 
> at UTC midnight and the offset at arbitrary time derived from that, is 
> completely bogus and it's not clear why it would work.
> I think we either need to derive date from UTC and then create local date 
> from YMD if needed (for many cases e.g. toString for sinks, it would not be 
> needed at all), and/or add a lookup table for timezone used (for popular 
> dates, e.g. 1900-present, it would be 40k-odd entries, although the price of 
> building it is another question).
> Format: tz-expected-actual
> {noformat}
> 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable 
> (TestDateWritable.java:testDaylightSavingsTime(234)) - 
> DATE MISMATCH:
> Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08
> Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40
> Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00
> Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40
> Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44
> Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00
> Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30
> Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56
> America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00
> America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12
> America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00
> America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00
> America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12
> America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00
> America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00
> America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00
> America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 1963-12-14 23:00:00
> America/Argentina/Buenos_Aires: 1964-10-15 01:00:00 != 1964-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1965-10-15 01:00:00 != 1965-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1966-10-15 01:00:00 != 1966-10-14 23:00:00
> America/Arg

[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319216#comment-15319216
 ] 

Sergey Shelukhin commented on HIVE-13957:
-

Well, then we should do this first for correctness, then do other things as a 
follow-up.

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7443) Fix HiveConnection to communicate with Kerberized Hive JDBC server and alternative JDKs

2016-06-07 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319223#comment-15319223
 ] 

Aihua Xu commented on HIVE-7443:


[~gss2002] So do you mean HIVE-13020 also fixes this issue? Or do we need the 
patch from HIVE-7443?

> Fix HiveConnection to communicate with Kerberized Hive JDBC server and 
> alternative JDKs
> ---
>
> Key: HIVE-7443
> URL: https://issues.apache.org/jira/browse/HIVE-7443
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC, Security
>Affects Versions: 0.12.0, 0.13.1
> Environment: Kerberos
> Run Hive server2 and client with IBM JDK7.1
>Reporter: Yu Gao
>Assignee: Yu Gao
> Attachments: HIVE-7443.patch
>
>
> Hive Kerberos authentication has been enabled in my cluster. I ran kinit to 
> initialize the current login user's ticket cache successfully, and then tried 
> to use beeline to connect to Hive Server2, but failed. After I manually added 
> some logging to catch the failure exception, this is what I got that caused 
> the failure:
> beeline>  !connect 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
>  org.apache.hive.jdbc.HiveDriver
> scan complete in 2ms
> Connecting to 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM
> Enter password for 
> jdbc:hive2://:1/default;principal=hive/@REALM.COM:
> 14/07/17 15:12:45 ERROR jdbc.HiveConnection: Failed to open client transport
> javax.security.sasl.SaslException: Failed to open client transport [Caused by 
> java.io.IOException: Could not instantiate SASL transport]
> at 
> org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:78)
> at 
> org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:342)
> at 
> org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:200)
> at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
> at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
> at java.sql.DriverManager.getConnection(DriverManager.java:582)
> at java.sql.DriverManager.getConnection(DriverManager.java:198)
> at 
> org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145)
> at 
> org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:186)
> at org.apache.hive.beeline.Commands.connect(Commands.java:959)
> at org.apache.hive.beeline.Commands.connect(Commands.java:880)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at 
> org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:44)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:801)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:659)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:368)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:351)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: Could not instantiate SASL transport
> at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:177)
> at 
> org.apache.hive.service.auth.KerberosSaslHelper.getKerberosTransport(KerberosSaslHelper.java:74)
> ... 24 more
> Caused by: javax.security.sasl.SaslException: Failure to initialize security 
> context [Caused by org.ietf.jgss.GSSException, major code: 13, minor code: 0
> major string: Invalid credentials
> minor string: SubjectCredFinder: no JAAS Subject]
> at 
> com.ibm.security.sasl.gsskerb.GssKrb5Client.<init>(GssKrb5Client.java:131)
> at 
> com.ibm.security.sasl.gsskerb.FactoryImpl.createSaslClient(FactoryImpl.java:53)
> at javax.security.sasl.Sasl.createSaslClient(Sasl.java:362)
> at 
> org.apache.thrift.transport.TSaslClientTransport.<init>(TSaslClientTransport.java:72)
> at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Client.createClientTransport(HadoopThriftAuthBridge20S.java:169)
> ... 25 more
> Caused by: org.ietf

[jira] [Updated] (HIVE-13948) Incorrect timezone handling in Writable results in wrong dates in queries

2016-06-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13948:

   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.0
   1.2.2
   1.3.0
   Status: Resolved  (was: Patch Available)

Committed to all 4 branches.
[~jcamachorodriguez] fyi, another one went into 2.1. Please let me know if the 
RC is out (it doesn't look like it); I can change it to 2.1.1

> Incorrect timezone handling in Writable results in wrong dates in queries
> -
>
> Key: HIVE-13948
> URL: https://issues.apache.org/jira/browse/HIVE-13948
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 1.3.0, 1.2.2, 2.1.0, 2.0.2
>
> Attachments: HIVE-13948.patch, HIVE-13948.patch
>
>
> Modifying TestDateWritable to cover 200 years,  adding all timezones to the 
> set, and making it accumulate errors, results in the following set (I bet 
> many are duplicates via different names, but there's enough).
> This ONLY logs errors where YMD date mismatches. There are many more where 
> YMD is the same but the time mismatches, omitted for brevity.
> Queries as simple as "select date(...);" reproduce the error (if Java tz is 
> set to a problematic tz)
> I was investigating some case for a specific date and it seems like the 
> conversion from dates to ms, namely offset calculation that takes the offset 
> at UTC midnight and the offset at arbitrary time derived from that, is 
> completely bogus and it's not clear why it would work.
> I think we either need to derive date from UTC and then create local date 
> from YMD if needed (for many cases e.g. toString for sinks, it would not be 
> needed at all), and/or add a lookup table for timezone used (for popular 
> dates, e.g. 1900-present, it would be 40k-odd entries, although the price of 
> building it is another question).
> Format: tz-expected-actual
> {noformat}
> 2016-06-04T18:33:57,499 ERROR [main[]]: io.TestDateWritable 
> (TestDateWritable.java:testDaylightSavingsTime(234)) - 
> DATE MISMATCH:
> Africa/Abidjan: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Accra: 1918-01-01 00:00:52 != 1918-12-31 23:59:08
> Africa/Bamako: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Banjul: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Bissau: 1912-01-01 00:02:20 != 1912-12-31 23:57:40
> Africa/Bissau: 1975-01-01 01:00:00 != 1975-12-31 23:00:00
> Africa/Casablanca: 1913-10-26 00:30:20 != 1913-10-25 23:29:40
> Africa/Ceuta: 1901-01-01 00:21:16 != 1901-12-31 23:38:44
> Africa/Conakry: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Dakar: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/El_Aaiun: 1976-04-14 01:00:00 != 1976-04-13 23:00:00
> Africa/Freetown: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Lome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Monrovia: 1972-05-01 00:44:30 != 1972-04-30 23:15:30
> Africa/Nouakchott: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Ouagadougou: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Sao_Tome: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> Africa/Timbuktu: 1912-01-01 00:16:08 != 1912-12-31 23:43:52
> America/Anguilla: 1912-03-02 00:06:04 != 1912-03-01 23:53:56
> America/Antigua: 1951-01-01 01:00:00 != 1951-12-31 23:00:00
> America/Araguaina: 1914-01-01 00:12:48 != 1914-12-31 23:47:12
> America/Araguaina: 1932-10-03 01:00:00 != 1932-10-02 23:00:00
> America/Araguaina: 1949-12-01 01:00:00 != 1949-11-30 23:00:00
> America/Argentina/Buenos_Aires: 1920-05-01 00:16:48 != 1920-04-30 23:43:12
> America/Argentina/Buenos_Aires: 1930-12-01 01:00:00 != 1930-11-30 23:00:00
> America/Argentina/Buenos_Aires: 1931-10-15 01:00:00 != 1931-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1932-11-01 01:00:00 != 1932-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1933-11-01 01:00:00 != 1933-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1934-11-01 01:00:00 != 1934-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1935-11-01 01:00:00 != 1935-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1936-11-01 01:00:00 != 1936-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1937-11-01 01:00:00 != 1937-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1938-11-01 01:00:00 != 1938-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1939-11-01 01:00:00 != 1939-10-31 23:00:00
> America/Argentina/Buenos_Aires: 1940-07-01 01:00:00 != 1940-06-30 23:00:00
> America/Argentina/Buenos_Aires: 1941-10-15 01:00:00 != 1941-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1943-10-15 01:00:00 != 1943-10-14 23:00:00
> America/Argentina/Buenos_Aires: 1946-10-01 01:00:00 != 1946-09-30 23:00:00
> America/Argentina/Buenos_Aires: 1963-12-15 01:00:00 != 196

[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319192#comment-15319192
 ] 

Gopal V commented on HIVE-13957:


It is not handled for concat, and concat fails to vectorize entire vertices 
because it doesn't wrap a UDFToString() around all args - see HIVE-7160

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13963) vectorization - string arguments may be converted to decimal null

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319190#comment-15319190
 ] 

Sergey Shelukhin commented on HIVE-13963:
-

[~mmccline] [~gopalv] fyi

> vectorization - string arguments may be converted to decimal null
> -
>
> Key: HIVE-13963
> URL: https://issues.apache.org/jira/browse/HIVE-13963
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Matt McCline
>Priority: Critical
>
> See HIVE-13957.
> The default precision and scale for the implicit decimal cast are max,max, 
> i.e. 38,38. Those don't do what the code may assume they do: a decimal(38,38) 
> can only hold values with absolute value below 1, so everything else becomes 
> invalid and precision-scale enforcement automatically converts it to null.
> We need to 
> 1) Validate when this happens in/after the conversion code and bail;
> 2) Or, derive precision and scale from the constants themselves so they all 
> fit, instead;
> 3) Or, derive it from the type of whatever caused the conversion in the first 
> place (e.g. IN column decimal); however, this could be function-specific 
> (e.g. IN just needs equality, BETWEEN would need at least one extra digit, 
> arithmetic, if this ever happens, would need everything, etc.);
> 4) Something else? :)
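A quick illustration of that failure mode in plain HiveQL (behavior as I understand Hive's precision/scale enforcement):

{noformat}
SELECT CAST('0.5' AS DECIMAL(38,38));  -- 0.5: fits, since |v| < 1
SELECT CAST('2'   AS DECIMAL(38,38));  -- NULL: 2 needs an integer digit, but 38,38 leaves none
{noformat}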



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319187#comment-15319187
 ] 

Sergey Shelukhin commented on HIVE-13957:
-

Yeah, that's what I was alluding to above. It seems like it would be a complex 
fix. Where is that handled for concat?

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319177#comment-15319177
 ] 

Gopal V edited comment on HIVE-13957 at 6/7/16 7:27 PM:


bq. derives the common type for the column and constants, and casts both 
columns and constants to that whenever needed.

The cast is implicit right now - the UDF does the conversion behind the scenes. 
Making the cast explicit actually would do the trick instead.

{{id in ('1', '2')}}

could be written in 2 ways.

{{cast(id as string) in (...)}} or {{id in (cast('1' as decimal(17,2)), 
...)}}

Vectorization should have no trouble with either disambiguated forms, but 
cannot deal with the actual lack of type conversions in the plan.

The interesting constraint is that all types inside the IN() clause have to be 
identical, so that the conversion happens exactly once - not like {{ id in (1.0, 
'1.0', '1.00') }}


was (Author: gopalv):
bq. derives the common type for the column and constants, and casts both 
columns and constants to that whenever needed.

The cast is implicit right now - the UDF does the conversion behind the scenes. 
Making the cast explicit would actually do the trick instead.

{{id in ('1', '2')}}

could be written in two ways:

{{cast(id as string) in (...)}} or {{id in (cast('1' as decimal(17,2)), 
...)}}

Vectorization should have no trouble with either disambiguated form, but it 
cannot deal with the actual lack of type conversions in the plan.

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319177#comment-15319177
 ] 

Gopal V commented on HIVE-13957:


bq. derives the common type for the column and constants, and casts both 
columns and constants to that whenever needed.

The cast is implicit right now - the UDF does the conversion behind the scenes. 
Making the cast explicit would actually do the trick instead.

{{id in ('1', '2')}}

could be written in two ways:

{{cast(id as string) in (...)}} or {{id in (cast('1' as decimal(17,2)), 
...)}}

Vectorization should have no trouble with either disambiguated form, but it 
cannot deal with the actual lack of type conversions in the plan.

> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13957) vectorized IN is inconsistent with non-vectorized (at least for decimal in (string))

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319171#comment-15319171
 ] 

Sergey Shelukhin commented on HIVE-13957:
-

Can you elaborate? The problem is the difference between the approaches, not 
the type of the cast per se.

Normal IN, at some point (where exactly doesn't really matter), derives the 
common type for the column and constants, and casts both columns and constants 
to that whenever needed.
Vectorized IN, by contrast, always tries to convert the constants to the column 
type, the reason being (I suppose) that the specializations for IN all have a 
particular column type in mind. I am not actually very familiar with these, or 
with whether it would be easy to incorporate a cast; I assume the cast of the 
column would need to come earlier than the specialized IN (i.e. the specialized 
IN should already be able to consume values of the correct type straight out of 
the VRB), which would require the vectorizer to modify the plan above the IN. Or 
something like that.

We could do that; however, as far as I can see, it's not the solution we want, 
for the following reasons.
First, in the decimal-string case, this issue can produce incorrect results, so 
we want a simple fix for that, which the above isn't.
From the long-term perspective, I'd say we need to prohibit implicit casts in 
this case (I opened a separate JIRA) AND/OR change the non-vectorized pipeline 
rather than the vectorized one, because casting the decimal column to string in 
this case (what the non-vectorized IN does) is not the intuitively logical thing 
for the user and may produce unexpected results.

With the latter in mind, we /could/ fix the proximate issue in the vectorized 
code (the cast to decimal(38,38) that ends up converting all reasonable values 
to null), e.g. by constraining the precision and scale to the column type 
(potentially +2/+1 for NOT, although the enforcement will probably convert the 
values that don't fit to NULL), assuming the values are trimmed, since more 
should never be needed. But that's still inconsistent with normal IN, and we 
should probably do it later.
Actually, come to think of it, this might also be broken for other UDFs, where 
constraining is not as easy or is at least different (e.g. BETWEEN needs more 
than strict equality, and with arithmetic ops, if this problem applies, the only 
way would be to derive the maximum values from the value list). I can also file 
a separate JIRA for that...
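To make the divergence concrete, a hedged sketch (the table and column type here 
are assumptions for illustration, following the issue summary):

{code}
-- assuming a table t with a column d of type DECIMAL(10,2) containing the value 1
SELECT * FROM t WHERE d IN ('1');
-- non-vectorized: the cast is applied to the column, so the comparison is done as strings
-- vectorized: '1' is cast to the default DECIMAL(38,38), which overflows to NULL,
--             so the row never matches
{code}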


> vectorized IN is inconsistent with non-vectorized (at least for decimal in 
> (string))
> 
>
> Key: HIVE-13957
> URL: https://issues.apache.org/jira/browse/HIVE-13957
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13957.patch, HIVE-13957.patch
>
>
> The cast is applied to the column in regular IN, but vectorized IN applies it 
> to the IN() list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13962) create_func1 test fails with NPE

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319126#comment-15319126
 ] 

Sergey Shelukhin commented on HIVE-13962:
-

[~jcamachorodriguez] [~pxiong] can you take a look? Started happening recently. 

> create_func1 test fails with NPE
> 
>
> Key: HIVE-13962
> URL: https://issues.apache.org/jira/browse/HIVE-13962
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> {noformat}
> 2016-06-07T11:19:50,843 ERROR [82a55b04-c058-475d-8ba4-d1a3007eb213 main[]]: 
> ql.Driver (SessionState.java:printError(1055)) - FAILED: NullPointerException 
> null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:236)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:1072)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1317)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:219)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:163)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11182)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11137)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:2996)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:3158)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:939)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:893)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:969)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:712)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:280)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10755)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:239)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1143)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1117)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:120)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_func1(TestCliDriver.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$

[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319106#comment-15319106
 ] 

Sergey Shelukhin commented on HIVE-13931:
-

Figuring out why the exceptions happen in the first place would be my 
preference :P

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.2.patch, HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13932) Hive SMB Map Join with small set of LIMIT failed with NPE

2016-06-07 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-13932:

Fix Version/s: (was: 2.1.0)
   2.2.0

> Hive SMB Map Join with small set of LIMIT failed with NPE
> -
>
> Key: HIVE-13932
> URL: https://issues.apache.org/jira/browse/HIVE-13932
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Fix For: 1.3.0, 2.2.0
>
> Attachments: HIVE-13932.1.patch
>
>
> 1) prepare sample data:
> a=1
> while [[ $a -lt 100 ]]; do echo $a ; let a=$a+1; done > data
> 2) prepare source hive table:
> CREATE TABLE `s`(`c` string);
> load data local inpath 'data' into table s;
> 3) prepare the bucketed table:
> set hive.enforce.bucketing=true;
> set hive.enforce.sorting=true;
> CREATE TABLE `t`(`c` string) CLUSTERED BY (c) SORTED BY (c) INTO 5 BUCKETS;
> insert into t select * from s;
> 4) reproduce this issue:
> SET hive.auto.convert.sortmerge.join = true;
> SET hive.auto.convert.sortmerge.join.bigtable.selection.policy = 
> org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSelectorForAutoSMJ;
> SET hive.auto.convert.sortmerge.join.noconditionaltask = true;
> SET hive.optimize.bucketmapjoin = true;
> SET hive.optimize.bucketmapjoin.sortedmerge = true;
> select * from t join t t1 on t.c=t1.c limit 1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP

2016-06-07 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319087#comment-15319087
 ] 

Sushanth Sowmyan commented on HIVE-13931:
-

I was about to state that the above test failures had nothing to do with this 
patch, but I see that some golden files now have additional entries, as follows:

{noformat}
19,21d18
<   at 
com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
<   at 
com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
< #### A masked pattern was here ####
25,27d21
<   at 
com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
<   at 
com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
< #### A masked pattern was here ####
{noformat}

This is annoying. It means that either we leak the Hikari exception signatures 
into the q.out files (definitely not to my liking), or we add an additional 
masking pattern for them (not to my liking either, but the better of the two).

Oddly, though, these weren't reported with the last test run; I will verify with 
local runs.

I will look into this further and update as needed. [~sershe], 
thoughts/preferences on approach?

> Add support for HikariCP and replace BoneCP usage with HikariCP
> ---
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-13931.2.patch, HIVE-13931.patch
>
>
> Currently, we use BoneCP as our primary connection pooling mechanism 
> (overridable by users). However, BoneCP is no longer being actively 
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary 
> usage of BoneCP with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13953) Issues in HiveLockObject equals method

2016-06-07 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13953:
---
Fix Version/s: (was: 2.1.1)
   (was: 2.2.0)
   2.1.0

> Issues in HiveLockObject equals method
> --
>
> Key: HIVE-13953
> URL: https://issues.apache.org/jira/browse/HIVE-13953
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13953.patch
>
>
> There are two issues in equals method in HiveLockObject:
> {code}
>   @Override
>   public boolean equals(Object o) {
> if (!(o instanceof HiveLockObject)) {
>   return false;
> }
> HiveLockObject tgt = (HiveLockObject) o;
> return Arrays.equals(pathNames, tgt.pathNames) &&
> data == null ? tgt.getData() == null :
> tgt.getData() != null && data.equals(tgt.getData());
>   }
> {code}
> 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same 
> path in HiveLockObject, since in current Hive the pathname components can be 
> stored in two ways: taking a dynamic partition path db/tbl/part1/part2 as an 
> example, it might be stored in pathNames either as an array of four elements 
> (db, tbl, part1, part2) or as an array with the single element 
> db/tbl/part1/part2. It is safer to compare the pathNames using 
> StringUtils.equals(this.getName(), tgt.getName()).
> 2. The comparison logic is not right: the ternary operator binds more loosely 
> than &&, so the expression above parses as (Arrays.equals(...) && data == null) 
> ? ... : ..., which skips the path comparison whenever data is non-null and can 
> even throw a NullPointerException when data is null. The corrected version:
> {code}
>   @Override
>   public boolean equals(Object o) {
> if (!(o instanceof HiveLockObject)) {
>   return false;
> }
> HiveLockObject tgt = (HiveLockObject) o;
> return StringUtils.equals(this.getName(), tgt.getName()) &&
> (data == null ? tgt.getData() == null : data.equals(tgt.getData()));
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13953) Issues in HiveLockObject equals method

2016-06-07 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-13953:
---
   Resolution: Fixed
Fix Version/s: 2.1.1
   2.2.0
   Status: Resolved  (was: Patch Available)

Patch is committed to 2.2.0 and 2.1.1. Thanks [~ychena] for review.

> Issues in HiveLockObject equals method
> --
>
> Key: HIVE-13953
> URL: https://issues.apache.org/jira/browse/HIVE-13953
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.2.0, 2.1.1
>
> Attachments: HIVE-13953.patch
>
>
> There are two issues in equals method in HiveLockObject:
> {code}
>   @Override
>   public boolean equals(Object o) {
> if (!(o instanceof HiveLockObject)) {
>   return false;
> }
> HiveLockObject tgt = (HiveLockObject) o;
> return Arrays.equals(pathNames, tgt.pathNames) &&
> data == null ? tgt.getData() == null :
> tgt.getData() != null && data.equals(tgt.getData());
>   }
> {code}
> 1. Arrays.equals(pathNames, tgt.pathNames) might return false for the same 
> path in HiveLockObject, since in current Hive the pathname components can be 
> stored in two ways: taking a dynamic partition path db/tbl/part1/part2 as an 
> example, it might be stored in pathNames either as an array of four elements 
> (db, tbl, part1, part2) or as an array with the single element 
> db/tbl/part1/part2. It is safer to compare the pathNames using 
> StringUtils.equals(this.getName(), tgt.getName()).
> 2. The comparison logic is not right: the ternary operator binds more loosely 
> than &&, so the expression above parses as (Arrays.equals(...) && data == null) 
> ? ... : ..., which skips the path comparison whenever data is non-null and can 
> even throw a NullPointerException when data is null. The corrected version:
> {code}
>   @Override
>   public boolean equals(Object o) {
> if (!(o instanceof HiveLockObject)) {
>   return false;
> }
> HiveLockObject tgt = (HiveLockObject) o;
> return StringUtils.equals(this.getName(), tgt.getName()) &&
> (data == null ? tgt.getData() == null : data.equals(tgt.getData()));
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

