[jira] [Commented] (HIVE-13439) JDBC: provide a way to retrieve GUID to query Yarn ATS

2016-04-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233361#comment-15233361
 ] 

Hive QA commented on HIVE-13439:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797607/HIVE-13439.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9967 tests executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-vector_decimal_2.q-schema_evol_text_fetchwork_table.q-constprog_semijoin.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7516/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7516/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7516/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797607 - PreCommit-HIVE-TRUNK-Build

> JDBC: provide a way to retrieve GUID to query Yarn ATS
> --
>
> Key: HIVE-13439
> URL: https://issues.apache.org/jira/browse/HIVE-13439
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-13439.1.patch, HIVE-13439.2.patch
>
>
> HIVE-9673 added support for passing base64-encoded operation handles to ATS. 
> We should add a method on the client side to retrieve that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-9660:
---
Attachment: HIVE-9660.08.patch

Fixing the tests so they could run over the weekend.
Will address the rest of the feedback later.

> store end offset of compressed data for RG in RowIndex in ORC
> -
>
> Key: HIVE-9660
> URL: https://issues.apache.org/jira/browse/HIVE-9660
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, 
> HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, 
> HIVE-9660.06.patch, HIVE-9660.07.patch, HIVE-9660.07.patch, 
> HIVE-9660.08.patch, HIVE-9660.patch, HIVE-9660.patch
>
>
> Right now the end offset is estimated, which in some cases results in tons of 
> extra data being read.
> We can add a separate array to RowIndex (positions_v2?) that stores the number 
> of compressed buffers for each RG, or the end offset, or something similar, to 
> remove this estimation magic.





[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233326#comment-15233326
 ] 

Chaoyu Tang commented on HIVE-13401:


Yes, committed it again. Thanks [~sershe]. 

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in a kerberized cluster but with another SASL 
> authentication mechanism (e.g. LDAP) enabled, it fails kerberos/delegation 
> token authentication. This is because the HS2 server uses 
> TSetIpAddressProcessor when other authentication is enabled. 





[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233319#comment-15233319
 ] 

Chaoyu Tang commented on HIVE-13401:


Apologies, my mistake: I committed the patch HIVE-13401-branch2.0.1.patch 
only partially :-(.
[~szehon], could you help review and double-check? I think it should be in 
2.0.1 since it fixes the issue from HIVE-10115; without it, the feature 
provided by HIVE-10115 does not work at all.

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in a kerberized cluster but with another SASL 
> authentication mechanism (e.g. LDAP) enabled, it fails kerberos/delegation 
> token authentication. This is because the HS2 server uses 
> TSetIpAddressProcessor when other authentication is enabled. 





[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233317#comment-15233317
 ] 

Sergey Shelukhin commented on HIVE-13401:
-

Can you commit it again then, with the build working? ;)

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in a kerberized cluster but with another SASL 
> authentication mechanism (e.g. LDAP) enabled, it fails kerberos/delegation 
> token authentication. This is because the HS2 server uses 
> TSetIpAddressProcessor when other authentication is enabled. 





[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233316#comment-15233316
 ] 

Prasanth Jayachandran commented on HIVE-13223:
--

For text formats this makes sense, as we don't have to read any footer/metadata 
to read rows out of them; zero-length files can be handled just by returning 
false in hasNext(). But I don't think it's valid for row-columnar formats that 
read MAGIC bytes, footers, etc. 
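
The distinction above can be sketched as follows; the class and method names are hypothetical, not Hive's actual reader classes, and this only illustrates why a text reader can treat a zero-length split as empty:

```java
// Hypothetical sketch: a reader over a text split that treats a
// zero-length file as simply having no rows, rather than failing.
// This is safe for text because there is no footer/metadata to read
// first; columnar formats (ORC, RCFile) must still validate MAGIC
// bytes and a footer before any rows can come out.
class EmptyAwareLineReader {
    private final long splitLength;
    private long bytesRead = 0;

    EmptyAwareLineReader(long splitLength) {
        this.splitLength = splitLength;
    }

    // Returns false immediately for a 0-length split, so callers
    // simply see an empty result set.
    boolean hasNext() {
        return bytesRead < splitLength;
    }

    String next() {
        if (!hasNext()) {
            throw new IllegalStateException("no more rows");
        }
        bytesRead = splitLength; // stand-in for reading a real line
        return "row";
    }
}
```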

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: (was: HIVE-12959.5.patch)

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch
>
>
> When there are no LLAP daemons running, the task scheduler should have a 
> timeout to fail the query instead of waiting forever. 





[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233309#comment-15233309
 ] 

Chaoyu Tang commented on HIVE-13401:


[~sershe] It actually fixes an issue from HIVE-10115, which is already in 2.0.1. 
I resolved some minor conflicts in its tests, which used some test methods not 
yet available in 2.0.1. So I think it should be OK in 2.0.1, given that without 
it the feature provided in HIVE-10115 does not work at all.

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in kerberos cluster but with other Sasl authentication 
> (e.g. LDAP) enabled, it fails in kerberos/delegation token authentication. It 
> is because the HS2 server uses the TSetIpAddressProcess when other 
> authentication is enabled. 





[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: HIVE-12959.5.patch

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch, HIVE-12959.5.patch
>
>
> When there are no LLAP daemons running, the task scheduler should have a 
> timeout to fail the query instead of waiting forever. 





[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: (was: HIVE-12959.5.patch)

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch, HIVE-12959.5.patch
>
>
> When there are no LLAP daemons running, the task scheduler should have a 
> timeout to fail the query instead of waiting forever. 





[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: HIVE-12959.5.patch

[~sseth] You should have told me about the awesome TimerTask before :) It made 
the code much simpler. Replaced the original implementation with 
ScheduledExecutorService (Effective Java recommends using it over TimerTask).
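
A minimal sketch of the approach described, with illustrative names rather than the actual patch code: a one-shot timeout is armed via ScheduledExecutorService when no nodes are alive and disarmed when a node re-registers:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the scheduler-timeout idea: when the last
// live node disappears, schedule a one-shot task that marks the
// scheduler as timed out; if a node comes back first, cancel it.
class NoNodeTimeoutSketch {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean timedOut = new AtomicBoolean(false);
    private volatile ScheduledFuture<?> pendingTimeout;

    // Called when the node count drops to zero. Uses the executor
    // framework rather than java.util.TimerTask, as Effective Java
    // recommends.
    void onAllNodesDead(long timeoutMs) {
        pendingTimeout = timer.schedule(
            () -> timedOut.set(true), timeoutMs, TimeUnit.MILLISECONDS);
    }

    // Called when a node (re-)registers; disarms the pending timeout.
    void onNodeAlive() {
        ScheduledFuture<?> f = pendingTimeout;
        if (f != null) {
            f.cancel(false);
        }
    }

    // A real scheduler would fail pending queries when this flips.
    boolean hasTimedOut() {
        return timedOut.get();
    }

    void shutdown() {
        timer.shutdownNow();
    }
}
```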

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch, HIVE-12959.5.patch
>
>
> When there are no LLAP daemons running, the task scheduler should have a 
> timeout to fail the query instead of waiting forever. 





[jira] [Commented] (HIVE-6535) JDBC: async wait should happen during fetch for results

2016-04-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233302#comment-15233302
 ] 

Hive QA commented on HIVE-6535:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797833/HIVE-6535.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9981 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7515/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7515/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7515/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797833 - PreCommit-HIVE-TRUNK-Build

> JDBC: async wait should happen during fetch for results
> ---
>
> Key: HIVE-6535
> URL: https://issues.apache.org/jira/browse/HIVE-6535
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0, 1.2.1, 2.0.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-6535.1.patch, HIVE-6535.2.patch, HIVE-6535.3.patch
>
>
> The Hive JDBC client waits for query completion during the execute() call. It 
> would be better for the driver to block for completion when the results are 
> being fetched.
> That way, an application using the Hive JDBC driver can do other tasks while 
> asynchronous query execution is happening, until it needs to fetch the result 
> set.
>  
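
The behavior proposed above can be sketched roughly as follows; all names are illustrative stand-ins, not the actual driver internals:

```java
// Illustrative sketch only, not the real JDBC driver code: execute()
// merely submits the query, and the wait-for-completion polling
// happens at fetch time, so the application can do other work between
// submitting the query and reading its results.
class AsyncQuerySketch {
    enum State { RUNNING, FINISHED }
    private State state = State.RUNNING;
    private int pollCount = 0;

    // Submit the query and return immediately; no blocking here.
    void execute() { }

    // Stand-in for an operation-status poll against the server;
    // here it "finishes" on the third poll.
    private State pollStatus() {
        return ++pollCount >= 3 ? State.FINISHED : State.RUNNING;
    }

    // Block for completion only when results are actually requested.
    java.util.List<String> fetchResults() {
        while (state != State.FINISHED) {
            state = pollStatus();
            try {
                Thread.sleep(10); // back off between status polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return java.util.Arrays.asList("row1", "row2");
    }
}
```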





[jira] [Updated] (HIVE-13445) LLAP: token should encode application and cluster ids

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13445:

Attachment: HIVE-13445.patch

The patch. I think we'll have HS2 generate the app ID for external clients. I am 
not sure we'll even have the LLAP-based API if HS2 gets the token directly 
(some other JIRA); for now, just pass null there.

> LLAP: token should encode application and cluster ids
> -
>
> Key: HIVE-13445
> URL: https://issues.apache.org/jira/browse/HIVE-13445
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13445.patch
>
>






[jira] [Resolved] (HIVE-13468) branch-2 build is broken

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-13468.
-
   Resolution: Fixed
Fix Version/s: 2.0.1

> branch-2 build is broken
> 
>
> Key: HIVE-13468
> URL: https://issues.apache.org/jira/browse/HIVE-13468
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.1
>
>
> HIVE-13401 backport





[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233294#comment-15233294
 ] 

Sergey Shelukhin commented on HIVE-13401:
-

Looks like this depends on other patches. I will remove it from 2.0.x for now. 

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in a kerberized cluster but with another SASL 
> authentication mechanism (e.g. LDAP) enabled, it fails kerberos/delegation 
> token authentication. This is because the HS2 server uses 
> TSetIpAddressProcessor when other authentication is enabled. 





[jira] [Updated] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13401:

Fix Version/s: (was: 2.0.1)

> Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token 
> authentication
> 
>
> Key: HIVE-13401
> URL: https://issues.apache.org/jira/browse/HIVE-13401
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 2.1.0
>
> Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch
>
>
> When HS2 is running in a kerberized cluster but with another SASL 
> authentication mechanism (e.g. LDAP) enabled, it fails kerberos/delegation 
> token authentication. This is because the HS2 server uses 
> TSetIpAddressProcessor when other authentication is enabled. 





[jira] [Commented] (HIVE-13445) LLAP: token should encode application and cluster ids

2016-04-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233257#comment-15233257
 ] 

Sergey Shelukhin commented on HIVE-13445:
-

Actually, this patch is incomplete.

> LLAP: token should encode application and cluster ids
> -
>
> Key: HIVE-13445
> URL: https://issues.apache.org/jira/browse/HIVE-13445
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Updated] (HIVE-13445) LLAP: token should encode application and cluster ids

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13445:

Attachment: (was: HIVE-13445.patch)

> LLAP: token should encode application and cluster ids
> -
>
> Key: HIVE-13445
> URL: https://issues.apache.org/jira/browse/HIVE-13445
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Updated] (HIVE-13445) LLAP: token should encode application and cluster ids

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13445:

Attachment: HIVE-13445.patch

The patch. I need to test it in the cluster.

> LLAP: token should encode application and cluster ids
> -
>
> Key: HIVE-13445
> URL: https://issues.apache.org/jira/browse/HIVE-13445
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13445.patch
>
>






[jira] [Updated] (HIVE-13445) LLAP: token should encode application and cluster ids

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13445:

Target Version/s:   (was: 2.0.1)
  Status: Patch Available  (was: Open)

[~sseth] can you review?

> LLAP: token should encode application and cluster ids
> -
>
> Key: HIVE-13445
> URL: https://issues.apache.org/jira/browse/HIVE-13445
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13445.patch
>
>






[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port

2016-04-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233245#comment-15233245
 ] 

Siddharth Seth commented on HIVE-13437:
---

Once the HiveServer2 web UI configuration is documented, we can add a note 
there saying that 0 is a valid port and will bring up the UI on a random port.
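
The underlying mechanism can be shown with plain java.net.ServerSocket as a simplified stand-in (Hive's httpserver wraps Jetty, where the analogous lookup happens on the connector after the server starts):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Binding to port 0 asks the OS for an ephemeral port; the server must
// then report the port it actually received, not the configured 0.
class DynamicPortSketch {
    static int bindDynamic() {
        try (ServerSocket server = new ServerSocket(0)) {
            // getLocalPort() returns the real assigned port.
            return server.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```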

> httpserver getPort does not return the actual port when attempting to use a 
> dynamic port
> 
>
> Key: HIVE-13437
> URL: https://issues.apache.org/jira/browse/HIVE-13437
> Project: Hive
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13437.01.patch, HIVE-13437.02.patch
>
>






[jira] [Commented] (HIVE-13398) LLAP: Simple /status and /peers web services

2016-04-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233243#comment-15233243
 ] 

Siddharth Seth commented on HIVE-13398:
---

It would be useful to document this for LLAP, along with the other information, 
like the metrics published by the UI. Do we have this documented anywhere at 
the moment?

> LLAP: Simple /status and /peers web services
> 
>
> Key: HIVE-13398
> URL: https://issues.apache.org/jira/browse/HIVE-13398
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.1.0
>
> Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch
>
>
> MiniLLAP doesn't have a UI service, so this has no easy tests.
> {code}
> curl localhost:15002/status
> {
>   "status" : "STARTED",
>   "uptime" : 139093,
>   "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by 
> gopal source checksum a9caa5faad5906d5139c33619f1368bb"
> }
> {code}
> {code}
> curl localhost:15002/peers
> {
>   "dynamic" : true,
>   "identity" : "718264f1-722e-40f1-8265-ac25587bf336",
>   "peers" : [ 
>  {
> "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04",
> "host" : "sandbox121.hortonworks.com",
> "management-port" : 15004,
> "rpc-port" : 15001,
> "shuffle-port" : 15551,
> "resource" : {
>   "vcores" : 24,
>   "memory" : 128000
> },
> "host" : "sandbox121.hortonworks.com"
>   }, 
> ]
> }
> {code}





[jira] [Updated] (HIVE-13418) HiveServer2 HTTP mode should support X-Forwarded-Host header for authorization/audits

2016-04-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-13418:
-
Status: Patch Available  (was: Open)

> HiveServer2 HTTP mode should support X-Forwarded-Host header for 
> authorization/audits
> -
>
> Key: HIVE-13418
> URL: https://issues.apache.org/jira/browse/HIVE-13418
> Project: Hive
>  Issue Type: New Feature
>  Components: Authorization, HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13418.1.patch
>
>
> Apache Knox acts as a proxy for requests coming from end users. In these 
> cases, the IP address that HiveServer2 passes to the authorization/audit 
> plugins via the HiveAuthzContext object is only the IP address of the proxy, 
> and not that of the end user.
> For auditing purposes, the IP address of the end user and any proxies in 
> between are useful.
> HiveServer2 should pass the information from  'X-Forwarded-Host' header to 
> the HiveAuthorizer plugins.
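
A rough sketch of the parsing involved, with a hypothetical helper name (the actual patch wires this through HiveServer2's Thrift HTTP servlet): a forwarded-address header carries a comma-separated proxy chain, with the end user's address first, followed by each intermediate proxy:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical helper: split a forwarded-address header value such as
// "10.0.0.5, 192.168.1.1" into the ordered chain an authorization or
// audit plugin would want to record (end user first, then proxies).
class ForwardedAddresses {
    static List<String> parse(String headerValue) {
        if (headerValue == null || headerValue.trim().isEmpty()) {
            return Collections.emptyList();
        }
        String[] parts = headerValue.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim(); // drop the space after each comma
        }
        return Arrays.asList(parts);
    }
}
```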





[jira] [Updated] (HIVE-13418) HiveServer2 HTTP mode should support X-Forwarded-Host header for authorization/audits

2016-04-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-13418:
-
Description: 
Apache Knox acts as a proxy for requests coming from end users. In these 
cases, the IP address that HiveServer2 passes to the authorization/audit 
plugins via the HiveAuthzContext object is only the IP address of the proxy, 
and not that of the end user.

For auditing purposes, the IP address of the end user and any proxies in 
between are useful.
HiveServer2 should pass the information from  'X-Forwarded-Host' header to the 
HiveAuthorizer plugins.


  was:
Apache Knox acts as a proxy for requests coming from the end users. In these 
cases, the IP address that HiveServer2 passes to the authorization/audit 
plugins via the HiveAuthzContext object is the IP address of the proxy, and not 
the end user.

For auditing and authorization purposes, the IP address of the end user is more 
meaningful.
HiveServer2 should pass the information from  'X-Forwarded-Host' header to the 
HiveAuthorizer plugins if the request is coming from a trusted proxy.



> HiveServer2 HTTP mode should support X-Forwarded-Host header for 
> authorization/audits
> -
>
> Key: HIVE-13418
> URL: https://issues.apache.org/jira/browse/HIVE-13418
> Project: Hive
>  Issue Type: New Feature
>  Components: Authorization, HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13418.1.patch
>
>
> Apache Knox acts as a proxy for requests coming from end users. In these 
> cases, the IP address that HiveServer2 passes to the authorization/audit 
> plugins via the HiveAuthzContext object is only the IP address of the proxy, 
> and not that of the end user.
> For auditing purposes, the IP address of the end user and any proxies in 
> between are useful.
> HiveServer2 should pass the information from  'X-Forwarded-Host' header to 
> the HiveAuthorizer plugins.





[jira] [Updated] (HIVE-13418) HiveServer2 HTTP mode should support X-Forwarded-Host header for authorization/audits

2016-04-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-13418:
-
Attachment: HIVE-13418.1.patch

> HiveServer2 HTTP mode should support X-Forwarded-Host header for 
> authorization/audits
> -
>
> Key: HIVE-13418
> URL: https://issues.apache.org/jira/browse/HIVE-13418
> Project: Hive
>  Issue Type: New Feature
>  Components: Authorization, HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-13418.1.patch
>
>
> Apache Knox acts as a proxy for requests coming from end users. In these 
> cases, the IP address that HiveServer2 passes to the authorization/audit 
> plugins via the HiveAuthzContext object is only the IP address of the proxy, 
> and not that of the end user.
> For auditing purposes, the IP address of the end user and any proxies in 
> between are useful.
> HiveServer2 should pass the information from  'X-Forwarded-Host' header to 
> the HiveAuthorizer plugins.





[jira] [Commented] (HIVE-13418) HiveServer2 HTTP mode should support X-Forwarded-Host header for authorization/audits

2016-04-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233229#comment-15233229
 ] 

ASF GitHub Bot commented on HIVE-13418:
---

GitHub user thejasmn opened a pull request:

https://github.com/apache/hive/pull/69

HIVE-13418



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/thejasmn/hive HIVE-13418

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/69.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #69


commit 9afad78243c0eeedd7571ac7961f177ebf20e771
Author: Thejas Nair 
Date:   2016-04-08T06:35:25Z

set x-forwarded-for

commit 400406a0765253f14e570061375923431d7f304c
Author: Thejas Nair 
Date:   2016-04-08T06:38:02Z

set forwarded address in HiveAuthzContext

commit ef438d7498cac59a665b92c5d3e5fffb6bbdac19
Author: Thejas Nair 
Date:   2016-04-08T21:23:55Z

add test in TestThriftHttpCLIService

commit a475bf1d077acf7335f4efcbcdd6bce7e75017fb
Author: Thejas Nair 
Date:   2016-04-08T21:47:34Z

rename impls of ThriftCLIServiceTest

commit eb6982c9f013f02df26ff7ea8d78e658224c4f95
Author: Thejas Nair 
Date:   2016-04-08T21:48:38Z

reorganize   ThriftCLIServiceTest tests

commit a3cac6ef692dcd1c89405e0cead4a0d949613122
Author: Thejas Nair 
Date:   2016-04-08T21:53:47Z

rename test class

commit e31cd18d7fd9be2ba0373949fa2e39d19a4aa943
Author: Thejas Nair 
Date:   2016-04-08T21:53:58Z

new classname

commit c48a21fab62f11f17213f2680cd414e69e155398
Author: Thejas Nair 
Date:   2016-04-09T00:17:55Z

test now checks the forwarded ips passed on

commit 131cd7208cc8e244a312253d63a250d7541f0a90
Author: Thejas Nair 
Date:   2016-04-09T00:19:04Z

fix test imports

commit ac227e05d931a906987a53cfcccf31b37fa8b95e
Author: Thejas Nair 
Date:   2016-04-09T00:40:07Z

fix test compile, post rebase




> HiveServer2 HTTP mode should support X-Forwarded-Host header for 
> authorization/audits
> -
>
> Key: HIVE-13418
> URL: https://issues.apache.org/jira/browse/HIVE-13418
> Project: Hive
>  Issue Type: New Feature
>  Components: Authorization, HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>
> Apache Knox acts as a proxy for requests coming from the end users. In these 
> cases, the IP address that HiveServer2 passes to the authorization/audit 
> plugins via the HiveAuthzContext object is the IP address of the proxy, and 
> not the end user.
> For auditing and authorization purposes, the IP address of the end user is 
> more meaningful.
> HiveServer2 should pass the information from  'X-Forwarded-Host' header to 
> the HiveAuthorizer plugins if the request is coming from a trusted proxy.
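The trusted-proxy rule described above can be illustrated with a small sketch (not the actual HiveServer2 code; the function name and argument shapes are hypothetical). The leftmost entry of an X-Forwarded-For style header is the original client, and the header is only believed when the direct peer is a known proxy:

```python
def client_ip(forwarded_for, remote_addr, trusted_proxies):
    """Resolve the effective client IP for auditing/authorization.

    forwarded_for:   value of the X-Forwarded-For header (or None),
                     e.g. "203.0.113.7, 10.0.0.5" -- leftmost is the client.
    remote_addr:     peer address of the TCP connection (the proxy, if any).
    trusted_proxies: set of proxy addresses we are willing to believe.
    """
    if not forwarded_for or remote_addr not in trusted_proxies:
        # Header absent, or sent by an untrusted peer: ignore it.
        return remote_addr
    hops = [h.strip() for h in forwarded_for.split(",")]
    return hops[0]  # original client as reported by the first proxy
```

For example, `client_ip("203.0.113.7, 10.0.0.5", "10.0.0.5", {"10.0.0.5"})` yields the end user's address rather than the proxy's.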



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233218#comment-15233218
 ] 

Szehon Ho commented on HIVE-13223:
--

Thanks for the reply, but I guess it will not help in this case?

bq. I can add a check to ORC reader to throw exception when 0 length files are 
encountered.

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Updated] (HIVE-6535) JDBC: async wait should happen during fetch for results

2016-04-08 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6535:
---
Attachment: HIVE-6535.3.patch

> JDBC: async wait should happen during fetch for results
> ---
>
> Key: HIVE-6535
> URL: https://issues.apache.org/jira/browse/HIVE-6535
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0, 1.2.1, 2.0.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-6535.1.patch, HIVE-6535.2.patch, HIVE-6535.3.patch
>
>
> The Hive JDBC client waits for query completion during the execute() call. It 
> would be better to block in the JDBC driver for completion when the results 
> are being fetched.
> This way the application using the Hive JDBC driver can do other tasks while 
> asynchronous query execution is happening, until it needs to fetch the result 
> set.
>  
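The proposed behavior — execute() returning immediately and the wait moving to fetch time — can be modeled generically (a toy sketch of the pattern, not the Hive JDBC implementation):

```python
from concurrent.futures import ThreadPoolExecutor

class AsyncStatement:
    """Toy model: execute() submits work and returns immediately;
    fetch() blocks only when the result is actually needed."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._future = None

    def execute(self, work):
        # Non-blocking: the query runs in the background.
        self._future = self._pool.submit(work)

    def fetch(self):
        # The wait for completion happens here, at fetch time.
        return self._future.result()
```

Between execute() and fetch() the caller is free to do other work, which is the benefit the issue describes.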





[jira] [Commented] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233198#comment-15233198
 ] 

Gunther Hagleitner commented on HIVE-13467:
---

fyi [~sseth]

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-13467.1.patch, screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Updated] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-13467:
--
Attachment: HIVE-13467.1.patch

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-13467.1.patch, screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Commented] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233192#comment-15233192
 ] 

Gunther Hagleitner commented on HIVE-13467:
---

Related to HIVE-13413. Needs that code to fetch info about the cluster.

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Commented] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233189#comment-15233189
 ] 

Gunther Hagleitner commented on HIVE-13467:
---

Looks like this right now:
!screen.png|width=800!

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Updated] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-13467:
--
Attachment: (was: Screen Shot 2016-04-08 at 4.57.43 PM.png)

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Updated] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-13467:
--
Attachment: screen.png

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: screen.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Updated] (HIVE-13467) Show llap info on hs2 ui when available

2016-04-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-13467:
--
Attachment: Screen Shot 2016-04-08 at 4.57.43 PM.png

> Show llap info on hs2 ui when available
> ---
>
> Key: HIVE-13467
> URL: https://issues.apache.org/jira/browse/HIVE-13467
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: Screen Shot 2016-04-08 at 4.57.43 PM.png
>
>
> When llap is on and hs2 is configured with access to an llap cluster, HS2 UI 
> should show some status of the daemons and provide a mechanism to click 
> through to their respective UIs.





[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: HIVE-12959.3.patch

Thanks [~sseth] for the reviews! I will see if I can add some unit tests for 
these in the next patch. For now I have addressed your other comments. 

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch
>
>
> When there are no llap daemons running task scheduler should have a timeout 
> to fail the query instead of waiting forever. 





[jira] [Commented] (HIVE-13398) LLAP: Simple /status and /peers web services

2016-04-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233133#comment-15233133
 ] 

Lefty Leverenz commented on HIVE-13398:
---

Does this need to be documented in the wiki (or the design doc attached to 
HIVE-7926)?

> LLAP: Simple /status and /peers web services
> 
>
> Key: HIVE-13398
> URL: https://issues.apache.org/jira/browse/HIVE-13398
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.1.0
>
> Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch
>
>
> MiniLLAP doesn't have a UI service, so this has no easy tests.
> {code}
> curl localhost:15002/status
> {
>   "status" : "STARTED",
>   "uptime" : 139093,
>   "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by 
> gopal source checksum a9caa5faad5906d5139c33619f1368bb"
> }
> {code}
> {code}
> curl localhost:15002/peers
> {
>   "dynamic" : true,
>   "identity" : "718264f1-722e-40f1-8265-ac25587bf336",
>   "peers" : [ 
>  {
> "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04",
> "host" : "sandbox121.hortonworks.com",
> "management-port" : 15004,
> "rpc-port" : 15001,
> "shuffle-port" : 15551,
> "resource" : {
>   "vcores" : 24,
>   "memory" : 128000
> },
> "host" : "sandbox121.hortonworks.com"
>   }, 
> ]
> }
> {code}





[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-04-08 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233129#comment-15233129
 ] 

Matt McCline commented on HIVE-9862:


Here is timestamp math:

{code}
Long.MAX_VALUE = 9,223,372,036,854,775,807

Divide by NanosecondsPerSecond:
MaxLongSeconds = 9,223,372,036,854,775,807 / 1,000,000,000 = 9,223,372,036

SecondsPerYear =  60 * 60 * 24 * 365 = 31536000
MaxYears = 9,223,372,036 / 31536000 = 292 years

Linux 0 timestamp = 1970

MaxLongYear = 1970 + 292 = 2,262
MinLongYear = 1970 - 292 = 1,678
{code}
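The arithmetic above can be sanity-checked directly (a quick verification of the stated bounds, in Python for brevity):

```python
# Representable year range of a 64-bit signed nanosecond count from epoch.
LONG_MAX = 2**63 - 1                   # Java Long.MAX_VALUE
NANOS_PER_SECOND = 1_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # 31,536,000 (ignoring leap years)

max_seconds = LONG_MAX // NANOS_PER_SECOND   # 9,223,372,036
max_years = max_seconds // SECONDS_PER_YEAR  # 292

print(1970 + max_years)  # 2262
print(1970 - max_years)  # 1678
```

This matches the ~1700 and ~2250 corruption boundaries reported in the issue description.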

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Fix For: 2.1.0
>
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> -12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}





[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233127#comment-15233127
 ] 

Prasanth Jayachandran commented on HIVE-13223:
--

The ORC reader handles non-ORC files correctly by throwing 
FileFormatException, but the reader itself does not handle 0-length files. The 
way it's handled currently is that OrcInputFormat simply excludes 0-length 
files from split computation, since they cannot be valid ORC files. There are 
also filters that prune hidden and _* files, which are likewise not valid ORC 
files. So the ORC reader expects to see only valid ORC files. I think this 
should be handled in both places (split generation and the reader), since they 
can be used together or independently. I can add a check to the ORC reader to 
throw an exception when a 0-length file is encountered. 
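The filtering described — pruning hidden, _*, and 0-length files during split computation — can be sketched as follows (an illustrative local-filesystem analogue; the actual OrcInputFormat logic operates on HDFS FileStatus objects):

```python
import os

def orc_split_candidates(paths):
    """Drop files that OrcInputFormat-style filtering would exclude:
    hidden files, _* working files, and 0-length files -- none of
    which can be valid ORC files."""
    keep = []
    for p in paths:
        name = os.path.basename(p)
        if name.startswith((".", "_")):
            continue                       # hidden / _* working files
        if os.path.getsize(p) == 0:
            continue                       # empty file cannot be valid ORC
        keep.append(p)
    return keep
```

Applying the same check inside the reader as well (throwing on a 0-length file) would cover callers that bypass split generation.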

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port

2016-04-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233124#comment-15233124
 ] 

Lefty Leverenz commented on HIVE-13437:
---

Does this need to be documented in the wiki?

> httpserver getPort does not return the actual port when attempting to use a 
> dynamic port
> 
>
> Key: HIVE-13437
> URL: https://issues.apache.org/jira/browse/HIVE-13437
> Project: Hive
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13437.01.patch, HIVE-13437.02.patch
>
>






[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233095#comment-15233095
 ] 

Szehon Ho commented on HIVE-13223:
--

[~prasanth_j] do you think this is something ORC can do feasibly?  Thanks

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Updated] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-04-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13458:
-
Attachment: HIVE-13458.1.patch

patch 1 for test

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-13458.1.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> We also need to make sure stopHeartbeat really stops the heartbeat, i.e. that 
> no additional heartbeat will be sent, since a stray heartbeat would break 
> that assumption and cause the query to fail.





[jira] [Updated] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-04-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13458:
-
Status: Patch Available  (was: Open)

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-13458.1.patch
>
>
> When a heartbeat fails to locate a lock, it should fail the current query. 
> That doesn't happen, which is a bug.
> We also need to make sure stopHeartbeat really stops the heartbeat, i.e. that 
> no additional heartbeat will be sent, since a stray heartbeat would break 
> that assumption and cause the query to fail.





[jira] [Commented] (HIVE-13429) Tool to remove dangling scratch dir

2016-04-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233077#comment-15233077
 ] 

Thejas M Nair commented on HIVE-13429:
--

+1

> Tool to remove dangling scratch dir
> ---
>
> Key: HIVE-13429
> URL: https://issues.apache.org/jira/browse/HIVE-13429
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13429.1.patch, HIVE-13429.2.patch, 
> HIVE-13429.3.patch, HIVE-13429.4.patch
>
>
> We have seen cases where users leave scratch dirs behind, eventually eating 
> up HDFS storage. This can happen when a VM restarts and leaves Hive no chance 
> to run its shutdown hook. It applies to both HiveCli and HiveServer2. Here we 
> provide an external tool to clear dead scratch dirs as needed.
> We need a way to identify which scratch dirs are in use. We will rely on the 
> HDFS write lock for that. Here is how the HDFS write lock works:
> 1. An HDFS client opens an HDFS file for write and only closes it at the time 
> of shutdown
> 2. A cleanup process can try to open the same HDFS file for write. If the 
> client holding the file is still running, we will get an exception. 
> Otherwise, we know the client is dead
> 3. If the HDFS client dies without closing the HDFS file, the NN reclaims the 
> lease after 10 min, i.e. the file held by the dead client becomes writable 
> again after 10 min
> So here is how we remove dangling scratch directories in Hive:
> 1. HiveCli/HiveServer2 opens a well-named lock file in the scratch directory 
> and only closes it when we are about to drop the scratch directory
> 2. A command line tool, cleardanglingscratchdir, checks every scratch 
> directory and tries to open the lock file for write. If it gets no exception, 
> the owner is dead and we can safely remove the scratch directory
> 3. The 10 min window means a HiveCli/HiveServer2 instance may be dead yet we 
> still cannot reclaim its scratch directory for another 10 min, but this 
> should be tolerable
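The liveness check in step 2 can be mimicked on a local filesystem (a sketch using flock as a stand-in for the HDFS write lease; the real tool probes HDFS, so this is only an analogy):

```python
import fcntl
import os

def scratch_dir_in_use(lock_path):
    """Return True if some live process still holds the lock file.

    Local-filesystem analogue of the HDFS write-lock probe: taking the
    lock fails while the owner is alive, and succeeds once the owner has
    died and the OS has released its lock (as the NN releases a lease).
    """
    fd = os.open(lock_path, os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking probe
    except BlockingIOError:
        return True           # owner still alive: leave the dir alone
    else:
        fcntl.flock(fd, fcntl.LOCK_UN)
        return False          # lock was free: owner is gone, safe to clean
    finally:
        os.close(fd)
```

The owning process would take the lock at startup and hold it until it drops the scratch directory; the cleaner only removes directories whose probe succeeds.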





[jira] [Updated] (HIVE-11793) SHOW LOCKS with DbTxnManager ignores filter options

2016-04-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-11793:
-
Assignee: Wei Zheng  (was: Eugene Koifman)
  Status: Patch Available  (was: Open)

> SHOW LOCKS with DbTxnManager ignores filter options
> ---
>
> Key: HIVE-11793
> URL: https://issues.apache.org/jira/browse/HIVE-11793
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Minor
> Attachments: HIVE-11793.1.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/Locking and 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks
>  list various options that can be used with SHOW LOCKS, e.g. 
> When ACID is enabled, all these options are ignored and a full list is 
> returned.
> (also only ext lock id is shown, int lock id is not).
> see DDLTask.showLocks() and TxnHandler.showLocks()
> requires extending ShowLocksRequest which is a Thrift object





[jira] [Updated] (HIVE-11793) SHOW LOCKS with DbTxnManager ignores filter options

2016-04-08 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-11793:
-
Attachment: HIVE-11793.1.patch

patch 1 for test

> SHOW LOCKS with DbTxnManager ignores filter options
> ---
>
> Key: HIVE-11793
> URL: https://issues.apache.org/jira/browse/HIVE-11793
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Minor
> Attachments: HIVE-11793.1.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/Locking and 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowLocks
>  list various options that can be used with SHOW LOCKS, e.g. 
> When ACID is enabled, all these options are ignored and a full list is 
> returned.
> (also only ext lock id is shown, int lock id is not).
> see DDLTask.showLocks() and TxnHandler.showLocks()
> requires extending ShowLocksRequest which is a Thrift object





[jira] [Updated] (HIVE-13465) Add ZK settings to MiniLlapCluster clusterSpecificConfiguration

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13465:
--
Status: Patch Available  (was: Open)

> Add ZK settings to MiniLlapCluster clusterSpecificConfiguration
> ---
>
> Key: HIVE-13465
> URL: https://issues.apache.org/jira/browse/HIVE-13465
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Testing Infrastructure
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13465.1.patch
>
>
> HIVE-13365 added zookeeper support to MiniLlapCluster. These settings should 
> also be added to the clusterSpecificConfiguration so that whoever created the 
> mini cluster gets their confs updated with this info.





[jira] [Updated] (HIVE-13465) Add ZK settings to MiniLlapCluster clusterSpecificConfiguration

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13465:
--
Attachment: HIVE-13465.1.patch

> Add ZK settings to MiniLlapCluster clusterSpecificConfiguration
> ---
>
> Key: HIVE-13465
> URL: https://issues.apache.org/jira/browse/HIVE-13465
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Testing Infrastructure
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13465.1.patch
>
>
> HIVE-13365 added zookeeper support to MiniLlapCluster. These settings should 
> also be added to the clusterSpecificConfiguration so that whoever created the 
> mini cluster gets their confs updated with this info.





[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC

2016-04-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233018#comment-15233018
 ] 

Sergey Shelukhin commented on HIVE-9660:


Hmm, looks like recent refactoring broke a bunch of stuff. I will take a look.

> store end offset of compressed data for RG in RowIndex in ORC
> -
>
> Key: HIVE-9660
> URL: https://issues.apache.org/jira/browse/HIVE-9660
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, 
> HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, 
> HIVE-9660.06.patch, HIVE-9660.07.patch, HIVE-9660.07.patch, HIVE-9660.patch, 
> HIVE-9660.patch
>
>
> Right now the end offset is estimated, which in some cases results in tons of 
> extra data being read.
> We can add a separate array to RowIndex (positions_v2?) that stores number of 
> compressed buffers for each RG, or end offset, or something, to remove this 
> estimation magic





[jira] [Updated] (HIVE-13431) Improvements to LLAPTaskReporter

2016-04-08 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13431:
--
Issue Type: Task  (was: Sub-task)
Parent: (was: HIVE-13097)

> Improvements to LLAPTaskReporter
> 
>
> Key: HIVE-13431
> URL: https://issues.apache.org/jira/browse/HIVE-13431
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13431.1.txt
>
>






[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233012#comment-15233012
 ] 

Ashutosh Chauhan commented on HIVE-13223:
-

bq. Just to explore the options, is there any JIRA tracking ORC-side fix for 
handle 0-length files?
None that I am aware of.

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port

2016-04-08 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13437:
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

> httpserver getPort does not return the actual port when attempting to use a 
> dynamic port
> 
>
> Key: HIVE-13437
> URL: https://issues.apache.org/jira/browse/HIVE-13437
> Project: Hive
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13437.01.patch, HIVE-13437.02.patch
>
>






[jira] [Updated] (HIVE-13436) Allow the package directory to be specified for the llap setup script

2016-04-08 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13436:
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

> Allow the package directory to be specified for the llap setup script
> -
>
> Key: HIVE-13436
> URL: https://issues.apache.org/jira/browse/HIVE-13436
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13436.1.patch
>
>






[jira] [Updated] (HIVE-13398) LLAP: Simple /status and /peers web services

2016-04-08 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13398:
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

> LLAP: Simple /status and /peers web services
> 
>
> Key: HIVE-13398
> URL: https://issues.apache.org/jira/browse/HIVE-13398
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Fix For: 2.1.0
>
> Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch
>
>
> MiniLLAP doesn't have a UI service, so this has no easy tests.
> {code}
> curl localhost:15002/status
> {
>   "status" : "STARTED",
>   "uptime" : 139093,
>   "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by 
> gopal source checksum a9caa5faad5906d5139c33619f1368bb"
> }
> {code}
> {code}
> curl localhost:15002/peers
> {
>   "dynamic" : true,
>   "identity" : "718264f1-722e-40f1-8265-ac25587bf336",
>   "peers" : [ 
>  {
> "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04",
> "host" : "sandbox121.hortonworks.com",
> "management-port" : 15004,
> "rpc-port" : 15001,
> "shuffle-port" : 15551,
> "resource" : {
>   "vcores" : 24,
>   "memory" : 128000
> },
> "host" : "sandbox121.hortonworks.com"
>   }, 
> ]
> }
> {code}





[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port

2016-04-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233005#comment-15233005
 ] 

Siddharth Seth commented on HIVE-13437:
---

Test failures are unrelated. Ran a set of them locally without any issues. 
Committing.

> httpserver getPort does not return the actual port when attempting to use a 
> dynamic port
> 
>
> Key: HIVE-13437
> URL: https://issues.apache.org/jira/browse/HIVE-13437
> Project: Hive
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13437.01.patch, HIVE-13437.02.patch
>
>






[jira] [Commented] (HIVE-13436) Allow the package directory to be specified for the llap setup script

2016-04-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232999#comment-15232999
 ] 

Siddharth Seth commented on HIVE-13436:
---

Test failures are unrelated. Committing.

> Allow the package directory to be specified for the llap setup script
> -
>
> Key: HIVE-13436
> URL: https://issues.apache.org/jira/browse/HIVE-13436
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13436.1.patch
>
>






[jira] [Commented] (HIVE-13398) LLAP: Simple /status and /peers web services

2016-04-08 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232993#comment-15232993
 ] 

Siddharth Seth commented on HIVE-13398:
---

Test failures are unrelated. Committing.

> LLAP: Simple /status and /peers web services
> 
>
> Key: HIVE-13398
> URL: https://issues.apache.org/jira/browse/HIVE-13398
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch
>
>
> MiniLLAP doesn't have a UI service, so this has no easy tests.
> {code}
> curl localhost:15002/status
> {
>   "status" : "STARTED",
>   "uptime" : 139093,
>   "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by 
> gopal source checksum a9caa5faad5906d5139c33619f1368bb"
> }
> {code}
> {code}
> curl localhost:15002/peers
> {
>   "dynamic" : true,
>   "identity" : "718264f1-722e-40f1-8265-ac25587bf336",
>   "peers" : [ 
>  {
> "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04",
> "host" : "sandbox121.hortonworks.com",
> "management-port" : 15004,
> "rpc-port" : 15001,
> "shuffle-port" : 15551,
> "resource" : {
>   "vcores" : 24,
>   "memory" : 128000
> }
>   }
> ]
> }
> {code}





[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC

2016-04-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232951#comment-15232951
 ] 

Hive QA commented on HIVE-9660:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797641/HIVE-9660.07.patch

{color:green}SUCCESS:{color} +1 due to 11 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 172 failed/errored test(s), 9771 tests 
executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join30.q-vector_data_types.q-scriptfile1.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join9.q-bucketmapjoin11.q-smb_mapjoin_2.q-and-12-more - 
did not produce a TEST-*.xml file
TestSparkCliDriver-date_udf.q-join23.q-auto_join4.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-groupby4.q-timestamp_null.q-auto_join23.q-and-12-more - did 
not produce a TEST-*.xml file
TestSparkCliDriver-ppd_gby_join.q-groupby_rollup1.q-auto_sortmerge_join_4.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-ppd_join3.q-union26.q-load_dyn_part15.q-and-12-more - did 
not produce a TEST-*.xml file
TestSparkCliDriver-stats13.q-groupby6_map.q-join_casesensitive.q-and-12-more - 
did not produce a TEST-*.xml file
TestSparkCliDriver-vector_distinct_2.q-input17.q-load_dyn_part2.q-and-12-more - 
did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_non_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_partitioned
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_file_dump
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_int_type_promotion
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_lengths
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_llap
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_min_max
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_predicate_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_fast_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_update_all_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_update_orig_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_aggregate_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_binary_join_groupby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_char_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_data_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_distinct_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_groupby_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_interval_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_number_compare_projection
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_orderby_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_reduce1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_reduce2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_reduce3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_string_concat
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_varchar_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part_varchar
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_ppd_basic
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_orig_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_values_non_partitioned
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_ppd_basic

[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC

2016-04-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232946#comment-15232946
 ] 

Prasanth Jayachandran commented on HIVE-9660:
-

I still don't think we need a config for the writer. I can see that the config is 
added to avoid writing wrong lengths, or to disable the feature, but the problem 
is that we won't be able to identify the files that were already written 
wrongly. So I would recommend bumping up the writerVersion to reflect this jira 
(HIVE-9660). With this we can identify files that are written after HIVE-9660. 
In the future, if we find anything wrong, we bump up the writerVersion again and 
make the reader resilient by ignoring lengths from files written with HIVE-9660. 
There should also be a reader config that uses lengths when available and 
falls back to the old codepath otherwise.

> store end offset of compressed data for RG in RowIndex in ORC
> -
>
> Key: HIVE-9660
> URL: https://issues.apache.org/jira/browse/HIVE-9660
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, 
> HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, 
> HIVE-9660.06.patch, HIVE-9660.07.patch, HIVE-9660.07.patch, HIVE-9660.patch, 
> HIVE-9660.patch
>
>
> Right now the end offset is estimated, which in some cases results in tons of 
> extra data being read.
> We can add a separate array to RowIndex (positions_v2?) that stores number of 
> compressed buffers for each RG, or end offset, or something, to remove this 
> estimation magic





[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232910#comment-15232910
 ] 

Szehon Ho commented on HIVE-13223:
--

We can of course ask, and of course hanging is not good behavior, but the Spark 
community might consider a job with 0 splits invalid.  It has always been 
not-so-great behavior on the Hive side to have empty files in buckets (I guess 
HIVE-12638 is a similar complaint).

Just to explore the options, is there any JIRA tracking an ORC-side fix for 
handling 0-length files?

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Commented] (HIVE-13275) Add a toString method to BytesRefArrayWritable

2016-04-08 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232906#comment-15232906
 ] 

Harsh J commented on HIVE-13275:


Failing tests don't appear to be related.

> Add a toString method to BytesRefArrayWritable
> --
>
> Key: HIVE-13275
> URL: https://issues.apache.org/jira/browse/HIVE-13275
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats, Serializers/Deserializers
>Affects Versions: 1.1.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Attachments: HIVE-13275.000.patch
>
>
> RCFileInputFormat cannot be used externally for Hadoop Streaming today because 
> Streaming generally relies on the K/V pairs being able to emit text 
> representations (via toString()).
> Since BytesRefArrayWritable has no toString() method, using the 
> RCFileInputFormat prints object representations, which are not useful.
> Also, unlike SequenceFiles, RCFiles store multiple "values" per row (i.e. an 
> array), so it's important to output them in a valid/parseable manner, as 
> opposed to choosing a simple joining delimiter over the string 
> representations of the inner elements.
> I propose adding a standardised CSV formatting of the array data, such that 
> users of Streaming can then parse the results in their own scripts. Since we 
> have OpenCSV as a dependency already, we can make use of it for this purpose.
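As a rough illustration of the proposal, a CSV-style rendering of a row of byte-array columns could look like the sketch below. It uses a plain `List<byte[]>` to stand in for `BytesRefArrayWritable`, and hand-rolled RFC 4180-style escaping where the actual patch would use OpenCSV; class and method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Sketch: CSV-style rendering of a row of byte-array columns, standing in
// for a BytesRefArrayWritable.toString(). Quoting approximates RFC 4180;
// the proposal uses OpenCSV rather than this hand-rolled escaping.
public class CsvRow {
    // Quote a field if it contains a comma, quote, or newline; double inner quotes.
    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    // Join the decoded columns with commas so streaming scripts can re-parse them.
    public static String toCsv(List<byte[]> columns) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < columns.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append(escape(new String(columns.get(i), StandardCharsets.UTF_8)));
        }
        return sb.toString();
    }
}
```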





[jira] [Updated] (HIVE-13463) Fix ImportSemanticAnalyzer to allow for different src/dst filesystems

2016-04-08 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HIVE-13463:
-
Attachment: HIVE-13463.patch

> Fix ImportSemanticAnalyzer to allow for different src/dst filesystems
> -
>
> Key: HIVE-13463
> URL: https://issues.apache.org/jira/browse/HIVE-13463
> Project: Hive
>  Issue Type: Bug
>  Components: Import/Export
>Affects Versions: 2.0.0
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HIVE-13463.patch
>
>
> In ImportSemanticAnalyzer, there is an assumption that the src filesystem for 
> import and the final location are on the same filesystem. Therefore the check 
> for emptiness and getExternalTmpLocation will be looking on the wrong 
> filesystem and will cause an error. The output path should be fed into 
> getExternalTmpLocation to get a temporary file on the correct filesystem. The 
> check for emptiness should use the output filesystem.





[jira] [Updated] (HIVE-13463) Fix ImportSemanticAnalyzer to allow for different src/dst filesystems

2016-04-08 Thread Zach York (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HIVE-13463:
-
Status: Patch Available  (was: Open)

> Fix ImportSemanticAnalyzer to allow for different src/dst filesystems
> -
>
> Key: HIVE-13463
> URL: https://issues.apache.org/jira/browse/HIVE-13463
> Project: Hive
>  Issue Type: Bug
>  Components: Import/Export
>Affects Versions: 2.0.0
>Reporter: Zach York
>Assignee: Zach York
> Attachments: HIVE-13463.patch
>
>
> In ImportSemanticAnalyzer, there is an assumption that the src filesystem for 
> import and the final location are on the same filesystem. Therefore the check 
> for emptiness and getExternalTmpLocation will be looking on the wrong 
> filesystem and will cause an error. The output path should be fed into 
> getExternalTmpLocation to get a temporary file on the correct filesystem. The 
> check for emptiness should use the output filesystem.





[jira] [Updated] (HIVE-13462) HiveResultSetMetaData.getPrecision() fails for NULL columns

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13462:
--
Attachment: HIVE-13462.1.patch

Attaching patch with fix and test. Also moved some stuff around in that test to 
make the order of the tests match the order of the columns in the query.

> HiveResultSetMetaData.getPrecision() fails for NULL columns
> ---
>
> Key: HIVE-13462
> URL: https://issues.apache.org/jira/browse/HIVE-13462
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13462.1.patch
>
>
> Can happen if you have a null in the select clause, for example "select null, 
> key, value from src"
> {noformat}
> java.sql.SQLException: Unrecognized column type: NULL
>   at 
> org.apache.hive.jdbc.JdbcColumn.typeStringToHiveType(JdbcColumn.java:160)
>   at 
> org.apache.hive.jdbc.HiveResultSetMetaData.getHiveType(HiveResultSetMetaData.java:48)
>   at 
> org.apache.hive.jdbc.HiveResultSetMetaData.getPrecision(HiveResultSetMetaData.java:86)
> {noformat}
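The stack trace shows the type-name-to-type mapping throwing on "NULL". The gist of a fix is to tolerate the NULL type rather than throw; a minimal sketch of that shape follows. The class name, the handled cases, and the precision values are illustrative stand-ins, not the actual `org.apache.hive.jdbc.JdbcColumn` code or the values chosen by the patch.

```java
import java.sql.SQLException;

// Sketch of the type-name -> precision mapping that JdbcColumn performs.
// Handling "null"/"void" (returning 0) instead of throwing is the gist of
// the fix; all names and values here are illustrative.
public class PrecisionSketch {
    public static int precisionFor(String hiveType) throws SQLException {
        switch (hiveType.toLowerCase()) {
            case "null":
            case "void":
                return 0;                  // NULL column: no meaningful precision
            case "int":
                return 10;                 // max decimal digits of a 32-bit int
            case "string":
                return Integer.MAX_VALUE;  // unbounded strings
            default:
                throw new SQLException("Unrecognized column type: " + hiveType);
        }
    }
}
```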





[jira] [Updated] (HIVE-13462) HiveResultSetMetaData.getPrecision() fails for NULL columns

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13462:
--
Status: Patch Available  (was: Open)

> HiveResultSetMetaData.getPrecision() fails for NULL columns
> ---
>
> Key: HIVE-13462
> URL: https://issues.apache.org/jira/browse/HIVE-13462
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-13462.1.patch
>
>
> Can happen if you have a null in the select clause, for example "select null, 
> key, value from src"
> {noformat}
> java.sql.SQLException: Unrecognized column type: NULL
>   at 
> org.apache.hive.jdbc.JdbcColumn.typeStringToHiveType(JdbcColumn.java:160)
>   at 
> org.apache.hive.jdbc.HiveResultSetMetaData.getHiveType(HiveResultSetMetaData.java:48)
>   at 
> org.apache.hive.jdbc.HiveResultSetMetaData.getPrecision(HiveResultSetMetaData.java:86)
> {noformat}





[jira] [Resolved] (HIVE-13461) LLAP output format service not actually registered in LLAP registry

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere resolved HIVE-13461.
---
Resolution: Fixed

committed to llap branch.

> LLAP output format service not actually registered in LLAP registry
> ---
>
> Key: HIVE-13461
> URL: https://issues.apache.org/jira/browse/HIVE-13461
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: llap
>
> Attachments: HIVE-13461.1.patch
>
>
> Should have been done in HIVE-13305, but missed this part.





[jira] [Updated] (HIVE-13461) LLAP output format service not actually registered in LLAP registry

2016-04-08 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-13461:
--
Attachment: HIVE-13461.1.patch

> LLAP output format service not actually registered in LLAP registry
> ---
>
> Key: HIVE-13461
> URL: https://issues.apache.org/jira/browse/HIVE-13461
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: llap
>
> Attachments: HIVE-13461.1.patch
>
>
> Should have been done in HIVE-13305, but missed this part.





[jira] [Updated] (HIVE-8177) Wrong parameter order in ExplainTask#getJSONLogicalPlan()

2016-04-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HIVE-8177:
-
Description: 
{code}
  JSONObject jsonPlan = outputMap(work.getParseContext().getTopOps(), true,
  out, jsonOutput, work.getExtended(), 0);
{code}
The order of the 4th and 5th parameters is reversed.

  was:
{code}
  JSONObject jsonPlan = outputMap(work.getParseContext().getTopOps(), true,
  out, jsonOutput, work.getExtended(), 0);
{code}

The order of 4th and 5th parameters is reverted.


> Wrong parameter order in ExplainTask#getJSONLogicalPlan()
> -
>
> Key: HIVE-8177
> URL: https://issues.apache.org/jira/browse/HIVE-8177
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: SUYEON LEE
>Priority: Minor
> Attachments: HIVE-8177.patch
>
>
> {code}
>   JSONObject jsonPlan = outputMap(work.getParseContext().getTopOps(), 
> true,
>   out, jsonOutput, work.getExtended(), 0);
> {code}
> The order of the 4th and 5th parameters is reversed.





[jira] [Updated] (HIVE-8176) Close of FSDataOutputStream in OrcRecordUpdater ctor should be in finally clause

2016-04-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HIVE-8176:
-
Description: 
{code}
try {
  FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
  strm.writeInt(ORC_ACID_VERSION);
  strm.close();
} catch (IOException ioe) {
{code}
If strm.writeInt() throws IOE, strm would be left unclosed.

  was:
{code}
try {
  FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
  strm.writeInt(ORC_ACID_VERSION);
  strm.close();
} catch (IOException ioe) {
{code}

If strm.writeInt() throws IOE, strm would be left unclosed.


> Close of FSDataOutputStream in OrcRecordUpdater ctor should be in finally 
> clause
> 
>
> Key: HIVE-8176
> URL: https://issues.apache.org/jira/browse/HIVE-8176
> Project: Hive
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: SUYEON LEE
>Priority: Minor
> Attachments: HIVE-8176.patch, HIVE-8176.v1.patch
>
>
> {code}
> try {
>   FSDataOutputStream strm = fs.create(new Path(path, ACID_FORMAT), false);
>   strm.writeInt(ORC_ACID_VERSION);
>   strm.close();
> } catch (IOException ioe) {
> {code}
> If strm.writeInt() throws IOE, strm would be left unclosed.
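The leak described above is exactly what try-with-resources prevents: the stream is closed on both the normal and the exceptional path. Below is a sketch of that rewrite, with a plain `java.io.DataOutputStream` over a local file standing in for HDFS's `FSDataOutputStream`, and an illustrative value for `ORC_ACID_VERSION`.

```java
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Path;

// try-with-resources version of the pattern in the description: the stream
// is closed even if writeInt throws. A local DataOutputStream stands in for
// HDFS's FSDataOutputStream; the version constant is illustrative.
public class AcidFormatWriter {
    static final int ORC_ACID_VERSION = 0;

    public static void writeVersion(Path file) throws IOException {
        try (DataOutputStream strm =
                 new DataOutputStream(new FileOutputStream(file.toFile()))) {
            strm.writeInt(ORC_ACID_VERSION);
        } // close() runs here on both the normal and exceptional path
    }
}
```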





[jira] [Updated] (HIVE-11983) Hive streaming API uses incorrect logic to assign buckets to incoming records

2016-04-08 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-11983:
--
Fix Version/s: 2.0.0

> Hive streaming API uses incorrect logic to assign buckets to incoming records
> -
>
> Key: HIVE-11983
> URL: https://issues.apache.org/jira/browse/HIVE-11983
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Transactions
>Affects Versions: 1.2.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>  Labels: streaming, streaming_api
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11983.3.patch, HIVE-11983.4.patch, 
> HIVE-11983.5.patch, HIVE-11983.patch
>
>
> The Streaming API tries to distribute records evenly into buckets. 
> Currently, all records in every Transaction that is part of a TransactionBatch 
> go to the same bucket, and a new bucket number is chosen for each TransactionBatch.
> Fix: the API needs to hash each record to determine which bucket it belongs to. 
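The proposed fix amounts to hashing each record's bucketing key and taking it modulo the bucket count, rather than fixing one bucket per TransactionBatch. A minimal sketch, with `String.hashCode()` standing in for Hive's own `ObjectInspectorUtils`-based hash of the bucketed columns:

```java
// Minimal sketch of per-record bucket assignment: hash the record's
// bucketing key and mod by the bucket count. String.hashCode() stands in
// for Hive's ObjectInspectorUtils-based hash of the bucketed columns.
public class BucketAssigner {
    public static int bucketFor(String bucketKey, int numBuckets) {
        // mask the sign bit so negative hash codes still map into range
        return (bucketKey.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }
}
```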





[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232725#comment-15232725
 ] 

Ashutosh Chauhan commented on HIVE-13223:
-

HIVE-13040 has two changes. One is to not generate 0-length files when it is 
known that they don't contain any data, and the other is to skip 0-length files 
while generating splits. This second change may result in a job which has no 
splits, e.g. when buckets are empty. For an exact repro, you may run any of the 
failures reported by HiveQA here, e.g. 
orc_merge5, orc_merge6, vector_outer_join1, vector_outer_join4, etc.

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Updated] (HIVE-13429) Tool to remove dangling scratch dir

2016-04-08 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-13429:
--
Attachment: HIVE-13429.3.patch

> Tool to remove dangling scratch dir
> ---
>
> Key: HIVE-13429
> URL: https://issues.apache.org/jira/browse/HIVE-13429
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13429.1.patch, HIVE-13429.2.patch, 
> HIVE-13429.3.patch
>
>
> We have seen cases where users leave the scratch dir behind, eventually 
> eating up HDFS storage. This can happen when a VM restarts, leaving Hive no 
> chance to run its shutdown hook. This applies to both HiveCli and 
> HiveServer2. Here we provide an external tool to clear dead scratch dirs as 
> needed.
> We need a way to identify which scratch dir is in use. We will rely on the 
> HDFS write lock for that. Here is how the HDFS write lock works:
> 1. An HDFS client opens an HDFS file for write and only closes it at the time 
> of shutdown.
> 2. A cleanup process can try to open the HDFS file for write. If the client 
> holding the file is still running, we will get an exception; otherwise, we 
> know the client is dead.
> 3. If the HDFS client dies without closing the HDFS file, the NN will reclaim 
> the lease after 10 min, i.e., the HDFS file held by the dead client is 
> writable again after 10 min.
> So here is how we remove a dangling scratch directory in Hive:
> 1. HiveCli/HiveServer2 opens a well-named lock file in the scratch directory 
> and only closes it when we are about to drop the scratch directory.
> 2. A command line tool, cleardanglingscratchdir, will check every scratch 
> directory and try to open the lock file for write. If it does not get an 
> exception, the owner is dead and we can safely remove the scratch directory.
> 3. The 10 min window means it is possible that a HiveCli/HiveServer2 is dead 
> but we still cannot reclaim its scratch directory for another 10 min, but this 
> should be tolerable.
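The probe in step 2 can be sketched with a local-filesystem analogue using `java.nio` file locks. Note this is only an analogue: HDFS has no `tryLock`, so the real tool instead reopens the file for write and treats an exception as "owner still alive"; the class and method names here are hypothetical.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Local-filesystem analogue of the probe in step 2: try to lock the
// well-known lock file; success means the owning process is gone and the
// scratch directory can be reclaimed. HDFS has no tryLock -- the real tool
// reopens the file for write and treats an exception as "still alive".
public class ScratchDirProbe {
    public static boolean ownerIsDead(Path lockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock();
            if (lock == null) {
                return false;   // another process still holds the lock
            }
            lock.release();
            return true;        // nobody holds it: safe to clean up
        }
    }
}
```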





[jira] [Updated] (HIVE-13428) ZK SM in LLAP should have unique paths per cluster

2016-04-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13428:

   Resolution: Fixed
Fix Version/s: 2.0.1
   2.1.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks for the review!

> ZK SM in LLAP should have unique paths per cluster
> --
>
> Key: HIVE-13428
> URL: https://issues.apache.org/jira/browse/HIVE-13428
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-13428.patch
>
>
> Noticed this while working on some other patch





[jira] [Commented] (HIVE-13223) HoS may hang for queries that run on 0 splits

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232706#comment-15232706
 ] 

Szehon Ho commented on HIVE-13223:
--

Hi [~ashutoshc], sorry, I was not following HIVE-13040 and couldn't see any 
description in that JIRA. What is the use case that leads to generating a job on 
0 splits and/or 0-length files?  Is it empty buckets?  How were empty 
buckets handled before HIVE-13040?  Thanks.

> HoS  may hang for queries that run on 0 splits 
> ---
>
> Key: HIVE-13223
> URL: https://issues.apache.org/jira/browse/HIVE-13223
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13223.1.patch, HIVE-13223.2.patch, HIVE-13223.patch
>
>
> Can be seen on all timed out tests after HIVE-13040 went in





[jira] [Commented] (HIVE-12553) MySQL Metastore hive.stats.autogather

2016-04-08 Thread Aleksey Vovchenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232642#comment-15232642
 ] 

Aleksey Vovchenko commented on HIVE-12553:
--

This table is created in JDBCStatsPublisher.java at line 291. When 
JDBCStatsUtils.getCreate("") (at line 290) executes, it uses constants 
containing the table name and primary key size.

> MySQL Metastore hive.stats.autogather
> -
>
> Key: HIVE-12553
> URL: https://issues.apache.org/jira/browse/HIVE-12553
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 1.2.1
> Environment: AWS EMR Hadoop Cluster(Amazon Linux AMI release 2015.03) 
> , MySQL v5.5.42 (Hive Metastore)
>Reporter: Timothy Garza
>
> The PARTITION_STATS_VS table is autogenerated in the Hive Metastore when the 
> AutoGather Stats setting is active (the default).
> The CREATE TABLE statement fails with a syntax error because the primary key 
> column of the table is set to varchar(4000), which is beyond the maximum 
> MySQL key length of 767 bytes.
> Even if you create the table manually with an appropriate setting, e.g. 
> varchar(255), the AutoGather functionality will detect this and ALTER the 
> table column to varchar(4000). This again fails to execute.
> Error:
> ERROR jdbc.JDBCStatsPublisher: Error during JDBC initialization.
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was 
> too long; max key length is 767 bytes





[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232634#comment-15232634
 ] 

Szehon Ho commented on HIVE-13457:
--

Ah I see, great idea, it is a little more advanced as the pre-req is to figure 
out what is OK and NOT_OK, but definitely something that sounds very useful and 
possible today based on metrics we are collecting.

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>
> Similar to what is exposed in HS2 webui in HIVE-12338, it would be nice if 
> other UI's like admin tools or Hue can access and display this information as 
> well.  Hence, we will create some REST endpoints to expose this information.





[jira] [Updated] (HIVE-10339) Allow JDBC Driver to pass HTTP header Key/Value pairs

2016-04-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-10339:
-
Release Note: Doc - 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-PassingHTTPHeaderKey/ValuePairsviaJDBCDriver

> Allow JDBC Driver to pass HTTP header Key/Value pairs
> -
>
> Key: HIVE-10339
> URL: https://issues.apache.org/jira/browse/HIVE-10339
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Fix For: 1.2.0
>
> Attachments: HIVE-10339.1.patch, HIVE-10339.2.patch
>
>
> Currently the Beeline & ODBC drivers do not support carrying user-specified 
> HTTP headers.
> The Beeline JDBC driver connection string in HTTP mode is 
> jdbc:hive2://:/?hive.server2.transport.mode=http;hive.server2.thrift.http.path=,
> When the transport mode is http, the Beeline/ODBC driver should allow the end 
> user to send arbitrary HTTP header name/value pairs.
> All the Beeline driver needs to do is use the user-specified names and values 
> and call the underlying HTTPClient API to set the headers.
> E.g. the Beeline connection string could be 
> jdbc:hive2://:/?hive.server2.transport.mode=http;hive.server2.thrift.http.path=,http.header.name1=value1,
> and Beeline will call the underlying API to set the HTTP header name1 to 
> value1.
> This is required for the end user to send identity in an HTTP header down to 
> Knox via Beeline.
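Pulling the header pairs out of the session part of such a URL could look like the sketch below. The `http.header.` prefix follows the example in the description; the class name and parsing shape are hypothetical, not the real driver's parsing code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: extract "http.header.<name>=<value>" pairs from the session-variable
// part of a JDBC URL so they can be handed to the HTTP client. The prefix
// follows the example in the description; everything else is hypothetical.
public class HeaderParser {
    static final String PREFIX = "http.header.";

    public static Map<String, String> parse(String sessionVars) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String kv : sessionVars.split("[;,]")) {
            int eq = kv.indexOf('=');
            if (eq > 0 && kv.startsWith(PREFIX)) {
                headers.put(kv.substring(PREFIX.length(), eq),
                            kv.substring(eq + 1));
            }
        }
        return headers;
    }
}
```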





[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC

2016-04-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232625#comment-15232625
 ] 

Sergey Shelukhin commented on HIVE-9660:


It doesn't look like RB is working correctly; I cannot get the patch to 
display. The recent patch may need to be reviewed by applying it and diffing 
the two branches locally.

> store end offset of compressed data for RG in RowIndex in ORC
> -
>
> Key: HIVE-9660
> URL: https://issues.apache.org/jira/browse/HIVE-9660
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, 
> HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, 
> HIVE-9660.06.patch, HIVE-9660.07.patch, HIVE-9660.07.patch, HIVE-9660.patch, 
> HIVE-9660.patch
>
>
> Right now the end offset is estimated, which in some cases results in tons of 
> extra data being read.
> We can add a separate array to RowIndex (positions_v2?) that stores number of 
> compressed buffers for each RG, or end offset, or something, to remove this 
> estimation magic





[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2016-04-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232621#comment-15232621
 ] 

Thejas M Nair commented on HIVE-13457:
--

I was thinking more in terms of a summary of the status of the service, which 
indicates whether things are OK/NOT OK and what is OK/NOT OK, so that any logic 
for alerts can live in Hive instead of in a management and monitoring tool 
like Ambari.

I realize your goal in this jira is different; we should probably explore this 
in a different jira. 

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>
> Similar to what is exposed in HS2 webui in HIVE-12338, it would be nice if 
> other UI's like admin tools or Hue can access and display this information as 
> well.  Hence, we will create some REST endpoints to expose this information.





[jira] [Updated] (HIVE-13410) PerfLog metrics scopes not closed if there are exceptions on HS2

2016-04-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-13410:
-
Attachment: HIVE-13410.4.patch

Not sure why precommit did not pick this up.

> PerfLog metrics scopes not closed if there are exceptions on HS2
> 
>
> Key: HIVE-13410
> URL: https://issues.apache.org/jira/browse/HIVE-13410
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Affects Versions: 2.0.0
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Attachments: HIVE-13410.2.patch, HIVE-13410.3.patch, 
> HIVE-13410.4.patch, HIVE-13410.4.patch, HIVE-13410.patch
>
>
> If there are errors, the HS2 PerfLog API scopes are not closed.  Then there 
> are sometimes messages like 'java.io.IOException: Scope named api_parse is 
> not closed, cannot be opened.'
> I had simply forgotten to close the dangling scopes when there is an 
> exception.  Doing so now.
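The fix described above has the standard try/finally shape; `open_scope`/`close_scope` below are stand-ins for the PerfLog API, not its real method names:

```python
open_scopes = []

def open_scope(name):
    # Mimics the PerfLog behavior from the error message: a scope cannot
    # be reopened while a previous instance is still dangling.
    if name in open_scopes:
        raise IOError("Scope named %s is not closed, cannot be opened." % name)
    open_scopes.append(name)

def close_scope(name):
    open_scopes.remove(name)

def run_scoped(name, body):
    open_scope(name)
    try:
        return body()
    finally:
        close_scope(name)   # runs even when body() raises

def failing_parse():
    raise ValueError("parse error")

try:
    run_scoped("api_parse", failing_parse)
except ValueError:
    pass
print(open_scopes)   # [] -- no dangling scope, so a later open succeeds
```

Without the `finally`, the failed call would leave `"api_parse"` in the list and the next query's `open_scope` would raise the IOException quoted above.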





[jira] [Comment Edited] (HIVE-6535) JDBC: async wait should happen during fetch for results

2016-04-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232548#comment-15232548
 ] 

Thejas M Nair edited comment on HIVE-6535 at 4/8/16 6:01 PM:
-

More details on the issue of waiting in fetch results and returning immediately 
after compile in stmt.execute, based on discussion with Vaibhav:
 * There is no fetch of results when the query being executed is not a select 
query. We could block on calls like getUpdateCount(), but not everyone calls 
that.
 * In such cases, the user might call execute() followed by another Statement 
method such as execute or close. It is not clear whether the user would expect 
the previous query to block on this call after the first execute. Should the 
next call return an error if the first execute fails? That is not 
intuitive behavior.
 * The Statement.execute() documentation says that it throws SQLTimeoutException 
when the driver has determined that the timeout value specified by the 
setQueryTimeout method has been exceeded and has at least attempted to cancel 
the currently running Statement. This cannot be implemented if execute is not 
blocking.
 * The Statement.cancel documentation talks about creating a separate thread for 
cancelling, another place where the assumption that stmt.execute is blocking is 
called out.



was (Author: thejas):
More details on the issue in waiting in fetch results and returning immediately 
after compile in stmt.execute, based on discussion with Vaibhav -
 * There is no fetch results when the query being executed is not a select 
query. We could block on calls like getUpdateCount(), but not everyone calls 
that.
 * In such cases, the user might call execute() followed by another Statement 
function such as execute, or close or cancel. It is not clear if the user would 
expect the previous query to block on this call after the first execute. Should 
the next call return an error if the first execute fails ? - That is not an 
intuitive behavior.
 * Statement.execute() documentation says ,that it throws SQLTimeoutException - 
when the driver has determined that the timeout value that was specified by the 
setQueryTimeout method has been exceeded and has at least attempted to cancel 
the currently running Statement. This cannot be implemented execute is not 
blocking.
 * Statement.cancel documentation talks about creating a separate thread for 
cancelling, another place where the assumption that stmt.execute is blocking is 
called out.


> JDBC: async wait should happen during fetch for results
> ---
>
> Key: HIVE-6535
> URL: https://issues.apache.org/jira/browse/HIVE-6535
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0, 1.2.1, 2.0.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-6535.1.patch, HIVE-6535.2.patch
>
>
> The hive jdbc client waits query completion during execute() call. It would 
> be better to block in the jdbc for completion when the results are being 
> fetched.
> This way the application using hive jdbc driver can do other tasks while 
> asynchronous query execution is happening, until it needs to fetch the result 
> set.
>  
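The behavior being proposed, returning from execute() as soon as the query is submitted and waiting only when results are fetched, can be modeled with a future (a plain-Python toy, not the Hive JDBC driver):

```python
import concurrent.futures
import time

_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

class AsyncStatement:
    """Toy model: execute() submits and returns immediately;
    fetch() performs the first real wait."""
    def execute(self, query):
        self._future = _executor.submit(self._run, query)

    def _run(self, query):
        time.sleep(0.05)      # pretend the server is working
        return [("row", 1)]

    def fetch(self):
        return self._future.result()   # blocks here, not in execute()

stmt = AsyncStatement()
stmt.execute("SELECT ...")
# ...the application is free to do other work here...
print(stmt.fetch())   # [('row', 1)]
```

The comments above point out the cost of this model: non-select queries may never call fetch, and JDBC methods like setQueryTimeout and cancel are specified against a blocking execute.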





[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2016-04-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232601#comment-15232601
 ] 

Szehon Ho commented on HIVE-13457:
--

Hey [~thejas], Codahale metrics should already be exposed via JSON 
today if you check [/jmx].  If anything is missing we 
should add it.

In this JIRA I am planning to expose other things via REST as well, such as 
running operations and queries and their info, which show up on the WebUI 
today via JSP.
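Consuming such an endpoint from an admin tool is then just JSON parsing. The payload below is shaped like Codahale's JSON metrics servlet output, but the metric names are examples, not necessarily HS2's actual ones:

```python
import json

# Illustrative Codahale-style metrics payload; field names are assumptions.
payload = json.loads("""
{
  "gauges":   {"open_connections": {"value": 3}},
  "counters": {"active_calls_api_parse": {"count": 0}}
}
""")

# An external dashboard (Hue, Ambari, ...) can pick out what it needs.
print(payload["gauges"]["open_connections"]["value"])   # 3
```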

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>
> Similar to what is exposed in HS2 webui in HIVE-12338, it would be nice if 
> other UI's like admin tools or Hue can access and display this information as 
> well.  Hence, we will create some REST endpoints to expose this information.





[jira] [Commented] (HIVE-6535) JDBC: async wait should happen during fetch for results

2016-04-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232548#comment-15232548
 ] 

Thejas M Nair commented on HIVE-6535:
-

More details on the issue of waiting in fetch results and returning immediately 
after compile in stmt.execute, based on discussion with Vaibhav:
 * There is no fetch of results when the query being executed is not a select 
query. We could block on calls like getUpdateCount(), but not everyone calls 
that.
 * In such cases, the user might call execute() followed by another Statement 
method such as execute, close, or cancel. It is not clear whether the user would 
expect the previous query to block on this call after the first execute. Should 
the next call return an error if the first execute fails? That is not 
intuitive behavior.
 * The Statement.execute() documentation says that it throws SQLTimeoutException 
when the driver has determined that the timeout value specified by the 
setQueryTimeout method has been exceeded and has at least attempted to cancel 
the currently running Statement. This cannot be implemented if execute is not 
blocking.
 * The Statement.cancel documentation talks about creating a separate thread for 
cancelling, another place where the assumption that stmt.execute is blocking is 
called out.


> JDBC: async wait should happen during fetch for results
> ---
>
> Key: HIVE-6535
> URL: https://issues.apache.org/jira/browse/HIVE-6535
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC
>Affects Versions: 0.14.0, 1.2.1, 2.0.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-6535.1.patch, HIVE-6535.2.patch
>
>
> The hive jdbc client waits query completion during execute() call. It would 
> be better to block in the jdbc for completion when the results are being 
> fetched.
> This way the application using hive jdbc driver can do other tasks while 
> asynchronous query execution is happening, until it needs to fetch the result 
> set.
>  





[jira] [Updated] (HIVE-13460) ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes

2016-04-08 Thread Aleksey Vovchenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Vovchenko updated HIVE-13460:
-
Status: Patch Available  (was: Open)

> ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes
> 
>
> Key: HIVE-13460
> URL: https://issues.apache.org/jira/browse/HIVE-13460
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.0.1
>Reporter: Aleksey Vovchenko
>Assignee: Aleksey Vovchenko
> Attachments: HIVE-13460-branch-1.0.patch
>
>
> When Hive is configured to store statistics in MySQL, we get the following 
> error:
> {noformat} 
> 2016-04-08 15:53:28,047 ERROR [main]: jdbc.JDBCStatsPublisher 
> (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization.
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was 
> too long; max key length is 767 bytes
> {noformat} 
> If we set the MySQL properties as:
> {noformat} 
> set global innodb_large_prefix = ON;
> set global innodb_file_format = BARRACUDA;
> {noformat} 
> we now get the following error:
> {noformat} 
> 2016-04-08 15:56:05,552 ERROR [main]: jdbc.JDBCStatsPublisher 
> (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization.
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was 
> too long; max key length is 3072 bytes
> {noformat} 
> As a result of my investigation I figured out that MySQL does not allow 
> creating a primary key larger than 3072 bytes.
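The two ceilings in the errors are InnoDB's per-key limits: 767 bytes by default and 3072 bytes with innodb_large_prefix and the Barracuda file format. A quick check of when a VARCHAR index key crosses them (a back-of-the-envelope sketch, not the actual Hive statistics schema):

```python
def key_bytes(chars: int, bytes_per_char: int) -> int:
    """Worst-case index-key size for a VARCHAR(chars) column."""
    return chars * bytes_per_char

# MySQL's utf8 stores up to 3 bytes per character, utf8mb4 up to 4.
print(key_bytes(255, 3))    # 765  -> fits under the 767-byte default limit
print(key_bytes(255, 4))    # 1020 -> needs innodb_large_prefix (limit 3072)
print(key_bytes(1000, 4))   # 4000 -> exceeds even the 3072-byte limit
```

So a fix along these lines has to keep the stats table's key columns short enough that the worst-case byte size stays under 3072.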





[jira] [Updated] (HIVE-13460) ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes

2016-04-08 Thread Aleksey Vovchenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Vovchenko updated HIVE-13460:
-
Attachment: HIVE-13460-branch-1.0.patch

> ANALYZE TABLE COMPUTE STATISTICS FAILED max key length is 1000 bytes
> 
>
> Key: HIVE-13460
> URL: https://issues.apache.org/jira/browse/HIVE-13460
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.0.1
>Reporter: Aleksey Vovchenko
>Assignee: Aleksey Vovchenko
> Attachments: HIVE-13460-branch-1.0.patch
>
>
> When Hive is configured to store statistics in MySQL, we get the following 
> error:
> {noformat} 
> 2016-04-08 15:53:28,047 ERROR [main]: jdbc.JDBCStatsPublisher 
> (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization.
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was 
> too long; max key length is 767 bytes
> {noformat} 
> If we set the MySQL properties as:
> {noformat} 
> set global innodb_large_prefix = ON;
> set global innodb_file_format = BARRACUDA;
> {noformat} 
> we now get the following error:
> {noformat} 
> 2016-04-08 15:56:05,552 ERROR [main]: jdbc.JDBCStatsPublisher 
> (JDBCStatsPublisher.java:init(316)) - Error during JDBC initialization.
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was 
> too long; max key length is 3072 bytes
> {noformat} 
> As a result of my investigation I figured out that MySQL does not allow 
> creating a primary key larger than 3072 bytes.





[jira] [Updated] (HIVE-13422) Analyse command not working for column having datatype as decimal(38,0)

2016-04-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-13422:
---
Description: 
For the repro
{code}
drop table sample_test;
CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
load data local inpath '/home/hive/analyse.txt' into table sample_test;
ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
{code}
Sample data
{code}
2023456789456749825082498304 0
5032080754887849825069508304 0
4012080754887849825068718304 0
2012080754887849825066778304 0
4012080754887849625065678304 0
{code}

  was:
For the repro
drop table sample_test;
CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
load data local inpath '/home/hive/analyse.txt' into table sample_test;
ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
Sample data
2023456789456749825082498304 0
5032080754887849825069508304 0
4012080754887849825068718304 0
2012080754887849825066778304 0
4012080754887849625065678304 0


> Analyse command not working for column having datatype as decimal(38,0)
> ---
>
> Key: HIVE-13422
> URL: https://issues.apache.org/jira/browse/HIVE-13422
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Statistics
>Affects Versions: 1.1.0
>Reporter: ashim sinha
>
> For the repro
> {code}
> drop table sample_test;
> CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
> DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
> load data local inpath '/home/hive/analyse.txt' into table sample_test;
> ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
> {code}
> Sample data
> {code}
> 2023456789456749825082498304 0
> 5032080754887849825069508304 0
> 4012080754887849825068718304 0
> 2012080754887849825066778304 0
> 4012080754887849625065678304 0
> {code}
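One plausible contributor, offered as a guess rather than a diagnosis of the actual Hive code path: every sample value above overflows a 64-bit long, so any column-stats computation that narrows decimal(38,0) to long would break on this data:

```python
sample = [
    2023456789456749825082498304,
    5032080754887849825069508304,
    4012080754887849825068718304,
]
LONG_MAX = 2**63 - 1     # Java long upper bound, about 9.2e18

for v in sample:
    print(v > LONG_MAX)  # True for every row
```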





[jira] [Issue Comment Deleted] (HIVE-13422) Analyse command not working for column having datatype as decimal(38,0)

2016-04-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-13422:
---
Comment: was deleted

(was: For the repro
drop table sample_test;
CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
 load data local inpath '/home/hive/analyse.txt' into table sample_test;
  ANALYZE TABLE sample_test COMPUTE STATISTICS  FOR COLUMNS;
Sample data
2023456789456749825082498304 0
5032080754887849825069508304 0
4012080754887849825068718304 0
2012080754887849825066778304 0
4012080754887849625065678304 0)

> Analyse command not working for column having datatype as decimal(38,0)
> ---
>
> Key: HIVE-13422
> URL: https://issues.apache.org/jira/browse/HIVE-13422
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Statistics
>Affects Versions: 1.1.0
>Reporter: ashim sinha
>
> For the repro
> drop table sample_test;
> CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
> DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
> load data local inpath '/home/hive/analyse.txt' into table sample_test;
> ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
> Sample data
> 2023456789456749825082498304 0
> 5032080754887849825069508304 0
> 4012080754887849825068718304 0
> 2012080754887849825066778304 0
> 4012080754887849625065678304 0





[jira] [Updated] (HIVE-13422) Analyse command not working for column having datatype as decimal(38,0)

2016-04-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-13422:
---
Description: 
For the repro
drop table sample_test;
CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
load data local inpath '/home/hive/analyse.txt' into table sample_test;
ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
Sample data
2023456789456749825082498304 0
5032080754887849825069508304 0
4012080754887849825068718304 0
2012080754887849825066778304 0
4012080754887849625065678304 0

  was:Any update on this?


> Analyse command not working for column having datatype as decimal(38,0)
> ---
>
> Key: HIVE-13422
> URL: https://issues.apache.org/jira/browse/HIVE-13422
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Statistics
>Affects Versions: 1.1.0
>Reporter: ashim sinha
>
> For the repro
> drop table sample_test;
> CREATE TABLE IF NOT EXISTS sample_test( key decimal(38,0),b int ) ROW FORMAT 
> DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
> load data local inpath '/home/hive/analyse.txt' into table sample_test;
> ANALYZE TABLE sample_test COMPUTE STATISTICS FOR COLUMNS;
> Sample data
> 2023456789456749825082498304 0
> 5032080754887849825069508304 0
> 4012080754887849825068718304 0
> 2012080754887849825066778304 0
> 4012080754887849625065678304 0





[jira] [Commented] (HIVE-13436) Allow the package directory to be specified for the llap setup script

2016-04-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232326#comment-15232326
 ] 

Hive QA commented on HIVE-13436:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797347/HIVE-13436.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 9982 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleTable
org.apache.hadoop.hive.metastore.hbase.TestHBaseImport.org.apache.hadoop.hive.metastore.hbase.TestHBaseImport
org.apache.hadoop.hive.ql.security.TestAuthorizationPreEventListener.testListener
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7511/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7511/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7511/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797347 - PreCommit-HIVE-TRUNK-Build

> Allow the package directory to be specified for the llap setup script
> -
>
> Key: HIVE-13436
> URL: https://issues.apache.org/jira/browse/HIVE-13436
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13436.1.patch
>
>






[jira] [Updated] (HIVE-13424) Refactoring the code to pass a QueryState object rather than HiveConf object

2016-04-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13424:

Attachment: (was: HIVE-13424.3.patch)

> Refactoring the code to pass a QueryState object rather than HiveConf object
> 
>
> Key: HIVE-13424
> URL: https://issues.apache.org/jira/browse/HIVE-13424
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13424.1.patch, HIVE-13424.2.patch, 
> HIVE-13424.3.patch
>
>
> Step 1: refactor the code by creating the QueryState class and moving 
> query-related info from SessionState. Then, during the compilation and 
> execution stages, pass a single QueryState object for each query.





[jira] [Updated] (HIVE-13424) Refactoring the code to pass a QueryState object rather than HiveConf object

2016-04-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13424:

Attachment: HIVE-13424.3.patch

> Refactoring the code to pass a QueryState object rather than HiveConf object
> 
>
> Key: HIVE-13424
> URL: https://issues.apache.org/jira/browse/HIVE-13424
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13424.1.patch, HIVE-13424.2.patch, 
> HIVE-13424.3.patch
>
>
> Step 1: refactor the code by creating the QueryState class and moving 
> query-related info from SessionState. Then, during the compilation and 
> execution stages, pass a single QueryState object for each query.





[jira] [Commented] (HIVE-13400) Following up HIVE-12481, add retry for Zookeeper service discovery

2016-04-08 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232257#comment-15232257
 ] 

Aihua Xu commented on HIVE-13400:
-

[~thejas], [~ychena], [~szehon] Can you take a look at the change? The test 
failures are not related. Let me know if RB is needed.

> Following up HIVE-12481, add retry for Zookeeper service discovery
> --
>
> Key: HIVE-13400
> URL: https://issues.apache.org/jira/browse/HIVE-13400
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: 2.1.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-13400.1.patch, HIVE-13400.2.patch
>
>






[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port

2016-04-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232178#comment-15232178
 ] 

Hive QA commented on HIVE-13437:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797402/HIVE-13437.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 51 failed/errored test(s), 9908 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-dynpart_sort_optimization2.q-cte_mat_1.q-tez_bmj_schema_evolution.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-schema_evol_orc_acidvec_mapwork_part.q-vector_partitioned_date_time.q-vector_non_string_partition.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorParallelism
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess

[jira] [Updated] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13316:
---
Attachment: HIVE-13316.01.patch

> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.01.patch, HIVE-13316.patch
>
>






[jira] [Updated] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13316:
---
Attachment: (was: HIVE-13316.01.patch)

> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.patch
>
>






[jira] [Updated] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13316:
---
Status: Patch Available  (was: In Progress)

> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.01.patch, HIVE-13316.patch
>
>






[jira] [Updated] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13316:
---
Attachment: HIVE-13316.01.patch

> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.01.patch, HIVE-13316.patch
>
>






[jira] [Work started] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-13316 started by Jesus Camacho Rodriguez.
--
> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.patch
>
>






[jira] [Updated] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-08 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-13316:
---
Status: Open  (was: Patch Available)

> Upgrade to Calcite 1.7
> --
>
> Key: HIVE-13316
> URL: https://issues.apache.org/jira/browse/HIVE-13316
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13316.patch
>
>






[jira] [Commented] (HIVE-12894) Detect whether ORC is reading from ACID table correctly for Schema Evolution

2016-04-08 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232051#comment-15232051
 ] 

Matt McCline commented on HIVE-12894:
-

Added 2.0.1 to Fix Version.  Working on committing it.

> Detect whether ORC is reading from ACID table correctly for Schema Evolution
> 
>
> Key: HIVE-12894
> URL: https://issues.apache.org/jira/browse/HIVE-12894
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-12894.01.patch, HIVE-12894.02.patch, 
> HIVE-12894.03.patch
>
>
> Set an configuration variable with 'transactional' property to indicate the 
> table is ACID.




