[jira] [Updated] (HIVE-9956) use BigDecimal.valueOf instead of new in TestFileDump

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9956:
--
Attachment: HIVE-9956.1.patch

patch #1

> use BigDecimal.valueOf instead of new in TestFileDump
> -
>
> Key: HIVE-9956
> URL: https://issues.apache.org/jira/browse/HIVE-9956
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9956.1.patch
>
>
> TestFileDump builds a data row where one of the columns is a BigDecimal.
> The test adds value 2.
> There are 2 ways to create a BigDecimal object:
> 1. use new
> 2. use valueOf
> In this particular case:
> 1. "new" will create 2.222153
> 2. valueOf will use the canonical String representation and the result will 
> be 2.
> Probably we should use valueOf to create the BigDecimal object.
> TestTimestampWritable and TestHCatStores use valueOf
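> For illustration, a minimal sketch of the difference (2.2 is only a sample value
> here, not necessarily the value used by TestFileDump):
> {code}
> import java.math.BigDecimal;
>
> public class BigDecimalDemo {
>   public static void main(String[] args) {
>     // new BigDecimal(double) keeps the exact binary representation of the double
>     System.out.println(new BigDecimal(2.2));
>     // prints 2.20000000000000017763568394002504646778106689453125
>
>     // BigDecimal.valueOf(double) goes through Double.toString(), the canonical form
>     System.out.println(BigDecimal.valueOf(2.2));
>     // prints 2.2
>   }
> }
> {code}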



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9956) use BigDecimal.valueOf instead of new in TestFileDump

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9956:
--
Component/s: File Formats

> use BigDecimal.valueOf instead of new in TestFileDump
> -
>
> Key: HIVE-9956
> URL: https://issues.apache.org/jira/browse/HIVE-9956
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> TestFileDump builds a data row where one of the columns is a BigDecimal.
> The test adds value 2.
> There are 2 ways to create a BigDecimal object:
> 1. use new
> 2. use valueOf
> In this particular case:
> 1. "new" will create 2.222153
> 2. valueOf will use the canonical String representation and the result will 
> be 2.
> Probably we should use valueOf to create the BigDecimal object.
> TestTimestampWritable and TestHCatStores use valueOf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9877) Beeline cannot run multiple statements in the same row

2015-03-12 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360010#comment-14360010
 ] 

Lefty Leverenz commented on HIVE-9877:
--

Doc note:  This needs to be documented for 1.2.0 in Beeline Command Options.

* [HiveServer2 Clients -- Beeline Command Options | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions]

> Beeline cannot run multiple statements in the same row
> --
>
> Key: HIVE-9877
> URL: https://issues.apache.org/jira/browse/HIVE-9877
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.12.0
> Environment: Oracle Linux 6.5, x64, Cloudera 5.1.3, Hive 0.12.0
>Reporter: Zoltan Fedor
>Assignee: Chaoyu Tang
>  Labels: TODOC1.2
> Fix For: 1.2.0
>
> Attachments: HIVE-9877.patch, HIVE-9877.patch
>
>
> I'm trying to switch from hive cli to beeline and found the below working 
> with hive cli, but not with beeline.
> This works in hive cli:
> $ hive -e "USE my_db;SHOW TABLES;" 
> The same does not work in beeline:
> $ beeline -u jdbc:hive2://my_server.com -n my_user -p my_password -e "USE 
> my_db;SHOW TABLES;"
> Error: Error while compiling statement: FAILED: ParseException line 1:9 
> missing EOF at ';' near 'my_db' (state=42000,code=4)
> Beeline version 0.12.0-cdh5.1.3 by Apache Hive 
> I have also tried with beeline -f [filename]
> The issue is the same, except (!) when the two statements are listed in 
> separate lines in the file supplied via the -f parameter.
> So when using 
> beeline -f my.hql
> This works:
> my.hql:
> USE my_db;
> SHOW TABLES;
> This does not work:
> my.hql:
> USE my_db;SHOW TABLES;
> $ beeline -u jdbc:hive2://my_server.com -n my_user -p my_password -f my.hql
> Connected to: Apache Hive (version 0.12.0-cdh5.1.3)
> Driver: Hive JDBC (version 0.12.0-cdh5.1.3)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 0.12.0-cdh5.1.3 by Apache Hive
> 0: jdbc:hive2://my_server.com> USE my_db;SHOW TABLES;
> Error: Error while compiling statement: FAILED: ParseException line 1:9 
> missing EOF at ';' near 'my_db' (state=42000,code=4)
> Closing: org.apache.hive.jdbc.HiveConnection
> How to reproduce:
> Run any type of multiple statements with beeline where the statements are in 
> the same line separated by ; whether using "beeline -e [statement]" or 
> "beeline -f [file]"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9877) Beeline cannot run multiple statements in the same row

2015-03-12 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-9877:
-
Labels: TODOC1.2  (was: )

> Beeline cannot run multiple statements in the same row
> --
>
> Key: HIVE-9877
> URL: https://issues.apache.org/jira/browse/HIVE-9877
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 0.12.0
> Environment: Oracle Linux 6.5, x64, Cloudera 5.1.3, Hive 0.12.0
>Reporter: Zoltan Fedor
>Assignee: Chaoyu Tang
>  Labels: TODOC1.2
> Fix For: 1.2.0
>
> Attachments: HIVE-9877.patch, HIVE-9877.patch
>
>
> I'm trying to switch from hive cli to beeline and found the below working 
> with hive cli, but not with beeline.
> This works in hive cli:
> $ hive -e "USE my_db;SHOW TABLES;" 
> The same does not work in beeline:
> $ beeline -u jdbc:hive2://my_server.com -n my_user -p my_password -e "USE 
> my_db;SHOW TABLES;"
> Error: Error while compiling statement: FAILED: ParseException line 1:9 
> missing EOF at ';' near 'my_db' (state=42000,code=4)
> Beeline version 0.12.0-cdh5.1.3 by Apache Hive 
> I have also tried with beeline -f [filename]
> The issue is the same, except (!) when the two statements are listed in 
> separate lines in the file supplied via the -f parameter.
> So when using 
> beeline -f my.hql
> This works:
> my.hql:
> USE my_db;
> SHOW TABLES;
> This does not work:
> my.hql:
> USE my_db;SHOW TABLES;
> $ beeline -u jdbc:hive2://my_server.com -n my_user -p my_password -f my.hql
> Connected to: Apache Hive (version 0.12.0-cdh5.1.3)
> Driver: Hive JDBC (version 0.12.0-cdh5.1.3)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 0.12.0-cdh5.1.3 by Apache Hive
> 0: jdbc:hive2://my_server.com> USE my_db;SHOW TABLES;
> Error: Error while compiling statement: FAILED: ParseException line 1:9 
> missing EOF at ';' near 'my_db' (state=42000,code=4)
> Closing: org.apache.hive.jdbc.HiveConnection
> How to reproduce:
> Run any type of multiple statements with beeline where the statements are in 
> the same line separated by ; whether using "beeline -e [statement]" or 
> "beeline -f [file]"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Attachment: HIVE-9955.1.patch

patch #1

> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9955.1.patch
>
>
> Found several issues with TestVectorizedRowBatchCtx:
> 1. compares byte[] using equals() method
> 2. creates RuntimeException but does not throw it
> 3. uses assertEquals to compare String with boolean
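> For issue 1, a minimal illustration (sample data, not the actual test values):
> {code}
> import java.util.Arrays;
>
> public class ByteArrayEqualsDemo {
>   public static void main(String[] args) {
>     byte[] a = {1, 2, 3};
>     byte[] b = {1, 2, 3};
>     System.out.println(a.equals(b));         // false: arrays only have reference equality
>     System.out.println(Arrays.equals(a, b)); // true: element-by-element comparison
>     // in JUnit, assertArrayEquals(a, b) is the matching assertion
>   }
> }
> {code}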



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Description: 
Found several issues with TestVectorizedRowBatchCtx:
1. compares byte[] using equals() method
2. creates RuntimeException but does not throw it
3. uses assertEquals to compare String with boolean


  was:
Found several issues TestVectorizedRowBatchCtx
1. compares byte[] using equals() method
2. creates RuntimeException but does not throw it
3. uses assertEquals to compare String with boolean



> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9955.1.patch
>
>
> Found several issues with TestVectorizedRowBatchCtx:
> 1. compares byte[] using equals() method
> 2. creates RuntimeException but does not throw it
> 3. uses assertEquals to compare String with boolean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Description: 
Found several issues TestVectorizedRowBatchCtx
1. compares byte[] using equals() method
2. creates RuntimeException but does not throw it
3. uses assertEquals to compare String with boolean


  was:
Found several issues
1. compares byte[] using equals() method
2. creates RuntimeException but does not throw it
3. uses assertEquals to compare String with boolean


> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> Found several issues TestVectorizedRowBatchCtx
> 1. compares byte[] using equals() method
> 2. creates RuntimeException but does not throw it
> 3. uses assertEquals to compare String with boolean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Description: 
Found several issues
1. compares byte[] using equals() method
2. creates RuntimeException but does not throw it
3. uses assertEquals to compare String with boolean

  was:
Found several issues
1. TestVectorizedRowBatchCtx
creates RuntimeException but does not throw it
2. 


> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> Found several issues
> 1. compares byte[] using equals() method
> 2. creates RuntimeException but does not throw it
> 3. uses assertEquals to compare String with boolean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Description: 
Found several issues
1. TestVectorizedRowBatchCtx
creates RuntimeException but does not throw it
2. 

  was:
1. creates RuntimeException but does not throw it
2. 


> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> Found several issues
> 1. TestVectorizedRowBatchCtx
> creates RuntimeException but does not throw it
> 2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9955:
--
Description: 
1. creates RuntimeException but does not throw it
2. 

> TestVectorizedRowBatchCtx compares byte[] using equals() method
> ---
>
> Key: HIVE-9955
> URL: https://issues.apache.org/jira/browse/HIVE-9955
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> 1. creates RuntimeException but does not throw it
> 2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9947) ScriptOperator replaceAll uses unescaped dot and result is not assigned

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359984#comment-14359984
 ] 

Hive QA commented on HIVE-9947:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704320/HIVE-9947.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7762 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_acid
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3026/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3026/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3026/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704320 - PreCommit-HIVE-TRUNK-Build

> ScriptOperator replaceAll uses unescaped dot and result is not assigned
> ---
>
> Key: HIVE-9947
> URL: https://issues.apache.org/jira/browse/HIVE-9947
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9947.1.patch
>
>
> ScriptOperator line 155
> {code}
> //now
> b.replaceAll(".", "_");
> // should be
> b = b.replace('.', '_');
> {code}
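> A small standalone sketch of why both parts matter (not the actual ScriptOperator code):
> {code}
> public class ReplaceAllDemo {
>   public static void main(String[] args) {
>     String b = "a.b.c";
>     // replaceAll takes a regex: an unescaped "." matches every character,
>     // and the result is discarded because Strings are immutable
>     b.replaceAll(".", "_");
>     System.out.println(b);                            // a.b.c (unchanged)
>     System.out.println("a.b.c".replaceAll(".", "_")); // _____
>     System.out.println("a.b.c".replace('.', '_'));    // a_b_c
>   }
> }
> {code}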



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9954) UDFJson uses the == operator to compare Strings

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9954:
--
Attachment: HIVE-9954.1.patch

patch #1

> UDFJson uses the == operator to compare Strings
> ---
>
> Key: HIVE-9954
> URL: https://issues.apache.org/jira/browse/HIVE-9954
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9954.1.patch
>
>
> {code}
> if (jsonString == null || jsonString == "" || pathString == null
> || pathString == "") {
>   return null;
> }
> {code}
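> A possible null-safe rewrite, just as a sketch (the signature is simplified and may
> not match the attached patch):
> {code}
> // == compares String references, so jsonString == "" is almost never true;
> // comparing contents is what was intended:
> static String evaluate(String jsonString, String pathString) {
>   if (jsonString == null || jsonString.isEmpty()
>       || pathString == null || pathString.isEmpty()) {
>     return null;
>   }
>   // ... rest of the UDF logic ...
>   return jsonString;
> }
> {code}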



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9953) fix NPE in WindowingTableFunction

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9953:
--
Attachment: HIVE-9953.1.patch

patch #1

> fix NPE in WindowingTableFunction
> -
>
> Key: HIVE-9953
> URL: https://issues.apache.org/jira/browse/HIVE-9953
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Trivial
> Attachments: HIVE-9953.1.patch
>
>
> WindowingTableFunction line 1193
> {code}
> // now
> return (s1 == null && s2 == null) || s1.equals(s2);
> // should be
> return (s1 == null && s2 == null) || (s1 != null && s1.equals(s2));
> {code}
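> Equivalently, java.util.Objects.equals is null-safe on both sides (assuming Java 7+
> is available; sketch only):
> {code}
> import java.util.Objects;
>
> static boolean sameString(String s1, String s2) {
>   // true if both are null, false if exactly one is null, otherwise content comparison
>   return Objects.equals(s1, s2);
> }
> {code}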



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9953) fix NPE in WindowingTableFunction

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9953:
--
Component/s: UDF

> fix NPE in WindowingTableFunction
> -
>
> Key: HIVE-9953
> URL: https://issues.apache.org/jira/browse/HIVE-9953
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Trivial
>
> WindowingTableFunction line 1193
> {code}
> // now
> return (s1 == null && s2 == null) || s1.equals(s2);
> // should be
> return (s1 == null && s2 == null) || (s1 != null && s1.equals(s2));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9952) fix NPE in CorrelationUtilities

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9952:
--
Description: CorrelationUtilities.isNullOperator will throw NPE if operator 
is null  (was: CorrelationUtilities.isNullOperator will throw NPE is operator 
is null)

> fix NPE in CorrelationUtilities
> ---
>
> Key: HIVE-9952
> URL: https://issues.apache.org/jira/browse/HIVE-9952
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9952.1.patch
>
>
> CorrelationUtilities.isNullOperator will throw NPE if operator is null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9952) fix NPE in CorrelationUtilities

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9952:
--
Attachment: HIVE-9952.1.patch

patch #1

> fix NPE in CorrelationUtilities
> ---
>
> Key: HIVE-9952
> URL: https://issues.apache.org/jira/browse/HIVE-9952
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9952.1.patch
>
>
> CorrelationUtilities.isNullOperator will throw NPE if operator is null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9945) FunctionTask.conf hides Task.conf field

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359921#comment-14359921
 ] 

Hive QA commented on HIVE-9945:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704301/HIVE-9945.1.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3025/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3025/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3025/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704301 - PreCommit-HIVE-TRUNK-Build

> FunctionTask.conf hides Task.conf field
> ---
>
> Key: HIVE-9945
> URL: https://issues.apache.org/jira/browse/HIVE-9945
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9945.1.patch
>
>
> Task class has protected field conf.
> FunctionTask can use it instead of creating another conf field.
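> A generic illustration of the field-hiding problem (names and types simplified,
> not the actual Hive classes):
> {code}
> class Task {
>   protected int conf = 1;
> }
>
> class FunctionTask extends Task {
>   private int conf = 2;   // hides Task.conf; the two fields can silently diverge
>
>   void show() {
>     System.out.println(conf);        // 2 - the subclass field
>     System.out.println(super.conf);  // 1 - the hidden parent field
>   }
>
>   public static void main(String[] args) {
>     new FunctionTask().show();
>   }
> }
> {code}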



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9946) CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9946:
--
Attachment: HIVE-9946.cbo.patch

[~ashutoshc], this patch provides more information to the MD providers to 
figure out the bucketing properly. Can you take a look? Thanks

> CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]
> ---
>
> Key: HIVE-9946
> URL: https://issues.apache.org/jira/browse/HIVE-9946
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9946.cbo.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9951) VectorizedRCFileRecordReader creates Exception but does not throw it

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9951:
--
Attachment: HIVE-9951.1.patch

patch #1

> VectorizedRCFileRecordReader creates Exception but does not throw it
> 
>
> Key: HIVE-9951
> URL: https://issues.apache.org/jira/browse/HIVE-9951
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats, Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9951.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359862#comment-14359862
 ] 

Hive QA commented on HIVE-9944:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704298/HIVE-9944.1.patch

{color:green}SUCCESS:{color} +1 7762 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3024/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3024/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3024/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704298 - PreCommit-HIVE-TRUNK-Build

> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9944.1.patch
>
>
> DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate 
> String with array[] and log the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly
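> A small illustration (sample data, not the actual operator state):
> {code}
> import java.util.Arrays;
>
> public class ArrayLogDemo {
>   public static void main(String[] args) {
>     int[] keys = {1, 2, 3};
>     System.out.println("keys: " + keys);                  // prints something like keys: [I@1b6d3586
>     System.out.println("keys: " + Arrays.toString(keys)); // prints keys: [1, 2, 3]
>   }
> }
> {code}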



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9950) fix rehash in CuckooSetBytes and CuckooSetLong

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9950:
--
Attachment: HIVE-9950.1.patch

patch #1

> fix rehash in CuckooSetBytes and CuckooSetLong
> --
>
> Key: HIVE-9950
> URL: https://issues.apache.org/jira/browse/HIVE-9950
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9950.1.patch
>
>
> both classes have the following
> {code}
> if (prev1 == null) {
>   prev1 = t1;
>   prev1 = t2;
> }
> {code}
> most probably it should be
> {code}
> if (prev1 == null) {
>   prev1 = t1;
>   prev2 = t2;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9948) SparkUtilities.getFileName passes File.separator to String.split() method

2015-03-12 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359832#comment-14359832
 ] 

Xuefu Zhang commented on HIVE-9948:
---

+1 pending on test.

> SparkUtilities.getFileName passes File.separator to String.split() method
> -
>
> Key: HIVE-9948
> URL: https://issues.apache.org/jira/browse/HIVE-9948
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9948.1.patch
>
>
> The String.split() method expects a regex, so File.separator cannot safely be 
> passed to split().
> In this particular case we can use FilenameUtils.getName to get the file name.
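> A short sketch of the pitfall and the suggested alternative (paths are samples;
> FilenameUtils comes from the Apache Commons IO dependency):
> {code}
> import org.apache.commons.io.FilenameUtils;
>
> public class FileNameDemo {
>   public static void main(String[] args) {
>     // On Unix File.separator is "/", which happens to be a valid regex, but on
>     // Windows it is "\", and "some\\path".split(File.separator) throws
>     // PatternSyntaxException because a lone backslash is an incomplete regex.
>     System.out.println(FilenameUtils.getName("/tmp/foo/bar.jar"));      // bar.jar
>     System.out.println(FilenameUtils.getName("C:\\tmp\\foo\\bar.jar")); // bar.jar
>   }
> }
> {code}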



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9756) LLAP: use log4j 2 for llap

2015-03-12 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359826#comment-14359826
 ] 

Gopal V commented on HIVE-9756:
---

[~sseth]: turns out I have to switch Tez over to log4j, since the logging 
classes are being provided by tez.tar.gz 

> LLAP: use log4j 2 for llap
> --
>
> Key: HIVE-9756
> URL: https://issues.apache.org/jira/browse/HIVE-9756
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Gunther Hagleitner
>Assignee: Gopal V
>
> For the INFO logging, we'll need to use the log4j-jcl 2.x upgrade-path to get 
> throughput friendly logging.
> http://logging.apache.org/log4j/2.0/manual/async.html#Performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9949) remove not used parameters from String.format

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9949:
--
Attachment: HIVE-9949.1.patch

patch #1

> remove not used parameters from String.format
> -
>
> Key: HIVE-9949
> URL: https://issues.apache.org/jira/browse/HIVE-9949
> Project: Hive
>  Issue Type: Bug
>  Components: Spark, Tez
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Trivial
> Attachments: HIVE-9949.1.patch
>
>
> SparkJobMonitor (79) and TezJobMonitor (788) call
> {code}
> String.format("%s: -/-\t", stageName, complete, total)
> {code}
> complete, total can be removed because pattern uses only the first parameter 
> stageName



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9949) remove not used parameters from String.format

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9949:
--
Component/s: Tez
 Spark

> remove not used parameters from String.format
> -
>
> Key: HIVE-9949
> URL: https://issues.apache.org/jira/browse/HIVE-9949
> Project: Hive
>  Issue Type: Bug
>  Components: Spark, Tez
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Trivial
>
> SparkJobMonitor (79) and TezJobMonitor (788) call
> {code}
> String.format("%s: -/-\t", stageName, complete, total)
> {code}
> complete, total can be removed because pattern uses only the first parameter 
> stageName



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9948) SparkUtilities.getFileName passes File.separator to String.split() method

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9948:
--
Attachment: HIVE-9948.1.patch

patch #1

> SparkUtilities.getFileName passes File.separator to String.split() method
> -
>
> Key: HIVE-9948
> URL: https://issues.apache.org/jira/browse/HIVE-9948
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9948.1.patch
>
>
> The String.split() method expects a regex, so File.separator cannot safely be 
> passed to split().
> In this particular case we can use FilenameUtils.getName to get the file name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9729) LLAP: design and implement proper metadata cache

2015-03-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359786#comment-14359786
 ] 

Sergey Shelukhin commented on HIVE-9729:


Most of this is now committed, need to add some more tests (q file passes)

> LLAP: design and implement proper metadata cache
> 
>
> Key: HIVE-9729
> URL: https://issues.apache.org/jira/browse/HIVE-9729
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: llap
>
>
> Simple approach: add external priorities to data cache, read metadata parts 
> of orc file into it. Advantage: simple; consistent management (no need to 
> coordinate sizes and eviction between data and metadata caches, etc); 
> disadvantage - have to decode every time.
> Maybe add decoded metadata cache on top - fixed size, small and 
> opportunistic? Or some other approach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9915) Allow specifying file format for managed tables

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359772#comment-14359772
 ] 

Hive QA commented on HIVE-9915:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704294/HIVE-9915.2.patch

{color:green}SUCCESS:{color} +1 7763 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3023/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3023/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3023/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704294 - PreCommit-HIVE-TRUNK-Build

> Allow specifying file format for managed tables
> ---
>
> Key: HIVE-9915
> URL: https://issues.apache.org/jira/browse/HIVE-9915
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9915.1.patch, HIVE-9915.2.patch
>
>
> We already allow setting a system wide default format. In some cases it's 
> useful though to specify this only for managed tables, or distinguish 
> external and managed via two variables. You might want to set a more 
> efficient (than text) format for managed tables, but leave external to text 
> (as they often are log files etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9947) ScriptOperator replaceAll uses unescaped dot and result is not assigned

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9947:
--
Attachment: HIVE-9947.1.patch

patch #1

> ScriptOperator replaceAll uses unescaped dot and result is not assigned
> ---
>
> Key: HIVE-9947
> URL: https://issues.apache.org/jira/browse/HIVE-9947
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9947.1.patch
>
>
> ScriptOperator line 155
> {code}
> //now
> b.replaceAll(".", "_");
> // should be
> b = b.replace('.', '_');
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9921) Compile hive failed

2015-03-12 Thread dqpylf (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359758#comment-14359758
 ] 

dqpylf commented on HIVE-9921:
--

Do you mean the branch? I use the original Apache Hive.
I am not sure which repo you mean; I think you mean the Maven repository. I use 
the following mirror:

  <mirror>
    <id>nexus-osc</id>
    <mirrorOf>*</mirrorOf>
    <name>Nexusosc</name>
    <url>http://maven.oschina.net/content/groups/public/</url>
  </mirror>

I suspect the mirror is missing some jar files.

> Compile hive failed
> ---
>
> Key: HIVE-9921
> URL: https://issues.apache.org/jira/browse/HIVE-9921
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
> Environment: red hat linux 6.3
>Reporter: dqpylf
> Fix For: 1.1.0
>
>
> Hi,
> When I compiled Hive, it failed. More detail follows:
>  [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Hive ... SUCCESS [ 37.774 
> s]
> [INFO] Hive Shims Common .. SUCCESS [ 31.780 
> s]
> [INFO] Hive Shims 0.20S ... SUCCESS [  8.757 
> s]
> [INFO] Hive Shims 0.23  SUCCESS [ 26.350 
> s]
> [INFO] Hive Shims Scheduler ... SUCCESS [  8.711 
> s]
> [INFO] Hive Shims . SUCCESS [  8.684 
> s]
> [INFO] Hive Common  SUCCESS [ 30.964 
> s]
> [INFO] Hive Serde . SUCCESS [01:01 
> min]
> [INFO] Hive Metastore . SUCCESS [02:03 
> min]
> [INFO] Hive Ant Utilities . SUCCESS [  5.928 
> s]
> [INFO] Spark Remote Client  SUCCESS [ 59.160 
> s]
> [INFO] Hive Query Language  FAILURE [ 39.002 
> s]
> [INFO] Hive Service ... SKIPPED
> [INFO] Hive Accumulo Handler .. SKIPPED
> [INFO] Hive JDBC .. SKIPPED
> [INFO] Hive Beeline ... SKIPPED
> [INFO] Hive CLI ... SKIPPED
> [INFO] Hive Contrib ... SKIPPED
> [INFO] Hive HBase Handler . SKIPPED
> [INFO] Hive HCatalog .. SKIPPED
> [INFO] Hive HCatalog Core . SKIPPED
> [INFO] Hive HCatalog Pig Adapter .. SKIPPED
> [INFO] Hive HCatalog Server Extensions  SKIPPED
> [INFO] Hive HCatalog Webhcat Java Client .. SKIPPED
> [INFO] Hive HCatalog Webhcat .. SKIPPED
> [INFO] Hive HCatalog Streaming  SKIPPED
> [INFO] Hive HWI ... SKIPPED
> [INFO] Hive ODBC .. SKIPPED
> [INFO] Hive Shims Aggregator .. SKIPPED
> [INFO] Hive TestUtils . SKIPPED
> [INFO] Hive Packaging . SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 07:31 min
> [INFO] Finished at: 2015-03-10T20:50:17-07:00
> [INFO] Final Memory: 74M/441M
> [INFO] 
> 
> [ERROR] Failed to execute goal on project hive-exec: Could not resolve 
> dependencies for project org.apache.hive:hive-exec:jar:1.1.0: The following 
> artifacts could not be resolved: 
> org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde, 
> eigenbase:eigenbase-properties:jar:1.1.4: Could not find artifact 
> org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde in nexus-osc 
> (http://maven.oschina.net/content/groups/public/) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hive-exec
> thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9947) ScriptOperator replaceAll uses unescaped dot and result is not assigned

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9947:
--
Description: 
ScriptOperator line 155
{code}
//now
b.replaceAll(".", "_");
// should be
b = b.replace('.', '_');
{code}

  was:
{code}
//now
b.replaceAll(".", "_");
// should be
b = b.replace('.', '_');
{code}


> ScriptOperator replaceAll uses unescaped dot and result is not assigned
> ---
>
> Key: HIVE-9947
> URL: https://issues.apache.org/jira/browse/HIVE-9947
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> ScriptOperator line 155
> {code}
> //now
> b.replaceAll(".", "_");
> // should be
> b = b.replace('.', '_');
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9947) ScriptOperator replaceAll uses unescaped dot and result is not assigned

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9947:
--
Description: 
{code}
//now
b.replaceAll(".", "_");
// should be
b = b.replace('.', '_');
{code}

> ScriptOperator replaceAll uses unescaped dot and result is not assigned
> ---
>
> Key: HIVE-9947
> URL: https://issues.apache.org/jira/browse/HIVE-9947
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> {code}
> //now
> b.replaceAll(".", "_");
> // should be
> b = b.replace('.', '_');
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7428) OrcSplit fails to account for columnar projections in its size estimates

2015-03-12 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7428:
--
Assignee: Prasanth Jayachandran  (was: Gopal V)

> OrcSplit fails to account for columnar projections in its size estimates
> 
>
> Key: HIVE-7428
> URL: https://issues.apache.org/jira/browse/HIVE-7428
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>
> Currently, ORC generates splits based on stripe offset + stripe length.
> This means that the splits for all columnar projections are exactly the same 
> size, despite reading the footer which gives the estimated sizes for each 
> column.
> This is a hold-out from FileSplit which uses getLen() as the I/O cost of 
> reading a file in a map-task.
> RCFile didn't have a footer with column statistics information, but for ORC 
> this would be extremely useful to reduce task overheads when processing 
> extremely wide tables with highly selective column projections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9946) CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9946:
--
Fix Version/s: (was: 1.2.0)
   cbo-branch

> CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]
> ---
>
> Key: HIVE-9946
> URL: https://issues.apache.org/jira/browse/HIVE-9946
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9915) Allow specifying file format for managed tables

2015-03-12 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359716#comment-14359716
 ] 

Gopal V commented on HIVE-9915:
---

[~leftylev]: While reviewing Gunther's patch I ran into some questions that the 
doc answered, but the answers weren't obvious.

The quadrant of native vs non-native and external vs managed needs to be drawn 
somehow for the docs to explain this feature.

|| \ || Native || Non-Native ||
| Managed | hive.default.fileformat.managed (or fall back to the other) | not covered by default file-formats |
| External | hive.default.fileformat | not covered by default file-formats |

> Allow specifying file format for managed tables
> ---
>
> Key: HIVE-9915
> URL: https://issues.apache.org/jira/browse/HIVE-9915
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9915.1.patch, HIVE-9915.2.patch
>
>
> We already allow setting a system wide default format. In some cases it's 
> useful though to specify this only for managed tables, or distinguish 
> external and managed via two variables. You might want to set a more 
> efficient (than text) format for managed tables, but leave external to text 
> (as they often are log files etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9946) CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9946:
--
Affects Version/s: cbo-branch

> CBO (Calcite Return Path): Metadata provider for bucketing [CBO branch]
> ---
>
> Key: HIVE-9946
> URL: https://issues.apache.org/jira/browse/HIVE-9946
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9915) Allow specifying file format for managed tables

2015-03-12 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359709#comment-14359709
 ] 

Lefty Leverenz commented on HIVE-9915:
--

Looks good, thanks Gunther.

> Allow specifying file format for managed tables
> ---
>
> Key: HIVE-9915
> URL: https://issues.apache.org/jira/browse/HIVE-9915
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9915.1.patch, HIVE-9915.2.patch
>
>
> We already allow setting a system wide default format. In some cases it's 
> useful though to specify this only for managed tables, or distinguish 
> external and managed via two variables. You might want to set a more 
> efficient (than text) format for managed tables, but leave external to text 
> (as they often are log files etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9945) FunctionTask.conf hides Task.conf field

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9945:
--
Attachment: HIVE-9945.1.patch

patch #1

> FunctionTask.conf hides Task.conf field
> ---
>
> Key: HIVE-9945
> URL: https://issues.apache.org/jira/browse/HIVE-9945
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9945.1.patch
>
>
> Task class has protected field conf.
> FunctionTask can use it instead of creating another conf field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9942) Implement functions methods in HBaseStore [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9942:
-
Attachment: HIVE-9942-2.patch

Added functions to hbase schema tool.  Also added new FUNC_TABLE to list of 
tables to install in HBaseReadWrite.

> Implement functions methods in HBaseStore [hbase-metastore branch]
> --
>
> Key: HIVE-9942
> URL: https://issues.apache.org/jira/browse/HIVE-9942
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: hbase-metastore-branch
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9942-2.patch, HIVE-9942.patch
>
>
> All the methods relating to functions are not yet implemented.  We need to 
> add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359689#comment-14359689
 ] 

Alan Gates commented on HIVE-9944:
--

+1

> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9944.1.patch
>
>
> DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate 
> String with array[] and log the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9944:
--
Attachment: HIVE-9944.1.patch

patch #1

> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
> Attachments: HIVE-9944.1.patch
>
>
> DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate 
> String with array[] and log the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9944:
--
Component/s: Logical Optimizer

> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate 
> String with array[] and log the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9944:
--
Description: 
DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate String 
with array[] and log the resulting message.
array[] uses Object.toString which returns className + @ + hashCode hex
we can use Arrays.toString() to convert array[] to string properly

  was:
DemuxOperator and ListBucketingPruner concatenate String with array[] and log 
the resulting message.
array[] uses Object.toString which returns className + @ + hashCode hex
we can use Arrays.toString() to convert array[] to string properly


> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> DemuxOperator (181) and ListBucketingPruner (194, 219, 361) concatenate 
> String with array[] and log the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9725) Need to add indices and privileges to HBaseImport and HBaseSchemaTool [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9725:
-
Summary: Need to add indices and privileges to HBaseImport and 
HBaseSchemaTool [hbase-metastore branch]  (was: Need to add indices and 
privileges to HBaseImport [hbase-metastore branch])

> Need to add indices and privileges to HBaseImport and HBaseSchemaTool 
> [hbase-metastore branch]
> --
>
> Key: HIVE-9725
> URL: https://issues.apache.org/jira/browse/HIVE-9725
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>
> HBaseImport doesn't include these objects because they weren't supported in 
> the metastore yet when it was created.  These need to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9944) convert array to string properly in log messages

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9944:
--
Description: 
DemuxOperator and ListBucketingPruner concatenate String with array[] and log 
the resulting message.
array[] uses Object.toString which returns className + @ + hashCode hex
we can use Arrays.toString() to convert array[] to string properly

> convert array to string properly in log messages
> 
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> DemuxOperator and ListBucketingPruner concatenate String with array[] and log 
> the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9944) Convert array[] to string properly in log messages

2015-03-12 Thread Alexander Pivovarov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Pivovarov updated HIVE-9944:
--
Summary: Convert array[] to string properly in log messages  (was: convert 
array to string properly in log messages)

> Convert array[] to string properly in log messages
> --
>
> Key: HIVE-9944
> URL: https://issues.apache.org/jira/browse/HIVE-9944
> Project: Hive
>  Issue Type: Bug
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
>Priority: Minor
>
> DemuxOperator and ListBucketingPruner concatenate String with array[] and log 
> the resulting message.
> array[] uses Object.toString which returns className + @ + hashCode hex
> we can use Arrays.toString() to convert array[] to string properly



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9915) Allow specifying file format for managed tables

2015-03-12 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-9915:
-
Attachment: HIVE-9915.2.patch

.2 addresses [~leftylev]'s comment and fixes the test case (apparently using a 
non-existent path works on my machine but not for the build).

> Allow specifying file format for managed tables
> ---
>
> Key: HIVE-9915
> URL: https://issues.apache.org/jira/browse/HIVE-9915
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-9915.1.patch, HIVE-9915.2.patch
>
>
> We already allow setting a system wide default format. In some cases it's 
> useful though to specify this only for managed tables, or distinguish 
> external and managed via two variables. You might want to set a more 
> efficient (than text) format for managed tables, but leave external to text 
> (as they often are log files etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9856) CBO (Calcite Return Path): Join cost calculation improvements and algorithm selection implementation [CBO branch]

2015-03-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9856:
---
Component/s: Logical Optimizer

> CBO (Calcite Return Path): Join cost calculation improvements and algorithm 
> selection implementation [CBO branch]
> -
>
> Key: HIVE-9856
> URL: https://issues.apache.org/jira/browse/HIVE-9856
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Logical Optimizer
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9856.01.cbo.patch, HIVE-9856.02.cbo.patch, 
> HIVE-9856.03.cbo.patch, HIVE-9856.cbo.patch
>
>
> This patch implements more precise cost functions for join operators that may 
> help us decide which join algorithm we want to execute directly in the CBO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9856) CBO (Calcite Return Path): Join cost calculation improvements and algorithm selection implementation [CBO branch]

2015-03-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-9856.

Resolution: Fixed

Committed to branch. Thanks, Jesus!

> CBO (Calcite Return Path): Join cost calculation improvements and algorithm 
> selection implementation [CBO branch]
> -
>
> Key: HIVE-9856
> URL: https://issues.apache.org/jira/browse/HIVE-9856
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9856.01.cbo.patch, HIVE-9856.02.cbo.patch, 
> HIVE-9856.03.cbo.patch, HIVE-9856.cbo.patch
>
>
> This patch implements more precise cost functions for join operators that may 
> help us decide which join algorithm we want to execute directly in the CBO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9942) Implement functions methods in HBaseStore [hbase-metastore branch]

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359566#comment-14359566
 ] 

Hive QA commented on HIVE-9942:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704271/HIVE-9942.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3022/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3022/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3022/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-3022/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'common/src/java/org/apache/hive/common/util/DateUtils.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyPrimitiveObjectInspectorFactory.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryFactory.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/PrimitiveObjectInspector.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorUtils.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java'
Reverted 'serde/src/gen/thrift/gen-py/org_apache_hadoop_hive_serde/constants.py'
Reverted 'serde/src/gen/thrift/gen-cpp/serde_constants.cpp'
Reverted 'serde/src/gen/thrift/gen-cpp/serde_constants.h'
Reverted 'serde/src/gen/thrift/gen-rb/serde_constants.rb'
Reverted 
'serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java'
Reverted 'serde/src/gen/thrift/gen-php/org/apache/hadoop/hive/serde/Types.php'
Reverted 'serde/if/serde.thrift'
Reverted 'ql/src/test/results/clientnegative/invalid_arithmetic_type.q.out'
Reverted 
'ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFOPPlus.java'
Reverted 
'ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFOPMinus.java'
Reverted 'ql/src/test/queries/clientnegative/invalid_arithmetic_type.q'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/TypeConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/RexNodeConverter.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTBuilder.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g'
Reverted 'ql/src/ja

[jira] [Commented] (HIVE-9792) Support interval type in expressions/predicates

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359563#comment-14359563
 ] 

Hive QA commented on HIVE-9792:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704245/HIVE-9792.6.patch

{color:green}SUCCESS:{color} +1 7809 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3021/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3021/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3021/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704245 - PreCommit-HIVE-TRUNK-Build

> Support interval type in expressions/predicates 
> 
>
> Key: HIVE-9792
> URL: https://issues.apache.org/jira/browse/HIVE-9792
> Project: Hive
>  Issue Type: Sub-task
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-9792.1.patch, HIVE-9792.2.patch, HIVE-9792.3.patch, 
> HIVE-9792.4.patch, HIVE-9792.5.patch, HIVE-9792.6.patch
>
>
> Provide partial support for the interval year-month/interval day-time types 
> in Hive. Intervals will be usable in expressions/predicates/joins:
> {noformat}
>   select birthdate + interval '30-0' year to month as thirtieth_birthday
>   from table
>   where (current_timestamp - ts1 < interval '3 0:0:0' day to second)
> {noformat}
> This stops short of making the interval types usable as a storable 
> column type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9942) Implement functions methods in HBaseStore [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9942:
-
Attachment: HIVE-9942.patch

> Implement functions methods in HBaseStore [hbase-metastore branch]
> --
>
> Key: HIVE-9942
> URL: https://issues.apache.org/jira/browse/HIVE-9942
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: hbase-metastore-branch
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9942.patch
>
>
> All the methods relating to functions are not yet implemented.  We need to 
> add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9725) Need to add indices and privileges to HBaseImport [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359519#comment-14359519
 ] 

Alan Gates commented on HIVE-9725:
--

Functions are added in HIVE-9942, so removing from this ticket.

> Need to add indices and privileges to HBaseImport [hbase-metastore branch]
> --
>
> Key: HIVE-9725
> URL: https://issues.apache.org/jira/browse/HIVE-9725
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>
> HBaseImport doesn't include these objects because they weren't supported in 
> the metastore yet when it was created.  These need to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9725) Need to add indices and privileges to HBaseImport [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9725:
-
Summary: Need to add indices and privileges to HBaseImport [hbase-metastore 
branch]  (was: Need to add indices, privileges, and functions to HBaseImport 
[hbase-metastore branch])

> Need to add indices and privileges to HBaseImport [hbase-metastore branch]
> --
>
> Key: HIVE-9725
> URL: https://issues.apache.org/jira/browse/HIVE-9725
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>
> HBaseImport doesn't include these objects because they weren't supported in 
> the metastore yet when it was created.  These need to be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359502#comment-14359502
 ] 

Jason Dere commented on HIVE-3454:
--

I've just started looking at the vectorized code ... it looks like the corresponding 
change to make for the vectorized path will be in MathExpr.doubleToTimestamp(). 
CC'ing [~mmccline] in case there are any more details to add here.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9943) CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9943:
--
Attachment: HIVE-9943.cbo.patch

[~jpullokkaran], the GroupingID patch for the CBO branch. Thanks

> CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]
> ---
>
> Key: HIVE-9943
> URL: https://issues.apache.org/jira/browse/HIVE-9943
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9943.cbo.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules. Further, we could ease the 
> translation of the Aggregate operator with grouping sets back to Hive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9943) CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9943:
--
Fix Version/s: (was: 1.2.0)
   cbo-branch

> CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]
> ---
>
> Key: HIVE-9943
> URL: https://issues.apache.org/jira/browse/HIVE-9943
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9943.cbo.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules. Further, we could ease the 
> translation of the Aggregate operator with grouping sets back to Hive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9943) CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9943:
--
Affects Version/s: cbo-branch

> CBO (Calcite Return Path): GroupingID translation from Calcite [CBO branch]
> ---
>
> Key: HIVE-9943
> URL: https://issues.apache.org/jira/browse/HIVE-9943
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9943.cbo.patch
>
>
> The translation from Calcite back to Hive might produce wrong results while 
> interacting with other Calcite optimization rules. Further, we could ease the 
> translation of the Aggregate operator with grouping sets back to Hive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-3454:
---
Attachment: (was: HIVE-3454.2.patch)

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9792) Support interval type in expressions/predicates

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359439#comment-14359439
 ] 

Hive QA commented on HIVE-9792:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704237/HIVE-9792.5.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7809 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_context_ngrams
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3020/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3020/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3020/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704237 - PreCommit-HIVE-TRUNK-Build

> Support interval type in expressions/predicates 
> 
>
> Key: HIVE-9792
> URL: https://issues.apache.org/jira/browse/HIVE-9792
> Project: Hive
>  Issue Type: Sub-task
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-9792.1.patch, HIVE-9792.2.patch, HIVE-9792.3.patch, 
> HIVE-9792.4.patch, HIVE-9792.5.patch, HIVE-9792.6.patch
>
>
> Provide partial support for the interval year-month/interval day-time types 
> in Hive. Intervals will be usable in expressions/predicates/joins:
> {noformat}
>   select birthdate + interval '30-0' year to month as thirtieth_birthday
>   from table
>   where (current_timestamp - ts1 < interval '3 0:0:0' day to second)
> {noformat}
> This stops short of making the interval types usable as a storable 
> column type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359406#comment-14359406
 ] 

Aihua Xu commented on HIVE-3454:


Yes, that's right. The patch changes the conversion to interpret all of the 
datatypes as seconds, which is the correct behavior. Thanks for looking.
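A minimal sketch of the arithmetic behind the reported symptom, assuming the old path treated the BIGINT as milliseconds while unix_timestamp() returns seconds:

{code}
import java.sql.Timestamp;

public class SecondsVsMillisSketch {
  public static void main(String[] args) {
    long unixSeconds = 1_347_000_000L;  // a unix_timestamp()-style value from 2012
    // Treated as milliseconds, ~1.35e9 ms is only ~15.6 days after the epoch -> 1970-01-16.
    System.out.println(new Timestamp(unixSeconds));
    // Treated as seconds (the behavior the patch standardizes on), it is the expected 2012 date.
    System.out.println(new Timestamp(unixSeconds * 1000L));
  }
}
{code}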

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359383#comment-14359383
 ] 

Jason Dere commented on HIVE-3454:
--

This has been marked patch available - which one should we be looking at - 
HIVE-3454.3.patch?

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9792) Support interval type in expressions/predicates

2015-03-12 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9792:
-
Attachment: HIVE-9792.6.patch

As [~pxiong] has pointed out to me (offline), if INTERVAL is meant to be a 
reserved word it should not be added to sql11ReservedKeywordsUsedAsIdentifier - 
that is only meant for reserved words that we want to also allow as 
identifiers. Uploading patch v6.

> Support interval type in expressions/predicates 
> 
>
> Key: HIVE-9792
> URL: https://issues.apache.org/jira/browse/HIVE-9792
> Project: Hive
>  Issue Type: Sub-task
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-9792.1.patch, HIVE-9792.2.patch, HIVE-9792.3.patch, 
> HIVE-9792.4.patch, HIVE-9792.5.patch, HIVE-9792.6.patch
>
>
> Provide partial support for the interval year-month/interval day-time types 
> in Hive. Intervals will be usable in expressions/predicates/joins:
> {noformat}
>   select birthdate + interval '30-0' year to month as thirtieth_birthday
>   from table
>   where (current_timestamp - ts1 < interval '3 0:0:0' day to second)
> {noformat}
> This stops short of making the interval types usable as a storable 
> column type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-03-12 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-9739.
--
Resolution: Duplicate

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
> java.lang.ClassCastException
> -
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC. 
> It runs when text files are used, or with text/ORC when MapReduce is used 
> instead of Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-03-12 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-9739:
-
Assignee: Matt McCline

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
> java.lang.ClassCastException
> -
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>Assignee: Matt McCline
>
> This fails when using Tez and ORC. 
> It runs when text files are used, or with text/ORC when MapReduce is used 
> instead of Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9739) Various queries fails with Tez/ORC file org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: java.lang.ClassCastException

2015-03-12 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359259#comment-14359259
 ] 

Matt McCline commented on HIVE-9739:


[~hagleitn]  Yes, the call stack looks exactly the same as HIVE-9249.  Jason is 
correct -- the only workaround without a patch is to turn off vectorization.
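The session-level switch for that workaround is hive.vectorized.execution.enabled; a minimal sketch of flipping it programmatically (the same effect as "set hive.vectorized.execution.enabled=false;" in the CLI):

{code}
import org.apache.hadoop.hive.conf.HiveConf;

public class DisableVectorizationSketch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Turn off vectorized execution for this configuration/session.
    conf.setBoolean("hive.vectorized.execution.enabled", false);
    System.out.println(conf.getBoolean("hive.vectorized.execution.enabled", true));
  }
}
{code}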

> Various queries fails with Tez/ORC file 
> org.apache.hadoop.hive.ql.exec.tez.TezTask due to Caused by: 
> java.lang.ClassCastException
> -
>
> Key: HIVE-9739
> URL: https://issues.apache.org/jira/browse/HIVE-9739
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Reporter: N Campbell
>
> This fails when using Tez and ORC. 
> It runs when text files are used, or with text/ORC when MapReduce is used 
> instead of Tez.
> Is this another example of a type issue per 
> https://issues.apache.org/jira/browse/HIVE-9735
> select rnum, c1, c2 from tset1 as t1 where exists ( select c1 from tset2 
> where c1 = t1.c1 )
> This will run in both Tez and MapReduce using a text file
> select rnum, c1, c2 from t_tset1 as t1 where exists ( select c1 from t_tset2 
> where c1 = t1.c1 )
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
>   ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast 
> to org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:311)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.processOp(VectorMapJoinOperator.java:249)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.processOp(VectorFilterOperator.java:111)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
>   ... 17 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveCharWritable cannot be cast to 
> org.apache.hadoop.hive.common.type.HiveChar
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorColumnAssignFactory$18.assignObjectValue(VectorColumnAssignFactory.java:432)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapJoinOperator.internalForward(VectorMapJoinOperator.java:196)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:299)
>   ... 24 more
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> create table  if not exists T_TSET1 (RNUM int , C1 int, C2 char(3))
>  STORED AS ORC ;
> create table  if not exists T_TSET2 (RNUM int , C1 int, C2 char(3))
>  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' 
>  STORED AS textfile ;
> TSET1 data
> 0|10|AAA
> 1|10|AAA
> 2|10|AAA
> 3|20|BBB
> 4|30|CCC
> 5|40|DDD
> 6|50|\N
> 7|60|\N
> 8|\N|AAA
> 9|\N|AAA
> 10|\N|\N
> 11|\N|\N
> TSET2 DATA
> 0|10|AAA
> 1|10|AAA
> 2|40|DDD
> 3|50|EEE
> 4|60|FFF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9792) Support interval type in expressions/predicates

2015-03-12 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-9792:
-
Attachment: HIVE-9792.5.patch

Patch v5:
- Rebasing patch with trunk, due to the parser changes in HIVE-6617. This adds 
year/month/day/hour/minute/second to the list of nonReserved words, and 
interval to the list of reserved words.
- Switches DateTimeMath to use Calendar rather than Joda; I was getting an NPE 
while trying to use DateTimeMath for vectorized intervals (work separate 
from this Jira). I suspect something in MutableDateTime was not getting 
serialized properly during plan serialization.
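A minimal sketch (not the actual DateTimeMath code) of Calendar-based year-month arithmetic of the kind that avoids carrying Joda objects through plan serialization:

{code}
import java.sql.Timestamp;
import java.util.Calendar;

public class CalendarIntervalSketch {
  // Add a year-month interval, expressed as a total month count, to a timestamp.
  static Timestamp addMonths(Timestamp ts, int totalMonths) {
    Calendar cal = Calendar.getInstance();
    cal.setTimeInMillis(ts.getTime());
    cal.add(Calendar.MONTH, totalMonths);
    Timestamp result = new Timestamp(cal.getTimeInMillis());
    result.setNanos(ts.getNanos());  // carry over the original sub-second part
    return result;
  }

  public static void main(String[] args) {
    Timestamp birthdate = Timestamp.valueOf("1985-03-12 00:00:00");
    // Roughly "birthdate + interval '30-0' year to month"
    System.out.println(addMonths(birthdate, 30 * 12));
  }
}
{code}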

> Support interval type in expressions/predicates 
> 
>
> Key: HIVE-9792
> URL: https://issues.apache.org/jira/browse/HIVE-9792
> Project: Hive
>  Issue Type: Sub-task
>  Components: Types
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-9792.1.patch, HIVE-9792.2.patch, HIVE-9792.3.patch, 
> HIVE-9792.4.patch, HIVE-9792.5.patch
>
>
> Provide partial support for the interval year-month/interval day-time types 
> in Hive. Intervals will be usable in expressions/predicates/joins:
> {noformat}
>   select birthdate + interval '30-0' year to month as thirtieth_birthday
>   from table
>   where (current_timestamp - ts1 < interval '3 0:0:0' day to second)
> {noformat}
> This stops short of making the interval types usable as a storable 
> column type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9874) Partition storage descriptors being set from table sd without copying [hbase-metastore branch]

2015-03-12 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-9874:
-
Attachment: HIVE-9874-2.patch

New version of the patch that doesn't include code from HIVE-9885

> Partition storage descriptors being set from table sd without copying 
> [hbase-metastore branch]
> --
>
> Key: HIVE-9874
> URL: https://issues.apache.org/jira/browse/HIVE-9874
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: hbase-metastore-branch
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-9874-2.patch, HIVE-9874.patch
>
>
> There are a number of places in the code where something like the following 
> is done:
> {code}
> partition.setSd(table.getSd());
> {code}
> This causes problems in the HBase metastore case because of the way it shares 
> storage descriptors when they are identical.  This means that when using a 
> storage descriptor as a template for another, we need to actually create a new 
> one.
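A hedged sketch of the intended change: Thrift-generated classes such as StorageDescriptor come with a deep-copying copy constructor, so the table's descriptor can serve as a template without being shared (assuming that is the approach taken here):

{code}
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

public class StorageDescriptorCopySketch {
  static void setPartitionSd(Partition partition, Table table) {
    // Sharing the object lets later partition-level edits leak back into the
    // table's (possibly shared) storage descriptor:
    //   partition.setSd(table.getSd());
    // Copying via the Thrift-generated copy constructor gives the partition
    // its own descriptor, using the table's only as a template.
    partition.setSd(new StorageDescriptor(table.getSd()));
  }
}
{code}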



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359232#comment-14359232
 ] 

Hive QA commented on HIVE-5672:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704117/HIVE-5672.1.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3019/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3019/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3019/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-3019/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target 
metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java.orig 
itests/target itests/thirdparty itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-jmh/target itests/hive-unit/target 
itests/custom-serde/target itests/util/target itests/qtest-spark/target 
hcatalog/target hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/hcatalog-pig-adapter/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
accumulo-handler/target hwi/target common/target common/src/gen 
spark-client/target contrib/target service/target serde/target beeline/target 
odbc/target cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientpositive/alter_table_invalidate_column_stats.q.out 
ql/src/test/queries/clientpositive/alter_table_invalidate_column_stats.q
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1666279.

At revision 1666279.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
patch:  malformed patch at line 337:  

patch:  malformed patch at line 337:  

patch:  malformed patch at line 337:  

The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704117 - PreCommit-HIVE-TRUNK-Build

> Insert with custom separator not supported for non-local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>Assignee: Nemon Lou
> Attachments: HIVE-5672.1.patch
>
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great but non-local 
> directories don't seem to be supported:
> {code}
> in

[jira] [Assigned] (HIVE-5672) Insert with custom separator not supported for non-local directory

2015-03-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-5672:
-

Assignee: Nemon Lou  (was: Xuefu Zhang)

> Insert with custom separator not supported for non-local directory
> --
>
> Key: HIVE-5672
> URL: https://issues.apache.org/jira/browse/HIVE-5672
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Romain Rigaux
>Assignee: Nemon Lou
> Attachments: HIVE-5672.1.patch
>
>
> https://issues.apache.org/jira/browse/HIVE-3682 is great but non-local 
> directories don't seem to be supported:
> {code}
> insert overwrite directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select description FROM sample_07
> {code}
> {code}
> Error while compiling statement: FAILED: ParseException line 2:0 cannot 
> recognize input near 'row' 'format' 'delimited' in select clause
> {code}
> This works (with 'local'):
> {code}
> insert overwrite local directory '/tmp/test-02'
> row format delimited
> FIELDS TERMINATED BY ':'
> select code, description FROM sample_07
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9800) Create scripts to do metastore upgrade tests on Jenkins

2015-03-12 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359152#comment-14359152
 ] 

Lefty Leverenz commented on HIVE-9800:
--

Okay, thanks.

> Create scripts to do metastore upgrade tests on Jenkins
> ---
>
> Key: HIVE-9800
> URL: https://issues.apache.org/jira/browse/HIVE-9800
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Fix For: 1.2.0
>
> Attachments: HIVE-9800.2.patch
>
>
> NO PRECOMMIT TESTS
> In order to have better quality code for the Hive Metastore, we need to create 
> some upgrade scripts that can run on Jenkins nightly, or every time a patch 
> that makes structural changes to the database is added to a ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9937) LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new Vectorized Map Join

2015-03-12 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359153#comment-14359153
 ] 

Matt McCline commented on HIVE-9937:


Test failure udaf_percentile_approx_23 is a known issue.  See HIVE-9833: 
udaf_percentile_approx_23.q fails intermittently.

All other tests passed.

> LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new 
> Vectorized Map Join
> --
>
> Key: HIVE-9937
> URL: https://issues.apache.org/jira/browse/HIVE-9937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Matt McCline
>Assignee: Matt McCline
> Attachments: HIVE-9937.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359144#comment-14359144
 ] 

Jason Dere commented on HIVE-3454:
--

That's fine to break into separate tasks; we'll just need to make sure the 
follow-up config task gets in before the next release.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9856) CBO (Calcite Return Path): Join cost calculation improvements and algorithm selection implementation [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9856:
--
Attachment: HIVE-9856.03.cbo.patch

[~mmokhtar], the latest version of the patch applies cleanly to the CBO branch 
and solves an issue with the requirements that was raised by [~jpullokkaran].

> CBO (Calcite Return Path): Join cost calculation improvements and algorithm 
> selection implementation [CBO branch]
> -
>
> Key: HIVE-9856
> URL: https://issues.apache.org/jira/browse/HIVE-9856
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9856.01.cbo.patch, HIVE-9856.02.cbo.patch, 
> HIVE-9856.03.cbo.patch, HIVE-9856.cbo.patch
>
>
> This patch implements more precise cost functions for join operators that may 
> help us decide which join algorithm we want to execute directly in the CBO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Mostafa Mokhtar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359124#comment-14359124
 ] 

Mostafa Mokhtar commented on HIVE-9720:
---

[~ashutoshc]

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
>  

[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359034#comment-14359034
 ] 

Hive QA commented on HIVE-9720:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704178/HIVE-9720.1.patch

{color:green}SUCCESS:{color} +1 7763 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3018/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3018/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3018/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704178 - PreCommit-HIVE-TRUNK-Build

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> o
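
The description already points at the only workaround: rename the table back to its
original database/name before dropping it (presumably the column-stats rows are still
keyed to the old database/table name, so the metastore cannot clean them up during the
drop). A minimal sketch of that workaround, reusing the hiveClient from step 2 of the
reproduction:
{code}
// Hedged sketch of the workaround only (not the fix): undo the rename through
// the same Thrift API, then drop the table under its original name.
Table t = hiveClient.getTable("db2", "mv2");
t.setDbName("db1");
t.setTableName("mv");
hiveClient.alter_table("db2", "mv2", t);
hiveClient.dropTable("db1", "mv");
{code}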

[jira] [Updated] (HIVE-9856) CBO (Calcite Return Path): Join cost calculation improvements and algorithm selection implementation [CBO branch]

2015-03-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-9856:
--
Attachment: HIVE-9856.02.cbo.patch

[~ashutoshc], can you take a look at the latest patch? Thanks!

> CBO (Calcite Return Path): Join cost calculation improvements and algorithm 
> selection implementation [CBO branch]
> -
>
> Key: HIVE-9856
> URL: https://issues.apache.org/jira/browse/HIVE-9856
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO
>Affects Versions: cbo-branch
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: cbo-branch
>
> Attachments: HIVE-9856.01.cbo.patch, HIVE-9856.02.cbo.patch, 
> HIVE-9856.cbo.patch
>
>
> This patch implements more precise cost functions for join operators that may 
> help us decide which join algorithm we want to execute directly in the CBO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-03-12 Thread Chao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358988#comment-14358988
 ] 

Chao commented on HIVE-3454:


+1. Breaking this into separate tasks makes sense to me, but it's better to get 
input from others (I'm no expert on this matter) before we commit this.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp().
> Instead, however, a 1970-01-16 timestamp is returned.
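
For reference, the mismatch can be seen (and worked around) directly in HiveQL; this is
a hedged illustration of the reported behavior and a common workaround, not the eventual
fix:
{code}
-- unix_timestamp() returns seconds since the epoch as a BIGINT, but the cast
-- appears to interpret the value on a different scale, hence 1970-01-16
-- (select from any existing table; src is used here only as a placeholder):
SELECT CAST(unix_timestamp() AS TIMESTAMP) FROM src LIMIT 1;

-- workaround: go through from_unixtime(), which explicitly takes seconds
SELECT CAST(from_unixtime(unix_timestamp()) AS TIMESTAMP) FROM src LIMIT 1;
{code}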



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9584) Hive CLI hangs while waiting for a container

2015-03-12 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358899#comment-14358899
 ] 

Hari Sekhon commented on HIVE-9584:
---

I've noticed this problem today too.

I suggest that the container start be done in a background thread and that the 
CLI be presented to the user for metadata operations without requiring a Tez 
container.
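
A rough sketch of what that could look like (hypothetical code, not an actual patch;
startTezSession() stands in for whatever call currently opens the Tez session during CLI
startup):
{code}
// Hypothetical sketch only (uses java.util.concurrent): open the Tez session in
// the background and block on it only when the first real query is submitted.
ExecutorService pool = Executors.newSingleThreadExecutor();
Future<Void> tezSession = pool.submit(new Callable<Void>() {
  @Override
  public Void call() throws Exception {
    startTezSession();   // placeholder for whatever call opens the session today
    return null;
  }
});

// ... show the CLI prompt immediately; metadata-only commands run here ...

// before the first Tez query runs, wait for the session if it is not ready yet
tezSession.get();
{code}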

> Hive CLI hangs while waiting for a container
> 
>
> Key: HIVE-9584
> URL: https://issues.apache.org/jira/browse/HIVE-9584
> Project: Hive
>  Issue Type: Bug
>Reporter: Rich Haase
>
> The Hive CLI, with Tez set as the execution engine, hangs if a container 
> cannot immediately be allocated as the Tez application master.  From a 
> user perspective, this behavior is broken.  
> Users should be able to start a CLI and execute Hive metadata commands 
> without needing a Tez application master.  Since users are accustomed to 
> queries with Hive on MapReduce taking a long time, but access to the CLI 
> being near instantaneous, the correct behavior should be to wait for a query 
> to be run before starting the Tez application master.
> This behavior is avoided with the Beeline CLI since it connects through 
> HiveServer2; however, many users are accustomed to using the Hive CLI and 
> should not be penalized for their choice until the Hive CLI is completely 
> deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358874#comment-14358874
 ] 

Hive QA commented on HIVE-9720:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704161/HIVE-9720.1.patch

{color:green}SUCCESS:{color} +1 7763 tests passed

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3017/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3017/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3017/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704161 - PreCommit-HIVE-TRUNK-Build

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> o

[jira] [Updated] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-9720:
--
Attachment: HIVE-9720.1.patch

Looks like the precommit run was not queued for the updated patch. Resubmitting.

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExe

[jira] [Commented] (HIVE-9800) Create scripts to do metastore upgrade tests on Jenkins

2015-03-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-9800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358800#comment-14358800
 ] 

Sergio Peña commented on HIVE-9800:
---

Hi [~leftylev],

There's no need to put it in the Wiki.
I left the comment on this JIRA in order to give more information to the code 
reviewer.

> Create scripts to do metastore upgrade tests on Jenkins
> ---
>
> Key: HIVE-9800
> URL: https://issues.apache.org/jira/browse/HIVE-9800
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Fix For: 1.2.0
>
> Attachments: HIVE-9800.2.patch
>
>
> NO PRECOMMIT TESTS
> In order to have better quality code for the Hive Metastore, we need to create 
> some upgrade scripts that can run on Jenkins nightly or every time a patch 
> that makes structural changes to the database is added to the ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358791#comment-14358791
 ] 

Xuefu Zhang commented on HIVE-9720:
---

+1

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
>   at 
> org.postgresql.core.v3.QueryEx

[jira] [Updated] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-9720:
--
Attachment: HIVE-9720.1.patch

Fixed the failed tests, but I could not reproduce the failure from 
index_auto_partitioned.q on my local machine; it does not seem to be relevant.

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.1.patch, HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.c

[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358658#comment-14358658
 ] 

Xuefu Zhang commented on HIVE-9720:
---

Patch looks good. Minor comment on RB.

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
>   at 
> org.postgresq

[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358646#comment-14358646
 ] 

Hive QA commented on HIVE-9720:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704126/HIVE-9720.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 7763 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_partitioned
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testAlterViewParititon
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyServer.testAlterViewParititon
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3016/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3016/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3016/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704126 - PreCommit-HIVE-TRUNK-Build

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:10

[jira] [Commented] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with "add jar" command

2015-03-12 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358635#comment-14358635
 ] 

Yongzhi Chen commented on HIVE-9813:


Thanks [~xuefuz] for reviewing and committing the change.

> Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
> "add jar" command
> ---
>
> Key: HIVE-9813
> URL: https://issues.apache.org/jira/browse/HIVE-9813
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>  Labels: TODOC1.2
> Fix For: 1.2.0
>
> Attachments: HIVE-9813.1.patch, HIVE-9813.3.patch
>
>
> Execute the following JDBC client program:
> {code}
> import java.sql.*;
> public class TestAddJar {
> private static Connection makeConnection(String connString, String 
> classPath) throws ClassNotFoundException, SQLException
> {
> System.out.println("Current Connection info: "+ connString);
> Class.forName(classPath);
> System.out.println("Current driver info: "+ classPath);
> return DriverManager.getConnection(connString);
> }
> public static void main(String[] args)
> {
> if(2 != args.length)
> {
> System.out.println("Two arguments needed: connection string, path 
> to jar to be added (include jar name)");
> System.out.println("Example: java -jar TestApp.jar 
> jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
> return;
> }
> Connection conn;
> try
> {
> conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
> 
> System.out.println("---");
> System.out.println("DONE");
> 
> System.out.println("---");
> System.out.println("Execute query: add jar " + args[1] + ";");
> Statement stmt = conn.createStatement();
> int c = stmt.executeUpdate("add jar " + args[1]);
> System.out.println("Returned value is: [" + c + "]\n");
> 
> System.out.println("---");
> final String createTableQry = "Create table if not exists 
> json_test(id int, content string) " +
> "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
> System.out.println("Execute query:" + createTableQry + ";");
> stmt.execute(createTableQry);
> 
> System.out.println("---");
> System.out.println("getColumn() 
> Call---\n");
> DatabaseMetaData md = conn.getMetaData();
> System.out.println("Test get all column in a schema:");
> ResultSet rs = md.getColumns("Hive", "default", "json_test", 
> null);
> while (rs.next()) {
> System.out.println(rs.getString(1));
> }
> conn.close();
> }
> catch (ClassNotFoundException e)
> {
> e.printStackTrace();
> }
> catch (SQLException e)
> {
> e.printStackTrace();
> }
> }
> }
> {code}
> The client gets an exception, and the metastore log shows:
> 7:41:30.316 PM  ERROR  hive.log
> error in initSerDe: java.lang.ClassNotFoundException Class 
> org.openx.data.jsonserde.JsonSerDe not found
> java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
> not found
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema
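
The ClassNotFoundException is raised inside the metastore's get_schema call, so the jar
added through the JDBC session is apparently not visible to the metastore process. One
hedged workaround (an assumption on the deployment side, not necessarily what the
committed patch does) is to put the SerDe jar on the metastore's own classpath, for
example:
{code}
# make the SerDe visible to the metastore service itself before starting it
export HIVE_AUX_JARS_PATH=/tmp/json-serde-1.3-jar-with-dependencies.jar
hive --service metastore
{code}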

[jira] [Updated] (HIVE-9813) Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with "add jar" command

2015-03-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9813:
--
Labels: TODOC1.2  (was: )

> Hive JDBC - DatabaseMetaData.getColumns method cannot find classes added with 
> "add jar" command
> ---
>
> Key: HIVE-9813
> URL: https://issues.apache.org/jira/browse/HIVE-9813
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>  Labels: TODOC1.2
> Fix For: 1.2.0
>
> Attachments: HIVE-9813.1.patch, HIVE-9813.3.patch
>
>
> Execute the following JDBC client program:
> {code}
> import java.sql.*;
> public class TestAddJar {
> private static Connection makeConnection(String connString, String 
> classPath) throws ClassNotFoundException, SQLException
> {
> System.out.println("Current Connection info: "+ connString);
> Class.forName(classPath);
> System.out.println("Current driver info: "+ classPath);
> return DriverManager.getConnection(connString);
> }
> public static void main(String[] args)
> {
> if(2 != args.length)
> {
> System.out.println("Two arguments needed: connection string, path 
> to jar to be added (include jar name)");
> System.out.println("Example: java -jar TestApp.jar 
> jdbc:hive2://192.168.111.111 /tmp/json-serde-1.3-jar-with-dependencies.jar");
> return;
> }
> Connection conn;
> try
> {
> conn = makeConnection(args[0], "org.apache.hive.jdbc.HiveDriver");
> 
> System.out.println("---");
> System.out.println("DONE");
> 
> System.out.println("---");
> System.out.println("Execute query: add jar " + args[1] + ";");
> Statement stmt = conn.createStatement();
> int c = stmt.executeUpdate("add jar " + args[1]);
> System.out.println("Returned value is: [" + c + "]\n");
> 
> System.out.println("---");
> final String createTableQry = "Create table if not exists 
> json_test(id int, content string) " +
> "row format serde 'org.openx.data.jsonserde.JsonSerDe'";
> System.out.println("Execute query:" + createTableQry + ";");
> stmt.execute(createTableQry);
> 
> System.out.println("---");
> System.out.println("getColumn() 
> Call---\n");
> DatabaseMetaData md = conn.getMetaData();
> System.out.println("Test get all column in a schema:");
> ResultSet rs = md.getColumns("Hive", "default", "json_test", 
> null);
> while (rs.next()) {
> System.out.println(rs.getString(1));
> }
> conn.close();
> }
> catch (ClassNotFoundException e)
> {
> e.printStackTrace();
> }
> catch (SQLException e)
> {
> e.printStackTrace();
> }
> }
> }
> {code}
> The client gets an exception, and the metastore log shows:
> 7:41:30.316 PM  ERROR  hive.log
> error in initSerDe: java.lang.ClassNotFoundException Class 
> org.openx.data.jsonserde.JsonSerDe not found
> java.lang.ClassNotFoundException: Class org.openx.data.jsonserde.JsonSerDe 
> not found
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:183)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_fields(HiveMetaStore.java:2487)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_schema(HiveMetaStore.java:2542)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> at com.sun.proxy.$Proxy5.get_schema(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6425)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_schema.getResult(ThriftHiveMetastore.java:6409)
> at org.apache.thrift.ProcessFunction.process

[jira] [Commented] (HIVE-9939) Code cleanup for redundant if check in ExplainTask [Spark Branch]

2015-03-12 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358596#comment-14358596
 ] 

Xuefu Zhang commented on HIVE-9939:
---

+1

> Code cleanup for redundant if check in ExplainTask [Spark Branch]
> -
>
> Key: HIVE-9939
> URL: https://issues.apache.org/jira/browse/HIVE-9939
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: spark-branch
>
> Attachments: HIVE-9939.1-spark.patch
>
>
> The ExplainTask.execute() method has a redundant if check.
> The same applies to trunk as well.
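
For readers without the patch in front of them, the pattern being cleaned up is of this
general shape (a hypothetical illustration only, not the actual ExplainTask code):
{code}
if (jsonOutput && json != null) {
  if (json != null) {           // redundant: the outer condition already guarantees this
    json.put("someKey", value);
  }
}

// after the cleanup the inner check is simply dropped
if (jsonOutput && json != null) {
  json.put("someKey", value);
}
{code}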



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9939) Code cleanup for redundant if check in ExplainTask [Spark Branch]

2015-03-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-9939:
--
Component/s: Spark
Summary: Code cleanup for redundant if check in ExplainTask [Spark 
Branch]  (was: Code cleanup for redundant if check in ExplainTask)

> Code cleanup for redundant if check in ExplainTask [Spark Branch]
> -
>
> Key: HIVE-9939
> URL: https://issues.apache.org/jira/browse/HIVE-9939
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: spark-branch
>
> Attachments: HIVE-9939.1-spark.patch
>
>
> The ExplainTask.execute() method has a redundant if check.
> The same applies to trunk as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9937) LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new Vectorized Map Join

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358565#comment-14358565
 ] 

Hive QA commented on HIVE-9937:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704103/HIVE-9937.01.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7766 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3015/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3015/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3015/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704103 - PreCommit-HIVE-TRUNK-Build

> LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new 
> Vectorized Map Join
> --
>
> Key: HIVE-9937
> URL: https://issues.apache.org/jira/browse/HIVE-9937
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Matt McCline
>Assignee: Matt McCline
> Attachments: HIVE-9937.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9939) Code cleanup for redundant if check in ExplainTask

2015-03-12 Thread Chinna Rao Lalam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358562#comment-14358562
 ] 

Chinna Rao Lalam commented on HIVE-9939:


The test failure is not related to this patch; vectorized_timestamp_funcs.q 
passed on my machine.

> Code cleanup for redundant if check in ExplainTask
> --
>
> Key: HIVE-9939
> URL: https://issues.apache.org/jira/browse/HIVE-9939
> Project: Hive
>  Issue Type: Bug
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: spark-branch
>
> Attachments: HIVE-9939.1-spark.patch
>
>
> The ExplainTask.execute() method has a redundant if check.
> The same applies to trunk as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9939) Code cleanup for redundant if check in ExplainTask

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358549#comment-14358549
 ] 

Hive QA commented on HIVE-9939:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704130/HIVE-9939.1-spark.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7644 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_timestamp_funcs
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/787/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/787/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-787/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704130 - PreCommit-HIVE-SPARK-Build

> Code cleanup for redundant if check in ExplainTask
> --
>
> Key: HIVE-9939
> URL: https://issues.apache.org/jira/browse/HIVE-9939
> Project: Hive
>  Issue Type: Bug
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: spark-branch
>
> Attachments: HIVE-9939.1-spark.patch
>
>
> The ExplainTask.execute() method has a redundant if check.
> The same applies to trunk as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9720) Metastore does not properly migrate column stats when renaming a table across databases.

2015-03-12 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358482#comment-14358482
 ] 

Chaoyu Tang commented on HIVE-9720:
---

The patch has also been uploaded to RB (https://reviews.apache.org/r/31978/) and 
a review has been requested. Thanks in advance.

> Metastore does not properly migrate column stats when renaming a table across 
> databases.
> 
>
> Key: HIVE-9720
> URL: https://issues.apache.org/jira/browse/HIVE-9720
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Alexander Behm
>Assignee: Chaoyu Tang
> Attachments: HIVE-9720.patch
>
>
> It appears that the Hive Metastore does not properly migrate column 
> statistics when renaming a table across databases. While renaming across 
> databases is not supported in HiveQL, it can be done via the Metastore Thrift 
> API.
> The problem is that such a newly renamed table cannot be dropped (unless 
> renamed back to its original database/name).
> Here are steps for reproducing the issue.
> 1. From the Hive shell/beeline:
> {code}
> create database db1;
> create database db2;
> create table db1.mv (i int);
> use db1;
> analyze table mv compute statistics for columns i;
> {code}
> 2. From a Java program:
> {code}
>   public static void main(String[] args) throws Exception {
> HiveConf conf = new HiveConf(MetaStoreClientPool.class);
> HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
> Table t = hiveClient.getTable("db1", "mv");
> t.setDbName("db2");
> t.setTableName("mv2");
> hiveClient.alter_table("db1", "mv", t);
>   }
> {code}
> 3. From the Hive shell/beeline:
> {code}
> drop table db2.mv2;
> {code}
> Stack shown when running 3:
> {code}
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:javax.jdo.JDODataStoreException: Exception thrown 
> flushing changes to datastore
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at 
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
>   at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
>   at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown 
> Source)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
>   at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> NestedThrowablesStackTrace:
> java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE 
> "TBL_ID"='1621' was aborted.  Call getNextException to see the cause.
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
>   at 
> org.postgresql.core.v3.QueryExecutorImp

[jira] [Commented] (HIVE-9936) fix potential NPE in DefaultUDAFEvaluatorResolver

2015-03-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358471#comment-14358471
 ] 

Hive QA commented on HIVE-9936:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12704080/HIVE-9936.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7762 tests executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3014/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3014/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3014/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12704080 - PreCommit-HIVE-TRUNK-Build

> fix potential NPE in DefaultUDAFEvaluatorResolver
> -
>
> Key: HIVE-9936
> URL: https://issues.apache.org/jira/browse/HIVE-9936
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
> Attachments: HIVE-9936.1.patch
>
>
> In some cases DefaultUDAFEvaluatorResolver calls new 
> AmbiguousMethodException(udafClass, null, null) (line 94).
> This will throw an NPE because AmbiguousMethodException calls 
> argTypeInfos.toString().
> argTypeInfos is the second parameter and it must not be null.
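
A minimal sketch of how the NPE could be avoided (an assumption about the shape of the
fix, not necessarily what the attached patch does):
{code}
// One possible shape of the fix (the element types of the lists are inferred
// from context): pass empty lists instead of null so that
// argTypeInfos.toString() inside the exception cannot NPE.
throw new AmbiguousMethodException(udafClass,
    Collections.<TypeInfo>emptyList(),
    Collections.<Method>emptyList());
{code}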



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9941) sql std authorization on partitioned table: truncate and insert

2015-03-12 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated HIVE-9941:
--
Description: 
SQL standard authorization works as expected.

However, if a table is partitioned, any user can truncate it.
As user foo:
{code}
create table bla (a string) partitioned by (b string);
#.. loading values ...
{code}

Admin:
{code}
0: jdbc:hive2://localhost:1/default> set role admin;
No rows affected (0,074 seconds)
0: jdbc:hive2://localhost:1/default> show grant on bla;
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| database  | table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |   grant_time   | grantor  |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| default   | bla    |            |         | foo             | USER            | DELETE     | true          | 1426158997000  | foo      |
| default   | bla    |            |         | foo             | USER            | INSERT     | true          | 1426158997000  | foo      |
| default   | bla    |            |         | foo             | USER            | SELECT     | true          | 1426158997000  | foo      |
| default   | bla    |            |         | foo             | USER            | UPDATE     | true          | 1426158997000  | foo      |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
{code}

now user olaf
{code}
0: jdbc:hive2://localhost:1/default> select * from bla;
Error: Error while compiling statement: FAILED: HiveAccessControlException 
Permission denied: Principal [name=olaf, type=USER] does not have following 
privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, 
name=default.bla]] (state=42000,code=4)
{code}
works as expected.


_BUT_
{code}
0: jdbc:hive2://localhost:1/default> truncate table bla;
No rows affected (0,18 seconds)
{code}

_And table is empty afterwards_.


Similarly: {{insert into table}} works, too.



  was:
sql std authorization works as expected.

However, if a table is partitioned, any user can truncate it
User foo:
{code}
create table bla (a string) partitioned by (b string);
#.. loading values ...
{code}

Admin:
{code}
0: jdbc:hive2://localhost:1/default> set role admin;
No rows affected (0,074 seconds)
0: jdbc:hive2://localhost:1/default> show grant on bla;
+---+++-+-+-++---++--+--+
| database  | table  | partition  | column  | principal_name  | principal_type  
| privilege  | grant_option  |   grant_time   | grantor  |
+---+++-+-+-++---++--+--+
| default   | bla|| | foo | USER
| DELETE | true  | 1426158997000  | foo  |
| default   | bla|| | foo | USER
| INSERT | true  | 1426158997000  | foo  |
| default   | bla|| | foo | USER
| SELECT | true  | 1426158997000  | foo  |
| default   | bla|| | foo | USER
| UPDATE | true  | 1426158997000  | foo  |
+---+++-+-+-++---++--+--+
{code}

now user olaf
{code}
0: jdbc:hive2://localhost:1/default> select * from bla;
Error: Error while compiling statement: FAILED: HiveAccessControlException 
Permission denied: Principal [name=olaf, type=USER] does not have following 
privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, 
name=default.bla]] (state=42000,code=4)
{code}

_BUT_
{code}
0: jdbc:hive2://localhost:1/default> truncate table bla;
No rows affected (0,18 seconds)
{code}

And table is empty afterwards.


Similarly: {{insert into table}} works, too.




> sql std authorization on partitioned table: truncate and insert
> ---
>
> Key: HIVE-9941
> URL: https://issues.apache.org/jira/browse/HIVE-9941
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.14.0
>Reporter: Olaf Flebbe
>
> sql std authorization works as expected.
> However, if a table is partitioned, any user can truncate it
> User foo:
> {code}
> create table bla (a string) partitioned by (b string);
> #.. loading values ...
> {code}
> Admin:
> {code}
> 0: jdbc:hive2:/

[jira] [Updated] (HIVE-9940) The standard output of Python reduce script can not be interpreted correctly by Hive

2015-03-12 Thread Eric Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wang updated HIVE-9940:

Description: 
use HQL statement like:
FROM (
  select_statement
  ) map_output
INSERT OVERWRITE TABLE table
  REDUCE map_output.a, map_output.b
  USING 'py_script'
  AS col1, col2;

(1)original type
stdout of Python has Records where the 2nd column = 'Meerjungfrau'
527500  Meerjungfrau25  AO DE   20140704
...

(type of each column are: string, string, int, string, string)

Hive interprets these as:
527500  Meer  AO DE   20140704
...

stderr_log interprets these as:
527500  Meerjungfrau25  AO DE   20140704

(2)change all 'Meerjungfrau' to 'bug' in Python script
stdout of Python has Records where the 2nd column = 'bug'
527500  bug 25  AO DE   20140704
...

Hive interprets these as:
527500  b AO DE   20140704
...

stderr_log interprets these as:
527500  bug 25  AO DE   20140704

(3)put 2nd column to the last column
stdout of Python has Records where the 2nd column = 'Meerjungfrau'
527500  25  AO DE   20140704Meerjungfrau
...

Hive interprets these as:
527500  2520140704Meerjungfrau
...

stderr_log interprets these as:
527500  25  AO DE   20140704Meerjungfrau

  was:
use HQL statement like:
FROM (
  select_statement
  ) map_output
INSERT OVERWRITE TABLE table
  REDUCE map_output.a, map_output.b
  USING 'py_script'
  AS col1, col2;

(1)original type
stdout of Python has Records where the 2nd column = 'Meerjungfrau'
527500  Meerjungfrau25  AO DE   20140704
...

Hive interprets these as:
527500  Meer  AO DE   20140704
...

stderr_log interprets these as:
527500  Meerjungfrau25  AO DE   20140704

(2)change all 'Meerjungfrau' to 'bug' in Python script
stdout of Python has Records where the 2nd column = 'bug'
527500  bug 25  AO DE   20140704
...

Hive interprets these as:
527500  b AO DE   20140704
...

stderr_log interprets these as:
527500  bug 25  AO DE   20140704

(3)put 2nd column to the last column
stdout of Python has Records where the 2nd column = 'Meerjungfrau'
527500  25  AO DE   20140704Meerjungfrau
...

Hive interprets these as:
527500  2520140704Meerjungfrau
...

stderr_log interprets these as:
527500  25  AO DE   20140704Meerjungfrau


> The standard output of Python reduce script can not be interpreted correctly 
> by Hive
> 
>
> Key: HIVE-9940
> URL: https://issues.apache.org/jira/browse/HIVE-9940
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Eric Wang
>
> use HQL statement like:
> FROM (
>   select_statement
>   ) map_output
> INSERT OVERWRITE TABLE table
>   REDUCE map_output.a, map_output.b
>   USING 'py_script'
>   AS col1, col2;
> (1)original type
> stdout of Python has Records where the 2nd column = 'Meerjungfrau'
> 527500  Meerjungfrau25  AO DE   20140704
> ...
> (type of each column are: string, string, int, string, string)
> Hive interprets these as:
> 527500  Meer  AO DE   20140704
> ...
> stderr_log interprets these as:
> 527500  Meerjungfrau25  AO DE   20140704
> (2)change all 'Meerjungfrau' to 'bug' in Python script
> stdout of Python has Records where the 2nd column = 'bug'
> 527500  bug 25  AO DE   20140704
> ...
> Hive interprets these as:
> 527500  b AO DE   20140704
> ...
> stderr_log interprets these as:
> 527500  bug 25  AO DE   20140704
> (3)put 2nd column to the last column
> stdout of Python has Records where the 2nd column = 'Meerjungfrau'
> 527500  25  AO DE   20140704Meerjungfrau
> ...
> Hive interprets these as:
> 527500  2520140704Meerjungfrau
> ...
> stderr_log interprets these as:
> 527500  25  AO DE   20140704Meerjungfrau
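For context, the REDUCE ... USING clause streams rows to the script on stdin
and, by default, parses the script's stdout as newline-terminated rows of
tab-separated columns. A minimal sketch of a reducer that emits that shape
(written in Java here; the original report uses a Python script, and the
per-row logic is hypothetical):
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hedged sketch of a Hive streaming reducer: read tab-separated rows from
// stdin, transform them, and write tab-separated rows back to stdout.
public class TabReducer {
  public static void main(String[] args) throws IOException {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    String line;
    while ((line = in.readLine()) != null) {
      String[] fields = line.split("\t", -1);        // -1 keeps trailing empty fields
      // ... per-row reduce logic would go here ...
      System.out.println(String.join("\t", fields)); // one tab-delimited output row
    }
  }
}
{code}
If the script emits spaces instead of tabs, or extra embedded tabs, Hive will
split the columns differently than the stderr log suggests, which may be
worth ruling out for the rows above.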



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9939) Code cleanup for redundant if check in ExplainTask

2015-03-12 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-9939:
---
Attachment: HIVE-9939.1-spark.patch

> Code cleanup for redundant if check in ExplainTask
> --
>
> Key: HIVE-9939
> URL: https://issues.apache.org/jira/browse/HIVE-9939
> Project: Hive
>  Issue Type: Bug
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: spark-branch
>
> Attachments: HIVE-9939.1-spark.patch
>
>
> The ExplainTask.execute() method has a redundant if check.
> The same applies to trunk as well.
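For readers unfamiliar with the term, a hypothetical illustration of what a
"redundant if check" looks like (this is not the actual ExplainTask code):
{code}
// Hypothetical example only: the inner condition repeats what the outer
// branch already guarantees, so it can be removed without changing behaviour.
class RedundantCheckExample {
  static String render(boolean jsonOutput, String plan) {
    if (jsonOutput) {
      if (jsonOutput) {            // redundant: already true on this path
        return "{\"plan\":\"" + plan + "\"}";
      }
    }
    return plan;
  }
}
{code}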



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9857) Create Factorial UDF

2015-03-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357453#comment-14357453
 ] 

Jason Dere commented on HIVE-9857:
--

Hmm, yeah I'm not sure which one to use... I guess you can either match the 
version that Spark uses, or maybe relocate Hive's commons-math dependency so 
it does not conflict.

> Create Factorial UDF
> 
>
> Key: HIVE-9857
> URL: https://issues.apache.org/jira/browse/HIVE-9857
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Alexander Pivovarov
>Assignee: Alexander Pivovarov
> Attachments: HIVE-9857.1.patch, HIVE-9857.2.patch
>
>
> Function signature: factorial(int a): bigint
> For example 5! = 5*4*3*2*1 = 120
> {code}
> select factorial(5);
> OK
> 120
> {code}
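A hedged sketch of the semantics described above, not the attached patch:
factorial(int) returning a 64-bit value (Hive bigint). 20! is the largest
factorial that fits in a signed 64-bit integer, so a real implementation has
to bound the input:
{code}
// Hedged sketch only, not HIVE-9857.2.patch: compute n! as a long.
public class FactorialSketch {
  public static Long factorial(int n) {
    if (n < 0 || n > 20) {
      return null;               // 21! overflows a signed 64-bit long
    }
    long result = 1L;
    for (int i = 2; i <= n; i++) {
      result *= i;
    }
    return result;               // factorial(5) == 120
  }
}
{code}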



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9914) Post success comments on Jira from Jenkins metastore upgrades scripts

2015-03-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HIVE-9914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-9914:
--
Attachment: HIVE-9914.2.patch

> Post success comments on Jira from Jenkins metastore upgrades scripts
> -
>
> Key: HIVE-9914
> URL: https://issues.apache.org/jira/browse/HIVE-9914
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Attachments: HIVE-9914.1.patch, HIVE-9914.2.patch
>
>
> Currently, the HMS upgrade testing posts failure comments on Jira only. We 
> need to post success comments as well so that users know that their upgrade 
> changes are working.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

