Re: Proposal to move Hive Apache Jenkins jobs to Bigtop Jenkins?

2013-10-28 Thread Roman Shaposhnik
Hi Brock,

For as long as these jobs don't block Bigtop
builds too much -- I'd love to help.

I think the easiest would be for you to register
on our jenkins:
http://bigtop01.cloudera.org:8080/
and then let me know your creds. I can
give you enough karma to manage jobs/etc.

Given that we already have a few jobs
running unit tests:
http://bigtop01.cloudera.org:8080/view/UnitTests/
you can just follow those examples and
set up yours.

Thanks,
Roman.

On Fri, Oct 25, 2013 at 8:17 AM, Brock Noland br...@cloudera.com wrote:
 This proposal already has support from the Hive community but adding
 dev@hive as an FYI.

 On Thu, Oct 24, 2013 at 2:18 PM, Brock Noland br...@cloudera.com wrote:
 Hey guys,

 Hive doesn't have any dedicated Apache Jenkins executors and sometimes
 our precommit jobs wait for hours to execute.  I'd like to move our
 jenkins jobs to the BigTop jenkins.

 Post move, the Hive project would remain 100% responsible for
 maintaining and debugging our jobs. The only thing I see required on
 the BigTop front is creating accounts for the hive team members who
 need access.

 Thoughts?

 Cheers,
 Brock



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Commented] (HIVE-5581) Implement vectorized year/month/day... etc. for string arguments

2013-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806591#comment-13806591
 ] 

Hive QA commented on HIVE-5581:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610475/HIVE-5581.1.patch.txt

{color:green}SUCCESS:{color} +1 4502 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1264/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1264/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Implement vectorized year/month/day... etc. for string arguments
 

 Key: HIVE-5581
 URL: https://issues.apache.org/jira/browse/HIVE-5581
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Teddy Choi
 Attachments: HIVE-5581.1.patch.txt


 Functions year(), month(), day(), weekofyear(), hour(), minute(), second() 
 need to be implemented for string arguments in vectorized mode. 
 They already work for timestamp arguments.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14985: HIVE-5354: Decimal precision/scale support in ORC file

2013-10-28 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14985/#review27598
---



ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
https://reviews.apache.org/r/14985/#comment53625

This will cause existing decimal data to not be readable, so it's probably not 
best to throw an exception here. Or were you planning to change this in HIVE-5564?



Are there any existing orc/decimal tests? Otherwise it might be good to test that 
the precision/scale is preserved correctly for ORC.

- Jason Dere


On Oct. 28, 2013, 4:49 a.m., Xuefu Zhang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14985/
 ---
 
 (Updated Oct. 28, 2013, 4:49 a.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-5354
 https://issues.apache.org/jira/browse/HIVE-5354
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Support decimal precision/scale for Orc file, as part of HIVE-3976.
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java c993b37 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java 71484a3 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 7519fc1 
   ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto 53b93a0 
 
 Diff: https://reviews.apache.org/r/14985/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Xuefu Zhang
 




[jira] [Updated] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5666:


Issue Type: Improvement  (was: Bug)

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5666:


Attachment: HIVE-5666.1.patch

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5666:
---

 Summary: use Path instead of String for IOContext.inputPath
 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch

Path is converted to string in HiveContextAwareRecordReader to be stored in 
IOContext.inputPath, then in MapOperator normalizePath gets called on it which 
converts it back to Path. 
Path creation is expensive, so it is better to use Path instead of string 
through the call stack.

This is also a step towards HIVE-3616.




--
This message was sent by Atlassian JIRA
(v6.1#6144)
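The Path-to-String-to-Path round trip described in HIVE-5666 can be sketched as follows. This is a minimal illustration only: java.net.URI stands in for org.apache.hadoop.fs.Path so it stays self-contained, and the method names are hypothetical, not the actual Hive code.

```java
import java.net.URI;

public class PathThroughStack {
    // Before: the reader stores a String, so every downstream consumer
    // re-parses and re-normalizes it (the expensive part).
    static String consumeString(String inputPath) {
        return URI.create(inputPath).normalize().getPath();
    }

    // After: parse once at the reader and pass the object down the call
    // stack; consumers just read fields off it, no re-parsing.
    static String consumeUri(URI inputPath) {
        return inputPath.getPath();
    }

    public static void main(String[] args) {
        String raw = "hdfs://nn:8020/warehouse/t/./part-0";
        URI parsedOnce = URI.create(raw).normalize(); // done once by the reader
        System.out.println(consumeUri(parsedOnce));   // object handed through
        System.out.println(consumeString(raw));       // re-parses on each call
    }
}
```

Both calls produce the same normalized path; the difference is purely how often the parse/normalize cost is paid.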


[jira] [Updated] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5666:


Status: Patch Available  (was: Open)

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5667:
--

 Summary: ThriftCLIService log messages jumbled up
 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta


ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5667:
---

Attachment: HIVE-5667.1.patch

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5667:
---

Fix Version/s: 0.13.0
 Assignee: Vaibhav Gumashta
   Status: Patch Available  (was: Open)

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806620#comment-13806620
 ] 

Thejas M Nair commented on HIVE-5667:
-

+1

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5668:


Attachment: HIVE-5668.1.patch

 path normalization in MapOperator is expensive
 --

 Key: HIVE-5668
 URL: https://issues.apache.org/jira/browse/HIVE-5668
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5668.1.patch


 The conversion of paths in MapWork.getPathToAliases is happening multiple 
 times in MapOperator.cleanUpInputFileChangedOp. Caching the results of 
 conversion can improve the performance of hive map tasks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-28 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5668:
---

 Summary: path normalization in MapOperator is expensive
 Key: HIVE-5668
 URL: https://issues.apache.org/jira/browse/HIVE-5668
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5668.1.patch

The conversion of paths in MapWork.getPathToAliases is happening multiple times 
in MapOperator.cleanUpInputFileChangedOp. Caching the results of conversion can 
improve the performance of hive map tasks.




--
This message was sent by Atlassian JIRA
(v6.1#6144)
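The caching idea in the HIVE-5668 description can be sketched like this. Again java.net.URI stands in for hadoop's Path, and the cache shape and names are assumptions for illustration, not the actual patch.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class NormalizedPathCache {
    private final Map<String, URI> cache = new HashMap<>();

    // Normalize each distinct raw path once and reuse the result, instead
    // of re-normalizing on every input-file change in the map task.
    URI normalize(String rawPath) {
        return cache.computeIfAbsent(rawPath, p -> URI.create(p).normalize());
    }

    int size() { return cache.size(); }

    public static void main(String[] args) {
        NormalizedPathCache c = new NormalizedPathCache();
        URI a = c.normalize("hdfs://nn/tbl/./part-0");
        URI b = c.normalize("hdfs://nn/tbl/./part-0"); // cache hit
        System.out.println(a == b);   // same object, no second parse
        System.out.println(c.size()); // 1
    }
}
```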


[jira] [Updated] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5668:


Status: Patch Available  (was: Open)

 path normalization in MapOperator is expensive
 --

 Key: HIVE-5668
 URL: https://issues.apache.org/jira/browse/HIVE-5668
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5668.1.patch


 The conversion of paths in MapWork.getPathToAliases is happening multiple 
 times in MapOperator.cleanUpInputFileChangedOp. Caching the results of 
 conversion can improve the performance of hive map tasks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806636#comment-13806636
 ] 

Hive QA commented on HIVE-5666:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610514/HIVE-5666.1.patch

{color:green}SUCCESS:{color} +1 4502 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1267/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1267/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5669) The '-i' argument in the Hive Shell doesn't work properly

2013-10-28 Thread chaozhang (JIRA)
chaozhang created HIVE-5669:
---

 Summary: The '-i' argument in the Hive Shell doesn't work properly
 Key: HIVE-5669
 URL: https://issues.apache.org/jira/browse/HIVE-5669
 Project: Hive
  Issue Type: Bug
Reporter: chaozhang
Priority: Minor


I'm starting to read the source code of hive-0.90. I found that the '-i' 
arg in the hive shell can process more than one file. But when I tried 'hive 
-i ***.sql ***.sql' on the command line, it only processed the first file.
Then I looked into the source code carefully and found that the use of Apache 
commons-cli may not be right. 
The 'options.addOption' code for '-i' in the OptionsProcessor.java file may 
have a problem: hasArg() is the wrong call. For the '-i' arg, it 
should be changed to hasOptionalArgs(), according to the related processing code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806678#comment-13806678
 ] 

Hive QA commented on HIVE-5667:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610516/HIVE-5667.1.patch

{color:green}SUCCESS:{color} +1 4502 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1268/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1268/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806704#comment-13806704
 ] 

Hive QA commented on HIVE-5668:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610523/HIVE-5668.1.patch

{color:green}SUCCESS:{color} +1 4502 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1269/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1269/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 path normalization in MapOperator is expensive
 --

 Key: HIVE-5668
 URL: https://issues.apache.org/jira/browse/HIVE-5668
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5668.1.patch


 The conversion of paths in MapWork.getPathToAliases is happening multiple 
 times in MapOperator.cleanUpInputFileChangedOp. Caching the results of 
 conversion can improve the performance of hive map tasks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806774#comment-13806774
 ] 

Brock Noland commented on HIVE-5355:


FWIW, the patch does this:
{noformat}
try {
  stmt.executeQuery("select * from " + dataTypeTableName);
} catch (SQLException e) {
  assertEquals("42000", e.getSQLState());
}
fail("SQLException is expected");
{noformat}
and it should do:
{noformat}
try {
  stmt.executeQuery("select * from " + dataTypeTableName);
  fail("SQLException is expected");
} catch (SQLException e) {
  assertEquals("42000", e.getSQLState());
}
{noformat}
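The difference between the two shapes is easier to see in a standalone form. In this sketch a plain RuntimeException stands in for the JDBC SQLException so no database is needed, and the method names are hypothetical, not the actual HIVE-5355 test.

```java
public class FailPlacement {
    // Broken shape: the fail() equivalent sits after the catch block, so it
    // runs unconditionally and the test fails even when the expected
    // exception was thrown and caught.
    static boolean brokenShapePasses(Runnable query) {
        try {
            query.run();
        } catch (RuntimeException e) {
            // assertEquals("42000", e.getSQLState()) would go here
        }
        return false; // fail("SQLException is expected") -- always reached
    }

    // Corrected shape: fail() immediately follows the call that must throw,
    // so it only runs when no exception occurred.
    static boolean correctShapePasses(Runnable query) {
        try {
            query.run();
            return false; // fail("SQLException is expected")
        } catch (RuntimeException e) {
            return true;  // expected path: assert on the exception here
        }
    }

    public static void main(String[] args) {
        Runnable throwing = () -> { throw new RuntimeException("42000"); };
        System.out.println(brokenShapePasses(throwing));  // false: spurious failure
        System.out.println(correctShapePasses(throwing)); // true: passes as intended
    }
}
```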

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5665) Update PMC status for navis

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806776#comment-13806776
 ] 

Brock Noland commented on HIVE-5665:


+1

 Update PMC status for navis
 ---

 Key: HIVE-5665
 URL: https://issues.apache.org/jira/browse/HIVE-5665
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5665.1.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806793#comment-13806793
 ] 

Brock Noland commented on HIVE-5667:


huge +1

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5667:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk, Vaibhav! Thank you for the contribution!

 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Proposal to move Hive Apache Jenkins jobs to Bigtop Jenkins?

2013-10-28 Thread Brock Noland
Ok, sounds good. FWIW, the code that will actually execute on the Jenkins
slave is just a very lightweight REST client that communicates
with our parallel test service.

My username is brock.

Hive Dev:  I think that at least two more committers should create accounts
so we don't have a bus factor here.

Brock


On Mon, Oct 28, 2013 at 1:06 AM, Roman Shaposhnik r...@apache.org wrote:

 Hi Brock,

 For as long as these jobs don't block Bigtop
 builds too much -- I'd love to help.

 I think the easiest would be for you to register
 on our jenkins:
 http://bigtop01.cloudera.org:8080/
 and then let me know your creds. I can
 give you enough karma to manage jobs/etc.

 Given that we already have a few jobs
 running unit tests:
 http://bigtop01.cloudera.org:8080/view/UnitTests/
 you can just follow those examples and
 set up yours.

 Thanks,
 Roman.

 On Fri, Oct 25, 2013 at 8:17 AM, Brock Noland br...@cloudera.com wrote:
  This proposal already has support from the Hive community but adding
  dev@hive as an FYI.
 
  On Thu, Oct 24, 2013 at 2:18 PM, Brock Noland br...@cloudera.com
 wrote:
  Hey guys,
 
  Hive doesn't have any dedicated Apache Jenkins executors and sometimes
  our precommit jobs wait for hours to execute.  I'd like to move our
  jenkins jobs to the BigTop jenkins.
 
  Post move, the Hive project would remain 100% responsible for
  maintaining and debugging our jobs. The only thing I see required on
  the BigTop front is creating accounts for the hive team members who
  need access.
 
  Thoughts?
 
  Cheers,
  Brock
 
 
 
  --
  Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org




-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Updated] (HIVE-5665) Update PMC status for navis

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5665:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to the site! Thank you for doing this!

 Update PMC status for navis
 ---

 Key: HIVE-5665
 URL: https://issues.apache.org/jira/browse/HIVE-5665
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5665.1.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806808#comment-13806808
 ] 

Ashutosh Chauhan commented on HIVE-5666:


+1

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806811#comment-13806811
 ] 

Brock Noland commented on HIVE-5610:


Thank you guys, very much, for your feedback!  I will be addressing those 
items, merging trunk into maven, and creating a fix issue for any new failing 
tests after the merge.

 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 * HIVE-5612 - Add ability to re-generate generated code stored in source 
 control
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollfward.sh
 5) Commit result 
 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-28 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806815#comment-13806815
 ] 

Gopal V commented on HIVE-5663:
---

Ah, okay. I was just following the original code pattern, which had code like

{code}
private byte[] compressed = null;
{code}

which was being replaced with the ByteBuffer.

I can amend that, but it made my diffs look same-ish when I was viewing them 
side-by-side with the current patch.

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC
 Attachments: HIVE-5663.01.patch


 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).



--
This message was sent by Atlassian JIRA
(v6.1#6144)
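The array-backing assumption the patch removes can be demonstrated with plain java.nio: heap buffers expose a backing array, direct buffers do not, so code calling buf.array() breaks on direct buffers, while reading through the ByteBuffer API works for both. The helper below is an illustrative sketch, not the actual RecordReaderImpl change.

```java
import java.nio.ByteBuffer;

public class BufferSafeRead {
    // Portable copy of the remaining bytes: works for heap and direct
    // buffers alike, without ever touching the backing array.
    static byte[] copyRemaining(ByteBuffer buf) {
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out); // duplicate() leaves buf's position intact
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer direct = ByteBuffer.allocateDirect(3);
        direct.put(new byte[] {1, 2, 3}).flip();

        System.out.println(heap.hasArray());   // true: array() is available
        System.out.println(direct.hasArray()); // false: array() would throw
        System.out.println(copyRemaining(direct).length); // 3, no array needed
    }
}
```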


[jira] [Commented] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806821#comment-13806821
 ] 

Brock Noland commented on HIVE-5663:


Yep, I saw that, and I think keeping in line with the existing code base is a 
great goal. However, I don't think that makes sense when there is redundant or 
non-standard code, as in this case.

Thanks for the contribution!

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC
 Attachments: HIVE-5663.01.patch


 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5218) datanucleus does not work with MS SQLServer in Hive metastore

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5218:
---

Description: 
HIVE-3632 upgraded the datanucleus version to 3.2.x; however, this version of 
datanucleus doesn't work with SQLServer as the metastore. The problem is that 
datanucleus tries to use the fully qualified object name to find a table in the 
database but can't find it.

If I downgrade the version to HIVE-2084, SQLServer works fine.

It could be a bug in datanucleus.

This is the detailed exception I'm getting when using datanucleus 3.2.x with 
SQL Server:

{noformat}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown calling table.exists() for a2ee36af45e9f46c19e995bfd2d9b5fd1hivemetastore..SEQUENCE_TABLE
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
…
at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy0.createTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1071)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1104)
…
at $Proxy11.create_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6417)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6401)

NestedThrowablesStackTrace:
com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'SEQUENCE_TABLE' in the database.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1493)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.execute(SQLServerStatement.java:649)
at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:300)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711)
at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:425)
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:488)
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:86)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2019)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1385)
at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3727)
at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2574)
at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:526)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:202)
at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1326)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2123)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1972)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1820)
at

[jira] [Updated] (HIVE-5218) datanucleus does not work with MS SQLServer in Hive metastore

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5218:
---

Attachment: HIVE-5218.2.patch

Reuploading the patch with a correct name for testing.

 datanucleus does not work with MS SQLServer in Hive metastore
 -

 Key: HIVE-5218
 URL: https://issues.apache.org/jira/browse/HIVE-5218
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 0.13.0

 Attachments: 
 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch, 
 HIVE-5218.2.patch, HIVE-5218.patch, HIVE-5218-v2.patch


 HIVE-3632 upgraded datanucleus version to 3.2.x, however, this version of 
 datanucleus doesn't work with SQLServer as the metastore. The problem is that 
 datanucleus tries to use fully qualified object name to find a table in the 
 database but couldn't find it.
 If I downgrade the version to HIVE-2084, SQLServer works fine.
 It could be a bug in datanucleus.
 This is the detailed exception I'm getting when using datanucleus 3.2.x with 
 SQL Server:
 {noformat}
 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown calling table.exists() for a2ee36af45e9f46c19e995bfd2d9b5fd1hivemetastore..SEQUENCE_TABLE
 at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
 at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
 …
 at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
 at $Proxy0.createTable(Unknown Source)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1071)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1104)
 …
 at $Proxy11.create_table_with_environment_context(Unknown Source)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6417)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6401)
 NestedThrowablesStackTrace:
 com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'SEQUENCE_TABLE' in the database.
 at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1493)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
 at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.execute(SQLServerStatement.java:649)
 at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:300)
 at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
 at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711)
 at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:425)
 at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:488)
 at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
 at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:86)
 at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)
 at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)
 at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2019)
 at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1385)
 at
 

[jira] [Commented] (HIVE-5604) Fix validation of nested expressions.

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806843#comment-13806843
 ] 

Ashutosh Chauhan commented on HIVE-5604:


Comment for Builder class of {{VectorExprDesc}} is still valid, right? Looks 
like you inadvertently removed it. 

 Fix validation of nested expressions.
 -

 Key: HIVE-5604
 URL: https://issues.apache.org/jira/browse/HIVE-5604
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-5604.1.patch, HIVE-5604.2.patch, HIVE-5604.3.patch, 
 HIVE-5604.tez.patch


 The bug fixes a few issues related to nested expressions:
 1) The nested expressions were not being validated at all.
 2) UDFRegExp was not handled correctly, but issue was not caught because of 
 the previous issue.
 3) HIVE-5642 will not show up when this jira is fixed, but still added a 
 sanity check to validate the number of arguments.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-28 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-5663:
--

Attachment: HIVE-5663.02.patch

Updated patch as per comments.

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC
 Attachments: HIVE-5663.01.patch, HIVE-5663.02.patch


 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).
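As a loose analogy (Python, not ORC code): removing the array-backing assumption means reading through a buffer abstraction that accepts any bytes-like object, rather than requiring something with a backing array — roughly what memoryview gives you in Python:

```python
def read_int_be(buf, offset):
    # Read a big-endian 32-bit int from any bytes-like object
    # (bytes, bytearray, mmap, ...) without assuming a backing array.
    view = memoryview(buf)
    return int.from_bytes(view[offset:offset + 4], "big")

print(read_int_be(b"\x00\x00\x00\x2a", 0))  # 42
```

The same reader then works unchanged for heap-backed and direct (zero-copy) buffers.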





[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806849#comment-13806849
 ] 

Ashutosh Chauhan commented on HIVE-5653:


+1

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-5653.1.patch


 Vectorized shuffle join should work out-of-the-box, but it produces an empty
 result set. Investigating.





[jira] [Commented] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806850#comment-13806850
 ] 

Ashutosh Chauhan commented on HIVE-5656:


+1

 Hive produces unclear, confusing SemanticException when dealing with mod or 
 pmod by zero
 

 Key: HIVE-5656
 URL: https://issues.apache.org/jira/browse/HIVE-5656
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-5656.patch


 {code}
 hive> select 5%0 from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
 org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 hive> select pmod(5,0) from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
 org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 {code}
 Exception stack:
 {code}
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 {code}
 The correct behaviour should be producing NULL.
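The NULL-producing behaviour argued for above can be sketched as follows — an illustrative Python helper (hypothetical, not Hive code) mirroring SQL semantics, where mod by zero or a NULL operand yields NULL:

```python
def null_safe_mod(a, b):
    # Return a % b, or None (SQL NULL) when either operand is
    # None or the divisor is zero, instead of raising an error.
    if a is None or b is None or b == 0:
        return None
    return a % b

print(null_safe_mod(5, 0))  # None
print(null_safe_mod(5, 3))  # 2
```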





[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806855#comment-13806855
 ] 

Ashutosh Chauhan commented on HIVE-5610:


Is there a way to run a single qfile test? 

 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 * HIVE-5612 - Add ability to re-generate generated code stored in source 
 control
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh
 5) Commit result 
 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.





[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806862#comment-13806862
 ] 

Brock Noland commented on HIVE-5610:


Yes, I just updated this page:

https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ

Basically, assuming you have built and installed:
{noformat}
mvn clean install -DskipTests
cd itests 
mvn clean install -DskipTests
{noformat}

It's just:

{noformat}
cd itests/qtest
mvn test -Dtest=TestCliDriver -Dqfile=alter1.q
{noformat}

 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 * HIVE-5612 - Add ability to re-generate generated code stored in source 
 control
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh
 5) Commit result 
 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.





[jira] [Commented] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806867#comment-13806867
 ] 

Ashutosh Chauhan commented on HIVE-5576:


+1

 Blank lines missing from .q.out files created on Windows for 
 testcase=TestCliDriver
 ---

 Key: HIVE-5576
 URL: https://issues.apache.org/jira/browse/HIVE-5576
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Affects Versions: 0.13.0
 Environment: Windows 8 using Hive Monarch build environment
Reporter: Eric Hanson
Assignee: Remus Rusanu
Priority: Minor
 Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
 vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows


 If you create a .q.out file on Windows using a command like this:
 ant test -Dhadoop.security.version=1.1.0-SNAPSHOT 
 -Dhadoop.root=c:\hw\project\hadoop-monarch -Dresolvers=internal 
 -Dhadoop-0.20S.version=1.1.0-SNAPSHOT -Dhadoop.mr.rev=20S 
 -Dhive.support.concurrency=false -Dshims.include=0.20S 
 -Dtest.continue.on.failure=true -Dtest.halt.on.failure=no 
 -Dtest.print.classpath=true  -Dtestcase=TestCliDriver 
 -Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q
  -Doverwrite=true -Dtest.silent=false
 Then the .q.out files generated in the hive directory under
 ql\src\test\results\clientpositive
 having missing blank lines.
 So, the .q tests will pass on your Windows machine. But when you upload them 
 in a patch, they fail on the automated build server. See HIVE-5517 for an 
 example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
 Hive-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
 have blank lines.





[jira] [Commented] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806888#comment-13806888
 ] 

Ashutosh Chauhan commented on HIVE-5648:


+1

 error when casting partition column to varchar in where clause 
 ---

 Key: HIVE-5648
 URL: https://issues.apache.org/jira/browse/HIVE-5648
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch


 hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
 '2000-01-01';
 FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
 VARCHAR



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5657) TopN produces incorrect results with count(distinct)

2013-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806882#comment-13806882
 ] 

Sergey Shelukhin commented on HIVE-5657:


is it ready for review? I skimmed a bit, it makes sense. Would be nice to have 
RB/FB for final patch. Thanks!

 TopN produces incorrect results with count(distinct)
 

 Key: HIVE-5657
 URL: https://issues.apache.org/jira/browse/HIVE-5657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Navis
Priority: Critical
 Attachments: example.patch, HIVE-5657.1.patch.txt


 Attached patch illustrates the problem.
 limit_pushdown test has various other cases of aggregations and distincts, 
 incl. count-distinct, that work correctly (that said, src dataset is bad for 
 testing these things because every count, for example, produces one record 
 only), so something must be special about this.
 I am not very familiar with the distinct-related code and its nuances; if someone 
 knows a quick fix, feel free to take this; otherwise I will probably start 
 looking next week. 





[jira] [Commented] (HIVE-5659) HBaseStorageHandler overwrites Hive-set HBase properties with hbase-defaults

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806894#comment-13806894
 ] 

Ashutosh Chauhan commented on HIVE-5659:


[~ccondit] I think current behavior is correct. To achieve what you want (i.e., 
configure HBase for Hive), you should put {{hbase-site.xml}} on Hive's classpath 
so that the HBase configs are picked up from there, instead of putting HBase 
configs in hive-site.xml.

 HBaseStorageHandler overwrites Hive-set HBase properties with hbase-defaults
 

 Key: HIVE-5659
 URL: https://issues.apache.org/jira/browse/HIVE-5659
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.12.0
Reporter: Craig Condit
 Attachments: HIVE-5659.patch


 As part of the changes to HIVE-5260, it appears that HBase properties set in 
 hive-conf.xml are being clobbered by defaults from hbase-default.xml.
 Specifically, we noticed it when attempting to set hbase.zookeeper.quorum. 
 That value defaults to 'localhost' and results in queries of HBase tables 
 hanging attempting to acquire a lock from a Zookeeper instance which isn't 
 running.
 Any properties set in hive-site.xml will be overwritten by those in 
 hbase-default.xml, which doesn't seem good.
 The call to HBaseConfiguration.addHbaseResources(jobConf) seems to be the 
 culprit, around line 337.
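The clobbering described above comes down to resource-load order: when configuration resources are merged, later-loaded resources overwrite earlier values. A toy Python sketch of that precedence (illustrative only, not the Hadoop Configuration API; the property values are hypothetical):

```python
def load_resources(*resources):
    # Merge configuration dicts in order; later resources win,
    # which is why loading default resources after site-specific
    # ones clobbers the site-specific values.
    merged = {}
    for res in resources:
        merged.update(res)
    return merged

hive_site = {"hbase.zookeeper.quorum": "zk1.example.com"}  # hypothetical value
hbase_default = {"hbase.zookeeper.quorum": "localhost"}

print(load_resources(hive_site, hbase_default)["hbase.zookeeper.quorum"])  # localhost
```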





[jira] [Resolved] (HIVE-5658) count(distinct) produces confusing results for null

2013-10-28 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-5658.


Resolution: Not A Problem

 count(distinct) produces confusing results for null
 ---

 Key: HIVE-5658
 URL: https://issues.apache.org/jira/browse/HIVE-5658
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin

 {code}
 select csmallint, count(distinct(csmallint)) from alltypesorc group by 
 csmallint limit 3;
 {code}
 produces:
 {noformat}
 NULL  0
 -16379  1
 -16373  1
 {noformat}
 There are records in the table with NULL values; however, the count in this case 
 should be 1, not 0, it seems. This is with TopN disabled, so it is unrelated 
 to the other bug.





[jira] [Commented] (HIVE-5658) count(distinct) produces confusing results for null

2013-10-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806913#comment-13806913
 ] 

Sergey Shelukhin commented on HIVE-5658:


That does seem counter-intuitive...
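For context: standard SQL COUNT(DISTINCT col) skips NULLs, which is why the NULL group reports 0. A minimal Python sketch of that semantics (the sample values are hypothetical):

```python
from collections import defaultdict

rows = [None, None, -16379, -16373]  # hypothetical csmallint sample

# Group by value, then count distinct non-NULL values per group,
# mirroring SQL's COUNT(DISTINCT col), which ignores NULLs.
groups = defaultdict(set)
for v in rows:
    groups[v].add(v)

result = {k: len({x for x in vs if x is not None}) for k, vs in groups.items()}
print(result)  # {None: 0, -16379: 1, -16373: 1}
```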

 count(distinct) produces confusing results for null
 ---

 Key: HIVE-5658
 URL: https://issues.apache.org/jira/browse/HIVE-5658
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin

 {code}
 select csmallint, count(distinct(csmallint)) from alltypesorc group by 
 csmallint limit 3;
 {code}
 produces:
 {noformat}
 NULL  0
 -16379  1
 -16373  1
 {noformat}
 There are records in the table with NULL values; however, the count in this case 
 should be 1, not 0, it seems. This is with TopN disabled, so it is unrelated 
 to the other bug.





Re: Review Request 14887: Subquery support: disallow nesting of SubQueries

2013-10-28 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14887/#review27614
---

Ship it!


+1 Harish, can you attach the patch on jira so that Hive QA gets to run on it.

- Ashutosh Chauhan


On Oct. 23, 2013, 8:46 p.m., Harish Butani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14887/
 ---
 
 (Updated Oct. 23, 2013, 8:46 p.m.)
 
 
 Review request for hive and Ashutosh Chauhan.
 
 
 Bugs: HIVE-5613
 https://issues.apache.org/jira/browse/HIVE-5613
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This is Restriction 9 from the SubQuery design doc:
 We will not do algebraic transformations for these kinds of queries:
 
 {noformat}
 -query 1
 select ...
 from x
 where  
 x.b in (select u 
 from y 
 where y.c = 10 and
   exists (select m from z where z.A = x.C)
)
 - query 2
 select ...
 from x
 where  
 x.b in (select u 
 from y 
 where y.c = 10 and
   exists (select m from z where z.A = y.D)
 {noformat}
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java 50b5a77 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 6fc3cd5 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 2d7775c 
   ql/src/test/queries/clientnegative/subquery_nested_subquery.q PRE-CREATION 
   ql/src/test/results/clientnegative/subquery_nested_subquery.q.out 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/14887/diff/
 
 
 Testing
 ---
 
 tested subquery tests
 added new subquery_nested_subquery.q negative test
 
 
 Thanks,
 
 Harish Butani
 




[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Attachment: HIVE-5354.2.patch

Patch #1 didn't include the generated file diff, which caused the test run to 
fail. Patch #2 is updated with that.

However, the generated file, OrcProto.java, shouldn't be in source control. 
I will log a separate JIRA if there isn't one already.

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.1.patch, HIVE-5354.2.patch, HIVE-5354.patch


 A subtask of HIVE-3976.





[jira] [Commented] (HIVE-5218) datanucleus does not work with MS SQLServer in Hive metastore

2013-10-28 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806942#comment-13806942
 ] 

Hive QA commented on HIVE-5218:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610570/HIVE-5218.2.patch

{color:green}SUCCESS:{color} +1 4502 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1270/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1270/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 datanucleus does not work with MS SQLServer in Hive metastore
 -

 Key: HIVE-5218
 URL: https://issues.apache.org/jira/browse/HIVE-5218
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Fix For: 0.13.0

 Attachments: 
 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch, 
 HIVE-5218.2.patch, HIVE-5218.patch, HIVE-5218-v2.patch


 HIVE-3632 upgraded datanucleus version to 3.2.x, however, this version of 
 datanucleus doesn't work with SQLServer as the metastore. The problem is that 
 datanucleus tries to use fully qualified object name to find a table in the 
 database but couldn't find it.
 If I downgrade the version to HIVE-2084, SQLServer works fine.
 It could be a bug in datanucleus.
 This is the detailed exception I'm getting when using datanucleus 3.2.x with 
 SQL Server:
 {noformat}
 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown calling table.exists() for a2ee36af45e9f46c19e995bfd2d9b5fd1hivemetastore..SEQUENCE_TABLE
 at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
 at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
 …
 at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
 at $Proxy0.createTable(Unknown Source)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1071)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1104)
 …
 at $Proxy11.create_table_with_environment_context(Unknown Source)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6417)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:6401)
 NestedThrowablesStackTrace:
 com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'SEQUENCE_TABLE' in the database.
 at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1493)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
 at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
 at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
 at com.microsoft.sqlserver.jdbc.SQLServerStatement.execute(SQLServerStatement.java:649)
 at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:300)
 at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
 at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711)
 at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:425)
 at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:488)
 at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
 at
 

[jira] [Created] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-28 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-5670:
--

 Summary: annoying ZK exceptions are annoying
 Key: HIVE-5670
 URL: https://issues.apache.org/jira/browse/HIVE-5670
 Project: Hive
  Issue Type: Task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor


when I run tests locally (or on cluster IIRC) there are a bunch of ZK-related 
exceptions in the Hive log, such as
{noformat}
2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
(ClientCnxn.java:processEvent(523)) - Error while calling watcher 
java.lang.NullPointerException
   at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
   
   2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
(NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input shutdown
java.net.SocketException: Socket is not connected
   at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
   at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
   at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
   at 
org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
   at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
   at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
   at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
   at java.lang.Thread.run(Thread.java:680)
{noformat}

They are annoying when you look for actual problems in logs.

Those on DEBUG level should be silenced via log levels for ZK classes by 
default. Not sure what to do with ERROR level one(s?), I'd need to look if they 
can be silenced/logged as DEBUG on hive side, or maybe file a bug for ZK...
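One way to silence the DEBUG-level noise via log levels, as suggested above, would be a log4j.properties fragment like the following. The logger names are taken from the classes in the stack traces above; the chosen levels are only a suggestion, not something prescribed by the issue:

```properties
# Hypothetical log4j.properties entries: raise the threshold for the chatty
# ZK server connection classes so their DEBUG-level socket noise is dropped.
log4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN
log4j.logger.org.apache.zookeeper.server.NIOServerCnxnFactory=WARN
```

This would not help with the ERROR-level NPE from ClientCnxn, which would still need a fix on the Hive or ZK side.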



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5576:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Remus!

 Blank lines missing from .q.out files created on Windows for 
 testcase=TestCliDriver
 ---

 Key: HIVE-5576
 URL: https://issues.apache.org/jira/browse/HIVE-5576
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Affects Versions: 0.13.0
 Environment: Windows 8 using Hive Monarch build environment
Reporter: Eric Hanson
Assignee: Remus Rusanu
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
 vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows


 If you create a .q.out file on Windows using a command like this:
 ant test -Dhadoop.security.version=1.1.0-SNAPSHOT 
 -Dhadoop.root=c:\hw\project\hadoop-monarch -Dresolvers=internal 
 -Dhadoop-0.20S.version=1.1.0-SNAPSHOT -Dhadoop.mr.rev=20S 
 -Dhive.support.concurrency=false -Dshims.include=0.20S 
 -Dtest.continue.on.failure=true -Dtest.halt.on.failure=no 
 -Dtest.print.classpath=true  -Dtestcase=TestCliDriver 
 -Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q
  -Doverwrite=true -Dtest.silent=false
 Then the .q.out files generated in the hive directory under
 ql\src\test\results\clientpositive
 are missing blank lines.
 So, the .q tests will pass on your Windows machine. But when you upload them 
 in a patch, they fail on the automated build server. See HIVE-5517 for an 
 example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
 Hive-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
 have blank lines.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5671) Generated src OrcProto.java shouldn't be in the source control

2013-10-28 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-5671:
-

 Summary: Generated src OrcProto.java shouldn't be in the source 
control
 Key: HIVE-5671
 URL: https://issues.apache.org/jira/browse/HIVE-5671
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


orc_proto.proto generates OrcProto.java, which unfortunately made its way to 
source control, so changing the .proto file requires regenerating the .java 
file and checking it in again. This is unnecessary.

Also, the code generation should be part of the build process.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5355:
--

Attachment: HIVE-5355.2.patch

Patch #2 fixed the test code. Manually running the test passed.

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.2.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5355:
--

Status: Patch Available  (was: Open)

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.2.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5612) Ability to compile odbc and re-generate generated code stored in source control

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5612:
---

Summary: Ability to compile odbc and re-generate generated code stored in 
source control  (was: Add ability to re-generate generated code stored in 
source control)

 Ability to compile odbc and re-generate generated code stored in source 
 control
 ---

 Key: HIVE-5612
 URL: https://issues.apache.org/jira/browse/HIVE-5612
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland

 We need the ability to re-generate protocol buffers (and thrift?) via maven. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5612) Ability to compile odbc and re-generate generated code stored in source control

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5612:
---

Attachment: HIVE-5612.patch

 Ability to compile odbc and re-generate generated code stored in source 
 control
 ---

 Key: HIVE-5612
 URL: https://issues.apache.org/jira/browse/HIVE-5612
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
 Attachments: HIVE-5612.patch


 We need the ability to re-generate protocol buffers (and thrift?) via maven. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5659) HBaseStorageHandler overwrites Hive-set HBase properties with hbase-defaults

2013-10-28 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13806975#comment-13806975
 ] 

Craig Condit commented on HIVE-5659:


In our case, I'm not sure of the best way to handle this. We have some options 
(such as HBase direct reads) that require configuration parameters that would 
affect all HDFS users. Those configurations would be set if we included 
hbase-site.xml, and wouldn't be overridable.

It just seems like configuration in hive-site.xml should override 
hbase-site.xml, and not the other way around.
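The precedence being asked for here can be illustrated with plain `java.util.Properties` rather than the actual `HBaseConfiguration`/`JobConf` machinery (a sketch only; all class and property values below are illustrative): apply the hbase defaults first, then overlay hive-site last, so an explicitly configured Hive value wins.

```java
import java.util.Properties;

public class ConfigPrecedence {
    // Overlay later sources on top of earlier ones: the last writer wins.
    static Properties overlay(Properties... sources) {
        Properties merged = new Properties();
        for (Properties p : sources) {
            merged.putAll(p);
        }
        return merged;
    }

    public static void main(String[] args) {
        // Stand-in for hbase-default.xml: ships with quorum = localhost.
        Properties hbaseDefaults = new Properties();
        hbaseDefaults.setProperty("hbase.zookeeper.quorum", "localhost");

        // Stand-in for hive-site.xml: the admin's explicit setting.
        Properties hiveSite = new Properties();
        hiveSite.setProperty("hbase.zookeeper.quorum", "zk1.example.com");

        // hive-site applied last, so its quorum setting survives.
        Properties merged = overlay(hbaseDefaults, hiveSite);
        System.out.println(merged.getProperty("hbase.zookeeper.quorum")); // prints zk1.example.com
    }
}
```

With the order reversed (defaults overlaid last), the merged quorum would silently revert to `localhost`, which matches the hang described in the issue.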

 HBaseStorageHandler overwrites Hive-set HBase properties with hbase-defaults
 

 Key: HIVE-5659
 URL: https://issues.apache.org/jira/browse/HIVE-5659
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Affects Versions: 0.12.0
Reporter: Craig Condit
 Attachments: HIVE-5659.patch


 As part of the changes to HIVE-5260, it appears that HBase properties set in 
 hive-site.xml are being clobbered by defaults from hbase-default.xml.
 Specifically, we noticed it when attempting to set hbase.zookeeper.quorum. 
 That value defaults to 'localhost' and results in queries of HBase tables 
 hanging attempting to acquire a lock from a Zookeeper instance which isn't 
 running.
 Any properties set in hive-site.xml will be overwritten by those in 
 hbase-default.xml, which doesn't seem good.
 The call to HBaseConfiguration.addHbaseResources(jobConf) seems to be the 
 culprit, around line 337.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-5612) Ability to compile odbc and re-generate generated code stored in source control

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland reassigned HIVE-5612:
--

Assignee: Brock Noland

 Ability to compile odbc and re-generate generated code stored in source 
 control
 ---

 Key: HIVE-5612
 URL: https://issues.apache.org/jira/browse/HIVE-5612
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5612.patch


 We need the ability to re-generate protocol buffers (and thrift?) via maven. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5672) Insert with custom separator not supported for non-local directory

2013-10-28 Thread Romain Rigaux (JIRA)
Romain Rigaux created HIVE-5672:
---

 Summary: Insert with custom separator not supported for non-local 
directory
 Key: HIVE-5672
 URL: https://issues.apache.org/jira/browse/HIVE-5672
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Romain Rigaux


https://issues.apache.org/jira/browse/HIVE-3682 is great, but non-local 
directories don't seem to be supported:

{code}
insert overwrite directory '/tmp/test-02'
row format delimited
FIELDS TERMINATED BY ':'
select description FROM sample_07
{code}

{code}
Error while compiling statement: FAILED: ParseException line 2:0 cannot 
recognize input near 'row' 'format' 'delimited' in select clause
{code}

This works (with 'local'):
{code}
insert overwrite local directory '/tmp/test-02'
row format delimited
FIELDS TERMINATED BY ':'
select code, description FROM sample_07
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5670) annoying ZK exceptions are annoying

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13806993#comment-13806993
 ] 

Brock Noland commented on HIVE-5670:


I think we should either fix the ZK NPE or upgrade to a version that has it 
fixed.

 annoying ZK exceptions are annoying
 ---

 Key: HIVE-5670
 URL: https://issues.apache.org/jira/browse/HIVE-5670
 Project: Hive
  Issue Type: Task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor

 when I run tests locally (or on cluster IIRC) there are a bunch of ZK-related 
 exceptions in the Hive log, such as
 {noformat}
 2013-10-28 09:50:50,851 ERROR zookeeper.ClientCnxn 
 (ClientCnxn.java:processEvent(523)) - Error while calling watcher 
 java.lang.NullPointerException
at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)

2013-10-28 09:51:05,747 DEBUG server.NIOServerCnxn 
 (NIOServerCnxn.java:closeSock(1024)) - ignoring exception during input 
 shutdown
 java.net.SocketException: Socket is not connected
at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:633)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
at 
 org.apache.zookeeper.server.NIOServerCnxn.closeSock(NIOServerCnxn.java:1020)
at org.apache.zookeeper.server.NIOServerCnxn.close(NIOServerCnxn.java:977)
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:347)
at 
 org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
at java.lang.Thread.run(Thread.java:680)
 {noformat}
 They are annoying when you look for actual problems in logs.
 Those on DEBUG level should be silenced via log levels for ZK classes by 
 default. Not sure what to do with ERROR level one(s?), I'd need to look if 
 they can be silenced/logged as DEBUG on hive side, or maybe file a bug for 
 ZK...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5612) Ability to compile odbc and re-generate generated code stored in source control

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807006#comment-13807006
 ] 

Brock Noland commented on HIVE-5612:


Thrift gen works well:

{noformat}
mvn clean install -Pthriftif -DskipTests -Dthrift.home=/usr/local
{noformat}

ODBC errors out on my host with the same error I see on ant package.
{noformat}
cd odbc
mvn compile -Podbc -Dthrift.home=/usr/local -Dboost.home=/usr/local
{noformat}


 Ability to compile odbc and re-generate generated code stored in source 
 control
 ---

 Key: HIVE-5612
 URL: https://issues.apache.org/jira/browse/HIVE-5612
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5612.patch


 We need the ability to re-generate protocol buffers (and thrift?) via maven. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5613) Subquery support: disallow nesting of SubQueries

2013-10-28 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-5613:


Attachment: HIVE-5613.1.patch

 Subquery support: disallow nesting of SubQueries
 

 Key: HIVE-5613
 URL: https://issues.apache.org/jira/browse/HIVE-5613
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-5613.1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5556) Pushdown join conditions

2013-10-28 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-5556:


Status: Patch Available  (was: Open)

 Pushdown join conditions
 

 Key: HIVE-5556
 URL: https://issues.apache.org/jira/browse/HIVE-5556
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-5556.1.patch, HIVE-5556.2.patch


 See details in HIVE-



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5556) Pushdown join conditions

2013-10-28 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-5556:


Attachment: HIVE-5556.2.patch

 Pushdown join conditions
 

 Key: HIVE-5556
 URL: https://issues.apache.org/jira/browse/HIVE-5556
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-5556.1.patch, HIVE-5556.2.patch


 See details in HIVE-



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5613) Subquery support: disallow nesting of SubQueries

2013-10-28 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-5613:


Fix Version/s: 0.13.0
   Status: Patch Available  (was: Open)

 Subquery support: disallow nesting of SubQueries
 

 Key: HIVE-5613
 URL: https://issues.apache.org/jira/browse/HIVE-5613
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: HIVE-5613.1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5612) Ability to compile odbc and re-generate generated code stored in source control

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-5612.


Resolution: Fixed

Committed to branch.

 Ability to compile odbc and re-generate generated code stored in source 
 control
 ---

 Key: HIVE-5612
 URL: https://issues.apache.org/jira/browse/HIVE-5612
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5612.patch


 We need the ability to re-generate protocol buffers (and thrift?) via maven. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5671) Generated src OrcProto.java shouldn't be in the source control

2013-10-28 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807021#comment-13807021
 ] 

Ashutosh Chauhan commented on HIVE-5671:


The reason for checking in {{OrcProto.java}} is the same as for checking in 
thrift-generated code: if it is not checked in, devs need to have the thrift 
compiler (for thrift-generated code) and the protobuf compiler (for 
protobuf-generated code) installed on their machines. In the past, many devs 
have felt that installing the thrift or protobuf compiler is complicated 
enough that, if possible, we should avoid the burden of having it installed. 
Also, this requirement would be an impediment for new hive devs.

 Generated src OrcProto.java shouldn't be in the source control
 --

 Key: HIVE-5671
 URL: https://issues.apache.org/jira/browse/HIVE-5671
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 orc_proto.proto generates OrcProto.java, which unfortunately made its way to 
 source control, so changing the .proto file requires regenerating the .java 
 file and checking it in again. This is unnecessary.
 Also, the code generation should be part of the build process.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5624) Create script for removing ant artifacts after merge

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5624:
---

Summary: Create script for removing ant artifacts after merge  (was: Remove 
ant artifacts from project)

 Create script for removing ant artifacts after merge
 

 Key: HIVE-5624
 URL: https://issues.apache.org/jira/browse/HIVE-5624
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland

 Before marking HIVE-5107 resolved we should remove the build.xml files and 
 other ant artifacts.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5671) Generated src OrcProto.java shouldn't be in the source control

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807032#comment-13807032
 ] 

Brock Noland commented on HIVE-5671:


Yeah, I'd personally rather check in the generated code. 

 Generated src OrcProto.java shouldn't be in the source control
 --

 Key: HIVE-5671
 URL: https://issues.apache.org/jira/browse/HIVE-5671
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 orc_proto.proto generates OrcProto.java, which unfortunately made its way to 
 source control, so changing the .proto file requires regenerating the .java 
 file and checking it in again. This is unnecessary.
 Also, the code generation should be part of the build process.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5624) Create script for removing ant artifacts after merge

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5624:
---

Attachment: HIVE-5624.patch

 Create script for removing ant artifacts after merge
 

 Key: HIVE-5624
 URL: https://issues.apache.org/jira/browse/HIVE-5624
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
 Attachments: HIVE-5624.patch


 Before marking HIVE-5107 resolved we should remove the build.xml files and 
 other ant artifacts.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5673) Create profile to generate protobuf

2013-10-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5673:
--

 Summary: Create profile to generate protobuf
 Key: HIVE-5673
 URL: https://issues.apache.org/jira/browse/HIVE-5673
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5610:
---

Description: 
With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
branch to trunk. The following tasks will be done post-merge:

* HIVE-5611 - Add assembly (i.e.) tar creation to pom

The merge process will be as follows:

1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh and commit result 
5) Modify the following line in maven-delete-ant.sh:
{noformat}
  rm -rf $@
{noformat}
to
{noformat}
  svn rm $@
{noformat}
6) Execute maven-delete-ant.sh and commit result
7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
adding the following:

{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}

Notes:

* To build everything you must:

{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}

because itests (any tests that have cyclical dependencies or require that the 
packages be built) is not part of the root reactor build.

  was:
With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
branch to trunk. The following tasks will be done post-merge:

* HIVE-5611 - Add assembly (i.e.) tar creation to pom
* HIVE-5612 - Add ability to re-generate generated code stored in source control

The merge process will be as follows:

1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh
5) Commit result 
6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
adding the following:

{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}

Notes:

* To build everything you must:

{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}

because itests (any tests that have cyclical dependencies or require that the 
packages be built) is not part of the root reactor build.


 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result 
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh and commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any tests that have cyclical dependencies or require that the 
 packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5667) ThriftCLIService log messages jumbled up

2013-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807053#comment-13807053
 ] 

Hudson commented on HIVE-5667:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #154 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/154/])
HIVE-5667 - ThriftCLIService log messages jumbled up (Vaibhav Gumashta via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536361)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


 ThriftCLIService log messages jumbled up
 

 Key: HIVE-5667
 URL: https://issues.apache.org/jira/browse/HIVE-5667
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5667.1.patch


 ThriftCLIService log messages are not aligned with the methods correctly



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5624) Create script for removing ant artifacts after merge

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-5624.


Resolution: Fixed
  Assignee: Brock Noland

Committed to branch

 Create script for removing ant artifacts after merge
 

 Key: HIVE-5624
 URL: https://issues.apache.org/jira/browse/HIVE-5624
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5624.patch


 Before marking HIVE-5107 resolved we should remove the build.xml files and 
 other ant artifacts.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-28 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807058#comment-13807058
 ] 

Jason Dere commented on HIVE-5355:
--

Hey Xuefu, for these changes:
{noformat}
 case Types.VARCHAR:
-  if (columnAttributes != null) {
-return columnAttributes.precision;
-  }
-  return Integer.MAX_VALUE; // hive has no max limit for strings
+  return columnPrecision(columnType, columnAttributes);
{noformat}

String columns end up getting translated to the JDBC varchar type. So changing 
this would result in an NPE if the user tries to call getColumnDisplaySize() for 
string columns, because I think the columnAttributes are null for non-qualified 
Hive types.
Not sure if you have to worry about a similar issue for decimal if you are 
connecting to a pre-HIVE-3976 Hive server, are newer clients meant to be 
backward compatible with older servers?
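The null-guarded fallback being discussed can be sketched as a standalone helper. The real code lives in Hive's JDBC metadata classes with a different signature; the class name and the simplified `Integer` parameter below are hypothetical, used only to show the guard:

```java
public class PrecisionHelper {
    // Hypothetical stand-in for Hive's columnPrecision(): when a column type
    // carries no explicit attributes (declaredPrecision == null), fall back to
    // Integer.MAX_VALUE, since hive has no max limit for strings.
    static int columnPrecision(Integer declaredPrecision) {
        if (declaredPrecision != null) {
            return declaredPrecision;
        }
        return Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(columnPrecision(null)); // unqualified string column: prints 2147483647
        System.out.println(columnPrecision(10));   // e.g. varchar(10): prints 10
    }
}
```

The point of the guard is exactly the scenario above: a plain string column arrives with no type attributes, so dereferencing them without the null check would throw an NPE.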

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.2.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-28 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807065#comment-13807065
 ] 

Jason Dere commented on HIVE-5355:
--

Oops, disregard my comment; those changes call columnPrecision(), which does 
the null checks.

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.2.patch, HIVE-5355.patch


 A subtask of HIVE-3976.





[jira] [Updated] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5648:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Jason!

 error when casting partition column to varchar in where clause 
 ---

 Key: HIVE-5648
 URL: https://issues.apache.org/jira/browse/HIVE-5648
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch


 hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
 '2000-01-01';
 FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
 VARCHAR





[jira] [Resolved] (HIVE-5673) Create profile to generate protobuf

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-5673.


Resolution: Fixed
  Assignee: Brock Noland

Committed to branch.

 Create profile to generate protobuf
 ---

 Key: HIVE-5673
 URL: https://issues.apache.org/jira/browse/HIVE-5673
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5673.patch








[jira] [Updated] (HIVE-5673) Create profile to generate protobuf

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5673:
---

Attachment: HIVE-5673.patch

 Create profile to generate protobuf
 ---

 Key: HIVE-5673
 URL: https://issues.apache.org/jira/browse/HIVE-5673
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
 Attachments: HIVE-5673.patch








[jira] [Commented] (HIVE-5673) Create profile to generate protobuf

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807069#comment-13807069
 ] 

Brock Noland commented on HIVE-5673:


Command is:

{noformat}
cd ql
mvn clean install -DskipTests -Pprotobuf -Phadoop-1
{noformat}

 Create profile to generate protobuf
 ---

 Key: HIVE-5673
 URL: https://issues.apache.org/jira/browse/HIVE-5673
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5673.patch








[jira] [Created] (HIVE-5674) Merge latest trunk into branch and fix resulting tests

2013-10-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5674:
--

 Summary: Merge latest trunk into branch and fix resulting tests
 Key: HIVE-5674
 URL: https://issues.apache.org/jira/browse/HIVE-5674
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland








[jira] [Updated] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5653:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Remus!

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Fix For: 0.13.0

 Attachments: HIVE-5653.1.patch


 Vectorized shuffle join should work out-of-the-box, but it produces an empty 
 result set. Investigating.





[jira] [Updated] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5656:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Xuefu!

 Hive produces unclear, confusing SemanticException when dealing with mod or 
 pmod by zero
 

 Key: HIVE-5656
 URL: https://issues.apache.org/jira/browse/HIVE-5656
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5656.patch


 {code}
 hive> select 5%0 from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
 org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 hive> select pmod(5,0) from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
 org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 {code}
 Exception stack:
 {code}
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 {code}
 The correct behaviour should be producing NULL.




Re: [jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Carl Steinbach
Any chance we can commit this instead of merging? I tried rebasing the
branch onto trunk and it seemed pretty straightforward.
On Oct 28, 2013 11:22 AM, Brock Noland (JIRA) j...@apache.org wrote:


  [
 https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]

 Brock Noland updated HIVE-5610:
 ---

 Description:
 With HIVE-5566 nearing completion we will be nearly ready to merge the
 maven branch to trunk. The following tasks will be done post-merge:

 * HIVE-5611 - Add assembly (i.e. tar) creation to pom

 The merge process will be as follows:

 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh and commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
 host, adding the following:

 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}

 Notes:

 * To build everything you must:

 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}

 because itests (any test that has cyclical dependencies or requires that
 the packages be built) is not part of the root reactor build.

   was:
 With HIVE-5566 nearing completion we will be nearly ready to merge the
 maven branch to trunk. The following tasks will be done post-merge:

 * HIVE-5611 - Add assembly (i.e. tar) creation to pom
 * HIVE-5612 - Add ability to re-generate generated code stored in source
 control

 The merge process will be as follows:

 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh
 5) Commit result
 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
 host, adding the following:

 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}

 Notes:

 * To build everything you must:

 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}

 because itests (any test that has cyclical dependencies or requires that
 the packages be built) is not part of the root reactor build.


  Merge maven branch into trunk
  -
 
  Key: HIVE-5610
  URL: https://issues.apache.org/jira/browse/HIVE-5610
  Project: Hive
   Issue Type: Sub-task
 Reporter: Brock Noland
 Assignee: Brock Noland
 
  With HIVE-5566 nearing completion we will be nearly ready to merge the
 maven branch to trunk. The following tasks will be done post-merge:
  * HIVE-5611 - Add assembly (i.e. tar) creation to pom
  The merge process will be as follows:
  1) svn merge ^/hive/branches/maven
  2) Commit result
  3) Modify the following line in maven-rollforward.sh:
  {noformat}
mv $source $target
  {noformat}
  to
  {noformat}
svn mv $source $target
  {noformat}
  4) Execute maven-rollforward.sh and commit result
  5) Modify the following line in maven-delete-ant.sh:
  {noformat}
rm -rf $@
  {noformat}
  to
  {noformat}
svn rm $@
  {noformat}
  6) Execute maven-delete-ant.sh and commit result
  7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
 host, adding the following:
  {noformat}
  mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
  testCasePropertyName = test
  buildTool = maven
  unitTests.directories = ./
  {noformat}
  Notes:
  * To build everything you must:
  {noformat}
  $ mvn clean install -DskipTests
  $ cd itests
  $ mvn clean install -DskipTests
  {noformat}
  because itests (any test that has cyclical dependencies or requires
 that the packages be built) is not part of the root reactor build.



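The by-hand script edits in the merge plan above (mv to svn mv in maven-rollforward.sh, rm -rf to svn rm in maven-delete-ant.sh) are plain one-line substitutions. A minimal sed sketch of the first one, run on the literal quoted line rather than on the real script (the sed pattern is illustrative, not part of the actual merge tooling):

```shell
# Sketch: turn the plain `mv` into an svn-tracked move while keeping
# the indentation. Applied to the literal line quoted above, not to
# the real maven-rollforward.sh.
line='  mv $source $target'
echo "$line" | sed 's/^\( *\)mv /\1svn mv /'
# prints:   svn mv $source $target
```

The maven-delete-ant.sh edit is the same idea with `s/rm -rf /svn rm /`.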



Review Request 14996: HIVE-5664: Drop cascade database fails when the db has any tables with indexes

2013-10-28 Thread Venki Korukanti

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14996/
---

Review request for hive and Prasad Mujumdar.


Repository: hive-git


Description
---

Repro steps:
CREATE DATABASE db2; 
USE db2; 
CREATE TABLE tab1 (id int, name string); 
CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN TABLE 
tab1_indx; 
DROP DATABASE db2 CASCADE;

Last DDL fails with following error:
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2

See JIRA for details on the exception.


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
65406d9 
  ql/src/test/queries/clientpositive/database_drop.q 4e17c7a 
  ql/src/test/results/clientpositive/database_drop.q.out 6c4440f 

Diff: https://reviews.apache.org/r/14996/diff/


Testing
---

Added this particular index to database_drop.q unittest.


Thanks,

Venki Korukanti



[jira] [Updated] (HIVE-5666) use Path instead of String for IOContext.inputPath

2013-10-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5666:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Thejas!

 use Path instead of String for IOContext.inputPath
 --

 Key: HIVE-5666
 URL: https://issues.apache.org/jira/browse/HIVE-5666
 Project: Hive
  Issue Type: Improvement
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5666.1.patch


 Path is converted to string in HiveContextAwareRecordReader to be stored in 
 IOContext.inputPath, then in MapOperator normalizePath gets called on it 
 which converts it back to Path. 
 Path creation is expensive, so it is better to use Path instead of string 
 through the call stack.
 This is also a step towards HIVE-3616.





[jira] [Commented] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2013-10-28 Thread Venki Korukanti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807091#comment-13807091
 ] 

Venki Korukanti commented on HIVE-5664:
---

ReviewBoard link: https://reviews.apache.org/r/14996/

 Drop cascade database fails when the db has any tables with indexes
 ---

 Key: HIVE-5664
 URL: https://issues.apache.org/jira/browse/HIVE-5664
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0


 {code}
 CREATE DATABASE db2; 
 USE db2; 
 CREATE TABLE tab1 (id int, name string); 
 CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
 TABLE tab1_indx; 
 DROP DATABASE db2 CASCADE;
 {code}
 Last DDL fails with the following error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
 Hive.log has following exception
 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
 org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy7.get_table(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
 at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
 ... 18 more
 {code}





Re: [jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland
I'd be fine with that. Just curious what the motivation is?


On Mon, Oct 28, 2013 at 1:41 PM, Carl Steinbach cwsteinb...@gmail.com wrote:

 Any chance we can commit this instead of merging? I tried rebasing the
 branch onto trunk and it seemed pretty straightforward.
 On Oct 28, 2013 11:22 AM, Brock Noland (JIRA) j...@apache.org wrote:

 
   [
 
 https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]
 
  Brock Noland updated HIVE-5610:
  ---
 
  Description:
  With HIVE-5566 nearing completion we will be nearly ready to merge the
  maven branch to trunk. The following tasks will be done post-merge:
 
  * HIVE-5611 - Add assembly (i.e. tar) creation to pom
 
  The merge process will be as follows:
 
  1) svn merge ^/hive/branches/maven
  2) Commit result
  3) Modify the following line in maven-rollforward.sh:
  {noformat}
mv $source $target
  {noformat}
  to
  {noformat}
svn mv $source $target
  {noformat}
  4) Execute maven-rollforward.sh and commit result
  5) Modify the following line in maven-delete-ant.sh:
  {noformat}
rm -rf $@
  {noformat}
  to
  {noformat}
svn rm $@
  {noformat}
  6) Execute maven-delete-ant.sh and commit result
  7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
  host, adding the following:
 
  {noformat}
  mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
  testCasePropertyName = test
  buildTool = maven
  unitTests.directories = ./
  {noformat}
 
  Notes:
 
  * To build everything you must:
 
  {noformat}
  $ mvn clean install -DskipTests
  $ cd itests
  $ mvn clean install -DskipTests
  {noformat}
 
  because itests (any test that has cyclical dependencies or requires that
  the packages be built) is not part of the root reactor build.
 
was:
  With HIVE-5566 nearing completion we will be nearly ready to merge the
  maven branch to trunk. The following tasks will be done post-merge:
 
  * HIVE-5611 - Add assembly (i.e. tar) creation to pom
  * HIVE-5612 - Add ability to re-generate generated code stored in source
  control
 
  The merge process will be as follows:
 
  1) svn merge ^/hive/branches/maven
  2) Commit result
  3) Modify the following line in maven-rollforward.sh:
  {noformat}
mv $source $target
  {noformat}
  to
  {noformat}
svn mv $source $target
  {noformat}
  4) Execute maven-rollforward.sh
  5) Commit result
  6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
  host, adding the following:
 
  {noformat}
  mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
  testCasePropertyName = test
  buildTool = maven
  unitTests.directories = ./
  {noformat}
 
  Notes:
 
  * To build everything you must:
 
  {noformat}
  $ mvn clean install -DskipTests
  $ cd itests
  $ mvn clean install -DskipTests
  {noformat}
 
  because itests (any test that has cyclical dependencies or requires that
  the packages be built) is not part of the root reactor build.
 
 
   Merge maven branch into trunk
   -
  
   Key: HIVE-5610
   URL: https://issues.apache.org/jira/browse/HIVE-5610
   Project: Hive
Issue Type: Sub-task
  Reporter: Brock Noland
  Assignee: Brock Noland
  
   With HIVE-5566 nearing completion we will be nearly ready to merge the
  maven branch to trunk. The following tasks will be done post-merge:
   * HIVE-5611 - Add assembly (i.e. tar) creation to pom
   The merge process will be as follows:
   1) svn merge ^/hive/branches/maven
   2) Commit result
   3) Modify the following line in maven-rollforward.sh:
   {noformat}
 mv $source $target
   {noformat}
   to
   {noformat}
 svn mv $source $target
   {noformat}
   4) Execute maven-rollforward.sh and commit result
   5) Modify the following line in maven-delete-ant.sh:
   {noformat}
 rm -rf $@
   {noformat}
   to
   {noformat}
 svn rm $@
   {noformat}
   6) Execute maven-delete-ant.sh and commit result
   7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
  host, adding the following:
   {noformat}
   mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
   testCasePropertyName = test
   buildTool = maven
   unitTests.directories = ./
   {noformat}
   Notes:
   * To build everything you must:
   {noformat}
   $ mvn clean install -DskipTests
   $ cd itests
   $ mvn clean install -DskipTests
   {noformat}
   because itests (any test that has cyclical dependencies or requires
  that the packages be built) is not part of the root reactor build.
 
 
 
 




-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807095#comment-13807095
 ] 

Brock Noland commented on HIVE-5610:


FYI I just merged trunk into the maven branch so the branch is likely broken 
until I resolve HIVE-5674.

 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e. tar) creation to pom
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result 
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh and commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.





[jira] [Created] (HIVE-5675) Ensure all artifacts are prefixed with hive-

2013-10-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5675:
--

 Summary: Ensure all artifacts are prefixed with hive-
 Key: HIVE-5675
 URL: https://issues.apache.org/jira/browse/HIVE-5675
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland


The shims for example.





[jira] [Commented] (HIVE-5668) path normalization in MapOperator is expensive

2013-10-28 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807106#comment-13807106
 ] 

Gunther Hagleitner commented on HIVE-5668:
--

This makes a noticeable difference. And it's pretty straightforward. Looks 
good: +1

 path normalization in MapOperator is expensive
 --

 Key: HIVE-5668
 URL: https://issues.apache.org/jira/browse/HIVE-5668
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5668.1.patch


 The conversion of paths in MapWork.getPathToAliases is happening multiple 
 times in MapOperator.cleanUpInputFileChangedOp. Caching the results of 
 conversion can improve the performance of hive map tasks.





[jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5610:
---

Description: 
With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
branch to trunk. The following tasks will be done post-merge:

* HIVE-5611 - Add assembly (i.e. tar) creation to pom

The merge process will be as follows:

1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh and commit result 
5) Modify the following line in maven-delete-ant.sh:
{noformat}
  rm -rf $@
{noformat}
to
{noformat}
  svn rm $@
{noformat}
6) Execute maven-delete-ant.sh (in addition to the maven-*.sh scripts) and 
commit result
7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
adding the following:

{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}

Notes:

* To build everything you must:

{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}

because itests (any test that has cyclical dependencies or requires that the 
packages be built) is not part of the root reactor build.

  was:
With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
branch to trunk. The following tasks will be done post-merge:

* HIVE-5611 - Add assembly (i.e. tar) creation to pom

The merge process will be as follows:

1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh and commit result 
5) Modify the following line in maven-delete-ant.sh:
{noformat}
  rm -rf $@
{noformat}
to
{noformat}
  svn rm $@
{noformat}
6) Execute maven-delete-ant.sh and commit result
7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
adding the following:

{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}

Notes:

* To build everything you must:

{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}

because itests (any test that has cyclical dependencies or requires that the 
packages be built) is not part of the root reactor build.


 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e. tar) creation to pom
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result 
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh (in addition to the maven-*.sh scripts) and 
 commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.



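The ptest property-file update in the merge plan above can likewise be scripted as a simple append. A minimal sketch, assuming trunk-mr1.properties and trunk-mr2.properties sit in the current directory on the ptest host (the loop and heredoc are illustrative, not part of the actual ptest setup):

```shell
# Append the maven settings from the merge plan to both ptest property
# files. Property names and values are taken verbatim from the JIRA
# description; the working directory is an assumption.
for f in trunk-mr1.properties trunk-mr2.properties; do
  cat >> "$f" <<'EOF'
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
EOF
done
```

Using `>>` keeps any settings already present in the files and only adds the maven-specific ones.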


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807115#comment-13807115
 ] 

Brock Noland commented on HIVE-5610:


bq. When I remove the ~/.m2 directory 'mvn compile' fails with an unsatisfied 
dependency error.

Before merging trunk into maven I verified with no .m2 directory that 

{noformat}
mvn clean install -DskipTests
{noformat}

works. I had never tried compile and haven't used that in a maven workflow. 
Generally I believe install, test and package are more useful goals. I 
documented the preferred maven goals here: 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ

bq. There are a bunch of JAR artifacts with names that aren't prepended with 
hive-*

These aren't public artifacts, but we should fix them; I will do so in HIVE-5675.

bq. It would be nice if this patch removed the old Ant and Ivy files, 
eclipse-files directory, and anything else that it will make obsolete.

I added a script maven-delete-ant.sh and added instructions for executing that 
during the merge or commit process.

bq. Run the Thrift code generator.
{noformat}
mvn clean install -Pthriftif -Phadoop-1 -DskipTests -Dthrift.home=/usr/local
{noformat}
Now documented here: 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ

bq. Compile the Thrift C++ bindings in the ODBC directory.

That doesn't work for me with ant, but nevertheless I get the same error with 
maven using this command:

{noformat}
cd odbc
mvn compile -Podbc -Dthrift.home=/usr/local -Dboost.home=/usr/local
{noformat}

Now documented here: 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ

bq. Run a single TestCliDriver qfile test.

{noformat}
cd itests/qtest
mvn test -Dtest=TestCliDriver -Dqfile=alter1.q -Dtest.output.overwrite=true
{noformat}

Now documented here: 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ


 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result 
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh and commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any tests that have cyclical dependencies or require that the 
 packages be built) is not part of the root reactor build.





[jira] [Resolved] (HIVE-5671) Generated src OrcProto.java shouldn't be in the source control

2013-10-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang resolved HIVE-5671.
---

Resolution: Won't Fix

I can see the merits of both sides, but it's no big deal. Closing the issue as won't-fix.

 Generated src OrcProto.java shouldn't be in the source control
 --

 Key: HIVE-5671
 URL: https://issues.apache.org/jira/browse/HIVE-5671
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang

 orc_proto.proto generates OrcProto.java, which unfortunately made its way to 
 source control, so changing the .proto file requires regenerating the .java 
 file and checking it in again. This is unnecessary.
 Also, the code generation should be part of the build process.





[jira] [Updated] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2013-10-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5425:


Status: Open  (was: Patch Available)

Agree with Brock's comment. Canceling the patch so the comment can be addressed.


 Provide a configuration option to control the default stripe size for ORC
 -

 Key: HIVE-5425
 URL: https://issues.apache.org/jira/browse/HIVE-5425
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: D13233.1.patch


 We should provide a configuration option to control the default stripe size.





Re: [jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Carl Steinbach
Primarily because it keeps the commit history cleaner, but also because we
don't yet have rules in place to allow feature branch merges.
On Oct 28, 2013 11:46 AM, Brock Noland br...@cloudera.com wrote:

 I'd be fine with that. Just curious, what's the motivation?


 On Mon, Oct 28, 2013 at 1:41 PM, Carl Steinbach cwsteinb...@gmail.com
 wrote:

  Any chance we can commit this instead of merging? I tried rebasing the
  branch onto trunk and it seemed pretty straightforward.
  On Oct 28, 2013 11:22 AM, Brock Noland (JIRA) j...@apache.org wrote:
 
  
[
  
 
 https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
  ]
  
   Brock Noland updated HIVE-5610:
   ---
  
   Description:
   With HIVE-5566 nearing completion we will be nearly ready to merge the
   maven branch to trunk. The following tasks will be done post-merge:
  
   * HIVE-5611 - Add assembly (i.e.) tar creation to pom
  
   The merge process will be as follows:
  
   1) svn merge ^/hive/branches/maven
   2) Commit result
   3) Modify the following line in maven-rollforward.sh:
   {noformat}
 mv $source $target
   {noformat}
   to
   {noformat}
 svn mv $source $target
   {noformat}
   4) Execute maven-rollforward.sh and commit result
   5) Modify the following line in maven-delete-ant.sh:
   {noformat}
 rm -rf $@
   {noformat}
   to
   {noformat}
 svn rm $@
   {noformat}
   6) Execute maven-delete-ant.sh and commit result
   7) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
   host, adding the following:
  
   {noformat}
   mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
   testCasePropertyName = test
   buildTool = maven
   unitTests.directories = ./
   {noformat}
  
   Notes:
  
   * To build everything you must:
  
   {noformat}
   $ mvn clean install -DskipTests
   $ cd itests
   $ mvn clean install -DskipTests
   {noformat}
  
   because itests (any tests that have cyclical dependencies or require that
   the packages be built) is not part of the root reactor build.
  
 was:
   With HIVE-5566 nearing completion we will be nearly ready to merge the
   maven branch to trunk. The following tasks will be done post-merge:
  
   * HIVE-5611 - Add assembly (i.e.) tar creation to pom
   * HIVE-5612 - Add ability to re-generate generated code stored in
 source
   control
  
   The merge process will be as follows:
  
   1) svn merge ^/hive/branches/maven
   2) Commit result
   3) Modify the following line in maven-rollforward.sh:
   {noformat}
 mv $source $target
   {noformat}
   to
   {noformat}
 svn mv $source $target
   {noformat}
   4) Execute maven-rollforward.sh
   5) Commit result
   6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting
   host, adding the following:
  
   {noformat}
   mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
   testCasePropertyName = test
   buildTool = maven
   unitTests.directories = ./
   {noformat}
  
   Notes:
  
   * To build everything you must:
  
   {noformat}
   $ mvn clean install -DskipTests
   $ cd itests
   $ mvn clean install -DskipTests
   {noformat}
  
   because itests (any tests that have cyclical dependencies or require that
   the packages be built) is not part of the root reactor build.
  
  
Merge maven branch into trunk
-
   
Key: HIVE-5610
URL: https://issues.apache.org/jira/browse/HIVE-5610
Project: Hive
 Issue Type: Sub-task
   Reporter: Brock Noland
   Assignee: Brock Noland
   
With HIVE-5566 nearing completion we will be nearly ready to merge
 the
   maven branch to trunk. The following tasks will be done post-merge:
* HIVE-5611 - Add assembly (i.e.) tar creation to pom
The merge process will be as follows:
1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh and commit result
5) Modify the following line in maven-delete-ant.sh:
{noformat}
  rm -rf $@
{noformat}
to
{noformat}
  svn rm $@
{noformat}
6) Execute maven-delete-ant.sh and commit result
7) Update trunk-mr1.properties and trunk-mr2.properties on the
 ptesting
   host, adding the following:
{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}
Notes:
* To build everything you must:
{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}
because itests (any tests that have cyclical dependencies or require
   that the packages be built) is not part of 

Re: [jira] [Updated] (HIVE-5610) Merge maven branch into trunk

2013-10-28 Thread Brock Noland
OK, let's go that route. The in-branch history isn't terribly useful since
there are so few commits.


On Mon, Oct 28, 2013 at 2:17 PM, Carl Steinbach cwsteinb...@gmail.comwrote:

 Primarily because it keeps the commit history cleaner, but also because we
 don't yet have rules in place to allow feature branch merges.
 On Oct 28, 2013 11:46 AM, Brock Noland br...@cloudera.com wrote:

  I'd be fine with that. Just curious, what's the motivation?
 
 
  On Mon, Oct 28, 2013 at 1:41 PM, Carl Steinbach cwsteinb...@gmail.com
  wrote:
 
   Any chance we can commit this instead of merging? I tried rebasing the
   branch onto trunk and it seemed pretty straightforward.
   On Oct 28, 2013 11:22 AM, Brock Noland (JIRA) j...@apache.org
 wrote:
  
   
 [
   
  
 
 https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
   ]
   
Brock Noland updated HIVE-5610:
---
   
Description:
With HIVE-5566 nearing completion we will be nearly ready to merge
 the
maven branch to trunk. The following tasks will be done post-merge:
   
* HIVE-5611 - Add assembly (i.e.) tar creation to pom
   
The merge process will be as follows:
   
1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh and commit result
5) Modify the following line in maven-delete-ant.sh:
{noformat}
  rm -rf $@
{noformat}
to
{noformat}
  svn rm $@
{noformat}
6) Execute maven-delete-ant.sh and commit result
7) Update trunk-mr1.properties and trunk-mr2.properties on the
 ptesting
host, adding the following:
   
{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}
   
Notes:
   
* To build everything you must:
   
{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}
   
because itests (any tests that have cyclical dependencies or require that
the packages be built) is not part of the root reactor build.
   
  was:
With HIVE-5566 nearing completion we will be nearly ready to merge
 the
maven branch to trunk. The following tasks will be done post-merge:
   
* HIVE-5611 - Add assembly (i.e.) tar creation to pom
* HIVE-5612 - Add ability to re-generate generated code stored in
  source
control
   
The merge process will be as follows:
   
1) svn merge ^/hive/branches/maven
2) Commit result
3) Modify the following line in maven-rollforward.sh:
{noformat}
  mv $source $target
{noformat}
to
{noformat}
  svn mv $source $target
{noformat}
4) Execute maven-rollforward.sh
5) Commit result
6) Update trunk-mr1.properties and trunk-mr2.properties on the
 ptesting
host, adding the following:
   
{noformat}
mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
testCasePropertyName = test
buildTool = maven
unitTests.directories = ./
{noformat}
   
Notes:
   
* To build everything you must:
   
{noformat}
$ mvn clean install -DskipTests
$ cd itests
$ mvn clean install -DskipTests
{noformat}
   
because itests (any tests that have cyclical dependencies or require that
the packages be built) is not part of the root reactor build.
   
   
 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL:
 https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion we will be nearly ready to merge
  the
maven branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh:
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh and commit result
 5) Modify the following line in maven-delete-ant.sh:
 {noformat}
   rm -rf $@
 {noformat}
 to
 {noformat}
   svn rm $@
 {noformat}
 6) Execute maven-delete-ant.sh and commit result
 7) Update trunk-mr1.properties and trunk-mr2.properties on the
  ptesting
host, adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128
 testCasePropertyName = test
 

[jira] [Updated] (HIVE-5675) Ensure all artifacts are prefixed with hive-

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5675:
---

Attachment: HIVE-5675.patch

 Ensure all artifacts are prefixed with hive-
 

 Key: HIVE-5675
 URL: https://issues.apache.org/jira/browse/HIVE-5675
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5675.patch


 The shims for example.





[jira] [Resolved] (HIVE-5675) Ensure all artifacts are prefixed with hive-

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-5675.


Resolution: Fixed

 Ensure all artifacts are prefixed with hive-
 

 Key: HIVE-5675
 URL: https://issues.apache.org/jira/browse/HIVE-5675
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5675.patch


 The shims for example.





[jira] [Updated] (HIVE-5674) Merge latest trunk into branch and fix resulting tests

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5674:
---

Attachment: HIVE-5674.patch

 Merge latest trunk into branch and fix resulting tests
 --

 Key: HIVE-5674
 URL: https://issues.apache.org/jira/browse/HIVE-5674
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5674.patch








Re: Review Request 14985: HIVE-5354: Decimal precision/scale support in ORC file

2013-10-28 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14985/#review27632
---



ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
https://reviews.apache.org/r/14985/#comment53683

This is a valid concern. However, there is no way to preserve the previous 
precision/scale: previously the precision was always 38 and the scale was 
variable, while here we need to re-fit values to a fixed precision and scale. 
I think a default of (38, 18) may satisfy most cases. What do you think?
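
As a toy illustration of the re-fit (not Hive's actual code path), pinning a longer literal to the proposed default scale of 18 can be shown with plain shell string handling; note a real implementation would round, while this simply truncates:

```shell
# Illustration only: reduce a variable-scale decimal literal to the
# proposed fixed scale of 18 fractional digits. This truncates; Hive
# would round when re-fitting to (precision=38, scale=18).
v='123.123456789012345678901'   # 21 fractional digits
int_part=${v%%.*}
frac_part=${v#*.}
printf '%s.%.18s\n' "$int_part" "$frac_part"   # 123.123456789012345678
```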


- Xuefu Zhang


On Oct. 28, 2013, 4:49 a.m., Xuefu Zhang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14985/
 ---
 
 (Updated Oct. 28, 2013, 4:49 a.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-5354
 https://issues.apache.org/jira/browse/HIVE-5354
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Support decimal precision/scale for Orc file, as part of HIVE-3976.
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java c993b37 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java 71484a3 
   ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 7519fc1 
   ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto 53b93a0 
 
 Diff: https://reviews.apache.org/r/14985/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Xuefu Zhang
 




[jira] [Resolved] (HIVE-5674) Merge latest trunk into branch and fix resulting tests

2013-10-28 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-5674.


Resolution: Fixed

 Merge latest trunk into branch and fix resulting tests
 --

 Key: HIVE-5674
 URL: https://issues.apache.org/jira/browse/HIVE-5674
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-5674.patch








[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-28 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Attachment: HIVE-5354.3.patch

Patch #3 addressed Jason's review feedback on RB.

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.1.patch, HIVE-5354.2.patch, HIVE-5354.3.patch, 
 HIVE-5354.patch


 A subtask of HIVE-3976.





Re: Review Request 14985: HIVE-5354: Decimal precision/scale support in ORC file

2013-10-28 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14985/
---

(Updated Oct. 28, 2013, 7:51 p.m.)


Review request for hive and Brock Noland.


Bugs: HIVE-5354
https://issues.apache.org/jira/browse/HIVE-5354


Repository: hive-git


Description
---

Support decimal precision/scale for Orc file, as part of HIVE-3976.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java c993b37 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java 71484a3 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 7519fc1 
  ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto 53b93a0 

Diff: https://reviews.apache.org/r/14985/diff/


Testing
---


Thanks,

Xuefu Zhang


