[jira] [Commented] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-26 Thread Venki Korukanti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806241#comment-13806241
 ] 

Venki Korukanti commented on HIVE-5643:
---

RB link: https://reviews.apache.org/r/14978/

> ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
> port to quorum hosts
> 
>
> Key: HIVE-5643
> URL: https://issues.apache.org/jira/browse/HIVE-5643
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Affects Versions: 0.12.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 0.13.0
>
> Attachments: HIVE-5643.1.patch.txt
>
>
> ZooKeeperHiveLockManager calls the method below to construct the ZooKeeper 
> connection string.
> {code}
>   private static String getQuorumServers(HiveConf conf) {
> String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
> String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
> return hosts + ":" + port;
>   }
> {code}
> For example:
> HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
> HIVE_ZOOKEEPER_CLIENT_PORT=
> The connection string given to the ZooKeeper object is "node1, node2, node3:". 
> ZooKeeper assumes the default port 2181 for any hostname that doesn't carry an 
> explicit port.
> This works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is 
> different, the ZooKeeper client tries to connect to node1 and node2 on port 
> 2181, which always fails. That leaves the last host as the only usable one, 
> and it receives all the load from Hive.
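
A minimal sketch of the intended fix (a standalone helper, assumed for illustration since HiveConf isn't available here): append the configured port to every host in the quorum list rather than only to the last one.

```java
import java.util.StringJoiner;

public class QuorumServers {
    // Hypothetical standalone helper illustrating the fix; the real patch
    // changes ZooKeeperHiveLockManager.getQuorumServers(HiveConf).
    // Appends the configured client port to EVERY host in the quorum list,
    // not just the last one.
    static String getQuorumServers(String hosts, String port) {
        StringJoiner joiner = new StringJoiner(",");
        for (String host : hosts.split(",")) {
            joiner.add(host.trim() + ":" + port);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // With the quorum from the example above and a custom port of 2222:
        System.out.println(getQuorumServers("node1, node2, node3", "2222"));
        // -> node1:2222,node2:2222,node3:2222
    }
}
```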



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-26 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5643:
--

Status: Patch Available  (was: Open)



[jira] [Updated] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-26 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5643:
--

Attachment: HIVE-5643.1.patch.txt



Review Request 14978: HIVE-5643: ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-26 Thread Venki Korukanti

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14978/
---

Review request for hive and Brock Noland.


Bugs: HIVE-5643
https://issues.apache.org/jira/browse/HIVE-5643


Repository: hive-git


Description
---

ZooKeeperHiveLockManager calls the method below to construct the ZooKeeper 
connection string.

  private static String getQuorumServers(HiveConf conf) {
String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
return hosts + ":" + port;
  }

For example:
HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
HIVE_ZOOKEEPER_CLIENT_PORT=

The connection string given to the ZooKeeper client object is "node1, node2, 
node3:". ZooKeeper assumes the default port 2181 for any hostname that 
doesn't carry an explicit port. This works fine as long as 
HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is different, the ZooKeeper client 
tries to connect to node1 and node2 on port 2181, which always fails. That 
leaves the last host as the only usable one, and it receives all the load 
from Hive.


Diffs
-

  
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
 735e745 
  
ql/src/test/org/apache/hadoop/hive/ql/lockmgr/zookeeper/TestZookeeperLockManager.java
 2ff48f5 

Diff: https://reviews.apache.org/r/14978/diff/


Testing
---

Added a unit test for getQuorumServers which tests different types of quorum 
settings.


Thanks,

Venki Korukanti



[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-26 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806236#comment-13806236
 ] 

Carl Steinbach commented on HIVE-5610:
--

Here are some issues I found:
* When I remove the ~/.m2 directory 'mvn compile' fails with an unsatisfied 
dependency error.
* There are a bunch of JAR artifacts with names that aren't prefixed with 
"hive-*".
* It would be nice if this patch removed the old Ant and Ivy files, the 
eclipse-files directory, and anything else that it makes obsolete.

How do I do the following:
* Run the Thrift code generator.
* Compile the Thrift C++ bindings in the ODBC directory.
* Run a single TestCliDriver qfile test.


> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
>
> With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
> branch to trunk. The following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> * HIVE-5612 - Add ability to re-generate generated code stored in source 
> control
> The merge process will be as follows:
> 1) svn merge ^/hive/branches/maven
> 2) Commit result
> 3) Modify the following line in maven-rollforward.sh:
> {noformat}
>   mv $source $target
> {noformat}
> to
> {noformat}
>   svn mv $source $target
> {noformat}
> 4) Execute maven-rollforward.sh
> 5) Commit result 
> 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> Notes:
> * To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any test that has cyclical dependencies or requires that the 
> packages be built) is not part of the root reactor build.





[jira] [Commented] (HIVE-5655) Hive incorrecly handles divide-by-zero case

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806235#comment-13806235
 ] 

Hive QA commented on HIVE-5655:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610460/HIVE-5655.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4456 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_13
org.apache.hive.hcatalog.listener.TestNotificationListener.testAMQListener
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1258/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1258/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

> Hive incorrecly handles divide-by-zero case
> ---
>
> Key: HIVE-5655
> URL: https://issues.apache.org/jira/browse/HIVE-5655
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Attachments: HIVE-5655.1.patch, HIVE-5655.patch
>
>
> Unlike other databases, Hive currently has only one error-handling mode (the 
> default mode), in which a NULL value is returned. However, in the 
> divide-by-zero case, Hive demonstrates a different behavior.
> {code}
> hive> select 5/0 from tmp2 limit 1;
> Total MapReduce jobs = 1
> ...
> Total MapReduce CPU Time Spent: 860 msec
> OK
> Infinity
> {code}
> The correct behaviour would be for Hive to return NULL instead, in order to 
> be consistent w.r.t. error handling. (BTW, the same situation is handled 
> correctly for the decimal type.)
> MySQL has server modes that control this behaviour. By default, NULL is 
> returned. For instance,
> {code}
> mysql> select 3/0 from dual;
> +--+
> | 3/0  |
> +--+
> | NULL |
> +--+
> 1 row in set (0.00 sec)
> {code}
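
The Infinity result comes from IEEE 754 double semantics, which Java (and thus Hive's double division) follows. A small sketch contrasting that with a NULL-returning divide; this helper is assumed for illustration, not the actual patch:

```java
public class DivideByZero {
    // Hedged sketch of the NULL-returning behaviour the issue asks for;
    // the real change lives in Hive's divide UDF, not in this helper.
    static Double divideOrNull(double a, double b) {
        return (b == 0.0) ? null : a / b;
    }

    public static void main(String[] args) {
        System.out.println(5.0 / 0.0);            // Infinity: doubles never throw
        System.out.println(divideOrNull(5, 0));   // null: the proposed behaviour
    }
}
```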





[jira] [Commented] (HIVE-5655) Hive incorrecly handles divide-by-zero case

2013-10-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806224#comment-13806224
 ] 

Xuefu Zhang commented on HIVE-5655:
---

Thanks, Edward. I knew about your proposal for a new way of testing UDFs, but 
didn't realize that you had already completed it. I will take a look. Thank 
you for the review and for bringing this up.



Review Request 14977: HIVE-5656: Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-26 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14977/
---

Review request for hive and Brock Noland.


Bugs: HIVE-5656
https://issues.apache.org/jira/browse/HIVE-5656


Repository: hive-git


Description
---

Fix the problem so that NULL is produced.


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java 8653082 
  ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java dc6a862 
  ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFOPMod.java PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/TestUDFPosMod.java PRE-CREATION 
  ql/src/test/results/clientpositive/vectorization_14.q.out f00c1ea 

Diff: https://reviews.apache.org/r/14977/diff/


Testing
---

Manually tested; the error message seen earlier is gone. Added a new test 
case. Had to regenerate the .out file for an old test case.


Thanks,

Xuefu Zhang



[jira] [Updated] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5656:
--

Attachment: HIVE-5656.patch

> Hive produces unclear, confusing SemanticException when dealing with mod or 
> pmod by zero
> 
>
> Key: HIVE-5656
> URL: https://issues.apache.org/jira/browse/HIVE-5656
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Attachments: HIVE-5656.patch
>
>
> {code}
> hive> select 5%0 from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
> org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> hive> select pmod(5,0) from tmp2 limit 1;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
> org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
> public org.apache.hadoop.io.IntWritable 
> org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
>   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
> org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
> {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
> size 2
> {code}
> Exception stack:
> {code}
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> {code}
> The correct behaviour should be producing NULL.
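
Unlike double division, integer % by zero in Java throws ArithmeticException, which is what bubbles up wrapped in the HiveException above. A hedged sketch (plain ints instead of IntWritable, helper name assumed) of the kind of guard a fixed evaluate() would add:

```java
public class ModByZero {
    // Hypothetical stand-in for UDFOPMod.evaluate: return null on a zero
    // divisor instead of letting ArithmeticException escape to the planner.
    static Integer modOrNull(int a, int b) {
        return (b == 0) ? null : a % b;
    }

    public static void main(String[] args) {
        try {
            System.out.println(5 % 0);            // throws ArithmeticException
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println(modOrNull(5, 0));      // null
        System.out.println(modOrNull(7, 3));      // 1
    }
}
```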





[jira] [Updated] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5656:
--

Status: Patch Available  (was: Open)



[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806218#comment-13806218
 ] 

Hive QA commented on HIVE-3976:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610458/HIVE-3976.11.patch

{color:green}SUCCESS:{color} +1 4501 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1257/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1257/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.10.patch, HIVE-3976.11.patch, 
> HIVE-3976.1.patch, HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, 
> HIVE-3976.5.patch, HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, 
> HIVE-3976.9.patch, HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating the table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.





[jira] [Commented] (HIVE-5655) Hive incorrecly handles divide-by-zero case

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806209#comment-13806209
 ] 

Hive QA commented on HIVE-5655:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610460/HIVE-5655.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4484 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_13
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_stats3
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1256/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1256/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.



[jira] [Commented] (HIVE-5655) Hive incorrecly handles divide-by-zero case

2013-10-26 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806197#comment-13806197
 ] 

Edward Capriolo commented on HIVE-5655:
---

+1. [~xuefuz] We (I) recently committed a new system that runs UDF tests 
through the operator chain. Maybe you want to base your JUnit test on that.

See ./ql/src/test/org/apache/hadoop/hive/ql/testutil/BaseScalarUdfTest.java



[jira] [Commented] (HIVE-4723) DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806194#comment-13806194
 ] 

Hive QA commented on HIVE-4723:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610432/HIVE-4723.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4483 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_rename_partition_failure3
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_touch2
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1255/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1255/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

> DDLSemanticAnalyzer.addTablePartsOutputs eats several exceptions
> 
>
> Key: HIVE-4723
> URL: https://issues.apache.org/jira/browse/HIVE-4723
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Brock Noland
>Assignee: Szehon Ho
> Attachments: HIVE-4723.patch
>
>
> I accidentally tried to archive a partition on a non-partitioned table. The 
> error message was bad: Hive ate an exception and then threw an NPE.
> {noformat}
> 2013-06-09 16:36:12,628 ERROR parse.DDLSemanticAnalyzer 
> (DDLSemanticAnalyzer.java:addTablePartsOutputs(2899)) - Got HiveException 
> during obtaining list of partitions
> 2013-06-09 16:36:12,628 ERROR ql.Driver (SessionState.java:printError(383)) - 
> FAILED: NullPointerException null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2912)
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addTablePartsOutputs(DDLSemanticAnalyzer.java:2877)
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableArchive(DDLSemanticAnalyzer.java:2730)
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:316)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:277)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:782)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> {noformat}
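The pattern under discussion can be sketched as follows (a hypothetical simplification, not the actual patch): instead of logging the HiveException and continuing with a null partition list, which produces the NPE above, surface the error to the user as a SemanticException. The class and method names below are stand-ins chosen for illustration.

```java
import java.util.List;

public class AddPartsSketch {
    static class HiveException extends Exception {
        HiveException(String m) { super(m); }
    }
    static class SemanticException extends Exception {
        SemanticException(String m, Throwable c) { super(m, c); }
    }

    // Simulates the metastore call that fails for a non-partitioned table.
    static List<String> getPartitions(boolean partitioned) throws HiveException {
        if (!partitioned) throw new HiveException("table is not partitioned");
        return List.of("ds=2013-10-26");
    }

    // Before the fix: the exception was logged and a null list was used,
    // causing an NPE later. After: rethrow so the user sees the real error.
    static List<String> addTablePartsOutputs(boolean partitioned) throws SemanticException {
        try {
            return getPartitions(partitioned);
        } catch (HiveException e) {
            throw new SemanticException("Got HiveException during obtaining list of partitions", e);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(addTablePartsOutputs(true));
        try {
            addTablePartsOutputs(false);
        } catch (SemanticException e) {
            System.out.println("error surfaced: " + e.getMessage());
        }
    }
}
```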



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5655) Hive incorrectly handles divide-by-zero case

2013-10-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5655:
--

Attachment: HIVE-5655.1.patch

The updated patch fixed a copy-and-paste error.

> Hive incorrectly handles divide-by-zero case
> ---
>
> Key: HIVE-5655
> URL: https://issues.apache.org/jira/browse/HIVE-5655
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Attachments: HIVE-5655.1.patch, HIVE-5655.patch
>
>
> Unlike other databases, Hive currently has only one error-handling mode (the
> default), in which a NULL value is returned. However, in the divide-by-zero
> case, Hive demonstrates a different behavior.
> {code}
> hive> select 5/0 from tmp2 limit 1;
> Total MapReduce jobs = 1
> ...
> Total MapReduce CPU Time Spent: 860 msec
> OK
> Infinity
> {code}
> The correct behaviour would be for Hive to return NULL instead, in order to
> be consistent w.r.t. error handling. (BTW, the same situation is handled
> correctly for the decimal type.)
> MySQL has server modes that control this behaviour. By default, NULL is
> returned. For instance,
> {code}
> mysql> select 3/0 from dual;
> +--+
> | 3/0  |
> +--+
> | NULL |
> +--+
> 1 row in set (0.00 sec)
> {code}
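The contrast between the two behaviors can be sketched in plain Java (a hypothetical illustration of the semantics, not Hive's implementation): IEEE 754 double division yields Infinity for a zero divisor, which matches what the report shows, while a NULL-propagating divide would return null instead.

```java
public class DivideSketch {
    // A NULL-propagating divide, mirroring Hive's default NULL-on-error
    // mode: return null when the divisor is zero (or an input is null).
    static Double nullSafeDivide(Double a, Double b) {
        if (a == null || b == null || b == 0.0) {
            return null;
        }
        return a / b;
    }

    public static void main(String[] args) {
        // Plain double division follows IEEE 754, as in the reported output.
        System.out.println(5.0 / 0);                  // Infinity
        System.out.println(nullSafeDivide(5.0, 0.0)); // null
        System.out.println(nullSafeDivide(6.0, 2.0)); // 3.0
    }
}
```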



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5503) TopN optimization in VectorReduceSink

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806184#comment-13806184
 ] 

Hive QA commented on HIVE-5503:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610434/HIVE-5503.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 4483 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_part
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_short_regress
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1254/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1254/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

> TopN optimization in VectorReduceSink
> -
>
> Key: HIVE-5503
> URL: https://issues.apache.org/jira/browse/HIVE-5503
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Sergey Shelukhin
> Attachments: HIVE-5503.patch
>
>
> We need to add TopN optimization to VectorReduceSink as well, it would be 
> great if ReduceSink and VectorReduceSink share this code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-26 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806176#comment-13806176
 ] 

Edward Capriolo commented on HIVE-5610:
---

[~brocknoland] All looks good to me. +1. Let's prepare a wiki doc on Maven and
document the simple changes to building, testing, etc. Then we can pull the
trigger on this change.

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
>
> With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
> branch to trunk. The following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> * HIVE-5612 - Add ability to re-generate generated code stored in source 
> control
> The merge process will be as follows:
> 1) svn merge ^/hive/branches/maven
> 2) Commit result
> 3) Modify the following line in maven-rollforward.sh:
> {noformat}
>   mv $source $target
> {noformat}
> to
> {noformat}
>   svn mv $source $target
> {noformat}
> 4) Execute maven-rollforward.sh
> 5) Commit result 
> 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> Notes:
> * To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any test that has cyclical dependencies or requires that the
> packages be built) is not part of the root reactor build.
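Step 3 above is a one-line edit of maven-rollforward.sh; a hypothetical sketch of making that substitution programmatically is below (the actual change was presumably done by hand or with sed; the class name here is invented for illustration).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RollForwardEdit {
    // Rewrites the plain "mv" to "svn mv" so the renames performed by the
    // script are recorded by Subversion. Single-pass replace, so the
    // already-rewritten "svn mv" line is not touched again.
    static String rewrite(String script) {
        return script.replace("mv $source $target", "svn mv $source $target");
    }

    public static void main(String[] args) throws IOException {
        Path script = Files.createTempFile("maven-rollforward", ".sh");
        Files.writeString(script, "  mv $source $target\n");
        Files.writeString(script, rewrite(Files.readString(script)));
        System.out.print(Files.readString(script)); // prints "  svn mv $source $target"
    }
}
```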



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806169#comment-13806169
 ] 

Hive QA commented on HIVE-4388:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610429/HIVE-4388.12.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1253/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1253/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1253/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/Driver.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1536038.

At revision 1536038.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5589) perflogger output is hard to associate with queries

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806168#comment-13806168
 ] 

Hive QA commented on HIVE-5589:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610406/HIVE-5589.02.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4483 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatHiveThriftCompatibility.testDynamicCols
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1252/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1252/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> perflogger output is hard to associate with queries
> ---
>
> Key: HIVE-5589
> URL: https://issues.apache.org/jira/browse/HIVE-5589
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5589.01.patch, HIVE-5589.02.patch
>
>
> It would be nice to dump the query somewhere in the output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5519) Use paging mechanism for templeton get requests.

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806161#comment-13806161
 ] 

Hive QA commented on HIVE-5519:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610402/HIVE-5519.2.patch.txt

{color:green}SUCCESS:{color} +1 4483 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1251/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1251/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Use paging mechanism for templeton get requests.
> 
>
> Key: HIVE-5519
> URL: https://issues.apache.org/jira/browse/HIVE-5519
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-5519.1.patch.txt, HIVE-5519.2.patch.txt
>
>
> Issuing a command to retrieve the jobs field using
> "https://mwinkledemo.azurehdinsight.net:563/templeton/v1/queue/?user.name=admin&fields=*"
>  --user u:p
> will result in a timeout on a Windows machine. The issue happens because of
> the amount of data that needs to be fetched. The proposal is to use a
> paging-based encoding scheme so that we flush the contents regularly and the
> client does not time out.
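The proposed scheme can be sketched as follows (a hypothetical illustration of paged flushing, not the WebHCat code; names are invented): write the job records in fixed-size pages and flush after each page, so the HTTP client keeps receiving bytes and does not hit its idle timeout.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.List;

public class PagedJobWriter {
    // Writes jobs in pages of pageSize, flushing the writer after each
    // page; returns the number of pages written.
    static int writePaged(List<String> jobs, Writer out, int pageSize) throws IOException {
        int pages = 0;
        for (int i = 0; i < jobs.size(); i += pageSize) {
            for (String job : jobs.subList(i, Math.min(i + pageSize, jobs.size()))) {
                out.write(job);
                out.write('\n');
            }
            out.flush();  // push this page to the client immediately
            pages++;
        }
        return pages;
    }

    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        int pages = writePaged(List.of("job_1", "job_2", "job_3"), out, 2);
        System.out.println(pages);  // 2 pages for 3 jobs with pageSize 2
    }
}
```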



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-26 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-3976:
--

Attachment: HIVE-3976.11.patch

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.10.patch, HIVE-3976.11.patch, 
> HIVE-3976.1.patch, HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, 
> HIVE-3976.5.patch, HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, 
> HIVE-3976.9.patch, HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating the table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.
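The DECIMAL(precision, scale) semantics being requested can be sketched with BigDecimal (a hypothetical illustration of the declared-type enforcement, not Hive's implementation): round the value to the declared scale, and treat it as out of range (null) if it still needs more total digits than the declared precision allows.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalTypeSketch {
    // Enforces a DECIMAL(precision, scale) declaration on a value:
    // round to the declared scale, return null if the rounded value
    // exceeds the declared total number of digits.
    static BigDecimal enforce(BigDecimal value, int precision, int scale) {
        BigDecimal rounded = value.setScale(scale, RoundingMode.HALF_UP);
        return rounded.precision() > precision ? null : rounded;
    }

    public static void main(String[] args) {
        // DECIMAL(20,2), as in the MySQL example above.
        System.out.println(enforce(new BigDecimal("123.456"), 20, 2));               // 123.46
        System.out.println(enforce(new BigDecimal("123456789012345678901.5"), 20, 2)); // null
    }
}
```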



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-26 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806160#comment-13806160
 ] 

Xuefu Zhang commented on HIVE-3976:
---

Patch #11 rebased. It might need another rebase if it takes another two days
before the tests can run.

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.10.patch, HIVE-3976.11.patch, 
> HIVE-3976.1.patch, HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, 
> HIVE-3976.5.patch, HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, 
> HIVE-3976.9.patch, HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating the table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806151#comment-13806151
 ] 

Hive QA commented on HIVE-5295:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610397/HIVE-5295.5.patch

{color:green}SUCCESS:{color} +1 4484 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1250/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1250/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> HiveConnection#configureConnection tries to execute statement even after it 
> is closed
> -
>
> Key: HIVE-5295
> URL: https://issues.apache.org/jira/browse/HIVE-5295
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
> HIVE-5295.5.patch, HIVE-5295.D12957.3.patch, HIVE-5295.D12957.3.patch, 
> HIVE-5295.D12957.4.patch
>
>
> HiveConnection#configureConnection tries to execute a statement even after it
> is closed. For a remote JDBC client, it tries to set the conf vars using 'set
> foo=bar' by calling HiveStatement.execute for each conf var pair, but closes
> the statement after the first iteration through the conf var pairs.
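The bug and its fix can be sketched with a stand-in statement type (hypothetical, to keep the example self-contained; the real code uses HiveStatement over Thrift): execute every "set foo=bar" pair first, and close the statement exactly once, after the loop.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigureConnectionSketch {
    // Minimal stand-in for a JDBC statement: executing after close fails,
    // which is exactly the reported symptom.
    static class FakeStatement {
        boolean closed = false;
        int executed = 0;
        void execute(String sql) {
            if (closed) throw new IllegalStateException("statement is closed");
            executed++;
        }
        void close() { closed = true; }
    }

    // The fixed shape: run all "set k=v" commands, then close once.
    // Closing inside the loop (the bug) would throw on the second pair.
    static int applyConfOverlay(FakeStatement stmt, Map<String, String> conf) {
        try {
            for (Map.Entry<String, String> e : conf.entrySet()) {
                stmt.execute("set " + e.getKey() + "=" + e.getValue());
            }
        } finally {
            stmt.close();  // close exactly once, after the loop
        }
        return stmt.executed;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("hive.exec.parallel", "true");
        conf.put("mapred.reduce.tasks", "4");
        System.out.println(applyConfOverlay(new FakeStatement(), conf)); // 2
    }
}
```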



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4997) HCatalog doesn't allow multiple input tables

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806136#comment-13806136
 ] 

Hive QA commented on HIVE-4997:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610346/HIVE-4997.3.patch

{color:green}SUCCESS:{color} +1 4456 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1248/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1248/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> HCatalog doesn't allow multiple input tables
> 
>
> Key: HIVE-4997
> URL: https://issues.apache.org/jira/browse/HIVE-4997
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Daniel Intskirveli
> Fix For: 0.13.0
>
> Attachments: HIVE-4997.2.patch, HIVE-4997.3.patch
>
>
> HCatInputFormat does not allow reading from multiple hive tables in the same 
> MapReduce job. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806126#comment-13806126
 ] 

Hudson commented on HIVE-5628:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
> Test not end with it
> --
>
> Key: HIVE-5628
> URL: https://issues.apache.org/jira/browse/HIVE-5628
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.13.0
>
> Attachments: HIVE-5628.patch
>
>
> ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run
> by PTest because their names end with Test, and PTest requires test class
> names to start with Test.
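The naming rule described above amounts to a simple prefix check (a sketch of the stated rule; the real discovery logic lives in the ptest framework, and the class name here is invented):

```java
public class PTestNameFilter {
    // PTest discovers unit tests by class-name prefix: only classes whose
    // simple name starts with "Test" are picked up.
    static boolean willBeRun(String simpleName) {
        return simpleName.startsWith("Test");
    }

    public static void main(String[] args) {
        System.out.println(willBeRun("TestListBucketingPrunner"));  // true
        System.out.println(willBeRun("ListBucketingPrunnerTest"));  // false: suffix, not prefix
    }
}
```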



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806129#comment-13806129
 ] 

Hudson commented on HIVE-5552:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the two
> QBJoinTrees. During the merge, all the right-side filters identified for
> pushing are assumed to refer to the merging table (b in this case). But the
> pushable filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806127#comment-13806127
 ] 

Hudson commented on HIVE-5511:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In Hadoop 1, the logging from MR is sent to stderr; in Hadoop 2, by default,
> to syslog. templeton.tool.LaunchMapper expects to see the output on stderr to
> produce 'percentComplete' in the job status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806128#comment-13806128
 ] 

Hudson commented on HIVE-5440:
--

ABORTED: Integrated in Hive-trunk-hadoop2 #522 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/522/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5576) Blank lines missing from .q.out files created on Windows for testcase=TestCliDriver

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806122#comment-13806122
 ] 

Hive QA commented on HIVE-5576:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610344/HIVE-5576.1.patch

{color:green}SUCCESS:{color} +1 4483 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1247/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1247/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Blank lines missing from .q.out files created on Windows for 
> testcase=TestCliDriver
> ---
>
> Key: HIVE-5576
> URL: https://issues.apache.org/jira/browse/HIVE-5576
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.13.0
> Environment: Windows 8 using Hive Monarch build environment
>Reporter: Eric Hanson
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-5576.1.patch, vectorized_math_funcs.q, 
> vectorized_math_funcs.q.out.unix, vectorized_math_funcs.q.out.windows
>
>
> If you create a .q.out file on Windows using a command like this:
> ant test "-Dhadoop.security.version=1.1.0-SNAPSHOT" 
> "-Dhadoop.root=c:\hw\project\hadoop-monarch" "-Dresolvers=internal" 
> "-Dhadoop-0.20S.version=1.1.0-SNAPSHOT" "-Dhadoop.mr.rev=20S" 
> "-Dhive.support.concurrency=false" "-Dshims.include=0.20S" 
> "-Dtest.continue.on.failure=true" "-Dtest.halt.on.failure=no" 
> "-Dtest.print.classpath=true"  "-Dtestcase=TestCliDriver" 
> "-Dqfile=vectorized_math_funcs.q,vectorized_string_funcs.q,vectorized_casts.q"
>  "-Doverwrite=true" "-Dtest.silent=false"
> Then the .q.out files generated in the Hive directory under
> ql\src\test\results\clientpositive
> are missing blank lines.
> So, the .q tests will pass on your Windows machine. But when you upload them 
> in a patch, they fail on the automated build server. See HIVE-5517 for an 
> example. HIVE-5517.3.patch has .q.out files with missing blank lines. 
> HIVE-5517.4.patch has .q.out files created on a Linux or Mac system. Those 
> have blank lines.
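One plausible way to spot the divergence before uploading a patch is to compare blank-line counts after normalizing Windows CRLF endings (a hypothetical diagnostic sketch, not part of the test harness; the class name is invented):

```java
public class QOutBlankLines {
    // Counts blank lines after normalizing CRLF to LF, so the count is
    // comparable between a Windows-generated and a Unix-generated .q.out.
    static long blankLines(String text) {
        return text.replace("\r\n", "\n").lines().filter(String::isBlank).count();
    }

    public static void main(String[] args) {
        String unix = "PREHOOK: query\n\nPOSTHOOK: query\n";
        String windows = "PREHOOK: query\r\nPOSTHOOK: query\r\n";  // blank line lost
        System.out.println(blankLines(unix));     // 1
        System.out.println(blankLines(windows));  // 0
    }
}
```
A mismatch between the two counts for the same test flags a .q.out that would fail on the Linux-based build server.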



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-26 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806118#comment-13806118
 ] 

Brock Noland commented on HIVE-3976:


Shoot, looks like it needs a rebase.

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.10.patch, HIVE-3976.1.patch, 
> HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, HIVE-3976.5.patch, 
> HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, HIVE-3976.9.patch, 
> HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating the table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.
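To illustrate the requested semantics: precision is the total number of digits and scale is the number of digits after the decimal point, so `DECIMAL(20,2)` admits values like 123456789012345678.90 but not 19 integer digits with 2 fractional ones. The hypothetical `fits_decimal` helper below models this with Python's `decimal` module; it is an illustration only, not Hive code:

```python
from decimal import Decimal, ROUND_HALF_UP

def fits_decimal(value, precision, scale):
    """Round to `scale` fractional digits and check the total digit count
    against `precision`, mirroring DECIMAL(precision, scale) semantics."""
    # Decimal(1).scaleb(-scale) is 10**-scale, e.g. Decimal('0.01') for scale=2.
    quantized = value.quantize(Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)
    digits = len(quantized.as_tuple().digits)
    return quantized, digits <= precision

print(fits_decimal(Decimal("12345678901234567.891"), 20, 2))
```

A value of 123.456 against DECIMAL(4,2) would round to 123.46 and then fail the check, since five total digits exceed the declared precision of four.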





[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-26 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806117#comment-13806117
 ] 

Brock Noland commented on HIVE-5610:


Hi guys, hoping to get some feedback and then merge the latest trunk changes in.

[~ashutoshc] [~thejas], any thoughts on the state of the current maven branch?

> Merge maven branch into trunk
> -
>
> Key: HIVE-5610
> URL: https://issues.apache.org/jira/browse/HIVE-5610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
>
> With HIVE-5566 nearing completion we will be nearly ready to merge the maven 
> branch to trunk. The following tasks will be done post-merge:
> * HIVE-5611 - Add assembly (i.e.) tar creation to pom
> * HIVE-5612 - Add ability to re-generate generated code stored in source 
> control
> The merge process will be as follows:
> 1) svn merge ^/hive/branches/maven
> 2) Commit result
> 3) Modify the following line in maven-rollforward.sh:
> {noformat}
>   mv $source $target
> {noformat}
> to
> {noformat}
>   svn mv $source $target
> {noformat}
> 4) Execute maven-rollforward.sh
> 5) Commit result 
> 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
> adding the following:
> {noformat}
> mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
> testCasePropertyName = test
> buildTool = maven
> unitTests.directories = ./
> {noformat}
> Notes:
> * To build everything you must:
> {noformat}
> $ mvn clean install -DskipTests
> $ cd itests
> $ mvn clean install -DskipTests
> {noformat}
> because itests (any test that has cyclical dependencies or requires that the 
> packages be built) is not part of the root reactor build.





[jira] [Commented] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-26 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806115#comment-13806115
 ] 

Brock Noland commented on HIVE-4388:


Thank you! [~sushanth] would you mind giving the maven branch a look over and 
comment on HIVE-5610 as to what your thoughts are?

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.





[jira] [Commented] (HIVE-5582) Implement BETWEEN filter in vectorized mode

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806099#comment-13806099
 ] 

Hive QA commented on HIVE-5582:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610210/HIVE-5582.7.patch

{color:green}SUCCESS:{color} +1 4490 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1246/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1246/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Implement BETWEEN filter in vectorized mode
> ---
>
> Key: HIVE-5582
> URL: https://issues.apache.org/jira/browse/HIVE-5582
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Fix For: 0.13.0
>
> Attachments: hive-5582.1.patch.txt, hive-5582.3.patch.txt, 
> HIVE-5582.7.patch
>
>
> Implement optimized support for filters of the form
> column BETWEEN scalar1 AND scalar2
> in vectorized mode.
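The idea behind the optimization can be sketched outside Hive: rather than evaluating the predicate one row at a time through the interpreted expression tree, a vectorized filter processes a whole column batch in a tight loop and emits the selected row positions. The snippet below is a minimal illustration in plain Python, not Hive's actual VectorizedRowBatch code:

```python
# Sketch of a vectorized BETWEEN filter: given one column batch,
# produce the indices of rows where low <= value <= high. A real
# vectorized engine would write these into a reusable "selected"
# array rather than allocating a new list per batch.

def filter_between(batch, low, high):
    """Return the positions i where low <= batch[i] <= high."""
    return [i for i, v in enumerate(batch) if low <= v <= high]

col = [5, 42, 17, 99, 23]
print(filter_between(col, 10, 50))  # rows whose values fall in [10, 50]
```

The per-batch loop has no per-row function-call overhead, which is the point of doing BETWEEN as a single vectorized expression instead of composing two comparisons row by row.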





[jira] [Commented] (HIVE-5652) Improve JavaDoc of UDF class

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806065#comment-13806065
 ] 

Hive QA commented on HIVE-5652:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610319/HIVE-5652.1.patch

{color:green}SUCCESS:{color} +1 4483 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1245/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1245/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Improve JavaDoc of UDF class
> 
>
> Key: HIVE-5652
> URL: https://issues.apache.org/jira/browse/HIVE-5652
> Project: Hive
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HIVE-5652.1.patch
>
>
> I think the JavaDoc for the UDF class can be improved. I'll attach a patch 
> shortly.





[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806058#comment-13806058
 ] 

Hudson commented on HIVE-5511:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> percentComplete returned by job status from WebHCat is null
> ---
>
> Key: HIVE-5511
> URL: https://issues.apache.org/jira/browse/HIVE-5511
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch
>
>
> In Hadoop 1, logging from MR is sent to stderr. In Hadoop 2 it goes, by 
> default, to syslog. templeton.tool.LaunchMapper expects to see the output on 
> stderr in order to produce 'percentComplete' in the job status.
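As a sketch of the parsing involved: MapReduce clients print progress lines of the familiar form `map X% reduce Y%`, and a scraper over the child job's output can derive percentComplete from them. The regex and function below are illustrative assumptions about that format, not Hive's actual LaunchMapper code:

```python
import re

# Matches MR client progress output such as "... map 75% reduce 10%".
PROGRESS = re.compile(r"map\s+(\d+)%\s+reduce\s+(\d+)%")

def percent_complete(log_line):
    """Extract a percentComplete string from one log line, or None."""
    m = PROGRESS.search(log_line)
    if not m:
        return None
    return "map {}% reduce {}%".format(m.group(1), m.group(2))

print(percent_complete("2013-10-26 12:00:01 INFO ... map 75% reduce 10%"))
```

If the progress lines go to syslog instead of stderr, a scraper watching stderr never sees a match and percentComplete stays null, which is the symptom this issue describes.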





[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806060#comment-13806060
 ] 

Hudson commented on HIVE-5552:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


> Merging of QBJoinTrees doesn't handle filter pushdowns correctly
> 
>
> Key: HIVE-5552
> URL: https://issues.apache.org/jira/browse/HIVE-5552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch
>
>
> The following query fails:
> (this based on the schema from auto_join_filters.q)
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
> JOIN myinput1 c 
>  ON 
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Whereas this query succeeds
> {noformat}
> explain
> SELECT sum(hash(a.key,a.value,b.key,b.value)) 
> FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
>  ON  a.value = b.value and
> b.value = c.value AND 
> a.key > 40
> {noformat}
> Pushing the first condition to the first join triggers a merge of the two 
> QBJoinTrees. During the merge, all right-side filters identified for pushing 
> are assumed to refer to the merging table (b in this case), but the pushable 
> filters can in fact refer to any left table.





[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806059#comment-13806059
 ] 

Hudson commented on HIVE-5440:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2422 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2422/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


> HiveServer2 doesn't apply SQL operation's config property 
> --
>
> Key: HIVE-5440
> URL: https://issues.apache.org/jira/browse/HIVE-5440
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.13.0
>
> Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch
>
>
> The HiveServer2 thrift IDL includes an optional config overlay map which is 
> currently not used.
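The overlay semantics this enables can be sketched simply: per-statement settings are layered over the session configuration for the duration of one operation, without mutating the session itself. The helper below is an illustration of that layering; the names are hypothetical, not HiveServer2's API:

```python
# Sketch of a per-operation config overlay: statement-level settings
# win over session settings, and the session map is left untouched.

def with_overlay(session_conf, overlay):
    """Return an effective config for one operation."""
    conf = dict(session_conf)   # copy so the session is never mutated
    conf.update(overlay or {})  # overlay entries take precedence
    return conf

session = {"hive.exec.parallel": "false", "mapred.job.queue.name": "default"}
op_conf = with_overlay(session, {"hive.exec.parallel": "true"})
print(op_conf["hive.exec.parallel"])
```

After the operation finishes, the session map is unchanged, so the next statement runs under the original settings unless it supplies its own overlay.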





[jira] [Commented] (HIVE-5648) error when casting partition column to varchar in where clause

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806050#comment-13806050
 ] 

Hive QA commented on HIVE-5648:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610292/HIVE-5648.2.patch

{color:green}SUCCESS:{color} +1 4486 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1244/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1244/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> error when casting partition column to varchar in where clause 
> ---
>
> Key: HIVE-5648
> URL: https://issues.apache.org/jira/browse/HIVE-5648
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5648.1.patch, HIVE-5648.2.patch
>
>
> hive> select * from partition_varchar_2 where cast(dt as varchar(10)) = 
> '2000-01-01';
> FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
> VARCHAR





[jira] [Commented] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-26 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806035#comment-13806035
 ] 

Sushanth Sowmyan commented on HIVE-4388:


(Okay, the skip.javadocs issue seems to happen only if we build with 
-Dhadoop.mr.rev=23, which pulls down Hadoop 2.x. That would let us do 
something like build without it and then test with it, but that's still hacky. 
Also, the reason it fails seems to be that when running the javadoc target, 
the JUnit version pulled down does not support things like the timeout 
annotation that Hadoop 2.x uses. Mumblemumblegrumblemumble. I can't wait for 
the mavenization jira to make it already, and then we can try to keep that 
build as clean as possible.)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.





[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806031#comment-13806031
 ] 

Hive QA commented on HIVE-3976:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610273/HIVE-3976.10.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1242/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1242/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1242/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java'
Reverted 'build.properties'
Reverted 'jdbc/ivy.xml'
Reverted 'jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java'
Reverted 'jdbc/build.xml'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java'
Reverted 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java'
Reverted 'service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java'
Reverted 
'service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf build eclipse-templates/TestJdbcMiniHS2.launchtemplate 
jdbc/src/test/org/apache/hive/jdbc/TestSSL.java 
jdbc/src/test/org/apache/hive/jdbc/TestJdbcWithMiniHS2.java 
jdbc/src/test/org/apache/hive/jdbc/miniHS2 
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java.orig 
hcatalog/build hcatalog/core/build hcatalog/storage-handlers/hbase/build 
hcatalog/server-extensions/build hcatalog/webhcat/svr/build 
hcatalog/webhcat/java-client/build hcatalog/hcatalog-pig-adapter/build 
common/src/gen
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1535946.

At revision 1535946.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.10.patch, HIVE-3976.1.patch, 
> HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, HIVE-3976.5.patch, 
> HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, HIVE-3976.9.patch, 
> HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating the table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.





[jira] [Updated] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-26 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4388:
---

Status: Patch Available  (was: Open)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.





[jira] [Updated] (HIVE-4388) HBase tests fail against Hadoop 2

2013-10-26 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4388:
---

Status: Open  (was: Patch Available)

> HBase tests fail against Hadoop 2
> -
>
> Key: HIVE-4388
> URL: https://issues.apache.org/jira/browse/HIVE-4388
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Gunther Hagleitner
>Assignee: Brock Noland
> Attachments: HIVE-4388.10.patch, HIVE-4388.11.patch, 
> HIVE-4388.12.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, HIVE-4388.patch, 
> HIVE-4388.patch, HIVE-4388-wip.txt
>
>
> Currently we're building by default against 0.92. When you run against hadoop 
> 2 (-Dhadoop.mr.rev=23) builds fail because of: HBASE-5963.
> HIVE-3861 upgrades the version of hbase used. This will get you past the 
> problem in HBASE-5963 (which was fixed in 0.94.1) but fails with: HBASE-6396.





[jira] [Resolved] (HIVE-5639) Allow caching of Orc footers in Tez AM

2013-10-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5639.
--

Resolution: Fixed

LGTM. Committed to branch. Thanks Sid and thanks Brock for the feedback.

> Allow caching of Orc footers in Tez AM
> --
>
> Key: HIVE-5639
> URL: https://issues.apache.org/jira/browse/HIVE-5639
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: tez-branch
>
> Attachments: HIVE-5639.1.txt, HIVE-5639.2.patch, 
> HIVE-5639-addendum.txt
>
>






[jira] [Commented] (HIVE-5351) Secure-Socket-Layer (SSL) support for HiveServer2

2013-10-26 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806017#comment-13806017
 ] 

Hive QA commented on HIVE-5351:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610212/HIVE-5351.3.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 4490 tests executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithURL
org.apache.hive.jdbc.TestSSL.testSSLFetch
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1241/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1241/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

> Secure-Socket-Layer (SSL) support for HiveServer2
> -
>
> Key: HIVE-5351
> URL: https://issues.apache.org/jira/browse/HIVE-5351
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization, HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5351.3.patch
>
>
> HiveServer2 and JDBC driver should support encrypted communication using SSL


