[jira] [Commented] (HIVE-5610) Merge maven branch into trunk

2013-10-27 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806236#comment-13806236
 ] 

Carl Steinbach commented on HIVE-5610:
--

Here are some issues I found:
* When I remove the ~/.m2 directory, 'mvn compile' fails with an unsatisfied 
dependency error.
* There are a bunch of JAR artifacts with names that aren't prefixed with 
hive-*.
* It would be nice if this patch removed the old Ant and Ivy files, 
eclipse-files directory, and anything else that it will make obsolete.

How do I do the following:
* Run the Thrift code generator.
* Compile the Thrift C++ bindings in the ODBC directory.
* Run a single TestCliDriver qfile test.


 Merge maven branch into trunk
 -

 Key: HIVE-5610
 URL: https://issues.apache.org/jira/browse/HIVE-5610
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland

 With HIVE-5566 nearing completion, we will be nearly ready to merge the maven 
 branch to trunk. The following tasks will be done post-merge:
 * HIVE-5611 - Add assembly (i.e.) tar creation to pom
 * HIVE-5612 - Add ability to re-generate generated code stored in source 
 control
 The merge process will be as follows:
 1) svn merge ^/hive/branches/maven
 2) Commit result
 3) Modify the following line in maven-rollforward.sh (a scripted form of this 
 edit is sketched after this list):
 {noformat}
   mv $source $target
 {noformat}
 to
 {noformat}
   svn mv $source $target
 {noformat}
 4) Execute maven-rollforward.sh
 5) Commit result 
 6) Update trunk-mr1.properties and trunk-mr2.properties on the ptesting host, 
 adding the following:
 {noformat}
 mavenEnvOpts = -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 
 testCasePropertyName = test
 buildTool = maven
 unitTests.directories = ./
 {noformat}
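 The edit in step 3 can be scripted; a minimal sketch, assuming GNU sed and 
 that the line appears in maven-rollforward.sh exactly as shown above:
 {noformat}
 $ sed -i 's/^\([[:space:]]*\)mv /\1svn mv /' maven-rollforward.sh
 {noformat}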
 Notes:
 * To build everything you must:
 {noformat}
 $ mvn clean install -DskipTests
 $ cd itests
 $ mvn clean install -DskipTests
 {noformat}
 because itests (any test that has cyclical dependencies or requires that the 
 packages be built) is not part of the root reactor build.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 14978: HIVE-5643: ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Venki Korukanti

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14978/
---

Review request for hive and Brock Noland.


Bugs: HIVE-5643
https://issues.apache.org/jira/browse/HIVE-5643


Repository: hive-git


Description
---

ZooKeeperHiveLockManager calls the below method to construct the connection 
string for ZooKeeper connection.

  private static String getQuorumServers(HiveConf conf) {
    String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
    String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
    return hosts + ":" + port;
  }

For example:
HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
HIVE_ZOOKEEPER_CLIENT_PORT=<custom port>

The connection string given to the ZooKeeper client object is then 
"node1, node2, node3:<custom port>". ZooKeeper assumes the default port 2181 
for hostnames that don't carry a port. This works fine as long as 
HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is different, the ZooKeeper client 
object tries to connect to node1 and node2 on port 2181, which always fails, 
so it is left with only one choice, the last host, which then receives all 
the load from Hive.


Diffs
-

  
ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
 735e745 
  
ql/src/test/org/apache/hadoop/hive/ql/lockmgr/zookeeper/TestZookeeperLockManager.java
 2ff48f5 

Diff: https://reviews.apache.org/r/14978/diff/


Testing
---

Added a unit test for getQuorumServers which tests different types of quorum 
settings.


Thanks,

Venki Korukanti



[jira] [Updated] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5643:
--

Attachment: HIVE-5643.1.patch.txt

 ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
 port to quorum hosts
 

 Key: HIVE-5643
 URL: https://issues.apache.org/jira/browse/HIVE-5643
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5643.1.patch.txt


 ZooKeeperHiveLockManager calls the below method to construct the connection 
 string for ZooKeeper connection.
 {code}
   private static String getQuorumServers(HiveConf conf) {
     String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
     String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
     return hosts + ":" + port;
   }
 {code}
 For example:
 HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
 HIVE_ZOOKEEPER_CLIENT_PORT=<custom port>
 The connection string given to the ZooKeeper object is then 
 "node1, node2, node3:<custom port>". ZooKeeper assumes the default port 2181 
 for hostnames that don't carry a port. 
 This works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is 
 different, the ZooKeeper client object tries to connect to node1 and node2 on 
 port 2181, which always fails, so it is left with only one choice, the last 
 host, which then receives all the load from Hive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5643:
--

Status: Patch Available  (was: Open)

 ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
 port to quorum hosts
 

 Key: HIVE-5643
 URL: https://issues.apache.org/jira/browse/HIVE-5643
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5643.1.patch.txt


 ZooKeeperHiveLockManager calls the below method to construct the connection 
 string for ZooKeeper connection.
 {code}
   private static String getQuorumServers(HiveConf conf) {
     String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
     String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
     return hosts + ":" + port;
   }
 {code}
 For example:
 HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
 HIVE_ZOOKEEPER_CLIENT_PORT=<custom port>
 The connection string given to the ZooKeeper object is then 
 "node1, node2, node3:<custom port>". ZooKeeper assumes the default port 2181 
 for hostnames that don't carry a port. 
 This works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is 
 different, the ZooKeeper client object tries to connect to node1 and node2 on 
 port 2181, which always fails, so it is left with only one choice, the last 
 host, which then receives all the load from Hive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Venki Korukanti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806241#comment-13806241
 ] 

Venki Korukanti commented on HIVE-5643:
---

RB link: https://reviews.apache.org/r/14978/

 ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
 port to quorum hosts
 

 Key: HIVE-5643
 URL: https://issues.apache.org/jira/browse/HIVE-5643
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5643.1.patch.txt


 ZooKeeperHiveLockManager calls the below method to construct the connection 
 string for ZooKeeper connection.
 {code}
   private static String getQuorumServers(HiveConf conf) {
     String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
     String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
     return hosts + ":" + port;
   }
 {code}
 For example:
 HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
 HIVE_ZOOKEEPER_CLIENT_PORT=<custom port>
 The connection string given to the ZooKeeper object is then 
 "node1, node2, node3:<custom port>". ZooKeeper assumes the default port 2181 
 for hostnames that don't carry a port. 
 This works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it is 
 different, the ZooKeeper client object tries to connect to node1 and node2 on 
 port 2181, which always fails, so it is left with only one choice, the last 
 host, which then receives all the load from Hive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806249#comment-13806249
 ] 

Hudson commented on HIVE-5511:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


 percentComplete returned by job status from WebHCat is null
 ---

 Key: HIVE-5511
 URL: https://issues.apache.org/jira/browse/HIVE-5511
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch


 In hadoop1 the logging from MR is sent to stderr.  In hadoop2, by default, it 
 goes to syslog.  templeton.tool.LaunchMapper expects to see the output on 
 stderr to produce 'percentComplete' in the job status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806252#comment-13806252
 ] 

Hudson commented on HIVE-5629:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


 Fix two javadoc failures in HCatalog
 

 Key: HIVE-5629
 URL: https://issues.apache.org/jira/browse/HIVE-5629
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5629.patch


 I am seeing two javadoc failures on HCatalog. These are not being seen by 
 PTest and indeed I cannot reproduce on my Mac but can on Linux. Regardless 
 they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806263#comment-13806263
 ] 

Hudson commented on HIVE-5403:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
HIVE-5403 : Move loading of filesystem, ugi, metastore client to hive session 
(Vikram Dixit via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535039)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0

 Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
 HIVE-5403.4.patch


 As part of HIVE-5184, the metastore connection and filesystem loading were 
 done as part of the Tez session so as to speed up query times while paying a 
 cost at startup. We can do this more generally in Hive so that it applies to 
 both the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806256#comment-13806256
 ] 

Hudson commented on HIVE-5482:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


 JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
 consistent with other modules
 ---

 Key: HIVE-5482
 URL: https://issues.apache.org/jira/browse/HIVE-5482
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5482.1.patch


 JDBC currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, 
 which depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806254#comment-13806254
 ] 

Hudson commented on HIVE-5625:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


 Fix issue with metastore version restriction test.
 --

 Key: HIVE-5625
 URL: https://issues.apache.org/jira/browse/HIVE-5625
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0

 Attachments: HIVE-5625.1.patch


 Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
 the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806266#comment-13806266
 ] 

Hudson commented on HIVE-5637:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


 Sporadic minimr test failure
 

 Key: HIVE-5637
 URL: https://issues.apache.org/jira/browse/HIVE-5637
 Project: Hive
  Issue Type: Test
  Components: Tests
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.13.0

 Attachments: HIVE-5637.1.patch.txt


 {noformat}
 ant test -Dtestcase=TestMinimrCliDriver 
 -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
 -Dmodule=ql
 {noformat}
 Fails with a message like this:
 {noformat}
 Begin query: load_hdfs_file_with_space_in_the_name.q
 mkdir: cannot create directory hdfs:///tmp/test/: File exists
 Exception: Client Execution failed with error code = -1
 See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
 more logs.
 junit.framework.AssertionFailedError: Client Execution failed with error code 
 = -1
 See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
 more logs.
 at junit.framework.Assert.fail(Assert.java:47)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:154)
 at junit.framework.TestCase.runBare(TestCase.java:127)
 at junit.framework.TestResult$1.protect(TestResult.java:106)
 at junit.framework.TestResult.runProtected(TestResult.java:124)
 at junit.framework.TestResult.run(TestResult.java:109)
 at junit.framework.TestCase.run(TestCase.java:118)
 at junit.framework.TestSuite.runTest(TestSuite.java:208)
 at junit.framework.TestSuite.run(TestSuite.java:203)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806260#comment-13806260
 ] 

Hudson commented on HIVE-5599:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5599 - Change default logging level to INFO (Brock Noland, Reviewed by 
Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535047)
* /hive/trunk/common/src/java/conf/hive-log4j.properties


 Change default logging level to INFO
 

 Key: HIVE-5599
 URL: https://issues.apache.org/jira/browse/HIVE-5599
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5599.patch


 The default logging level is WARN:
 https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
 but Hive logs lots of good information at INFO level. Additionally, most 
 Hadoop projects log at INFO by default. Let's change the logging level to 
 INFO by default.
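 For reference, a sketch of the one-line change in hive-log4j.properties (the 
 property name is taken from the linked file; the DRFA appender is assumed to 
 stay unchanged):
 {noformat}
 # before
 hive.root.logger=WARN,DRFA
 # after
 hive.root.logger=INFO,DRFA
 {noformat}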



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806251#comment-13806251
 ] 

Hudson commented on HIVE-5619:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


 Allow concat() to accept mixed string/binary args
 -

 Key: HIVE-5619
 URL: https://issues.apache.org/jira/browse/HIVE-5619
 Project: Hive
  Issue Type: Improvement
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5619.1.patch


 concat() is currently strict about allowing either all binary or all 
 non-binary arguments. Loosen this to permit mixed params.
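 A hypothetical query of the kind this change would permit, mixing a string 
 literal with a binary expression (the src table and key column are only 
 illustrative):
 {noformat}
 SELECT concat('key-', cast(key AS binary)) FROM src LIMIT 1;
 {noformat}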



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806257#comment-13806257
 ] 

Hudson commented on HIVE-784:
-

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-784 : Support uncorrelated subqueries in the WHERE clause (Harish Butani 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535040)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_exists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_groupby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_multiple_cols_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_notexists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_subquery_chain.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_windowing_corr.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_with_or_cond.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_exists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_in.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_multiinsert.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notexists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notin.q
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_groupby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_subquery_chain.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_windowing_corr.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_with_or_cond.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_exists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_in.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_multiinsert.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notexists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notin.q.out


 Support uncorrelated subqueries in the WHERE clause
 ---

 Key: HIVE-784
 URL: https://issues.apache.org/jira/browse/HIVE-784
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Ning Zhang
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: D13443.1.patch, D13443.2.patch, HIVE-784.1.patch.txt, 
 HIVE-784.2.patch, SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql


 Hive currently only supports views in the FROM clause; some Facebook use 
 cases suggest that Hive should support subqueries such as those connected by 
 IN/EXISTS in the WHERE clause. 
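 An illustrative uncorrelated WHERE-clause subquery of the kind this feature 
 targets (table and column names are only an example):
 {noformat}
 SELECT *
 FROM src
 WHERE src.key IN (SELECT max(s1.key) FROM src s1);
 {noformat}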



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5454) HCatalog runs a partition listing with an empty filter

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806264#comment-13806264
 ] 

Hudson commented on HIVE-5454:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5454 - HCatalog runs a partition listing with an empty filter (Harsh J via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535051)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/impl/HCatInputFormatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatInputFormat.java
* 
/hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/HCatMapReduceTest.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/inputoutput.xml
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hcatalog/utils/HBaseReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/GroupByAge.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SimpleRead.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreComplex.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreDemo.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SumNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/TypeDataCheck.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteTextPartitioned.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java


 HCatalog runs a partition listing with an empty filter
 --

 Key: HIVE-5454
 URL: https://issues.apache.org/jira/browse/HIVE-5454
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.13.0

 Attachments: D13317.1.patch, D13317.2.patch, D13317.3.patch


 This is a regression caused by HCATALOG-527, wherein the HCatLoader's way of 
 calling HCatInputFormat causes it to do 2x partition lookups - once without 
 the filter, and then again with the filter.
 For tables with a large number of partitions (10, say), the non-filter 
 lookup proves fatal both to the client ("Read timed out" errors from 
 ThriftMetaStoreClient because the server doesn't respond) and to the server 
 (too much data loaded into the cache, OOME, or slowdown).
 The fix would be to use a single call that also passes the partition filter 
 information, as was the case in the HCatalog 0.4 sources before HCATALOG-527.
 (HCatalog-release-wise, this affects all 0.5.x users.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806262#comment-13806262
 ] 

Hudson commented on HIVE-5430:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5430 : Refactor VectorizationContext and handle NOT expression with nulls. 
(Jitendra Nath Pandey via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535055)
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java

[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806258#comment-13806258
 ] 

Hudson commented on HIVE-5220:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5220 : Add option for removing intermediate directory for partition, which 
is empty (Navis via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535072)
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java


 Add option for removing intermediate directory for partition, which is empty
 

 Key: HIVE-5220
 URL: https://issues.apache.org/jira/browse/HIVE-5220
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: D12729.2.patch, HIVE-5220.D12729.1.patch


 For a deeply nested partitioned table, removing partitions does not remove 
 the intermediate directories, even when no partitions remain under them.
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=01/e=01
 /deep_part/c=09/d=01/e=02
 /deep_part/c=09/d=02
 /deep_part/c=09/d=02/e=01
 /deep_part/c=09/d=02/e=02
 {noformat}
 After removing partition (c='09'), the directories remain like this: 
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=02
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806259#comment-13806259
 ] 

Hudson commented on HIVE-5216:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


 Need to annotate public API in HCatalog
 ---

 Key: HIVE-5216
 URL: https://issues.apache.org/jira/browse/HIVE-5216
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-5216.2.patch, HIVE-5216.patch


 Need to annotate which API is considered public, using something like
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 Currently this is what is considered (at a minimum) the public API:
 HCatLoader
 HCatStorer
 HCatInputFormat
 HCatOutputFormat
 HCatReader
 HCatWriter
 HCatRecord
 HCatSchema
 This is needed so that clients/dependent projects know which APIs they can 
 rely on and which can change without notice.
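 A minimal illustration of the proposed annotations on one of the classes 
 above, assuming Hadoop's classification annotations 
 (org.apache.hadoop.classification) are the ones adopted:
 {code}
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;

 // Public API that may still evolve between releases.
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class HCatLoader {
   // existing implementation unchanged
 }
 {code}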



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806250#comment-13806250
 ] 

Hudson commented on HIVE-5628:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


 ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
 Test not end with it
 --

 Key: HIVE-5628
 URL: https://issues.apache.org/jira/browse/HIVE-5628
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5628.patch


 ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
 by PTest because they end with Test, and PTest requires that test class names 
 start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5605) AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation should be removed from org.apache.hive.service.cli.operation

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806253#comment-13806253
 ] 

Hudson commented on HIVE-5605:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5605 - AddResourceOperation, DeleteResourceOperation, DfsOperation, 
SetOperation should be removed from org.apache.hive.service.cli.operation 
(Vaibhav Gumashta via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535043)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/AddResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DeleteResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DfsOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SetOperation.java


 AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation 
 should be removed from org.apache.hive.service.cli.operation 
 ---

 Key: HIVE-5605
 URL: https://issues.apache.org/jira/browse/HIVE-5605
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5605.1.patch


 These classes are not used, as the processing for Add, Delete, DFS, and Set 
 commands is done by HiveCommandOperation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806261#comment-13806261
 ] 

Hudson commented on HIVE-5577:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


 Remove TestNegativeCliDriver script_broken_pipe1
 

 Key: HIVE-5577
 URL: https://issues.apache.org/jira/browse/HIVE-5577
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5577.1.patch.txt


 TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
 terribly important test. Let's remove it.
 Failures
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5350) Cleanup exception handling around parallel orderby

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806255#comment-13806255
 ] 

Hudson commented on HIVE-5350:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5350 - Cleanup exception handling around parallel orderby (Navis via Brock 
Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535045)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/PartitionKeySampler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java


 Cleanup exception handling around parallel orderby
 --

 Key: HIVE-5350
 URL: https://issues.apache.org/jira/browse/HIVE-5350
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: D13617.1.patch


 I think we should log the message to the console and the full exception to 
 the log:
 ExecDriver:
 {noformat}
 try {
   handleSampling(driverContext, mWork, job, conf);
   job.setPartitionerClass(HiveTotalOrderPartitioner.class);
 } catch (Exception e) {
   console.printInfo("Not enough sampling data.. Rolling back to "
       + "single reducer task");
   rWork.setNumReduceTasks(1);
   job.setNumReduceTasks(1);
 }
 {noformat}
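 A minimal sketch of that suggestion (the LOG field name is an assumption, 
 not the committed patch): keep the short message on the console and preserve 
 the full stack trace in the log.
 {noformat}
 try {
   handleSampling(driverContext, mWork, job, conf);
   job.setPartitionerClass(HiveTotalOrderPartitioner.class);
 } catch (Exception e) {
   // Full exception to the log, short message to the console.
   LOG.error("Sampling failed; rolling back to a single reducer task", e);
   console.printInfo("Not enough sampling data.. Rolling back to "
       + "single reducer task");
   rWork.setNumReduceTasks(1);
   job.setNumReduceTasks(1);
 }
 {noformat}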



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806265#comment-13806265
 ] 

Hudson commented on HIVE-5440:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


 HiveServer2 doesn't apply SQL operation's config property 
 --

 Key: HIVE-5440
 URL: https://issues.apache.org/jira/browse/HIVE-5440
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch


 The HiveServer2 thrift IDL includes an optional config overlay map which is 
 currently not used.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806267#comment-13806267
 ] 

Hudson commented on HIVE-5552:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5552 : Merging of QBJoinTrees doesn't handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


 Merging of QBJoinTrees doesn't handle filter pushdowns correctly
 

 Key: HIVE-5552
 URL: https://issues.apache.org/jira/browse/HIVE-5552
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch


 The following query fails:
 (this based on the schema from auto_join_filters.q)
 {noformat}
 explain
 SELECT sum(hash(a.key,a.value,b.key,b.value)) 
 FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
 JOIN myinput1 c 
  ON 
 b.value = c.value AND 
 a.key > 40
 {noformat}
 Whereas this query succeeds
 {noformat}
 explain
 SELECT sum(hash(a.key,a.value,b.key,b.value)) 
 FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
  ON  a.value = b.value and
 b.value = c.value AND 
 a.key > 40
 {noformat}
 Pushing the first condition to the first join triggers a merge of the 2 
 QBJoinTrees. During the merge, all the right-side filters identified for 
 pushing are assumed to refer to the merging table (b in this case). But the 
 pushable filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5560) Hive produces incorrect results on multi-distinct query

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806268#comment-13806268
 ] 

Hudson commented on HIVE-5560:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #152 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/152/])
HIVE-5560 : Hive produces incorrect results on multi-distinct query (Navis via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535059)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/count.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_noskew_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_noskew_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_distinct_samekey.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_map_ppr_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_ppr_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/limit_pushdown.q.out


 Hive produces incorrect results on multi-distinct query
 ---

 Key: HIVE-5560
 URL: https://issues.apache.org/jira/browse/HIVE-5560
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0, 0.12.0
Reporter: Vikram Dixit K
Assignee: Navis
 Fix For: 0.13.0

 Attachments: D13599.1.patch, D13599.2.patch


 {noformat}
 select key, count(distinct key) + count(distinct value) from src tablesample 
 (10 ROWS) group by key
 POSTHOOK: type: QUERY
 POSTHOOK: Input: default@src
 #### A masked pattern was here ####
 165 1
 val_165 1
 238 1
 val_238 1
 255 1
 val_255 1
 27  1
 val_27  1
 278 1
 val_278 1
 311 1
 val_311 1
 409 1
 val_409 1
 484 1
 val_484 1
 86  1
 val_86  1
 98  1
 val_98  1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Contribute to Hive Documentation

2013-10-27 Thread Brad Ruderman
Hi All-

I would like to add a section to this page (
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients)
featuring the client libraries for all the different languages that leverage
HS2 drivers. Can I have access to edit the wiki page?

Known drivers:
python - pyhs2 https://github.com/BradRuderman/pyhs2
ruby - https://github.com/forward3d/rbhive
node - https://github.com/forward/node-hive

username: bradruder...@gmail.com

Thanks,
Brad


[jira] [Commented] (HIVE-5629) Fix two javadoc failures in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806279#comment-13806279
 ] 

Hudson commented on HIVE-5629:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5629 : Fix two javadoc failures in HCatalog (Brock Noland via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535513)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/InputJobInfo.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java


 Fix two javadoc failures in HCatalog
 

 Key: HIVE-5629
 URL: https://issues.apache.org/jira/browse/HIVE-5629
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5629.patch


 I am seeing two javadoc failures on HCatalog. These are not being seen by 
 PTest and indeed I cannot reproduce on my Mac but can on Linux. Regardless 
 they should be fixed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5628) ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with Test not end with it

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806277#comment-13806277
 ] 

Hudson commented on HIVE-5628:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5628 : ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should 
start with Test not end with it (Brock Noland via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535761)
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/DynamicMultiDimeCollectionTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPrunnerTest.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestDynamicMultiDimeCollection.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/TestListBucketingPrunner.java


 ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest should start with 
 Test not end with it
 --

 Key: HIVE-5628
 URL: https://issues.apache.org/jira/browse/HIVE-5628
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5628.patch


 ListBucketingPrunnerTest and DynamicMultiDimeCollectionTest will not be run 
 by PTest because they end with Test, and PTest requires that test class names 
 start with Test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5605) AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation should be removed from org.apache.hive.service.cli.operation

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806280#comment-13806280
 ] 

Hudson commented on HIVE-5605:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5605 - AddResourceOperation, DeleteResourceOperation, DfsOperation, 
SetOperation should be removed from org.apache.hive.service.cli.operation 
(Vaibhav Gumashta via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535043)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/AddResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DeleteResourceOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/DfsOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SetOperation.java


 AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation 
 should be removed from org.apache.hive.service.cli.operation 
 ---

 Key: HIVE-5605
 URL: https://issues.apache.org/jira/browse/HIVE-5605
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5605.1.patch


 These classes are not used, as the processing for the Add, Delete, DFS and Set 
 commands is done by HiveCommandOperation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5440) HiveServer2 doesn't apply SQL operation's config property

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806292#comment-13806292
 ] 

Hudson commented on HIVE-5440:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5440: HiveServer2 doesn't apply SQL operation's config property (Prasad 
Mujumdar via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535889)
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java
* /hive/trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java


 HiveServer2 doesn't apply SQL operation's config property 
 --

 Key: HIVE-5440
 URL: https://issues.apache.org/jira/browse/HIVE-5440
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-5440.1.patch, HIVE-5440.2.patch


 The HiveServer2 Thrift IDL includes an optional config overlay map, which is 
 currently not used.
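 A minimal sketch of how such an overlay could be applied (illustrative names, 
 not the committed patch): copy the session HiveConf and lay the per-statement 
 settings over it, so only that operation sees them.
 {code}
 import java.util.Map;
 import org.apache.hadoop.hive.conf.HiveConf;

 public class ConfOverlayExample {
   // Per-operation copy of the session conf with the client-supplied overlay
   // applied; the session-level configuration stays untouched.
   public static HiveConf withOverlay(HiveConf sessionConf,
       Map<String, String> overlay) {
     HiveConf opConf = new HiveConf(sessionConf);
     for (Map.Entry<String, String> e : overlay.entrySet()) {
       opConf.set(e.getKey(), e.getValue());
     }
     return opConf;
   }
 }
 {code}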



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806285#comment-13806285
 ] 

Hudson commented on HIVE-5220:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5220 : Add option for removing intermediate directory for partition, which 
is empty (Navis via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535072)
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java


 Add option for removing intermediate directory for partition, which is empty
 

 Key: HIVE-5220
 URL: https://issues.apache.org/jira/browse/HIVE-5220
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: D12729.2.patch, HIVE-5220.D12729.1.patch


 For a deeply nested partitioned table, intermediate directories are not removed 
 even when removing partitions leaves them empty.
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=01/e=01
 /deep_part/c=09/d=01/e=02
 /deep_part/c=09/d=02
 /deep_part/c=09/d=02/e=01
 /deep_part/c=09/d=02/e=02
 {noformat}
 After removing partition (c='09'), the directory tree still contains:
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=02
 {noformat}
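 A minimal sketch of the kind of cleanup this asks for (assuming a Hadoop 
 FileSystem handle; illustrative, not the committed Warehouse change): after a 
 partition is dropped, walk up from its path and delete intermediate 
 directories that are now empty, stopping at the table root.
 {code}
 import java.io.IOException;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class EmptyParentCleaner {
   // Deletes now-empty parents of 'leaf' up to, but not including, 'tableRoot'.
   public static void deleteEmptyParents(FileSystem fs, Path leaf, Path tableRoot)
       throws IOException {
     Path current = leaf.getParent();
     while (current != null && !current.equals(tableRoot)) {
       if (fs.listStatus(current).length > 0) {
         break;                   // directory still has children: stop here
       }
       fs.delete(current, false); // non-recursive delete of the empty dir
       current = current.getParent();
     }
   }
 }
 {code}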



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5637) Sporadic minimr test failure

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806293#comment-13806293
 ] 

Hudson commented on HIVE-5637:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5637 : Sporadic minimr test failure (Navis via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535510)
* /hive/trunk/ql/src/test/queries/clientpositive/import_exported_table.q


 Sporadic minimr test failure
 

 Key: HIVE-5637
 URL: https://issues.apache.org/jira/browse/HIVE-5637
 Project: Hive
  Issue Type: Test
  Components: Tests
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.13.0

 Attachments: HIVE-5637.1.patch.txt


 {noformat}
 ant test -Dtestcase=TestMinimrCliDriver 
 -Dqfile=import_exported_table.q,load_hdfs_file_with_space_in_the_name.q 
 -Dmodule=ql
 {noformat}
 Fails with message like this.
 {noformat}
 Begin query: load_hdfs_file_with_space_in_the_name.q
 mkdir: cannot create directory hdfs:///tmp/test/: File exists
 Exception: Client Execution failed with error code = -1
 See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
 more logs.
 junit.framework.AssertionFailedError: Client Execution failed with error code 
 = -1
 See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
 more logs.
 at junit.framework.Assert.fail(Assert.java:47)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:349)
 at 
 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_load_hdfs_file_with_space_in_the_name(TestMinimrCliDriver.java:291)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:154)
 at junit.framework.TestCase.runBare(TestCase.java:127)
 at junit.framework.TestResult$1.protect(TestResult.java:106)
 at junit.framework.TestResult.runProtected(TestResult.java:124)
 at junit.framework.TestResult.run(TestResult.java:109)
 at junit.framework.TestCase.run(TestCase.java:118)
 at junit.framework.TestSuite.runTest(TestSuite.java:208)
 at junit.framework.TestSuite.run(TestSuite.java:203)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5625) Fix issue with metastore version restriction test.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806281#comment-13806281
 ] 

Hudson commented on HIVE-5625:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5625 - Fix issue with metastore version restriction test. (Vikram Dixit K 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535402)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


 Fix issue with metastore version restriction test.
 --

 Key: HIVE-5625
 URL: https://issues.apache.org/jira/browse/HIVE-5625
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0

 Attachments: HIVE-5625.1.patch


 Based on Brock's comments, the change made in HIVE-5403 changed the nature of 
 the test.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5577) Remove TestNegativeCliDriver script_broken_pipe1

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806288#comment-13806288
 ] 

Hudson commented on HIVE-5577:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5577 : Remove TestNegativeCliDriver script_broken_pipe1 (Brock Noland via 
Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535643)
* /hive/trunk/ql/src/test/queries/clientnegative/script_broken_pipe1.q
* /hive/trunk/ql/src/test/results/clientnegative/script_broken_pipe1.q.out


 Remove TestNegativeCliDriver script_broken_pipe1
 

 Key: HIVE-5577
 URL: https://issues.apache.org/jira/browse/HIVE-5577
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5577.1.patch.txt


 TestNegativeCliDriver script_broken_pipe1 is extremely flaky and not a 
 terribly important test. Let's remove it.
 Failures
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/206/testReport/junit/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/
 https://builds.apache.org/user/brock/my-views/view/hive/job/Hive-trunk-hadoop1-ptest/204/testReport/org.apache.hadoop.hive.cli/TestNegativeCliDriver/testNegativeCliDriver_script_broken_pipe1/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5482) JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be consistent with other modules

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806283#comment-13806283
 ] 

Hudson commented on HIVE-5482:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5482 : JDBC should depend on httpclient.version and httpcore.version 4.1.3 
to be consistent with other modules (Vaibhav Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535620)
* /hive/trunk/ivy/libraries.properties


 JDBC should depend on httpclient.version and httpcore.version 4.1.3 to be 
 consistent with other modules
 ---

 Key: HIVE-5482
 URL: https://issues.apache.org/jira/browse/HIVE-5482
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-5482.1.patch


 JDBC currently depends on 4.2.4 and 4.2.5, which conflicts with thrift-0.9, 
 which depends on 4.1.3.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806290#comment-13806290
 ] 

Hudson commented on HIVE-5403:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5403: Perflogger broken due to HIVE-5403 (Vikram Dixit K via Gunther 
Hagleitner) (gunther: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535598)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
HIVE-5403 : Move loading of filesystem, ugi, metastore client to hive session 
(Vikram Dixit via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535039)
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.13.0

 Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch, HIVE-5403.3.patch, 
 HIVE-5403.4.patch


 As part of HIVE-5184, the metastore connection and filesystem loading were done 
 as part of the Tez session, so as to speed up query times while paying a cost 
 at startup. We can do this more generally in Hive, to apply to both the 
 MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5454) HCatalog runs a partition listing with an empty filter

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806291#comment-13806291
 ] 

Hudson commented on HIVE-5454:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5454 - HCatalog runs a partition listing with an empty filter (Harsh J via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535051)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/impl/HCatInputFormatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatInputFormat.java
* 
/hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/HCatMapReduceTest.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/inputoutput.xml
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hcatalog/utils/HBaseReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/GroupByAge.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/ReadWrite.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SimpleRead.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreComplex.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreDemo.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/StoreNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/SumNumbers.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/TypeDataCheck.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteJson.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteRC.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteText.java
* 
/hive/trunk/hcatalog/src/test/e2e/hcatalog/udfs/java/org/apache/hive/hcatalog/utils/WriteTextPartitioned.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java


 HCatalog runs a partition listing with an empty filter
 --

 Key: HIVE-5454
 URL: https://issues.apache.org/jira/browse/HIVE-5454
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.13.0

 Attachments: D13317.1.patch, D13317.2.patch, D13317.3.patch


 This is a regression caused by HCATALOG-527, wherein the HCatLoader's way of 
 calling HCatInputFormat causes it to do 2x partition lookups - once without 
 the filter, and then again with the filter.
 For tables with a large number of partitions (10, say), the non-filter lookup 
 proves fatal both to the client ("Read timed out" errors from 
 ThriftMetaStoreClient, because the server doesn't respond) and to the server 
 (too much data loaded into the cache, OOME, or slowdown).
 The fix would be to use a single call that also passes the partition filter 
 information, as was the case in the HCatalog 0.4 sources before HCATALOG-527.
 (HCatalog-release-wise, this affects all 0.5.x users)
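 For illustration, a hedged sketch of that single filtered listing 
 (listPartitionsByFilter is an existing HiveMetaStoreClient call; the wrapper 
 and variable names here are assumptions):
 {code}
 import java.util.List;
 import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
 import org.apache.hadoop.hive.metastore.api.Partition;

 public class FilteredListingExample {
   // One metastore round trip that includes the partition filter, instead of
   // an unfiltered listing followed by a filtered one.
   public static List<Partition> list(HiveMetaStoreClient client, String db,
       String table, String filter) throws Exception {
     return client.listPartitionsByFilter(db, table, filter, (short) -1); // -1 = all matches
   }
 }
 {code}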



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5350) Cleanup exception handling around parallel orderby

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806282#comment-13806282
 ] 

Hudson commented on HIVE-5350:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5350 - Cleanup exception handling around parallel orderby (Navis via Brock 
Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535045)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/PartitionKeySampler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java


 Cleanup exception handling around parallel orderby
 --

 Key: HIVE-5350
 URL: https://issues.apache.org/jira/browse/HIVE-5350
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: D13617.1.patch


 I think we should log the message to the console and the full exception to 
 the log:
 ExecDriver:
 {noformat}
 try {
   handleSampling(driverContext, mWork, job, conf);
   job.setPartitionerClass(HiveTotalOrderPartitioner.class);
 } catch (Exception e) {
   console.printInfo("Not enough sampling data.. Rolling back to single reducer task");
   rWork.setNumReduceTasks(1);
   job.setNumReduceTasks(1);
 }
 {noformat}
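 A sketch of what that suggestion could look like (assuming ExecDriver's 
 existing console and LOG fields; not the committed patch):
 {noformat}
 } catch (Exception e) {
   // short notice on the console...
   console.printInfo("Not enough sampling data.. Rolling back to single reducer task");
   // ...and the full stack trace in the log, so the failure stays diagnosable
   LOG.warn("Sampling failed", e);
   rWork.setNumReduceTasks(1);
   job.setNumReduceTasks(1);
 }
 {noformat}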



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806289#comment-13806289
 ] 

Hudson commented on HIVE-5430:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5430 : Refactor VectorizationContext and handle NOT expression with nulls. 
(Jitendra Nath Pandey via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535055)
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt
* /hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt
* 
/hive/trunk/ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt
* 
/hive/trunk/ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java
* 

[jira] [Commented] (HIVE-5552) Merging of QBJoinTrees doesn't handle filter pushdowns correctly

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806294#comment-13806294
 ] 

Hudson commented on HIVE-5552:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5552 : Merging of QBJoinTrees doesnt handle filter pushdowns correctly 
(Harish Butani via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535858)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_merging.q
* /hive/trunk/ql/src/test/results/clientpositive/join_merging.q.out


 Merging of QBJoinTrees doesn't handle filter pushdowns correctly
 

 Key: HIVE-5552
 URL: https://issues.apache.org/jira/browse/HIVE-5552
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: HIVE-5552.1.patch, HIVE-5552.2.patch


 The following query fails:
 (this is based on the schema from auto_join_filters.q)
 {noformat}
 explain
 SELECT sum(hash(a.key,a.value,b.key,b.value)) 
 FROM myinput1 a LEFT OUTER JOIN myinput1 b on a.value = b.value  RIGHT OUTER 
 JOIN myinput1 c 
  ON 
 b.value = c.value AND 
 a.key > 40
 {noformat}
 Whereas this query succeeds
 {noformat}
 explain
 SELECT sum(hash(a.key,a.value,b.key,b.value)) 
 FROM myinput1 a LEFT OUTER JOIN myinput1 b RIGHT OUTER JOIN myinput1 c 
  ON  a.value = b.value and
 b.value = c.value AND 
 a.key > 40
 {noformat}
 Pushing the first condition to the first join triggers a merge of the 2 
 QBJoinTrees. During the merge, all the right-side filters identified for 
 pushing are assumed to refer to the merging table (b in this case). But the 
 pushable filters can refer to any left table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5216) Need to annotate public API in HCatalog

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806286#comment-13806286
 ] 

Hudson commented on HIVE-5216:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5216 : Need to annotate public API in HCatalog (Eugene Koifman via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535535)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/HCatRecord.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/schema/HCatSchema.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatReader.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/data/transfer/HCatWriter.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatLoader.java
* 
/hive/trunk/hcatalog/hcatalog-pig-adapter/src/main/java/org/apache/hive/hcatalog/pig/HCatStorer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Main.java


 Need to annotate public API in HCatalog
 ---

 Key: HIVE-5216
 URL: https://issues.apache.org/jira/browse/HIVE-5216
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-5216.2.patch, HIVE-5216.patch


 Need to annotate which API is considered public, using something like
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 Currently this is what is considered (at a minimum) the public API:
 HCatLoader
 HCatStorer
 HCatInputFormat
 HCatOutputFormat
 HCatReader
 HCatWriter
 HCatRecord
 HCatSchema
 This is needed so that clients/dependent projects know which API they can 
 rely on and which can change w/o notice.
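 For illustration, a hedged sketch of an annotated class (the annotation 
 package is an assumption; Hadoop ships these in 
 org.apache.hadoop.classification):
 {code}
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;

 // Marks the class as public API whose shape may still evolve between releases.
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class HCatExampleApi {
 }
 {code}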



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806284#comment-13806284
 ] 

Hudson commented on HIVE-784:
-

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-784 : Support uncorrelated subqueries in the WHERE clause (Harish Butani 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535040)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_exists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_groupby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_multiple_cols_in_select.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/subquery_notexists_implicit_gby.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_subquery_chain.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_windowing_corr.q
* /hive/trunk/ql/src/test/queries/clientnegative/subquery_with_or_cond.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_exists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_in.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_multiinsert.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notexists.q
* /hive/trunk/ql/src/test/queries/clientpositive/subquery_notin.q
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_groupby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_subquery_chain.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_windowing_corr.q.out
* /hive/trunk/ql/src/test/results/clientnegative/subquery_with_or_cond.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_exists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_in.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_multiinsert.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notexists.q.out
* /hive/trunk/ql/src/test/results/clientpositive/subquery_notin.q.out


 Support uncorrelated subqueries in the WHERE clause
 ---

 Key: HIVE-784
 URL: https://issues.apache.org/jira/browse/HIVE-784
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Ning Zhang
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: D13443.1.patch, D13443.2.patch, HIVE-784.1.patch.txt, 
 HIVE-784.2.patch, SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql


 Hive currently only supports views in the FROM-clause; some Facebook use cases 
 suggest that Hive should support subqueries, such as those connected by 
 IN/EXISTS, in the WHERE-clause.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806287#comment-13806287
 ] 

Hudson commented on HIVE-5599:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5599 - Change default logging level to INFO (Brock Noland, Reviewed by 
Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535047)
* /hive/trunk/common/src/java/conf/hive-log4j.properties


 Change default logging level to INFO
 

 Key: HIVE-5599
 URL: https://issues.apache.org/jira/browse/HIVE-5599
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5599.patch


 The default logging level is warn:
 https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
 but Hive logs lots of good information at INFO level. Additionally, most 
 Hadoop projects log at INFO by default. Let's change the logging level to 
 INFO by default.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5619) Allow concat() to accept mixed string/binary args

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806278#comment-13806278
 ] 

Hudson commented on HIVE-5619:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5619 : Allow concat() to accept mixed string/binary args (Jason Dere via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535621)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


 Allow concat() to accept mixed string/binary args
 -

 Key: HIVE-5619
 URL: https://issues.apache.org/jira/browse/HIVE-5619
 Project: Hive
  Issue Type: Improvement
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5619.1.patch


 concat() is currently strict about allowing either all binary or all 
 non-binary arguments. Loosen this to permit mixed params.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5511) percentComplete returned by job status from WebHCat is null

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806276#comment-13806276
 ] 

Hudson commented on HIVE-5511:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5511 : percentComplete returned by job status from WebHCat is null (Eugene 
Koifman via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535796)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission.conf
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission_streaming.conf
* /hive/trunk/hcatalog/webhcat/svr/src/main/bin/webhcat_config.sh
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/config/override-container-log4j.properties
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobSubmissionConstants.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LaunchMapper.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTrivialExecService.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/mapred/WebHCatJTShim20S.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/mapred/WebHCatJTShim23.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


 percentComplete returned by job status from WebHCat is null
 ---

 Key: HIVE-5511
 URL: https://issues.apache.org/jira/browse/HIVE-5511
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-5511.3.patch, HIVE-5511.5.patch


 In hadoop1 the logging from MR is sent to stderr. In hadoop2 it goes, by 
 default, to syslog. templeton.tool.LaunchMapper expects to see the output on 
 stderr to produce 'percentComplete' in the job status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5560) Hive produces incorrect results on multi-distinct query

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806295#comment-13806295
 ] 

Hudson commented on HIVE-5560:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #215 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/215/])
HIVE-5560 : Hive produces incorrect results on multi-distinct query (Navis via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1535059)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/count.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby2_noskew_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_map_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby3_noskew_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_distinct_samekey.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_map_ppr_multi_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_ppr_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/limit_pushdown.q.out


 Hive produces incorrect results on multi-distinct query
 ---

 Key: HIVE-5560
 URL: https://issues.apache.org/jira/browse/HIVE-5560
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0, 0.12.0
Reporter: Vikram Dixit K
Assignee: Navis
 Fix For: 0.13.0

 Attachments: D13599.1.patch, D13599.2.patch


 {noformat}
 select key, count(distinct key) + count(distinct value) from src tablesample 
 (10 ROWS) group by key
 POSTHOOK: type: QUERY
 POSTHOOK: Input: default@src
 #### A masked pattern was here ####
 165 1
 val_165 1
 238 1
 val_238 1
 255 1
 val_255 1
 27  1
 val_27  1
 278 1
 val_278 1
 311 1
 val_311 1
 409 1
 val_409 1
 484 1
 val_484 1
 86  1
 val_86  1
 98  1
 val_98  1
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806308#comment-13806308
 ] 

Hive QA commented on HIVE-5643:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610466/HIVE-5643.1.patch.txt

{color:green}SUCCESS:{color} +1 4484 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1259/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1259/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
 port to quorum hosts
 

 Key: HIVE-5643
 URL: https://issues.apache.org/jira/browse/HIVE-5643
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5643.1.patch.txt


 ZooKeeperHiveLockManager calls the below method to construct the connection 
 string for ZooKeeper connection.
 {code}
   private static String getQuorumServers(HiveConf conf) {
     String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
     String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
     return hosts + ":" + port;
   }
 {code}
 For example:
 HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
 HIVE_ZOOKEEPER_CLIENT_PORT=
 The connection string given to the ZooKeeper object is node1, node2, node3:. 
 ZooKeeper assumes the default port 2181 for hostnames that don't carry a 
 port, so this works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it 
 is different, the ZooKeeper client object tries to connect to node1 and node2 
 on port 2181, which always fails. That leaves only one usable choice, the 
 last host, which then receives all the load from Hive.
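 A minimal sketch of a fix (assuming the intent is a per-host port; this 
 mirrors the method above and is illustrative, not necessarily the attached 
 patch): append the configured port to each host that does not already carry 
 one, instead of appending it once to the whole comma-separated list.
 {code}
 private static String getQuorumServers(HiveConf conf) {
   String[] hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM).split(",");
   String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
   StringBuilder quorum = new StringBuilder();
   for (int i = 0; i < hosts.length; i++) {
     quorum.append(hosts[i].trim());
     if (!hosts[i].contains(":")) {
       quorum.append(':').append(port);  // per-host port, not list-wide
     }
     if (i != hosts.length - 1) {
       quorum.append(',');
     }
   }
   return quorum.toString();
 }
 {code}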



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5656) Hive produces unclear, confusing SemanticException when dealing with mod or pmod by zero

2013-10-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806318#comment-13806318
 ] 

Hive QA commented on HIVE-5656:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610465/HIVE-5656.patch

{color:green}SUCCESS:{color} +1 4485 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1260/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1260/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Hive produces unclear, confusing SemanticException when dealing with mod or 
 pmod by zero
 

 Key: HIVE-5656
 URL: https://issues.apache.org/jira/browse/HIVE-5656
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.12.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-5656.patch


 {code}
 hive> select 5%0 from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFOPMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFOPMod@21b594a9 of class 
 org.apache.hadoop.hive.ql.udf.UDFOPMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 hive> select pmod(5,0) from tmp2 limit 1;
 FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '0': 
 org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
 public org.apache.hadoop.io.IntWritable 
 org.apache.hadoop.hive.ql.udf.UDFPosMod.evaluate(org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable)
   on object org.apache.hadoop.hive.ql.udf.UDFPosMod@174ed99a of class 
 org.apache.hadoop.hive.ql.udf.UDFPosMod with arguments 
 {5:org.apache.hadoop.io.IntWritable, 0:org.apache.hadoop.io.IntWritable} of 
 size 2
 {code}
 Exception stack:
 {code}
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1112)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
 at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
 at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:181)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:8870)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:8826)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2734)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2531)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7562)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8365)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8591)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:351)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1004)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at 

[jira] [Updated] (HIVE-5581) Implement vectorized year/month/day... etc. for string arguments

2013-10-27 Thread Teddy Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-5581:
-

Attachment: HIVE-5581.1.patch.txt

Added {{VectorUDFTimestampFieldLong#evaluateBytes}} and 
{{VectorUDFTimestampFieldLong#getField(long)}} to handle 
{{BytesColumnVector}}s.

Changed {{TestVectorTimestampExpressions}} to cover string arguments also.

Review request at https://reviews.apache.org/r/14979/
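
For illustration, a hedged sketch of the string path described above (field 
names like {{calendar}} and {{field}} are assumptions; the vectorized 
null/isRepeating bookkeeping is omitted):
{code}
// Parse the string cell into a java.sql.Timestamp, then extract the requested
// calendar field, mirroring the existing long (timestamp) path.
protected long getField(byte[] bytes, int start, int length) {
  String s = new String(bytes, start, length, java.nio.charset.StandardCharsets.UTF_8);
  java.sql.Timestamp ts = java.sql.Timestamp.valueOf(s); // "yyyy-mm-dd hh:mm:ss[.f...]"
  calendar.setTime(ts);
  return calendar.get(field); // e.g. Calendar.YEAR, Calendar.MONTH, ...
}
{code}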

 Implement vectorized year/month/day... etc. for string arguments
 

 Key: HIVE-5581
 URL: https://issues.apache.org/jira/browse/HIVE-5581
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Teddy Choi
 Attachments: HIVE-5581.1.patch.txt


 Functions year(), month(), day(), weekofyear(), hour(), minute(), second() 
 need to be implemented for string arguments in vectorized mode. 
 They already work for timestamp arguments.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-27 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806333#comment-13806333
 ] 

Remus Rusanu commented on HIVE-5653:


The cause is VectorReduceSinkOperator, which does not initialize the tagBytes 
from the conf during initializeOp. This results in the reduce-side 
ExecReducer.reduce reading all keys as tag (== alias) 0. The reduce-side join 
operator then sees all the data on one side and nothing on the other, so no 
rows match.
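
For illustration, a hedged sketch of that initialization (mirroring what the 
row-mode ReduceSinkOperator does; exact field names are assumptions):
{code}
@Override
protected void initializeOp(Configuration hconf) throws HiveException {
  super.initializeOp(hconf);
  tag = conf.getTag();      // which join side / alias this sink feeds
  tagByte[0] = (byte) tag;  // serialized with each key so ExecReducer can
                            // route rows to the right alias
}
{code}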

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu

 Vectorized shuffle join should work out-of-the-box, but it produces an empty 
 result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-27 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5653:
---

Affects Version/s: 0.13.0
   Status: Patch Available  (was: Open)

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-5653.1.patch


 Vectorized shuffle join should work out-of-the-box, but it produces an empty 
 result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-27 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5653:
---

Attachment: HIVE-5653.1.patch

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-5653.1.patch


 Vectorized shuffle join should work out-of-the-box, but it produces an empty 
 result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5643) ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk port to quorum hosts

2013-10-27 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806347#comment-13806347
 ] 

Edward Capriolo commented on HIVE-5643:
---

+1

 ZooKeeperHiveLockManager.getQuorumServers incorrectly appends the custom zk 
 port to quorum hosts
 

 Key: HIVE-5643
 URL: https://issues.apache.org/jira/browse/HIVE-5643
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5643.1.patch.txt


 ZooKeeperHiveLockManager calls the below method to construct the connection 
 string for ZooKeeper connection.
 {code}
   private static String getQuorumServers(HiveConf conf) {
     String hosts = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_QUORUM);
     String port = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_CLIENT_PORT);
     return hosts + ":" + port;
   }
 {code}
 For example:
 HIVE_ZOOKEEPER_QUORUM=node1, node2, node3
 HIVE_ZOOKEEPER_CLIENT_PORT=
 The connection string given to the ZooKeeper object is node1, node2, node3:. 
 ZooKeeper assumes the default port 2181 for hostnames that don't carry a 
 port, so this works fine as long as HIVE_ZOOKEEPER_CLIENT_PORT is 2181. If it 
 is different, the ZooKeeper client object tries to connect to node1 and node2 
 on port 2181, which always fails. That leaves only one usable choice, the 
 last host, which then receives all the load from Hive.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-3976:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you very much for this great contribution Xuefu! I have committed this to 
trunk.

 Support specifying scale and precision with Hive decimal type
 -

 Key: HIVE-3976
 URL: https://issues.apache.org/jira/browse/HIVE-3976
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.11.0
Reporter: Mark Grover
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-3976.10.patch, HIVE-3976.11.patch, 
 HIVE-3976.1.patch, HIVE-3976.2.patch, HIVE-3976.3.patch, HIVE-3976.4.patch, 
 HIVE-3976.5.patch, HIVE-3976.6.patch, HIVE-3976.7.patch, HIVE-3976.8.patch, 
 HIVE-3976.9.patch, HIVE-3976.patch, remove_prec_scale.diff


 HIVE-2693 introduced support for Decimal datatype in Hive. However, the 
 current implementation has unlimited precision and provides no way to specify 
 precision and scale when creating the table.
 For example, MySQL allows users to specify scale and precision of the decimal 
 datatype when creating the table:
 {code}
 CREATE TABLE numbers (a DECIMAL(20,2));
 {code}
 Hive should support something similar too.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4974:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you for the contribution Chris! I have committed this to trunk!

 JDBC2 statements and result sets are not able to return their parents
 -

 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4974.2-trunk.patch.txt, 
 HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
 HIVE-4974-trunk.patch.txt


 The getConnection methods of HiveStatement and HivePreparedStatement throw a 
 "not supported" SQLException. The constructors should take the HiveConnection 
 that creates them as an argument.
 Similarly, HiveBaseResultSet is not capable of returning the Statement that 
 created it.
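 A hedged sketch of the described change (a fragment; the remaining Statement 
 methods are unchanged):
 {code}
 public class HiveStatement /* implements java.sql.Statement */ {
   private final HiveConnection parent;  // the connection that created us

   HiveStatement(HiveConnection parent) {
     this.parent = parent;
   }

   public java.sql.Connection getConnection() {
     return parent;  // instead of throwing a "not supported" SQLException
   }
 }
 {code}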



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5653) Vectorized Shuffle Join produces incorrect results

2013-10-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806367#comment-13806367
 ] 

Hive QA commented on HIVE-5653:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610476/HIVE-5653.1.patch

{color:green}SUCCESS:{color} +1 4484 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1261/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1261/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Vectorized Shuffle Join produces incorrect results
 --

 Key: HIVE-5653
 URL: https://issues.apache.org/jira/browse/HIVE-5653
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-5653.1.patch


 Vectorized shuffle join should work out-of-the-box, but it produces an 
 empty result set. Investigating.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806376#comment-13806376
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #153 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/153/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


 JDBC2 statements and result sets are not able to return their parents
 -

 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4974.2-trunk.patch.txt, 
 HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
 HIVE-4974-trunk.patch.txt


 The getConnection methods of HiveStatement and HivePreparedStatement throw a 
 "not supported" SQLException. The constructors should take the HiveConnection 
 that creates them as an argument.
 Similarly, HiveBaseResultSet is not capable of returning the Statement that 
 created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806375#comment-13806375
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #153 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/153/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out

[jira] [Updated] (HIVE-5631) Index creation on a skew table fails

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5631:
--

Status: Open  (was: Patch Available)

 Index creation on a skew table fails
 

 Key: HIVE-5631
 URL: https://issues.apache.org/jira/browse/HIVE-5631
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5631.1.patch.txt, HIVE-5631.2.patch.txt


 REPRO STEPS:
 create database skewtest;
 use skewtest;
 create table skew (id bigint, acct string) skewed by (acct) on ('CC','CH');
 create index skew_indx on table skew (id) as 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 The last DDL fails with the following error.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 InvalidObjectException(message:Invalid skew column [acct])
 When creating a table, Hive has sanity tests to make sure the columns have 
 proper names and the skewed columns are a subset of the table columns. Here 
 we fail because the index table has skewed column info. The index table's 
 skewed columns include {acct} and its columns are {id, _bucketname, 
 _offsets}. As the skewed column {acct} is not part of the table columns, 
 Hive throws the exception.
 The reason the index table got skewed column info even though its definition 
 has no such info is: when creating the index table, a deep copy of the base 
 table's StorageDescriptor (SD) (in this case 'skew') is made. In that copied 
 SD, index-specific parameters are set and unrelated parameters are reset. 
 Skewed column info is not reset (a few other params are not reset either). 
 That's why the index table contains the skewed column info.
 Fix: Instead of deep copying the base table's StorageDescriptor, create a 
 new one from the gathered info. This way the index table avoids inheriting 
 unnecessary SD properties from the base table.
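 A minimal sketch of that fix (method and parameter names are illustrative, 
 not the actual patch):
 {code}
 import java.util.List;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.SerDeInfo;
 import org.apache.hadoop.hive.metastore.api.StorageDescriptor;

 // Sketch only: build the index table's SD from scratch with just the
 // fields the index needs, instead of deep copying the base table's SD
 // and resetting unwanted fields afterwards. Skewed-column info is never
 // copied, so the metastore sanity check on skewed columns passes.
 class IndexSdSketch {
   static StorageDescriptor buildIndexSd(List<FieldSchema> indexCols,
       String inputFormat, String outputFormat, SerDeInfo serdeInfo) {
     StorageDescriptor sd = new StorageDescriptor();
     sd.setCols(indexCols); // e.g. {id, _bucketname, _offsets}
     sd.setInputFormat(inputFormat);
     sd.setOutputFormat(outputFormat);
     sd.setSerdeInfo(serdeInfo);
     return sd;
   }
 }
 {code}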



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5631) Index creation on a skew table fails

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5631:
--

Status: Patch Available  (was: Open)

 Index creation on a skew table fails
 

 Key: HIVE-5631
 URL: https://issues.apache.org/jira/browse/HIVE-5631
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5631.1.patch.txt, HIVE-5631.2.patch.txt


 REPRO STEPS:
 create database skewtest;
 use skewtest;
 create table skew (id bigint, acct string) skewed by (acct) on ('CC','CH');
 create index skew_indx on table skew (id) as 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 The last DDL fails with the following error.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 InvalidObjectException(message:Invalid skew column [acct])
 When creating a table, Hive has sanity tests to make sure the columns have 
 proper names and the skewed columns are a subset of the table columns. Here 
 we fail because the index table has skewed column info. The index table's 
 skewed columns include {acct} and its columns are {id, _bucketname, 
 _offsets}. As the skewed column {acct} is not part of the table columns, 
 Hive throws the exception.
 The reason the index table got skewed column info even though its definition 
 has no such info is: when creating the index table, a deep copy of the base 
 table's StorageDescriptor (SD) (in this case 'skew') is made. In that copied 
 SD, index-specific parameters are set and unrelated parameters are reset. 
 Skewed column info is not reset (a few other params are not reset either). 
 That's why the index table contains the skewed column info.
 Fix: Instead of deep copying the base table's StorageDescriptor, create a 
 new one from the gathered info. This way the index table avoids inheriting 
 unnecessary SD properties from the base table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5631) Index creation on a skew table fails

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5631:
--

Attachment: HIVE-5631.3.patch.txt

 Index creation on a skew table fails
 

 Key: HIVE-5631
 URL: https://issues.apache.org/jira/browse/HIVE-5631
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5631.1.patch.txt, HIVE-5631.2.patch.txt, 
 HIVE-5631.3.patch.txt


 REPRO STEPS:
 create database skewtest;
 use skewtest;
 create table skew (id bigint, acct string) skewed by (acct) on ('CC','CH');
 create index skew_indx on table skew (id) as 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 The last DDL fails with the following error.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 InvalidObjectException(message:Invalid skew column [acct])
 When creating a table, Hive has sanity tests to make sure the columns have 
 proper names and the skewed columns are a subset of the table columns. Here 
 we fail because the index table has skewed column info. The index table's 
 skewed columns include {acct} and its columns are {id, _bucketname, 
 _offsets}. As the skewed column {acct} is not part of the table columns, 
 Hive throws the exception.
 The reason the index table got skewed column info even though its definition 
 has no such info is: when creating the index table, a deep copy of the base 
 table's StorageDescriptor (SD) (in this case 'skew') is made. In that copied 
 SD, index-specific parameters are set and unrelated parameters are reset. 
 Skewed column info is not reset (a few other params are not reset either). 
 That's why the index table contains the skewed column info.
 Fix: Instead of deep copying the base table's StorageDescriptor, create a 
 new one from the gathered info. This way the index table avoids inheriting 
 unnecessary SD properties from the base table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-4978) [WebHCat] Close the PrintWriter after writing data

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti reassigned HIVE-4978:
-

Assignee: Venki Korukanti

 [WebHCat] Close the PrintWriter after writing data
 --

 Key: HIVE-4978
 URL: https://issues.apache.org/jira/browse/HIVE-4978
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.11.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
Priority: Minor
 Attachments: HIVE-4978-1.patch, HIVE-4978-2.patch


 We are not closing the PrintWriter after writing data into it. I haven't seen 
 any problems so far, but it is good to close the PrintWriter so that 
 resources are released properly.
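 For illustration (not the WebHCat code itself), a try-with-resources sketch 
 that guarantees the writer is closed:
 {code}
 import java.io.FileNotFoundException;
 import java.io.PrintWriter;

 // Illustrative only: try-with-resources closes (and flushes) the
 // PrintWriter even if println throws, so the file handle is always
 // released.
 public class WriterSketch {
   public static void main(String[] args) throws FileNotFoundException {
     try (PrintWriter out = new PrintWriter("out.txt")) {
       out.println("data");
     } // out.close() runs automatically here
   }
 }
 {code}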



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-4969) HCatalog HBaseHCatStorageHandler is not returning all the data

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti reassigned HIVE-4969:
-

Assignee: Venki Korukanti

 HCatalog HBaseHCatStorageHandler is not returning all the data
 --

 Key: HIVE-4969
 URL: https://issues.apache.org/jira/browse/HIVE-4969
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.11.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
Priority: Critical
 Attachments: HIVE-4969-1.patch


 Repro steps:
 1) Create an HCatalog table mapped to HBase table.
 hcat -e "CREATE TABLE studentHCat(rownum int, name string, age int, gpa float)
  STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'
  TBLPROPERTIES('hbase.table.name' = 'studentHBase',
  'hbase.columns.mapping' = ':key,onecf:name,twocf:age,threecf:gpa');"
 2) Load the following data from Pig.
 cat student_data
 1^Asarah laertes^A23^A2.40
 2^Atom allen^A72^A1.57
 3^Abob ovid^A61^A2.67
 4^Aethan nixon^A38^A2.15
 5^Acalvin robinson^A28^A2.53
 6^Airene ovid^A65^A2.56
 7^Ayuri garcia^A36^A1.65
 8^Acalvin nixon^A41^A1.04
 9^Ajessica davidson^A48^A2.11
 10^Akatie king^A39^A1.05
 grunt> A = LOAD 'student_data' AS 
 (rownum:int,name:chararray,age:int,gpa:float);
 grunt> STORE A INTO 'studentHCat' USING org.apache.hcatalog.pig.HCatStorer();
 3) Now from HBase do a scan on the studentHBase table
 hbase(main):026:0> scan 'studentHBase', {LIMIT => 5}
 4) From pig access the data in table
 grunt> A = LOAD 'studentHCat' USING org.apache.hcatalog.pig.HCatLoader();
 grunt> STORE A INTO '/user/root/studentPig';
 5) Verify the output written in StudentPig
 hadoop fs -cat /user/root/studentPig/part-r-0
 1  23
 2  72
 3  61
 4  38
 5  28
 6  65
 7  36
 8  41
 9  48
 10 39
 The data returned has only two fields (rownum and age).
 Problem:
 While reading data from the HBase table, HbaseSnapshotRecordReader gets each 
 data row as a Result (org.apache.hadoop.hbase.client.Result) object and 
 processes the KeyValue fields in it. After processing, it creates another 
 Result object out of the processed KeyValue array. The problem is that this 
 KeyValue array is not sorted, while Result expects the input KeyValue array 
 to contain sorted elements. When we call Result.getValue(), it returns no 
 value for some of the fields because it does a binary search on the 
 un-ordered array.
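 A sketch of one possible fix, assuming the HBase-0.94-era client API in use 
 at the time (KeyValue.COMPARATOR and the Result(KeyValue[]) constructor):
 {code}
 import java.util.Arrays;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.client.Result;

 // Sketch only: sort the processed KeyValues before wrapping them in a
 // Result, since Result.getValue() binary-searches the backing array and
 // silently misses entries when the array is unordered.
 class SortedResultSketch {
   static Result toResult(KeyValue[] processed) {
     Arrays.sort(processed, KeyValue.COMPARATOR);
     return new Result(processed);
   }
 }
 {code}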



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806432#comment-13806432
 ] 

Hudson commented on HIVE-4974:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #216 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/216/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


 JDBC2 statements and result sets are not able to return their parents
 -

 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4974.2-trunk.patch.txt, 
 HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
 HIVE-4974-trunk.patch.txt


 The getConnection methods of HiveStatement and HivePreparedStatement throw a 
 "not supported" SQLException. The constructors should take the HiveConnection 
 that creates them as an argument.
 Similarly, HiveBaseResultSet is not capable of returning the Statement that 
 created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806431#comment-13806431
 ] 

Hudson commented on HIVE-3976:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #216 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/216/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out

[jira] [Updated] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-5663:
--

Labels: ORC  (was: )

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC

 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).
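 For illustration only (not the patch itself), the kind of assumption being 
 removed: ByteBuffer.array() works only for heap-backed buffers, so a direct 
 buffer must go through the get() path instead:
 {code}
 import java.nio.ByteBuffer;

 // Sketch only: copy a buffer's remaining bytes without assuming a
 // backing array. Heap buffers expose array()/arrayOffset(); direct
 // buffers do not, so we fall back to a relative get() on a duplicate,
 // leaving the original buffer's position untouched.
 class BufferCopySketch {
   static byte[] toBytes(ByteBuffer buf) {
     byte[] out = new byte[buf.remaining()];
     if (buf.hasArray()) {
       System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(),
           out, 0, out.length);
     } else {
       buf.duplicate().get(out);
     }
     return out;
   }
 }
 {code}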



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-27 Thread Gopal V (JIRA)
Gopal V created HIVE-5663:
-

 Summary: Refactor ORC RecordReader to operate on direct & wrapped 
ByteBuffers
 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V


The current ORC RecordReader implementation assumes array structures backing 
the ByteBuffers it passes around between RecordReaderImpl and 
Compressed/Uncompressed InStream objects.

This patch attempts to refactor those assumptions out of both classes, allowing 
the future use of direct byte buffers within ORC (as might come from HDFS 
zero-copy readers).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-5663:
--

Attachment: HIVE-5663.01.patch

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC
 Attachments: HIVE-5663.01.patch


 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5663) Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers

2013-10-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806457#comment-13806457
 ] 

Brock Noland commented on HIVE-5663:


Minor nit:

Lines like this:
{noformat}
private ByteBuffer compressed = null;
{noformat}

should be:
{noformat}
private ByteBuffer compressed;
{noformat}

since uninitialized fields are automatically set to null.

 Refactor ORC RecordReader to operate on direct & wrapped ByteBuffers
 

 Key: HIVE-5663
 URL: https://issues.apache.org/jira/browse/HIVE-5663
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.13.0
 Environment: Ubuntu LXC 
Reporter: Gopal V
Assignee: Gopal V
  Labels: ORC
 Attachments: HIVE-5663.01.patch


 The current ORC RecordReader implementation assumes array structures backing 
 the ByteBuffers it passes around between RecordReaderImpl and 
 Compressed/Uncompressed InStream objects.
 This patch attempts to refactor those assumptions out of both classes, 
 allowing the future use of direct byte buffers within ORC (as might come from 
 HDFS zero-copy readers).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5631) Index creation on a skew table fails

2013-10-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806458#comment-13806458
 ] 

Hive QA commented on HIVE-5631:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610478/HIVE-5631.3.patch.txt

{color:green}SUCCESS:{color} +1 4505 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1262/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1262/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Index creation on a skew table fails
 

 Key: HIVE-5631
 URL: https://issues.apache.org/jira/browse/HIVE-5631
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0

 Attachments: HIVE-5631.1.patch.txt, HIVE-5631.2.patch.txt, 
 HIVE-5631.3.patch.txt


 REPRO STEPS:
 create database skewtest;
 use skewtest;
 create table skew (id bigint, acct string) skewed by (acct) on ('CC','CH');
 create index skew_indx on table skew (id) as 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 The last DDL fails with the following error.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 InvalidObjectException(message:Invalid skew column [acct])
 When creating a table, Hive has sanity tests to make sure the columns have 
 proper names and the skewed columns are a subset of the table columns. Here 
 we fail because the index table has skewed column info. The index table's 
 skewed columns include {acct} and its columns are {id, _bucketname, 
 _offsets}. As the skewed column {acct} is not part of the table columns, 
 Hive throws the exception.
 The reason the index table got skewed column info even though its definition 
 has no such info is: when creating the index table, a deep copy of the base 
 table's StorageDescriptor (SD) (in this case 'skew') is made. In that copied 
 SD, index-specific parameters are set and unrelated parameters are reset. 
 Skewed column info is not reset (a few other params are not reset either). 
 That's why the index table contains the skewed column info.
 Fix: Instead of deep copying the base table's StorageDescriptor, create a 
 new one from the gathered info. This way the index table avoids inheriting 
 unnecessary SD properties from the base table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5657) TopN produces incorrect results with count(distinct)

2013-10-27 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806508#comment-13806508
 ] 

Navis commented on HIVE-5657:
-

Yes, top-n is not working with distincts. The test included in limit_pushdown.q 
was not good enough to show the flaw. I'll check this later.

 TopN produces incorrect results with count(distinct)
 

 Key: HIVE-5657
 URL: https://issues.apache.org/jira/browse/HIVE-5657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Priority: Critical
 Attachments: example.patch


 The attached patch illustrates the problem.
 The limit_pushdown test has various other cases of aggregations and 
 distincts, incl. count-distinct, that work correctly (that said, the src 
 dataset is bad for testing these things because every count, for example, 
 produces only one record), so something must be special about this.
 I am not very familiar with the distinct-handling code and these nuances; if 
 someone knows a quick fix feel free to take this, otherwise I will probably 
 start looking next week. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-5657) TopN produces incorrect results with count(distinct)

2013-10-27 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis reassigned HIVE-5657:
---

Assignee: Navis

 TopN produces incorrect results with count(distinct)
 

 Key: HIVE-5657
 URL: https://issues.apache.org/jira/browse/HIVE-5657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Navis
Priority: Critical
 Attachments: example.patch


 The attached patch illustrates the problem.
 The limit_pushdown test has various other cases of aggregations and 
 distincts, incl. count-distinct, that work correctly (that said, the src 
 dataset is bad for testing these things because every count, for example, 
 produces only one record), so something must be special about this.
 I am not very familiar with the distinct-handling code and these nuances; if 
 someone knows a quick fix feel free to take this, otherwise I will probably 
 start looking next week. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5355:
--

Attachment: HIVE-5355.1.patch

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5355:
--

Status: Patch Available  (was: Open)

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806526#comment-13806526
 ] 

Xuefu Zhang commented on HIVE-5355:
---

Patch #1 rebased with latest trunk, and also addressed Brock's concern above.

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806531#comment-13806531
 ] 

Brock Noland commented on HIVE-5355:


I might be misreading, but it looks like the files were added to the wrong 
location. 

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806535#comment-13806535
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2424 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2424/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out
* 

[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806536#comment-13806536
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2424 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2424/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Dome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


 JDBC2 statements and result sets are not able to return their parents
 -

 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4974.2-trunk.patch.txt, 
 HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
 HIVE-4974-trunk.patch.txt


 The getConnection methods of HiveStatement and HivePreparedStatement throw a 
 "not supported" SQLException. The constructors should take the HiveConnection 
 that creates them as an argument.
 Similarly, HiveBaseResultSet is not capable of returning the Statement that 
 created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5664) Drop cascade database fails when there any tables with indexes

2013-10-27 Thread Venki Korukanti (JIRA)
Venki Korukanti created HIVE-5664:
-

 Summary: Drop cascade database fails when there any tables with 
indexes
 Key: HIVE-5664
 URL: https://issues.apache.org/jira/browse/HIVE-5664
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore
Affects Versions: 0.12.0, 0.11.0, 0.10.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0



{code}
CREATE DATABASE db2; 
USE db2; 
CREATE TABLE tab1 (id int, name string); 
CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN TABLE 
tab1_indx; 
DROP DATABASE db2 CASCADE;
{code}

Last DDL fails with the following error:
{code}
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2

Hive.log has following exception
2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
at 
org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
at com.sun.proxy.$Proxy7.get_table(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
at 
org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
... 18 more

{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2013-10-27 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti updated HIVE-5664:
--

Summary: Drop cascade database fails when the db has any tables with 
indexes  (was: Drop cascade database fails when there any tables with indexes)

 Drop cascade database fails when the db has any tables with indexes
 ---

 Key: HIVE-5664
 URL: https://issues.apache.org/jira/browse/HIVE-5664
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0


 {code}
 CREATE DATABASE db2; 
 USE db2; 
 CREATE TABLE tab1 (id int, name string); 
 CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
 TABLE tab1_indx; 
 DROP DATABASE db2 CASCADE;
 {code}
 Last DDL fails with the following error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
 Hive.log has following exception
 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
 org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy7.get_table(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
 at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
 ... 18 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2013-10-27 Thread Venki Korukanti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806541#comment-13806541
 ] 

Venki Korukanti commented on HIVE-5664:
---

What is happening here is:
1) HiveMetaStoreClient.dropDatabase() gets the list of table names in the given 
db. This list includes both the normal tables and the index tables.
2) dropTable is called for each table. When deleting a table, all its indexes 
are also deleted. If an index is backed by a table, that table is also deleted, 
but the list obtained in step 1 is not updated and assumes that the remaining 
tables in the list are still in the database. So when the next request comes to 
delete the index table, we get the table-not-found exception.

Proposed fix: Instead of getting table names, get the Table objects. Call 
dropTable for each table only if the table is not an index table and it still 
exists.
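A minimal sketch of that approach (the helper methods are hypothetical 
stand-ins for the real HiveMetaStoreClient calls):
{code}
import java.util.List;
import org.apache.hadoop.hive.metastore.TableType;
import org.apache.hadoop.hive.metastore.api.Table;

// Sketch only: iterate over Table objects instead of names, skip index
// tables (they are dropped along with their base table), and tolerate
// tables that have already disappeared.
abstract class DropCascadeSketch {
  // Hypothetical stand-ins for the real metastore client calls.
  abstract boolean tableStillExists(Table t) throws Exception;
  abstract void dropTable(String db, String name) throws Exception;

  void dropAllTables(List<Table> tables) throws Exception {
    for (Table t : tables) {
      if (TableType.INDEX_TABLE.toString().equals(t.getTableType())) {
        continue; // dropped automatically with its base table
      }
      if (tableStillExists(t)) {
        dropTable(t.getDbName(), t.getTableName());
      }
    }
  }
}
{code}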

 Drop cascade database fails when the db has any tables with indexes
 ---

 Key: HIVE-5664
 URL: https://issues.apache.org/jira/browse/HIVE-5664
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.13.0


 {code}
 CREATE DATABASE db2; 
 USE db2; 
 CREATE TABLE tab1 (id int, name string); 
 CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
 TABLE tab1_indx; 
 DROP DATABASE db2 CASCADE;
 {code}
 The last DDL fails with the following error:
 {code}
 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
 Hive.log has the following exception:
 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
 at org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy7.get_table(Unknown Source)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
 at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
 

[jira] [Updated] (HIVE-5581) Implement vectorized year/month/day... etc. for string arguments

2013-10-27 Thread Teddy Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-5581:
-

Status: Patch Available  (was: Open)

 Implement vectorized year/month/day... etc. for string arguments
 

 Key: HIVE-5581
 URL: https://issues.apache.org/jira/browse/HIVE-5581
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Eric Hanson
Assignee: Teddy Choi
 Attachments: HIVE-5581.1.patch.txt


 Functions year(), month(), day(), weekofyear(), hour(), minute(), second() 
 need to be implemented for string arguments in vectorized mode. 
 They already work for timestamp arguments.
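
 As a rough sketch of the shape such a kernel takes (not the attached patch; 
 it assumes "yyyy-MM-dd"-style input and skips the null and selected-vector 
 handling that a real VectorExpression must do):
 {code}
 import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
 import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;

 public class VectorYearStringSketch {
   static void yearFromString(BytesColumnVector in, LongColumnVector out, int n) {
     for (int i = 0; i < n; i++) {
       byte[] bytes = in.vector[i];
       int start = in.start[i];
       long year = 0;
       for (int j = 0; j < 4; j++) {      // parse the leading "yyyy" digits
         year = year * 10 + (bytes[start + j] - '0');
       }
       out.vector[i] = year;              // no per-row String allocation
     }
   }
 }
 {code}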



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806560#comment-13806560
 ] 

Lefty Leverenz commented on HIVE-5599:
--

This needs to be documented in the wiki, with a version note, when Hive 0.13 is 
released.  Or it could be documented now as an upcoming change with a link to 
this JIRA.

In Getting Started, the third sentence of the [Error 
Logs|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs] 
section says: "The default logging level is WARN and the logs are stored 
in ..."

If you want this documented now, I can take care of it.  If later, do we have a 
doc-in-next-release label?  I've seen one somewhere, but don't know if it's 
being checked at release time.

 Change default logging level to INFO
 

 Key: HIVE-5599
 URL: https://issues.apache.org/jira/browse/HIVE-5599
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5599.patch


 The default logging level is WARN:
 https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
 but Hive logs lots of good information at INFO level. Additionally, most 
 Hadoop projects log at INFO by default. Let's change the default logging 
 level to INFO.
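
 The change itself should be a one-liner in hive-log4j.properties (hedged 
 sketch; assuming the property at the linked line is hive.root.logger):
 {noformat}
 # before
 hive.root.logger=WARN,DRFA
 # after
 hive.root.logger=INFO,DRFA
 {noformat}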



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Attachment: HIVE-5354.1.patch

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Attachment: (was: HIVE-5354.1.patch)

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3777) add a property in the partition to figure out if stats are accurate

2013-10-27 Thread Dilip Joseph (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806561#comment-13806561
 ] 

Dilip Joseph commented on HIVE-3777:


[~ashutoshc] : I am not working on it.  Please take over.


 add a property in the partition to figure out if stats are accurate
 ---

 Key: HIVE-3777
 URL: https://issues.apache.org/jira/browse/HIVE-3777
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dilip Joseph

 Currently, the stats task tries to update the statistics in the 
 table/partition being updated after the table/partition is loaded. In case 
 of a failure to update these stats (for any reason), the operation either 
 succeeds (writing inaccurate stats) or fails, depending on whether 
 hive.stats.reliable is set to true. This can be bad for applications that 
 do not always care about reliable stats, since the query may have taken a 
 long time to execute and then fail at the very end.
 Another property should be added to the partition: areStatsAccurate. If 
 hive.stats.reliable is set to false, and stats could not be computed 
 correctly, the operation would still succeed, update the stats, but set 
 areStatsAccurate to false.
 If the application cares about accurate stats, they can be obtained in the 
 background.
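
 A hedged sketch of how a consumer might check the proposed flag (the 
 property name areStatsAccurate is the one proposed above; Partition is the 
 Thrift metastore object):
 {code}
 java.util.Map<String, String> params = partition.getParameters();
 String flag = (params == null) ? null : params.get("areStatsAccurate");
 if (!Boolean.parseBoolean(flag)) {
   // stats may be stale or wrong; recompute in the background if needed
 }
 {code}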



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-3777) add a property in the partition to figure out if stats are accurate

2013-10-27 Thread Dilip Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilip Joseph updated HIVE-3777:
---

Assignee: Ashutosh Chauhan  (was: Dilip Joseph)

 add a property in the partition to figure out if stats are accurate
 ---

 Key: HIVE-3777
 URL: https://issues.apache.org/jira/browse/HIVE-3777
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Ashutosh Chauhan

 Currently, the stats task tries to update the statistics in the 
 table/partition being updated after the table/partition is loaded. In case 
 of a failure to update these stats (for any reason), the operation either 
 succeeds (writing inaccurate stats) or fails, depending on whether 
 hive.stats.reliable is set to true. This can be bad for applications that 
 do not always care about reliable stats, since the query may have taken a 
 long time to execute and then fail at the very end.
 Another property should be added to the partition: areStatsAccurate. If 
 hive.stats.reliable is set to false, and stats could not be computed 
 correctly, the operation would still succeed, update the stats, but set 
 areStatsAccurate to false.
 If the application cares about accurate stats, they can be obtained in the 
 background.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Status: Patch Available  (was: Open)

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.1.patch, HIVE-5354.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5354) Decimal precision/scale support in ORC file

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5354:
--

Attachment: HIVE-5354.1.patch

Patch #1 rebased against trunk.

 Decimal precision/scale support in ORC file
 ---

 Key: HIVE-5354
 URL: https://issues.apache.org/jira/browse/HIVE-5354
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5354.1.patch, HIVE-5354.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 14985: HIVE-5354: Decimal precision/scale support in ORC file

2013-10-27 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14985/
---

Review request for hive and Brock Noland.


Bugs: HIVE-5354
https://issues.apache.org/jira/browse/HIVE-5354


Repository: hive-git


Description
---

Support decimal precision/scale for ORC file, as part of HIVE-3976.


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java c993b37 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java 71484a3 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 7519fc1 
  ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto 53b93a0 

Diff: https://reviews.apache.org/r/14985/diff/


Testing
---


Thanks,

Xuefu Zhang



[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806565#comment-13806565
 ] 

Thejas M Nair commented on HIVE-5599:
-

Thanks for pointing that out, Lefty!
Let's document it now and note that this will be part of the next Hive release 
(0.13). Yes, if you can take care of it, that would be great!


 Change default logging level to INFO
 

 Key: HIVE-5599
 URL: https://issues.apache.org/jira/browse/HIVE-5599
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
 Fix For: 0.13.0

 Attachments: HIVE-5599.patch


 The default logging level is WARN:
 https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
 but Hive logs lots of good information at INFO level. Additionally, most 
 Hadoop projects log at INFO by default. Let's change the default logging 
 level to INFO.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806567#comment-13806567
 ] 

Xuefu Zhang commented on HIVE-5355:
---

Okay. Let me take a new look. I may need to put a return in the catch clause. 
Cancelling the patch.
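
For reference, a hedged sketch of the end state through plain java.sql APIs 
once the driver reports the qualified type (the connection URL is just an 
example):
{code}
import java.sql.*;

public class DecimalMetadataSketch {
  public static void main(String[] args) throws SQLException {
    Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
    ResultSet rs = conn.createStatement()
        .executeQuery("SELECT CAST(1.23 AS DECIMAL(10,2)) AS d");
    ResultSetMetaData md = rs.getMetaData();
    // With precision/scale support these should report 10 and 2
    // instead of a fixed default.
    System.out.println(md.getPrecision(1) + "," + md.getScale(1));
  }
}
{code}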

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5355:
--

Status: Open  (was: Patch Available)

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5355) JDBC support for decimal precision/scale

2013-10-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806571#comment-13806571
 ] 

Hive QA commented on HIVE-5355:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12610501/HIVE-5355.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 4504 tests executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcDriver2.testErrorDiag0
org.apache.hive.jdbc.TestJdbcDriver2.testErrorDiag1
org.apache.hive.jdbc.TestJdbcDriver2.testErrorDiag2
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1263/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1263/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

 JDBC support for decimal precision/scale
 

 Key: HIVE-5355
 URL: https://issues.apache.org/jira/browse/HIVE-5355
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5355.1.patch, HIVE-5355.patch


 A subtask of HIVE-3976.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5657) TopN produces incorrect results with count(distinct)

2013-10-27 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5657:


Attachment: HIVE-5657.1.patch.txt

Attaching a patch, which is not yet complete. I will return to this tomorrow 
or later.

 TopN produces incorrect results with count(distinct)
 

 Key: HIVE-5657
 URL: https://issues.apache.org/jira/browse/HIVE-5657
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Navis
Priority: Critical
 Attachments: example.patch, HIVE-5657.1.patch.txt


 The attached patch illustrates the problem.
 The limit_pushdown test has various other cases of aggregations and 
 distincts, incl. count-distinct, that work correctly (that said, the src 
 dataset is bad for testing these things because every count, for example, 
 produces only one record), so something must be special about this case.
 I am not very familiar with the distinct code and these nuances; if someone 
 knows a quick fix, feel free to take this; otherwise I will probably start 
 looking next week. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806578#comment-13806578
 ] 

Hudson commented on HIVE-3976:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #524 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/524/])
HIVE-3976 - Support specifying scale and precision with Hive decimal type 
(Xuefu Zhang via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536151)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* /hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveDecimal.java
* /hive/trunk/data/files/kv9.txt
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericOpMethodResolver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDecimal.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestHiveDecimalParse.java
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_join.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_predicate_pushdown.q
* /hive/trunk/ql/src/test/queries/clientpositive/ptf_decimal.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_pmod.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_double.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_float.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_string.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_expressions.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_multipartitioning.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_navfn.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_ntile.q
* /hive/trunk/ql/src/test/queries/clientpositive/windowing_rank.q
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_cast_from_binary_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/wrong_column_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out
* 

[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806579#comment-13806579
 ] 

Hudson commented on HIVE-4974:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #524 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/524/])
HIVE-4974 - JDBC2 statements and result sets are not able to return their 
parents (Chris Drome via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1536153)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


 JDBC2 statements and result sets are not able to return their parents
 -

 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4974.2-trunk.patch.txt, 
 HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
 HIVE-4974-trunk.patch.txt


 The getConnection methods of HiveStatement and HivePreparedStatement throw 
 a not-supported SQLException. The constructors should take the 
 HiveConnection that creates them as an argument.
 Similarly, HiveBaseResultSet is not capable of returning the Statement that 
 created it.
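
 A hedged sketch of the fix described above (simplified; the real classes 
 implement the full java.sql interfaces):
 {code}
 class HiveStatementSketch {
   private final java.sql.Connection parent;

   HiveStatementSketch(java.sql.Connection parent) {
     this.parent = parent;  // keep a handle to the creating connection
   }

   public java.sql.Connection getConnection() {
     return parent;  // was: throw a not-supported SQLException
   }
 }
 {code}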



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5665) Update PMC status for navis

2013-10-27 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5665:
---

 Summary: Update PMC status for navis
 Key: HIVE-5665
 URL: https://issues.apache.org/jira/browse/HIVE-5665
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair


NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5665) Update PMC status for navis

2013-10-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5665:


Attachment: HIVE-5665.1.patch

 Update PMC status for navis
 ---

 Key: HIVE-5665
 URL: https://issues.apache.org/jira/browse/HIVE-5665
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5665.1.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5665) Update PMC status for navis

2013-10-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5665:


Status: Patch Available  (was: Open)

cc [~cwsteinbach]

 Update PMC status for navis
 ---

 Key: HIVE-5665
 URL: https://issues.apache.org/jira/browse/HIVE-5665
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5665.1.patch


 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


  1   2   >