[jira] [Commented] (HIVE-4961) Create bridge for custom UDFs to operate in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-4961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767701#comment-13767701 ]

Hive QA commented on HIVE-4961:
-------------------------------

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12603172/HIVE-4961.2-vectorization.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 3955 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.io.orc.TestFileDump.testDump
org.apache.hadoop.hive.ql.io.orc.TestFileDump.testDictionaryThreshold
org.apache.hive.hcatalog.pig.TestOrcHCatStorer.testStoreTableMulti
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_plan_json
org.apache.hadoop.hive.ql.exec.vector.util.TestUDF.initializationError
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/749/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/749/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

Create bridge for custom UDFs to operate in vectorized mode
-----------------------------------------------------------

Key: HIVE-4961
URL: https://issues.apache.org/jira/browse/HIVE-4961
Project: Hive
Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Eric Hanson
Assignee: Eric Hanson
Fix For: vectorization-branch
Attachments: HIVE-4961.1-vectorization.patch, HIVE-4961.2-vectorization.patch, vectorUDF.4.patch, vectorUDF.5.patch, vectorUDF.8.patch, vectorUDF.9.patch

Suppose you have a custom UDF myUDF() that you've created to extend Hive. The goal of this JIRA is to create a facility where, if you run a query that uses myUDF() in an expression, the query will run in vectorized mode. This would be a general-purpose bridge for custom UDFs that users add to Hive, and it would work with existing UDFs. I'm considering a separate JIRA for a new kind of custom UDF implementation that is vectorized from the beginning, to optimize performance; that is not covered by this JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
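The row-to-vector bridge idea described in the issue can be sketched in plain Java. This is an illustrative toy, not Hive's actual vectorization API: the RowUdf interface and evaluateBatch method below are hypothetical stand-ins for a row-mode custom UDF and for the bridge that applies it across a whole column batch.

```java
// Minimal sketch of a "bridge" that lets a row-at-a-time UDF run over a
// column vector (batch) of values. All names here are illustrative; Hive's
// real vectorized execution uses VectorizedRowBatch and column vectors.
public class VectorUdfBridge {

    // Stand-in for a row-mode custom UDF: one input value, one output value.
    interface RowUdf {
        long evaluate(long input);
    }

    // The bridge: apply the row-mode UDF to every row of the batch.
    static long[] evaluateBatch(RowUdf udf, long[] columnVector, int batchSize) {
        long[] out = new long[batchSize];
        for (int i = 0; i < batchSize; i++) {
            out[i] = udf.evaluate(columnVector[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        RowUdf myUdf = x -> x * 2 + 1;  // pretend this is the custom myUDF()
        long[] batch = {1, 2, 3, 4};
        long[] result = evaluateBatch(myUdf, batch, batch.length);
        System.out.println(java.util.Arrays.toString(result)); // [3, 5, 7, 9]
    }
}
```

The point of the separate follow-up JIRA mentioned above is that this per-row loop still pays the row-mode invocation cost per value; a UDF written against the batch interface directly could avoid that.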
[jira] [Commented] (HIVE-4844) Add varchar data type
[ https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767724#comment-13767724 ]

Hive QA commented on HIVE-4844:
-------------------------------

{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12603206/HIVE-4844.19.patch

{color:green}SUCCESS:{color} +1 3124 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/751/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/751/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

Add varchar data type
---------------------

Key: HIVE-4844
URL: https://issues.apache.org/jira/browse/HIVE-4844
Project: Hive
Issue Type: New Feature
Components: Types
Reporter: Jason Dere
Assignee: Jason Dere
Attachments: HIVE-4844.10.patch, HIVE-4844.11.patch, HIVE-4844.12.patch, HIVE-4844.13.patch, HIVE-4844.14.patch, HIVE-4844.15.patch, HIVE-4844.16.patch, HIVE-4844.17.patch, HIVE-4844.18.patch, HIVE-4844.19.patch, HIVE-4844.1.patch.hack, HIVE-4844.2.patch, HIVE-4844.3.patch, HIVE-4844.4.patch, HIVE-4844.5.patch, HIVE-4844.6.patch, HIVE-4844.7.patch, HIVE-4844.8.patch, HIVE-4844.9.patch, HIVE-4844.D12699.1.patch, HIVE-4844.D12891.1.patch, screenshot.png

Add a new varchar data type with support for more SQL-compliant behavior, such as SQL string comparison semantics, a maximum length, etc. The char type will be added as another task.
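Two of the varchar behaviors the issue mentions — a declared maximum length and SQL-style string comparison — can be sketched in plain Java. This is not Hive's implementation; in particular, whether trailing spaces are significant in comparisons is a semantics choice (SQL PAD SPACE vs. NO PAD), and the trailing-space-insensitive version shown here is an assumption for illustration only.

```java
// Illustrative sketch of varchar semantics, not Hive's actual code.
public class VarcharDemo {

    // Truncate a value that exceeds the declared maximum length, e.g. varchar(5).
    static String enforceMaxLength(String value, int maxLength) {
        return value.length() <= maxLength ? value : value.substring(0, maxLength);
    }

    // Assumed PAD SPACE-style comparison: trailing spaces are not significant.
    static int compareSql(String a, String b) {
        return stripTrailingSpaces(a).compareTo(stripTrailingSpaces(b));
    }

    private static String stripTrailingSpaces(String s) {
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') {
            end--;
        }
        return s.substring(0, end);
    }

    public static void main(String[] args) {
        System.out.println(enforceMaxLength("hello world", 5)); // hello
        System.out.println(compareSql("abc  ", "abc") == 0);    // true
    }
}
```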
[jira] [Commented] (HIVE-4446) [HCatalog] Documentation for HIVE-4442, HIVE-4443, HIVE-4444
[ https://issues.apache.org/jira/browse/HIVE-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767725#comment-13767725 ]

Lefty Leverenz commented on HIVE-4446:
--------------------------------------

The patch applies to the old xml docs, but the changes belong in the new wikidocs ([https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference]).

[HCatalog] Documentation for HIVE-4442, HIVE-4443, HIVE-4444
------------------------------------------------------------

Key: HIVE-4446
URL: https://issues.apache.org/jira/browse/HIVE-4446
Project: Hive
Issue Type: Improvement
Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
Fix For: 0.12.0
Attachments: HIVE-4446-1.patch
[jira] [Commented] (HIVE-5282) Some tests don't use ${system:test.dfs.mkdir} for mkdir
[ https://issues.apache.org/jira/browse/HIVE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767738#comment-13767738 ]

Hudson commented on HIVE-5282:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5282 : Some tests don't use ${system:test.dfs.mkdir} for mkdir (Brock Noland via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1522741)
* /hive/trunk/ql/src/test/queries/clientpositive/load_hdfs_file_with_space_in_the_name.q
* /hive/trunk/ql/src/test/queries/clientpositive/schemeAuthority2.q

Some tests don't use ${system:test.dfs.mkdir} for mkdir
-------------------------------------------------------

Key: HIVE-5282
URL: https://issues.apache.org/jira/browse/HIVE-5282
Project: Hive
Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
Fix For: 0.13.0
Attachments: HIVE-5282.patch
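For context, a sketch of the pattern the fix adopts inside the .q test files, using Hive's `dfs` command. The target path is hypothetical, and the assumption is that the test harness substitutes `${system:test.dfs.mkdir}` with the mkdir flags appropriate for the Hadoop version under test (e.g. `-mkdir -p` where parent directories must be created) instead of a hard-coded `-mkdir`:

{code}
-- Sketch: create a scratch directory portably across Hadoop versions.
-- ${system:test.dfs.mkdir} expands to the version-appropriate mkdir flags.
dfs ${system:test.dfs.mkdir} /tmp/some_test_dir;
{code}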
[jira] [Commented] (HIVE-4171) Current database in metastore.Hive is not consistent with SessionState
[ https://issues.apache.org/jira/browse/HIVE-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767740#comment-13767740 ]

Hudson commented on HIVE-4171:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-4171 : Current database in metastore.Hive is not consistent with SessionState (Thejas Nair via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523100)
* /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
* /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliSessionState.java
* /hive/trunk/cli/src/test/org/apache/hadoop/hive/cli/TestCliSessionState.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestMacroSemanticAnalyzer.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/session
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/session/TestSessionState.java

Current database in metastore.Hive is not consistent with SessionState
----------------------------------------------------------------------

Key: HIVE-4171
URL: https://issues.apache.org/jira/browse/HIVE-4171
Project: Hive
Issue Type: Bug
Components: CLI
Reporter: Navis
Assignee: Thejas M Nair
Labels: HiveServer2
Fix For: 0.12.0
Attachments: HIVE-4171.3.patch, HIVE-4171.4.patch, HIVE-4171.5.patch, HIVE-4171.6.patch, HIVE-4171.D9399.1.patch, HIVE-4171.D9399.2.patch

metastore.Hive is a thread-local instance, which can have a different status than SessionState. Currently the only status in metastore.Hive is the database name in use.
[jira] [Commented] (HIVE-5206) Support parameterized primitive types
[ https://issues.apache.org/jira/browse/HIVE-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767739#comment-13767739 ]

Hudson commented on HIVE-5206:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5206 : Support parameterized primitive types (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1522715)
* /hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/SettableUDF.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyPrimitiveObjectInspectorFactory.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorUtils.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/PrimitiveObjectInspector.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/AbstractPrimitiveObjectInspector.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/BaseTypeParams.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/ParameterizedPrimitiveTypeUtils.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/PrimitiveTypeInfo.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/PrimitiveTypeSpec.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoUtils.java

Support parameterized primitive types
-------------------------------------

Key: HIVE-5206
URL: https://issues.apache.org/jira/browse/HIVE-5206
Project: Hive
Issue Type: Improvement
Components: Types
Reporter: Jason Dere
Assignee: Jason Dere
Fix For: 0.13.0
Attachments: HIVE-5206.1.patch, HIVE-5206.2.patch, HIVE-5206.3.patch, HIVE-5206.4.patch, HIVE-5206.D12693.1.patch

Support for parameterized types is needed for char/varchar/decimal support. This adds a type parameters value to the PrimitiveTypeEntry/PrimitiveTypeInfo/PrimitiveObjectInspector objects.

NO PRECOMMIT TESTS - dependent on HIVE-5203/HIVE-5204
[jira] [Commented] (HIVE-5079) Make Hive compile under Windows
[ https://issues.apache.org/jira/browse/HIVE-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767745#comment-13767745 ]

Hudson commented on HIVE-5079:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5079 : Make Hive compile under Windows (Daniel Dai via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523118)
* /hive/trunk/common/src/scripts/saveVersion.sh

Make Hive compile under Windows
-------------------------------

Key: HIVE-5079
URL: https://issues.apache.org/jira/browse/HIVE-5079
Project: Hive
Issue Type: Bug
Components: Build Infrastructure
Reporter: Daniel Dai
Assignee: Daniel Dai
Fix For: 0.13.0
Attachments: HIVE-5079-1.patch, HIVE-5079-2.patch

Hive compilation failed under Windows. Error message:
{code}
compile:
     [echo] Project: common
     [exec] D:\Program Files (x86)\GnuWin32\bin\xargs.exe: md5sum: No such file or directory
     [exec] md5sum: ../serde/src/java/org/apache/hadoop/hive/serde2/io/Timesta: No such file or directory
    [javac] Compiling 25 source files to D:\Users\Administrator\hive\build\common\classes
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:4: unclosed string literal
    [javac] @HiveVersionAnnotation(version=0.12.0-SNAPSHOT, revision=80eadd8fa2af5eeba61f921318ab8b2c19980ab3, branch=trunk
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:5: unclosed string literal
    [javac] ,
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:6: class, interface, or enum expected
    [javac] user=Administrator
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:6: unclosed string literal
    [javac] user=Administrator
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:10: unclosed string literal
    [javac] ,
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:11: unclosed string literal
    [javac] srcChecksum=aadceb95c37a1704aaf19501f46f6e84
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:12: unclosed string literal
    [javac] )
    [javac] ^
    [javac] 7 errors
{code}
[jira] [Commented] (HIVE-5290) Some HCatalog tests have been behaving flaky
[ https://issues.apache.org/jira/browse/HIVE-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767741#comment-13767741 ]

Hudson commented on HIVE-5290:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5290 : Some HCatalog tests have been behaving flaky (Brock Noland via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523178)
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatContext.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatContext.java
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatMapRedUtil.java
* /hive/trunk/hcatalog/core/src/test/java/org/apache/hcatalog/cli/TestPermsGrp.java
* /hive/trunk/hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestSequenceFileReadWrite.java
* /hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java
* /hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestSequenceFileReadWrite.java
* /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hcatalog/pig/TestHCatLoader.java
* /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hcatalog/pig/TestHCatLoaderComplexSchema.java
* /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestE2EScenarios.java
* /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java
* /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderComplexSchema.java
* /hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* /hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* /hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java

Some HCatalog tests have been behaving flaky
--------------------------------------------

Key: HIVE-5290
URL: https://issues.apache.org/jira/browse/HIVE-5290
Project: Hive
Issue Type: Test
Affects Versions: 0.13.0
Reporter: Brock Noland
Assignee: Brock Noland
Fix For: 0.13.0
Attachments: HIVE-5290.patch, HIVE-5290.patch, HIVE-5290.patch
[jira] [Commented] (HIVE-5289) PTest2 should disable checking of libraries during batch exec
[ https://issues.apache.org/jira/browse/HIVE-5289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767746#comment-13767746 ]

Hudson commented on HIVE-5289:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5289 - PTest2 should disable checking of libraries during batch exec (Brock Noland) (brock: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523088)
* /hive/trunk/testutils/ptest2/src/main/resources/batch-exec.vm
* /hive/trunk/testutils/ptest2/src/test/java/org/apache/hive/ptest/execution/TestScripts.testBatch.approved.txt

PTest2 should disable checking of libraries during batch exec
-------------------------------------------------------------

Key: HIVE-5289
URL: https://issues.apache.org/jira/browse/HIVE-5289
Project: Hive
Issue Type: Test
Reporter: Brock Noland
Assignee: Brock Noland
Fix For: 0.13.0
Attachments: HIVE-5289.patch

PTest2 has two phases: 1) Build from source 2) Exec in parallel
During phase two we don't want ivy making HTTP requests.

NO PRECOMMIT TESTS
[jira] [Commented] (HIVE-5127) Upgrade xerces and xalan for WebHCat
[ https://issues.apache.org/jira/browse/HIVE-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767743#comment-13767743 ]

Hudson commented on HIVE-5127:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5127: Upgrade xerces and xalan for WebHCat (Eugene Koifman via Thejas Nair) (thejas: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523134)
* /hive/trunk/hcatalog/webhcat/svr/pom.xml

Upgrade xerces and xalan for WebHCat
------------------------------------

Key: HIVE-5127
URL: https://issues.apache.org/jira/browse/HIVE-5127
Project: Hive
Issue Type: Bug
Components: WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Fix For: 0.12.0
Attachments: HIVE-5127.patch

Currently webhcat log files are full of exceptions like the one below, which obscures the real output and may cause perf issues. Upgrading to more recent versions of xerces/xalan fixes this. Add the following to hive/hcatalog/webhcat/svr/pom.xml:

{code}
<dependency>
  <groupId>xerces</groupId>
  <artifactId>xercesImpl</artifactId>
  <version>2.9.1</version>
</dependency>
<dependency>
  <groupId>xalan</groupId>
  <artifactId>xalan</artifactId>
  <version>2.7.1</version>
</dependency>
{code}

{noformat}
13/08/20 16:54:04 ERROR conf.Configuration: Failed to set setXIncludeAware(true) for parser org.apache.xerces.jaxp.DocumentBuilderFactoryImpl@48dbb335:java.lang.UnsupportedOperationException: This parser does not support specification null version null
java.lang.UnsupportedOperationException: This parser does not support specification null version null
	at javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:590)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1892)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1861)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1778)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:870)
	at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:171)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:305)
	at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:288)
	at org.apache.hadoop.util.GenericOptionsParser.validateFiles(GenericOptionsParser.java:383)
	at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:281)
	at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:422)
	at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:168)
	at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:151)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at org.apache.hcatalog.templeton.LauncherDelegator$1.run(LauncherDelegator.java:99)
	at org.apache.hcatalog.templeton.LauncherDelegator$1.run(LauncherDelegator.java:95)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
	at org.apache.hcatalog.templeton.LauncherDelegator.queueAsUser(LauncherDelegator.java:95)
	at org.apache.hcatalog.templeton.LauncherDelegator.enqueueController(LauncherDelegator.java:77)
	at org.apache.hcatalog.templeton.JarDelegator.run(JarDelegator.java:52)
	at org.apache.hcatalog.templeton.StreamingDelegator.run(StreamingDelegator.java:53)
	at org.apache.hcatalog.templeton.Server.mapReduceStreaming(Server.java:596)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
{noformat}
[jira] [Commented] (HIVE-5084) Fix newline.q on Windows
[ https://issues.apache.org/jira/browse/HIVE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767744#comment-13767744 ]

Hudson commented on HIVE-5084:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #97 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/97/])
HIVE-5084 : Fix newline.q on Windows (Daniel Dai via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523322)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out

Fix newline.q on Windows
------------------------

Key: HIVE-5084
URL: https://issues.apache.org/jira/browse/HIVE-5084
Project: Hive
Issue Type: Bug
Components: Tests, Windows
Reporter: Daniel Dai
Assignee: Daniel Dai
Fix For: 0.13.0
Attachments: HIVE-5084-1.patch

Test failed with a vague error message:
{noformat}
[junit] Error during job, obtaining debugging information...
[junit] junit.framework.AssertionFailedError: Client Execution failed with error code = 2
{noformat}

hive.log doesn't show anything interesting either:
{noformat}
2013-08-14 00:47:29,411 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:readResponse(723)) - Got ping response for sessionid: 0x1407a49fc1e0003 after 1ms
2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Execution failed with exit status: 2
2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Obtaining error information
2013-08-14 00:47:31,392 ERROR exec.Task (SessionState.java:printError(416)) - Task failed!
Task ID: Stage-1
Logs:
{noformat}
[jira] [Commented] (HIVE-5282) Some tests don't use ${system:test.dfs.mkdir} for mkdir
[ https://issues.apache.org/jira/browse/HIVE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767747#comment-13767747 ]

Hudson commented on HIVE-5282:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/])
HIVE-5282 : Some tests don't use ${system:test.dfs.mkdir} for mkdir (Brock Noland via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1522741)
* /hive/trunk/ql/src/test/queries/clientpositive/load_hdfs_file_with_space_in_the_name.q
* /hive/trunk/ql/src/test/queries/clientpositive/schemeAuthority2.q

Some tests don't use ${system:test.dfs.mkdir} for mkdir
-------------------------------------------------------

Key: HIVE-5282
URL: https://issues.apache.org/jira/browse/HIVE-5282
Project: Hive
Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
Fix For: 0.13.0
Attachments: HIVE-5282.patch
[jira] [Commented] (HIVE-5289) PTest2 should disable checking of libraries during batch exec
[ https://issues.apache.org/jira/browse/HIVE-5289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767755#comment-13767755 ]

Hudson commented on HIVE-5289:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/])
HIVE-5289 - PTest2 should disable checking of libraries during batch exec (Brock Noland) (brock: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523088)
* /hive/trunk/testutils/ptest2/src/main/resources/batch-exec.vm
* /hive/trunk/testutils/ptest2/src/test/java/org/apache/hive/ptest/execution/TestScripts.testBatch.approved.txt

PTest2 should disable checking of libraries during batch exec
-------------------------------------------------------------

Key: HIVE-5289
URL: https://issues.apache.org/jira/browse/HIVE-5289
Project: Hive
Issue Type: Test
Reporter: Brock Noland
Assignee: Brock Noland
Fix For: 0.13.0
Attachments: HIVE-5289.patch

PTest2 has two phases: 1) Build from source 2) Exec in parallel
During phase two we don't want ivy making HTTP requests.

NO PRECOMMIT TESTS
[jira] [Commented] (HIVE-5084) Fix newline.q on Windows
[ https://issues.apache.org/jira/browse/HIVE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767753#comment-13767753 ]

Hudson commented on HIVE-5084:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/])
HIVE-5084 : Fix newline.q on Windows (Daniel Dai via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523322)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out

Fix newline.q on Windows
------------------------

Key: HIVE-5084
URL: https://issues.apache.org/jira/browse/HIVE-5084
Project: Hive
Issue Type: Bug
Components: Tests, Windows
Reporter: Daniel Dai
Assignee: Daniel Dai
Fix For: 0.13.0
Attachments: HIVE-5084-1.patch

Test failed with a vague error message:
{noformat}
[junit] Error during job, obtaining debugging information...
[junit] junit.framework.AssertionFailedError: Client Execution failed with error code = 2
{noformat}

hive.log doesn't show anything interesting either:
{noformat}
2013-08-14 00:47:29,411 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:readResponse(723)) - Got ping response for sessionid: 0x1407a49fc1e0003 after 1ms
2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Execution failed with exit status: 2
2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Obtaining error information
2013-08-14 00:47:31,392 ERROR exec.Task (SessionState.java:printError(416)) - Task failed!
Task ID: Stage-1
Logs:
{noformat}
[jira] [Commented] (HIVE-5079) Make Hive compile under Windows
[ https://issues.apache.org/jira/browse/HIVE-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767754#comment-13767754 ]

Hudson commented on HIVE-5079:
------------------------------

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/])
HIVE-5079 : Make Hive compile under Windows (Daniel Dai via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523118)
* /hive/trunk/common/src/scripts/saveVersion.sh

Make Hive compile under Windows
-------------------------------

Key: HIVE-5079
URL: https://issues.apache.org/jira/browse/HIVE-5079
Project: Hive
Issue Type: Bug
Components: Build Infrastructure
Reporter: Daniel Dai
Assignee: Daniel Dai
Fix For: 0.13.0
Attachments: HIVE-5079-1.patch, HIVE-5079-2.patch

Hive compilation failed under Windows. Error message:
{code}
compile:
     [echo] Project: common
     [exec] D:\Program Files (x86)\GnuWin32\bin\xargs.exe: md5sum: No such file or directory
     [exec] md5sum: ../serde/src/java/org/apache/hadoop/hive/serde2/io/Timesta: No such file or directory
    [javac] Compiling 25 source files to D:\Users\Administrator\hive\build\common\classes
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:4: unclosed string literal
    [javac] @HiveVersionAnnotation(version=0.12.0-SNAPSHOT, revision=80eadd8fa2af5eeba61f921318ab8b2c19980ab3, branch=trunk
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:5: unclosed string literal
    [javac] ,
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:6: class, interface, or enum expected
    [javac] user=Administrator
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:6: unclosed string literal
    [javac] user=Administrator
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:10: unclosed string literal
    [javac] ,
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:11: unclosed string literal
    [javac] srcChecksum=aadceb95c37a1704aaf19501f46f6e84
    [javac] ^
    [javac] D:\Users\Administrator\hive\common\src\gen\org\apache\hive\common\package-info.java:12: unclosed string literal
    [javac] )
    [javac] ^
    [javac] 7 errors
{code}
[jira] [Commented] (HIVE-5241) Default log4j log level for WebHCat should be INFO not DEBUG
[ https://issues.apache.org/jira/browse/HIVE-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767751#comment-13767751 ] Hudson commented on HIVE-5241: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/]) HIVE-5241: Default log4j log level for WebHCat should be INFO not DEBUG (Eugene Koifman via Thejas Nair) (thejas: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523130) * /hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-log4j.properties * /hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java * /hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/CompleteDelegator.java * /hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Server.java * /hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java Default log4j log level for WebHCat should be INFO not DEBUG Key: HIVE-5241 URL: https://issues.apache.org/jira/browse/HIVE-5241 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.11.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Fix For: 0.12.0 Attachments: HIVE-5241.patch, webhcat-logj4.properties -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5127) Upgrade xerces and xalan for WebHCat
[ https://issues.apache.org/jira/browse/HIVE-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767752#comment-13767752 ] Hudson commented on HIVE-5127: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/]) HIVE-5127: Upgrade xerces and xalan for WebHCat (Eugene Koifman via Thejas Nair) (thejas: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523134) * /hive/trunk/hcatalog/webhcat/svr/pom.xml Upgrade xerces and xalan for WebHCat Key: HIVE-5127 URL: https://issues.apache.org/jira/browse/HIVE-5127 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Fix For: 0.12.0 Attachments: HIVE-5127.patch Currently webhcat log files are full of exceptions like this, which obscure the real output and may cause perf issues. Upgrading to more recent versions of xerces/xalan fixes this. Add the following to hive/hcatalog/webhcat/svr/pom.xml: <dependency> <groupId>xerces</groupId> <artifactId>xercesImpl</artifactId> <version>2.9.1</version> </dependency> <dependency> <groupId>xalan</groupId> <artifactId>xalan</artifactId> <version>2.7.1</version> </dependency> 13/08/20 16:54:04 ERROR conf.Configuration: Failed to set setXIncludeAware(true) for parser org.apache.xerces.jaxp.DocumentBuilderFactoryImpl@48dbb335:java.lang.UnsupportedOperationException: This parser does not support specification null version null java.lang.UnsupportedOperationException: This parser does not support specification null version null at javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:590) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1892) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1861) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1778) at org.apache.hadoop.conf.Configuration.get(Configuration.java:870) at
org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:171) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:305) at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:288) at org.apache.hadoop.util.GenericOptionsParser.validateFiles(GenericOptionsParser.java:383) at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:281) at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:422) at org.apache.hadoop.util.GenericOptionsParser.init(GenericOptionsParser.java:168) at org.apache.hadoop.util.GenericOptionsParser.init(GenericOptionsParser.java:151) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) at org.apache.hcatalog.templeton.LauncherDelegator$1.run(LauncherDelegator.java:99) at org.apache.hcatalog.templeton.LauncherDelegator$1.run(LauncherDelegator.java:95) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441) at org.apache.hcatalog.templeton.LauncherDelegator.queueAsUser(LauncherDelegator.java:95) at org.apache.hcatalog.templeton.LauncherDelegator.enqueueController(LauncherDelegator.java:77) at org.apache.hcatalog.templeton.JarDelegator.run(JarDelegator.java:52) at org.apache.hcatalog.templeton.StreamingDelegator.run(StreamingDelegator.java:53) at org.apache.hcatalog.templeton.Server.mapReduceStreaming(Server.java:596) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
[jira] [Commented] (HIVE-4171) Current database in metastore.Hive is not consistent with SessionState
[ https://issues.apache.org/jira/browse/HIVE-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767749#comment-13767749 ] Hudson commented on HIVE-4171: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/]) HIVE-4171 : Current database in metastore.Hive is not consistent with SessionState (Thejas Nair via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523100) * /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java * /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/CliSessionState.java * /hive/trunk/cli/src/test/org/apache/hadoop/hive/cli/TestCliSessionState.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java * 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/history/TestHiveHistory.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/parse/TestMacroSemanticAnalyzer.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/session * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/session/TestSessionState.java Current database in metastore.Hive is not consistent with SessionState -- Key: HIVE-4171 URL: https://issues.apache.org/jira/browse/HIVE-4171 Project: Hive Issue Type: Bug Components: CLI Reporter: Navis Assignee: Thejas M Nair Labels: HiveServer2 Fix For: 0.12.0 Attachments: HIVE-4171.3.patch, HIVE-4171.4.patch, HIVE-4171.5.patch, HIVE-4171.6.patch, HIVE-4171.D9399.1.patch, HIVE-4171.D9399.2.patch metastore.Hive is thread local instance, which can have different status with SessionState. Currently the only status in metastore.Hive is database name in use. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5290) Some HCatalog tests have been behaving flaky
[ https://issues.apache.org/jira/browse/HIVE-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767750#comment-13767750 ] Hudson commented on HIVE-5290: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #164 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/164/]) HIVE-5290 : Some HCatalog tests have been behaving flaky (Brock Noland via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523178) * /hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatContext.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatContext.java * /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatMapRedUtil.java * /hive/trunk/hcatalog/core/src/test/java/org/apache/hcatalog/cli/TestPermsGrp.java * /hive/trunk/hcatalog/core/src/test/java/org/apache/hcatalog/mapreduce/TestSequenceFileReadWrite.java * /hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java * /hive/trunk/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestSequenceFileReadWrite.java * /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hcatalog/pig/TestHCatLoader.java * /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hcatalog/pig/TestHCatLoaderComplexSchema.java * /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestE2EScenarios.java * /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java * /hive/trunk/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderComplexSchema.java * /hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java * /hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java * /hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java * 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java Some HCatalog tests have been behaving flaky Key: HIVE-5290 URL: https://issues.apache.org/jira/browse/HIVE-5290 Project: Hive Issue Type: Test Affects Versions: 0.13.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 0.13.0 Attachments: HIVE-5290.patch, HIVE-5290.patch, HIVE-5290.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-4844) Add varchar data type
[ https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-4844: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Jason for this useful addition in Hive! Add varchar data type - Key: HIVE-4844 URL: https://issues.apache.org/jira/browse/HIVE-4844 Project: Hive Issue Type: New Feature Components: Types Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: HIVE-4844.10.patch, HIVE-4844.11.patch, HIVE-4844.12.patch, HIVE-4844.13.patch, HIVE-4844.14.patch, HIVE-4844.15.patch, HIVE-4844.16.patch, HIVE-4844.17.patch, HIVE-4844.18.patch, HIVE-4844.19.patch, HIVE-4844.1.patch.hack, HIVE-4844.2.patch, HIVE-4844.3.patch, HIVE-4844.4.patch, HIVE-4844.5.patch, HIVE-4844.6.patch, HIVE-4844.7.patch, HIVE-4844.8.patch, HIVE-4844.9.patch, HIVE-4844.D12699.1.patch, HIVE-4844.D12891.1.patch, screenshot.png Add new varchar data types which have support for more SQL-compliant behavior, such as SQL string comparison semantics, max length, etc. Char type will be added as another task. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767860#comment-13767860 ] Ashutosh Chauhan commented on HIVE-5278: +1 Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. NO PRECOMMIT TESTS - dependent on HIVE-4844 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
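The length propagation described above can be sketched outside Hive. A GenericUDF sees the argument type metadata at initialization time, so it can derive a precise result type; a plain UDF cannot. The sum-of-lengths rule for concat used here is an illustrative assumption, not a statement of Hive's exact typing rules.

```python
# Sketch: the kind of type-metadata computation a GenericUDF can perform.
# Given the argument varchar types, derive the result type for concat.
# The sum-of-lengths rule is an assumption for illustration.

def concat_result_type(arg_types):
    # arg_types: e.g. ["varchar(5)", "varchar(7)"]
    lengths = [int(t[len("varchar("):-1]) for t in arg_types]
    return f"varchar({sum(lengths)})"

print(concat_result_type(["varchar(5)", "varchar(7)"]))  # -> varchar(12)
```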
[jira] [Updated] (HIVE-5221) Issue in column type with data type as BINARY
[ https://issues.apache.org/jira/browse/HIVE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-5221: --- Status: Open (was: Patch Available) Thinking more about this. I think the better approach here is to never do encoding or decoding in serdes. Further, on second thought even serde properties don't look like a good idea. I think serdes should never deal with encoding; they should just read raw bytes. If a user has data in some specific encoding which he wants to decode, he should be using a udf (which we already have in our software). Issue in column type with data type as BINARY Key: HIVE-5221 URL: https://issues.apache.org/jira/browse/HIVE-5221 Project: Hive Issue Type: Bug Reporter: Arun Vasu Assignee: Mohammad Kamrul Islam Priority: Critical Attachments: HIVE-5221.1.patch Hi, I am using Hive 10. When I create an external table with a column type of BINARY, the query result on the table shows some junk values for the column with the binary datatype. Please find below the query I have used to create the table: CREATE EXTERNAL TABLE BOOL1(NB BOOLEAN,email STRING, bitfld BINARY) ROW FORMAT DELIMITED FIELDS TERMINATED BY '^' LINES TERMINATED BY '\n' STORED AS TEXTFILE LOCATION '/user/hivetables/testbinary'; The query I have used is: select * from bool1 The sample data in the hdfs file is: 0^a...@abc.com^001 1^a...@abc.com^010 ^a...@abc.com^011 ^a...@abc.com^100 t^a...@abc.com^101 f^a...@abc.com^110 true^a...@abc.com^111 false^a...@abc.com^001 123^^01100010 12344^^0111 Please share your inputs if it is possible. Thanks, Arun -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
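The separation proposed in the comment above (SerDes hand back raw bytes; any character decoding is an explicit, user-requested step) can be sketched as follows. The function names are hypothetical; in Hive the second step corresponds to a UDF such as decode().

```python
# Sketch of the recommended split, under the stated assumption:
# - the SerDe layer does no decoding, it returns raw bytes unchanged;
# - decoding happens only when the user explicitly asks for it (a UDF in Hive).
# Sample value taken from the report above; function names are hypothetical.

def serde_read(field: bytes) -> bytes:
    # SerDe layer: just the raw bytes as stored, no charset assumptions
    return field

def decode_udf(raw: bytes, charset: str) -> str:
    # UDF layer: explicit decoding with a user-chosen charset
    return raw.decode(charset)

raw = serde_read(b"01100010")
print(decode_udf(raw, "us-ascii"))  # -> 01100010
```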
[jira] [Commented] (HIVE-3585) Integrate Trevni as another columnar oriented file format
[ https://issues.apache.org/jira/browse/HIVE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767876#comment-13767876 ] Edward Capriolo commented on HIVE-3585: --- [~yhuai] We have no way of judging who is using what. Maybe more people would be using Trevni if we (committers) had focused on getting the hive support committed. There are 34 watchers, and avro is here to stay. There seems to be a bit of politics around what is part of hive and what is not. Hive has support for other columnar input formats, so who are we to say what should or should not be in hive? Integrate Trevni as another columnar oriented file format - Key: HIVE-3585 URL: https://issues.apache.org/jira/browse/HIVE-3585 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.10.0 Reporter: alex gemini Assignee: Mark Wagner Priority: Minor Attachments: futurama_episodes.avro, HIVE-3585.1.patch.txt add new avro module trevni as another columnar format. A new columnar format needs a columnar SerDe; fastutil seems a good choice. The shark project uses the fastutil library as its columnar serde library, but it seems too large (almost 15m) for just a few primitive array collections. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5161) Additional SerDe support for varchar type
[ https://issues.apache.org/jira/browse/HIVE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767877#comment-13767877 ] Ashutosh Chauhan commented on HIVE-5161: +1 Additional SerDe support for varchar type - Key: HIVE-5161 URL: https://issues.apache.org/jira/browse/HIVE-5161 Project: Hive Issue Type: Bug Components: Serializers/Deserializers, Types Reporter: Jason Dere Assignee: Jason Dere Attachments: D12897.1.patch, HIVE-5161.1.patch, HIVE-5161.2.patch, HIVE-5161.3.patch Breaking out support for varchar for the various SerDes as an additional task. NO PRECOMMIT TESTS - can't run tests until HIVE-4844 is committed -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-5294) Create collect UDF and make evaluator reusable
Edward Capriolo created HIVE-5294: - Summary: Create collect UDF and make evaluator reusable Key: HIVE-5294 URL: https://issues.apache.org/jira/browse/HIVE-5294 Project: Hive Issue Type: New Feature Reporter: Edward Capriolo Assignee: Edward Capriolo -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5294) Create collect UDF and make evaluator reusable
[ https://issues.apache.org/jira/browse/HIVE-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Edward Capriolo updated HIVE-5294: -- Attachment: HIVE-5294.patch.txt Create collect UDF and make evaluator reusable -- Key: HIVE-5294 URL: https://issues.apache.org/jira/browse/HIVE-5294 Project: Hive Issue Type: New Feature Reporter: Edward Capriolo Assignee: Edward Capriolo Attachments: HIVE-5294.patch.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5294) Create collect UDF and make evaluator reusable
[ https://issues.apache.org/jira/browse/HIVE-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Edward Capriolo updated HIVE-5294: -- Status: Patch Available (was: Open) Create collect UDF and make evaluator reusable -- Key: HIVE-5294 URL: https://issues.apache.org/jira/browse/HIVE-5294 Project: Hive Issue Type: New Feature Reporter: Edward Capriolo Assignee: Edward Capriolo Attachments: HIVE-5294.patch.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-3585) Integrate Trevni as another columnar oriented file format
[ https://issues.apache.org/jira/browse/HIVE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767892#comment-13767892 ] Mark Wagner commented on HIVE-3585: --- As Yin said, Parquet has mostly taken the mindshare from Trevni. Looking at the Avro jira, it does seem that there are a few users of Trevni. [~appodictic], if you'd like to include this for them, then that's fine. The only change to the Avro Serde was a bit of refactoring, so it shouldn't be any burden on the main Avro Serde. That said, I think it'd be good for HIVE-4732 and HIVE-4734 to go in first. Both of those should be ready to commit shortly and will require a bit more rebasing of this patch. Also, the Avro version should get bumped to 1.7.5. Integrate Trevni as another columnar oriented file format - Key: HIVE-3585 URL: https://issues.apache.org/jira/browse/HIVE-3585 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.10.0 Reporter: alex gemini Assignee: Mark Wagner Priority: Minor Attachments: futurama_episodes.avro, HIVE-3585.1.patch.txt add new avro module trevni as another columnar format. A new columnar format needs a columnar SerDe; fastutil seems a good choice. The shark project uses the fastutil library as its columnar serde library, but it seems too large (almost 15m) for just a few primitive array collections. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-4943) An explode function that includes the item's position in the array
[ https://issues.apache.org/jira/browse/HIVE-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767894#comment-13767894 ] Michael Haeusler commented on HIVE-4943: An awesome feature of Hive is the rich type system with excellent support for complex data structures. To me, this ticket seems like a very useful extension to the hive built-ins. It is especially helpful for those users that use complex data structures. Right now, queries are often cumbersome when you access denormalized or nested data. E.g., let's consider a table that contains products together with their most popular accessories (cross-sellings). The order of the cross-selling products matters: {code:javascript} { "productId": 42, "name": "most awesome mp3 player", "manufacturer": "acme corp", "accessories": [ { "productId": 23, "name": "batteries", "manufacturer": "acme corp" }, { "productId": 25, "name": "extra loud earphones", "manufacturer": "noisemakers inc" } ] } {code} Let's assume we want to know the average position in cross-sellings of the manufacturer noisemakers inc. Surprisingly, this is not possible with hive built-ins. You could try to come up with a custom UDFSequence and a query like this: {code:sql} SELECT AVG(SEQUENCE(p.productId)) AS wrongAverage FROM products p LATERAL VIEW EXPLODE(p.accessories) pa AS accessory WHERE pa.accessory.manufacturer = 'noisemakers inc'; {code} Unfortunately, the above query will give us wrong results, because Hive executes the predicate in the where clause first. Therefore, any UDF in the select clause has no chance to see and count all values.
Using the UDTF from this ticket seems to be the best solution: {code:sql} SELECT AVG(pa.pos) AS correctAverage FROM products p LATERAL VIEW POSEXPLODE(p.accessories) pa AS pos, accessory WHERE pa.accessory.manufacturer = 'noisemakers inc'; {code} An explode function that includes the item's position in the array -- Key: HIVE-4943 URL: https://issues.apache.org/jira/browse/HIVE-4943 Project: Hive Issue Type: New Feature Components: Query Processor Affects Versions: 0.11.0 Reporter: Niko Stahl Labels: patch Fix For: 0.11.0 Attachments: HIVE-4943.1.patch, HIVE-4943.2.patch Original Estimate: 8h Remaining Estimate: 8h A function that explodes an array and includes an output column with the position of each item in the original array. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
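For readers unfamiliar with POSEXPLODE, the query in the comment above can be emulated outside Hive: exploding with positions behaves like Python's enumerate(). The record below reuses the sample product from the JSON example; this is an illustration of the semantics, not Hive's implementation.

```python
# Emulate POSEXPLODE + AVG(pos) from the query above on the sample record.
# Field names follow the JSON example in the comment.

product = {
    "productId": 42,
    "accessories": [
        {"productId": 23, "manufacturer": "acme corp"},
        {"productId": 25, "manufacturer": "noisemakers inc"},
    ],
}

# POSEXPLODE yields (pos, element) pairs, like enumerate()
exploded = list(enumerate(product["accessories"]))

# WHERE pa.accessory.manufacturer = 'noisemakers inc', then AVG(pos)
positions = [pos for pos, acc in exploded
             if acc["manufacturer"] == "noisemakers inc"]
avg_pos = sum(positions) / len(positions)
print(avg_pos)  # -> 1.0 (the matching accessory sits at 0-based position 1)
```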
[jira] [Commented] (HIVE-3585) Integrate Trevni as another columnar oriented file format
[ https://issues.apache.org/jira/browse/HIVE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767899#comment-13767899 ] Edward Capriolo commented on HIVE-3585: --- {quote} Care for the updated patch w/o applicable .out files now? That way once I get an issue filed and fixed for just that last part of local-only avro+partitioned support someone just has to get test outputs? {quote} I would rather just fix it as part of this issue. We do not have to create issues just for the sake of creating issues. Unless you want to. Integrate Trevni as another columnar oriented file format - Key: HIVE-3585 URL: https://issues.apache.org/jira/browse/HIVE-3585 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.10.0 Reporter: alex gemini Assignee: Mark Wagner Priority: Minor Attachments: futurama_episodes.avro, HIVE-3585.1.patch.txt add new avro module trevni as another columnar format. A new columnar format needs a columnar SerDe; fastutil seems a good choice. The shark project uses the fastutil library as its columnar serde library, but it seems too large (almost 15m) for just a few primitive array collections. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5084) Fix newline.q on Windows
[ https://issues.apache.org/jira/browse/HIVE-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767916#comment-13767916 ] Hudson commented on HIVE-5084: -- FAILURE: Integrated in Hive-trunk-h0.21 #2333 (See [https://builds.apache.org/job/Hive-trunk-h0.21/2333/]) HIVE-5084 : Fix newline.q on Windows (Daniel Dai via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523322) * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java * /hive/trunk/ql/src/test/queries/clientpositive/newline.q * /hive/trunk/ql/src/test/results/clientpositive/newline.q.out Fix newline.q on Windows Key: HIVE-5084 URL: https://issues.apache.org/jira/browse/HIVE-5084 Project: Hive Issue Type: Bug Components: Tests, Windows Reporter: Daniel Dai Assignee: Daniel Dai Fix For: 0.13.0 Attachments: HIVE-5084-1.patch Test failed with a vague error message: [junit] Error during job, obtaining debugging information... [junit] junit.framework.AssertionFailedError: Client Execution failed with error code = 2 hive.log doesn't show anything interesting either: 2013-08-14 00:47:29,411 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:readResponse(723)) - Got ping response for sessionid: 0x1407a49fc1e0003 after 1ms 2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Execution failed with exit status: 2 2013-08-14 00:47:31,391 ERROR exec.Task (SessionState.java:printError(416)) - Obtaining error information 2013-08-14 00:47:31,392 ERROR exec.Task (SessionState.java:printError(416)) - Task failed! Task ID: Stage-1 Logs: -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-4844) Add varchar data type
[ https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767919#comment-13767919 ] Hudson commented on HIVE-4844: -- FAILURE: Integrated in Hive-trunk-hadoop2 #431 (See [https://builds.apache.org/job/Hive-trunk-hadoop2/431/]) HIVE-4844 : Add varchar data type (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523463) * /hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveBaseChar.java * /hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveVarchar.java * /hive/trunk/common/src/test/org/apache/hadoop/hive/common/type * /hive/trunk/common/src/test/org/apache/hadoop/hive/common/type/TestHiveVarchar.java * /hive/trunk/data/files/datatypes.txt * /hive/trunk/data/files/vc1.txt * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/GenericUDFEncode.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToString.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBaseCompare.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcatWS.java * 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFReflect2.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFStringToMap.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDate.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUtils.java * /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestFunctionRegistry.java * /hive/trunk/ql/src/test/queries/clientnegative/invalid_varchar_length_1.q * /hive/trunk/ql/src/test/queries/clientnegative/invalid_varchar_length_2.q * /hive/trunk/ql/src/test/queries/clientnegative/invalid_varchar_length_3.q * /hive/trunk/ql/src/test/queries/clientpositive/alter_varchar1.q * /hive/trunk/ql/src/test/queries/clientpositive/ctas_varchar.q * /hive/trunk/ql/src/test/queries/clientpositive/partition_varchar1.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_1.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_2.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_cast.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_comparison.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_join1.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_nested_types.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_udf1.q * /hive/trunk/ql/src/test/queries/clientpositive/varchar_union1.q * /hive/trunk/ql/src/test/results/clientnegative/invalid_varchar_length_1.q.out * /hive/trunk/ql/src/test/results/clientnegative/invalid_varchar_length_2.q.out * /hive/trunk/ql/src/test/results/clientnegative/invalid_varchar_length_3.q.out * /hive/trunk/ql/src/test/results/clientpositive/alter_varchar1.q.out * /hive/trunk/ql/src/test/results/clientpositive/ctas_varchar.q.out * /hive/trunk/ql/src/test/results/clientpositive/partition_varchar1.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_1.q.out * 
/hive/trunk/ql/src/test/results/clientpositive/varchar_2.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_cast.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_comparison.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_join1.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_nested_types.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_udf1.q.out * /hive/trunk/ql/src/test/results/clientpositive/varchar_union1.q.out * /hive/trunk/serde/if/serde.thrift * /hive/trunk/serde/src/gen/thrift/gen-cpp/serde_constants.cpp * /hive/trunk/serde/src/gen/thrift/gen-cpp/serde_constants.h * /hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java * /hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java * /hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java *
Re: Export version of Wiki broken
I added a comment. Hopefully ASF INFRA will fix this soon. Thanks. Carl On Sat, Sep 14, 2013 at 2:58 AM, Lars Francke lars.fran...@gmail.com wrote: Hi, just a reminder that it'd be great if a committer could quickly comment on that issue so INFRA has some confidence that Lefty and I are not making those claims up. Thanks, Lars On Mon, Sep 9, 2013 at 2:41 PM, Lars Francke lars.fran...@gmail.com wrote: Hi, I did: https://issues.apache.org/jira/browse/INFRA-6736 As I'm not a committer it'd be great if one of you could comment on that issue to verify that I'm not making stuff up :) Thanks, Lars On Tue, Sep 3, 2013 at 3:19 AM, Thejas Nair the...@hortonworks.com wrote: Lars, Thanks for bringing this up! Can you please create an INFRA ticket for this? The Google search results often lead to the broken page versions of the doc. Thanks, Thejas On Mon, Sep 2, 2013 at 12:27 AM, Lars Francke lars.fran...@gmail.com wrote: Hi, does anyone know why the Auto export version[1] of the Confluence wiki exists? Most of the links as well as the styles seem broken to me. Not a big deal in itself; it's just that Google seems to give preference to that version, so it appears in all search results. Is there any way for us to modify that page, disable the export, or at least prevent Google from indexing it? I'm happy to take it up with @infra too if those are the guys that can help. Cheers, Lars [1] https://cwiki.apache.org/Hive/languagemanual.html
[jira] [Commented] (HIVE-5294) Create collect UDF and make evaluator reusable
[ https://issues.apache.org/jira/browse/HIVE-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767935#comment-13767935 ] Hive QA commented on HIVE-5294: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12603247/HIVE-5294.patch.txt {color:green}SUCCESS:{color} +1 3124 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/752/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/752/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. Create collect UDF and make evaluator reusable -- Key: HIVE-5294 URL: https://issues.apache.org/jira/browse/HIVE-5294 Project: Hive Issue Type: New Feature Reporter: Edward Capriolo Assignee: Edward Capriolo Attachments: HIVE-5294.patch.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-4844) Add varchar data type
[ https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767936#comment-13767936 ] Hudson commented on HIVE-4844: -- FAILURE: Integrated in Hive-trunk-hadoop2-ptest #98 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/98/]) HIVE-4844 : Add varchar data type (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523463)
[jira] [Commented] (HIVE-4844) Add varchar data type
[ https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767937#comment-13767937 ] Hudson commented on HIVE-4844: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #165 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/165/]) HIVE-4844 : Add varchar data type (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523463)
[jira] [Updated] (HIVE-5283) Merge vectorization branch to trunk
[ https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-5283: - Status: Open (was: Patch Available) Merge vectorization branch to trunk --- Key: HIVE-5283 URL: https://issues.apache.org/jira/browse/HIVE-5283 Project: Hive Issue Type: Bug Reporter: Jitendra Nath Pandey Assignee: Jitendra Nath Pandey Attachments: HIVE-5283.1.patch, HIVE-5283.2.patch The purpose of this jira is to upload vectorization patch, run tests etc. The actual work will continue under HIVE-4160 umbrella jira.
[jira] [Commented] (HIVE-4998) support jdbc documented table types in default configuration
[ https://issues.apache.org/jira/browse/HIVE-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767944#comment-13767944 ] Thejas M Nair commented on HIVE-4998: - testNegativeCliDriver_script_broken_pipe1 is a known flaky test, and this patch is an unrelated change in hive-server2 code. support jdbc documented table types in default configuration Key: HIVE-4998 URL: https://issues.apache.org/jira/browse/HIVE-4998 Project: Hive Issue Type: Bug Components: HiveServer2, JDBC Affects Versions: 0.11.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-4998.1.patch The jdbc table types supported by hive server2 are not the documented typical types [1] in jdbc; they are hive specific types (MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW). HIVE-4573 added support for the jdbc documented typical types, but the HS2 default configuration is to return the hive types. The default configuration should result in the expected jdbc typical behavior. [1] http://docs.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html?is-external=true#getTables(java.lang.String, java.lang.String, java.lang.String, java.lang.String[])
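To make the HIVE-4998 discussion concrete, the sketch below shows the kind of mapping involved: collapsing Hive's internal metastore table types into the "classic" types documented for java.sql.DatabaseMetaData.getTables. The class and method names are invented for illustration and are not Hive's actual implementation.

```java
// Illustrative sketch (names invented): map Hive metastore table types to the
// JDBC-documented table types, as a client would expect from
// DatabaseMetaData.getTables().
public class JdbcTableTypeMapper {
    /** Convert a Hive table type to its JDBC-documented equivalent. */
    public static String toJdbcType(String hiveType) {
        switch (hiveType) {
            case "MANAGED_TABLE":
            case "EXTERNAL_TABLE":
                return "TABLE";      // both are plain tables from JDBC's point of view
            case "VIRTUAL_VIEW":
                return "VIEW";
            default:
                return hiveType;     // pass through anything unrecognized unchanged
        }
    }
}
```

With this default behavior, generic JDBC tooling that filters on the documented `"TABLE"`/`"VIEW"` type strings would see Hive tables without needing Hive-specific handling.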
[jira] [Updated] (HIVE-5261) Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers
[ https://issues.apache.org/jira/browse/HIVE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5261: Fix Version/s: (was: 0.13.0) 0.12.0 Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers -- Key: HIVE-5261 URL: https://issues.apache.org/jira/browse/HIVE-5261 Project: Hive Issue Type: Sub-task Components: HBase Handler, HCatalog Affects Versions: 0.12.0 Reporter: Sushanth Sowmyan Assignee: Viraj Bhat Fix For: 0.12.0 Attachments: HIVE-5261.patch This is a task being created for the HCat side of HIVE-4331
[jira] [Commented] (HIVE-5261) Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers
[ https://issues.apache.org/jira/browse/HIVE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767947#comment-13767947 ] Thejas M Nair commented on HIVE-5261: - Patch committed to 0.12 branch. Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers -- Key: HIVE-5261 URL: https://issues.apache.org/jira/browse/HIVE-5261 Project: Hive Issue Type: Sub-task Components: HBase Handler, HCatalog Affects Versions: 0.12.0 Reporter: Sushanth Sowmyan Assignee: Viraj Bhat Fix For: 0.12.0 Attachments: HIVE-5261.patch This is a task being created for the HCat side of HIVE-4331
[jira] [Updated] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-5278: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Jason! Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. NO PRECOMMIT TESTS - dependent on HIVE-4844
[jira] [Updated] (HIVE-5161) Additional SerDe support for varchar type
[ https://issues.apache.org/jira/browse/HIVE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-5161: - Description: Breaking out support for varchar for the various SerDes as an additional task. was: Breaking out support for varchar for the various SerDes as an additional task. NO PRECOMMIT TESTS - can't run tests until HIVE-4844 is committed Additional SerDe support for varchar type - Key: HIVE-5161 URL: https://issues.apache.org/jira/browse/HIVE-5161 Project: Hive Issue Type: Bug Components: Serializers/Deserializers, Types Reporter: Jason Dere Assignee: Jason Dere Attachments: D12897.1.patch, HIVE-5161.1.patch, HIVE-5161.2.patch, HIVE-5161.3.patch Breaking out support for varchar for the various SerDes as an additional task.
[jira] [Updated] (HIVE-5209) JDBC support for varchar
[ https://issues.apache.org/jira/browse/HIVE-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-5209: - Description: Support returning varchar length in result set metadata was: Support returning varchar length in result set metadata NO PRECOMMIT TESTS - dependent on HIVE-4844 JDBC support for varchar Key: HIVE-5209 URL: https://issues.apache.org/jira/browse/HIVE-5209 Project: Hive Issue Type: Improvement Components: JDBC, Types Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-5209.1.patch, HIVE-5209.2.patch, HIVE-5209.D12705.1.patch Support returning varchar length in result set metadata
[jira] [Updated] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-5278: - Description: To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. was: To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. NO PRECOMMIT TESTS - dependent on HIVE-4844 Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper.
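The motivation in HIVE-5278 is that a GenericUDF can declare the output char/varchar length in its return type metadata, which a plain UDF cannot. The sketch below illustrates the length-propagation rules implied for the initial UDFs (concat, lower, upper); the class and method names are invented for illustration, and real implementations may additionally clamp to a maximum varchar length.

```java
// Illustrative sketch (names invented) of varchar result-length rules that a
// GenericUDF could declare in its return type metadata.
public class VarcharLengthRules {
    /** concat over varchar arguments: result length is the sum of input lengths. */
    public static int concatLength(int... argLengths) {
        int total = 0;
        for (int len : argLengths) {
            total += len;            // each argument contributes its full declared length
        }
        return total;
    }

    /** lower()/upper() do not change the declared varchar length. */
    public static int caseMapLength(int argLength) {
        return argLength;
    }
}
```

For example, concat over varchar(10) and varchar(20) columns could be declared as varchar(30), letting downstream operators and JDBC metadata carry an exact length instead of falling back to string.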
[jira] [Updated] (HIVE-5133) webhcat jobs that need to access metastore fails in secure mode
[ https://issues.apache.org/jira/browse/HIVE-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5133: Attachment: HIVE-5133.3.patch HIVE-5133.3.patch - patch with hcatloadstore.pig. Thanks Deepesh for pointing out that it was missing! webhcat jobs that need to access metastore fails in secure mode --- Key: HIVE-5133 URL: https://issues.apache.org/jira/browse/HIVE-5133 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.11.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5133.1.patch, HIVE-5133.1.test.patch, HIVE-5133.2.patch, HIVE-5133.3.patch Webhcat job submission requests result in the pig/hive/mr job being run from a map task that it launches. In secure mode, for the pig/hive/mr job that is run to be authorized to perform actions on metastore, it has to have the delegation tokens from the hive metastore. In case of pig/MR job this is needed if hcatalog is being used in the script/job.
[jira] [Updated] (HIVE-5133) webhcat jobs that need to access metastore fails in secure mode
[ https://issues.apache.org/jira/browse/HIVE-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5133: Attachment: (was: HIVE-5133.3.patch) webhcat jobs that need to access metastore fails in secure mode --- Key: HIVE-5133 URL: https://issues.apache.org/jira/browse/HIVE-5133 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.11.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5133.1.patch, HIVE-5133.1.test.patch, HIVE-5133.2.patch, HIVE-5133.3.patch Webhcat job submission requests result in the pig/hive/mr job being run from a map task that it launches. In secure mode, for the pig/hive/mr job that is run to be authorized to perform actions on metastore, it has to have the delegation tokens from the hive metastore. In case of pig/MR job this is needed if hcatalog is being used in the script/job.
[jira] [Updated] (HIVE-5133) webhcat jobs that need to access metastore fails in secure mode
[ https://issues.apache.org/jira/browse/HIVE-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5133: Attachment: HIVE-5133.3.patch webhcat jobs that need to access metastore fails in secure mode --- Key: HIVE-5133 URL: https://issues.apache.org/jira/browse/HIVE-5133 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.11.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5133.1.patch, HIVE-5133.1.test.patch, HIVE-5133.2.patch, HIVE-5133.3.patch Webhcat job submission requests result in the pig/hive/mr job being run from a map task that it launches. In secure mode, for the pig/hive/mr job that is run to be authorized to perform actions on metastore, it has to have the delegation tokens from the hive metastore. In case of pig/MR job this is needed if hcatalog is being used in the script/job.
Re: Review Request 14130: Merge vectorization branch to trunk
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14130/#review26119 --- .gitignore https://reviews.apache.org/r/14130/#comment50977 I think we should follow the established convention of checking these files in instead of generating them, since they serve as a useful canary for catching accidental changes to the ORC format. common/src/java/org/apache/hadoop/hive/conf/HiveConf.java https://reviews.apache.org/r/14130/#comment50978 Please add this to hive-default.xml.template along with a description. metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java https://reviews.apache.org/r/14130/#comment50979 Necessary? ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt https://reviews.apache.org/r/14130/#comment50980 We currently use Apache Velocity to generate test code at compile-time (e.g. TestCliDriver, ...). I realize that the templating code in CodeGen and TestCodeGen is pretty simple, but was wondering if it might be better from a build and maintenance standpoint to use Velocity instead. Also, is it possible to select a less generic file suffix for the template files, e.g. *.t or *.tmpl? ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt https://reviews.apache.org/r/14130/#comment50981 In addition to the name (and preferably path) of the template, I think this comment should also include the name and path of the code generator, and a warning that it should not be modified by hand. ql/src/gen/vectorization/org/apache/hadoop/hive/ql/exec/vector/gen/TestCodeGen.java https://reviews.apache.org/r/14130/#comment50982 Maybe this should go in ql/src/test/gen. Thoughts? ql/src/test/queries/clientpositive/vectorization_0.q https://reviews.apache.org/r/14130/#comment50984 There are a lot of magic numbers in these new test files. Do they have any special meaning or are they effectively random? 
ql/src/test/queries/clientpositive/vectorization_0.q https://reviews.apache.org/r/14130/#comment50983 What is the expected behavior when vectorized.execution=enabled and the source table is not reading ORC formatted data? I think it's worth adding some additional tests (either positive or negative) to lock down this behavior. - Carl Steinbach On Sept. 13, 2013, 5:51 p.m., Jitendra Pandey wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14130/ --- (Updated Sept. 13, 2013, 5:51 p.m.) Review request for hive and Ashutosh Chauhan. Bugs: HIVE-5283 https://issues.apache.org/jira/browse/HIVE-5283 Repository: hive-git Description --- Merge vectorization branch to trunk. Diffs - .gitignore c0e9b3c build-common.xml ee219a9 common/src/java/org/apache/hadoop/hive/conf/HiveConf.java c5a8ff3 metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 15a2a81 ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestClass.txt PRE-CREATION 
ql/src/gen/vectorization/TestTemplates/TestColumnColumnFilterVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnColumnOperationVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnScalarOperationVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFAvg.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMax.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxString.txt
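The review above weighs the hand-rolled templating in CodeGen/TestCodeGen against Apache Velocity. As a rough illustration of that placeholder-substitution style of code generation (the helper and placeholder names here are invented; this is not the actual CodeGen implementation):

```java
import java.util.Map;

// Minimal sketch of template expansion for vectorized-expression code
// generation: every <Key> placeholder in the template text is replaced by its
// configured value, turning one template into many concrete classes.
public class TemplateExpander {
    /** Replace each <key> placeholder in the template with its value. */
    public static String expand(String template, Map<String, String> values) {
        String result = template;
        for (Map.Entry<String, String> e : values.entrySet()) {
            // Placeholders are written as <Key> in the template text.
            result = result.replace("<" + e.getKey() + ">", e.getValue());
        }
        return result;
    }
}
```

Expanding `"public class <ClassName> { /* <OperatorSymbol> */ }"` with `ClassName=LongColAddLongColumn` and `OperatorSymbol=+` would emit the corresponding generated class skeleton; a Velocity-based approach would express the same substitution with `$name`-style references and standard tooling.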
Re: Review Request 14130: Merge vectorization branch to trunk
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14130/#review26120 --- ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java https://reviews.apache.org/r/14130/#comment50985 AllVectorTypesRecord seems to add an additional level of indirection without providing any real benefit. I'd recommend following convention and just hardcoding it for now. - Carl Steinbach On Sept. 13, 2013, 5:51 p.m., Jitendra Pandey wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/14130/ --- (Updated Sept. 13, 2013, 5:51 p.m.) Review request for hive and Ashutosh Chauhan. Bugs: HIVE-5283 https://issues.apache.org/jira/browse/HIVE-5283 Repository: hive-git Description --- Merge vectorization branch to trunk. Diffs - .gitignore c0e9b3c build-common.xml ee219a9 common/src/java/org/apache/hadoop/hive/conf/HiveConf.java c5a8ff3 metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 15a2a81 ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt PRE-CREATION ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestClass.txt 
PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnColumnFilterVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnColumnOperationVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/TestTemplates/TestColumnScalarOperationVectorExpressionEvaluation.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFAvg.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMax.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxString.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFSum.txt PRE-CREATION ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVar.txt PRE-CREATION ql/src/gen/vectorization/org/apache/hadoop/hive/ql/exec/vector/gen/CodeGen.java PRE-CREATION ql/src/gen/vectorization/org/apache/hadoop/hive/ql/exec/vector/gen/TestCodeGen.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 393ef57 ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java d2265e2 ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java bcee201 ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java d2c981d ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java e498327 ql/src/java/org/apache/hadoop/hive/ql/exec/KeyWrapper.java c303b30 ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java 3b15667 ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 85a22b7 ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java 869417f ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 2ece97e ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 4cc7129 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/ColumnVector.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.java 
PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/TimestampUtils.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorAggregationBufferBatch.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorAggregationBufferRow.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFileSinkOperator.java
[jira] [Commented] (HIVE-5283) Merge vectorization branch to trunk
[ https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767957#comment-13767957 ] Carl Steinbach commented on HIVE-5283: -- I added some more comments on RB. I wanted to note it here since the site seems overwhelmed by the size of this patch, and I have my doubts that they're actually going to get reposted here. Merge vectorization branch to trunk --- Key: HIVE-5283 URL: https://issues.apache.org/jira/browse/HIVE-5283 Project: Hive Issue Type: Bug Reporter: Jitendra Nath Pandey Assignee: Jitendra Nath Pandey Attachments: HIVE-5283.1.patch, HIVE-5283.2.patch The purpose of this jira is to upload the vectorization patch, run tests, etc. The actual work will continue under the HIVE-4160 umbrella jira. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-3585) Integrate Trevni as another columnar oriented file format
[ https://issues.apache.org/jira/browse/HIVE-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767983#comment-13767983 ] Carl Steinbach commented on HIVE-3585: -- I agree with what Ed said earlier and want to add that, as a project, we shouldn't put ourselves in the position of picking winners and losers when it comes to battles between competing data serialization formats. As long as a patch like this meets the same code quality standards that we apply to every other patch, I think it should get committed. Integrate Trevni as another columnar oriented file format - Key: HIVE-3585 URL: https://issues.apache.org/jira/browse/HIVE-3585 Project: Hive Issue Type: Improvement Components: Serializers/Deserializers Affects Versions: 0.10.0 Reporter: alex gemini Assignee: Mark Wagner Priority: Minor Attachments: futurama_episodes.avro, HIVE-3585.1.patch.txt Add the new Avro module Trevni as another columnar format. A new columnar format needs a columnar SerDe; fastutil seems a good choice. The Shark project uses the fastutil library as its columnar SerDe library, but it seems too large (almost 15 MB) for just a few primitive array collections. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5087) Rename npath UDF to matchpath
[ https://issues.apache.org/jira/browse/HIVE-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13767992#comment-13767992 ] Carl Steinbach commented on HIVE-5087: -- No one has threatened legal action over any of the other UDF names. If that happens I suppose we'll do the same thing. Rename npath UDF to matchpath - Key: HIVE-5087 URL: https://issues.apache.org/jira/browse/HIVE-5087 Project: Hive Issue Type: Bug Reporter: Edward Capriolo Assignee: Edward Capriolo Priority: Blocker Fix For: 0.12.0 Attachments: HIVE-5087.1.patch.txt, HIVE-5087.99.patch.txt, HIVE-5087-matchpath.1.patch.txt, HIVE-5087.patch.txt, HIVE-5087.patch.txt, regex_path.diff -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
[ https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767995#comment-13767995 ] Navis commented on HIVE-5279: - Ok, sure. Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc --- Key: HIVE-5279 URL: https://issues.apache.org/jira/browse/HIVE-5279 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Priority: Critical Attachments: 5279.patch We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: {noformat} java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h {noformat} If this cannot be fixed somehow, some UDAFs will have to be modified to run on hive-0.13.0 -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5122) Add partition for multiple partition ignores locations for non-first partitions
[ https://issues.apache.org/jira/browse/HIVE-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767996#comment-13767996 ] Navis commented on HIVE-5122: - Thanks for the good explanation [~thejas]. Should the annotation in 'AddPartitionDesc' (the 'Path') be removed, or replaced with 'Location' so that it is not shown in the test result? Add partition for multiple partition ignores locations for non-first partitions --- Key: HIVE-5122 URL: https://issues.apache.org/jira/browse/HIVE-5122 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-5122.D12411.1.patch, HIVE-5122.D12411.2.patch http://www.mail-archive.com/user@hive.apache.org/msg09151.html When multiple partitions are being added in a single alter table statement, the location of the first partition is used as the location of all partitions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
[ https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768000#comment-13768000 ] Edward Capriolo commented on HIVE-5279: --- I just ran into this while making the collect_list UDAF; you need to do things java-bean style: no-arg constructor plus setter and getter. It probably was valid before, but it is not now. Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc --- Key: HIVE-5279 URL: https://issues.apache.org/jira/browse/HIVE-5279 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Priority: Critical Attachments: 5279.patch We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: {noformat} java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h {noformat} If this cannot be fixed somehow, some UDAFs will have to be modified to run on hive-0.13.0 -- This 
message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
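Edward Capriolo's note above boils down to a JVM-level constraint: Kryo instantiates classes reflectively, so anything it deserializes needs a no-arg constructor. Below is a minimal, self-contained Java sketch of that constraint; the class names (`BeanStyleEval`, `CtorOnlyEval`) are illustrative stand-ins, not Hive's actual evaluator or object-inspector classes.

```java
import java.lang.reflect.Constructor;

public class NoArgCtorDemo {
    // Bean-style, as recommended in the comment above: a no-arg constructor
    // plus getter/setter. A Kryo-like deserializer can instantiate this.
    public static class BeanStyleEval {
        private String inputOI;
        public BeanStyleEval() {}                         // required no-arg constructor
        public String getInputOI() { return inputOI; }
        public void setInputOI(String oi) { this.inputOI = oi; }
    }

    // Only a parameterized constructor: reflective instantiation fails,
    // which is essentially the "Class cannot be created (missing no-arg
    // constructor)" error Kryo reports in the log above.
    public static class CtorOnlyEval {
        private final String inputOI;
        public CtorOnlyEval(String oi) { this.inputOI = oi; }
    }

    // Mimics what a reflective serializer does: look up the no-arg
    // constructor and invoke it.
    static boolean canInstantiate(Class<?> cls) {
        try {
            Constructor<?> c = cls.getDeclaredConstructor();
            c.newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(canInstantiate(BeanStyleEval.class));  // true
        System.out.println(canInstantiate(CtorOnlyEval.class));   // false
    }
}
```

The same check explains why adding a default constructor (even a private one, for some serializers) is the usual fix when Kryo rejects a UDAF's evaluator.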
[jira] [Commented] (HIVE-5276) Skip useless string encoding stage for hiveserver2
[ https://issues.apache.org/jira/browse/HIVE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768001#comment-13768001 ] Phabricator commented on HIVE-5276: --- cwsteinbach has commented on the revision HIVE-5276 [jira] Skip useless string encoding stage for hiveserver2. INLINE COMMENTS ql/src/java/org/apache/hadoop/hive/ql/exec/ListSinkOperator.java:57 Let's get rid of this comment and anything special we're doing for Hadoop 0.17. We deprecated support for that version a long time ago. service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java:98 I admit that I haven't investigated this closely, but the fact that we're overriding a user-configurable property with a hardcoded value seems like a red flag. If this is genuinely necessary can you please add a comment explaining the rationale? Thanks. ql/src/java/org/apache/hadoop/hive/ql/exec/DefaultFetchConverter.java:50 We deprecated support for 0.17 and 0.18 a long time ago. Please remove. common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:667 Please add this to conf/hive-default.xml.template along with a description explaining what it does. Also, is this something that we really want to expose to users at this point in time? REVISION DETAIL https://reviews.facebook.net/D12879 To: JIRA, navis Cc: cwsteinbach Skip useless string encoding stage for hiveserver2 -- Key: HIVE-5276 URL: https://issues.apache.org/jira/browse/HIVE-5276 Project: Hive Issue Type: Improvement Components: HiveServer2 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-5276.D12879.1.patch Currently hiveserver2 acquires rows in string format, which is used for CLI output, then converts them into rows again and finally converts them to the final format. This is inefficient and memory-consuming. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5161) Additional SerDe support for varchar type
[ https://issues.apache.org/jira/browse/HIVE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-5161: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Jason! Additional SerDe support for varchar type - Key: HIVE-5161 URL: https://issues.apache.org/jira/browse/HIVE-5161 Project: Hive Issue Type: Bug Components: Serializers/Deserializers, Types Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12897.1.patch, HIVE-5161.1.patch, HIVE-5161.2.patch, HIVE-5161.3.patch Breaking out support for varchar for the various SerDes as an additional task. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5167) webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh
[ https://issues.apache.org/jira/browse/HIVE-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5167: Status: Patch Available (was: Open) webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh --- Key: HIVE-5167 URL: https://issues.apache.org/jira/browse/HIVE-5167 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5167.1.patch, HIVE-5167.2.patch HIVE-4820 introduced checks for env variables, but it does so before sourcing webhcat-env.sh. This order needs to be reversed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5167) webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh
[ https://issues.apache.org/jira/browse/HIVE-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5167: Attachment: HIVE-5167.2.patch HIVE-5167.2.patch - change to set HIVE_HOME to default value if HIVE_HOME is not set and the default HIVE_HOME is viable webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh --- Key: HIVE-5167 URL: https://issues.apache.org/jira/browse/HIVE-5167 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5167.1.patch, HIVE-5167.2.patch HIVE-4820 introduced checks for env variables, but it does so before sourcing webhcat-env.sh. This order needs to be reversed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5132) Can't access to hwi due to No Java compiler available
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768008#comment-13768008 ] Edward Capriolo commented on HIVE-5132: --- +1 Can't access to hwi due to No Java compiler available --- Key: HIVE-5132 URL: https://issues.apache.org/jira/browse/HIVE-5132 Project: Hive Issue Type: Bug Affects Versions: 0.10.0, 0.11.0 Environment: JDK1.6, hadoop 2.0.4-alpha Reporter: Bing Li Assignee: Bing Li Priority: Critical Attachments: HIVE-5132-01.patch I want to use hwi to submit hive queries, but after starting hwi successfully, I can't open its web page. I noticed that someone also met the same issue in hive-0.10. Reproduce steps: -- 1. start hwi bin/hive --config $HIVE_CONF_DIR --service hwi 2. access http://hive_hwi_node:/hwi via a browser got the following error message: HTTP ERROR 500 Problem accessing /hwi/. Reason: No Java compiler available Caused by: java.lang.IllegalStateException: No Java compiler available at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:225) at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:560) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327) at 
org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126) at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
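The "No Java compiler available" error above is what Jasper reports when the JVM hosting the servlet container has no compiler available for JSP pages, typically because hwi is running on a plain JRE rather than a JDK. A small, hedged diagnostic (this is a generic JVM check, not part of Hive or hwi):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
    // ToolProvider.getSystemJavaCompiler() returns null when the running
    // JVM is a JRE; it returns a compiler instance on a JDK.
    static boolean hasSystemCompiler() {
        JavaCompiler jc = ToolProvider.getSystemJavaCompiler();
        return jc != null;
    }

    public static void main(String[] args) {
        System.out.println(hasSystemCompiler()
            ? "system Java compiler found (running on a JDK)"
            : "No Java compiler available -- likely running on a plain JRE");
    }
}
```

If this prints the JRE message for the JVM that launches hwi, pointing JAVA_HOME at a JDK is the usual first step before digging into the servlet container's classpath.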
[jira] [Commented] (HIVE-5276) Skip useless string encoding stage for hiveserver2
[ https://issues.apache.org/jira/browse/HIVE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768009#comment-13768009 ] Phabricator commented on HIVE-5276: --- navis has commented on the revision HIVE-5276 [jira] Skip useless string encoding stage for hiveserver2. INLINE COMMENTS ql/src/java/org/apache/hadoop/hive/ql/exec/DefaultFetchConverter.java:50 ok common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:667 Yes, it doesn't seem useful to expose it to the end user. It will be moved to SQLOperation. ql/src/java/org/apache/hadoop/hive/ql/exec/ListSinkOperator.java:57 ok. service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java:98 It should be an internal configuration not exposed to the user, as commented in HiveConf. I'll add comments for that. REVISION DETAIL https://reviews.facebook.net/D12879 To: JIRA, navis Cc: cwsteinbach Skip useless string encoding stage for hiveserver2 -- Key: HIVE-5276 URL: https://issues.apache.org/jira/browse/HIVE-5276 Project: Hive Issue Type: Improvement Components: HiveServer2 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-5276.D12879.1.patch Currently hiveserver2 acquires rows in string format, which is used for CLI output, then converts them into rows again and finally converts them to the final format. This is inefficient and memory-consuming. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed
Vaibhav Gumashta created HIVE-5295: -- Summary: HiveConnection#configureConnection tries to execute statement even after it is closed Key: HIVE-5295 URL: https://issues.apache.org/jira/browse/HIVE-5295 Project: Hive Issue Type: Bug Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta HiveConnection#configureConnection tries to execute statement even after it is closed. For remote JDBC client, it tries to set the conf var using 'set foo=bar' by calling HiveStatement.execute for each conf var pair, but closes the statement after the 1st iteration through the conf var pairs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
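The bug described above is a resource-lifetime mistake: close() inside the loop over conf vars instead of after it, so the second `set foo=bar` hits a closed statement. Below is a hedged, runnable sketch of the corrected shape, using a stand-in for java.sql.Statement so it runs without a HiveServer2 connection; `FakeStatement` and `applyConfVars` are illustrative names, not Hive's actual API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConfigureConnectionSketch {
    // Mimics java.sql.Statement's execute/close contract: executing after
    // close() fails, which is exactly what the report above describes.
    static class FakeStatement {
        private boolean closed = false;
        final List<String> executed = new ArrayList<>();
        void execute(String sql) {
            if (closed) throw new IllegalStateException("statement is closed");
            executed.add(sql);
        }
        void close() { closed = true; }
    }

    // Correct shape: one statement, execute every "set k=v", close once.
    static List<String> applyConfVars(Map<String, String> confVars) {
        FakeStatement stmt = new FakeStatement();
        try {
            for (Map.Entry<String, String> e : confVars.entrySet()) {
                stmt.execute("set " + e.getKey() + "=" + e.getValue());
                // Buggy variant: calling stmt.close() here would make the
                // second iteration throw, as this ticket reports.
            }
        } finally {
            stmt.close();   // close exactly once, after all vars are applied
        }
        return stmt.executed;
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        vars.put("hive.exec.parallel", "true");
        vars.put("mapred.reduce.tasks", "4");
        // prints [set hive.exec.parallel=true, set mapred.reduce.tasks=4]
        System.out.println(applyConfVars(vars));
    }
}
```

The try/finally placement is the whole fix: the statement's lifetime spans the full iteration, not a single element of it.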
[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code
[ https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768010#comment-13768010 ] Edward Capriolo commented on HIVE-5253: --- I am pretty sure this is a bogus error message. Does anyone care to review? Create component to compile and jar dynamic code Key: HIVE-5253 URL: https://issues.apache.org/jira/browse/HIVE-5253 Project: Hive Issue Type: Sub-task Reporter: Edward Capriolo Assignee: Edward Capriolo Attachments: HIVE-5253.patch.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5122) Add partition for multiple partition ignores locations for non-first partitions
[ https://issues.apache.org/jira/browse/HIVE-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768011#comment-13768011 ] Thejas M Nair commented on HIVE-5122: - Yes, I agree, using Location and letting it get masked would be good in general. But if the partition location is masked, the existing test case will not verify that the issue of this jira has been fixed; the test case would need some changes. Add partition for multiple partition ignores locations for non-first partitions --- Key: HIVE-5122 URL: https://issues.apache.org/jira/browse/HIVE-5122 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-5122.D12411.1.patch, HIVE-5122.D12411.2.patch http://www.mail-archive.com/user@hive.apache.org/msg09151.html When multiple partitions are being added in a single alter table statement, the location of the first partition is used as the location of all partitions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768020#comment-13768020 ] Hudson commented on HIVE-5278: -- FAILURE: Integrated in Hive-trunk-hadoop2-ptest #99 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/99/]) HIVE-5278 : Move some string UDFs to GenericUDFs, for better varchar support (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523518) * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFUpper.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUpper.java * /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml * /hive/trunk/ql/src/test/results/compiler/plan/udf6.q.xml Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768022#comment-13768022 ] Hudson commented on HIVE-5278: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #166 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/166/]) HIVE-5278 : Move some string UDFs to GenericUDFs, for better varchar support (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523518) * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFUpper.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUpper.java * /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml * /hive/trunk/ql/src/test/results/compiler/plan/udf6.q.xml Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
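The reason GenericUDFs matter for varchar, per the HIVE-5278 description above, is that a GenericUDF can compute the output type's metadata (such as maximum length) from its arguments at initialization time, which a plain UDF resolved purely by method signature cannot. A toy sketch of the length rule for concat; this is illustrative only and not Hive's actual GenericUDFConcat code:

```java
public class ConcatLengthSketch {
    // The result of concatenating varchar(a) and varchar(b) can be typed as
    // varchar(a + b): the maximum lengths simply add up. A GenericUDF can
    // return this derived length in its output ObjectInspector's type info.
    static int concatResultLength(int... argMaxLengths) {
        int total = 0;
        for (int l : argMaxLengths) total += l;
        return total;
    }

    public static void main(String[] args) {
        // concat(varchar(10), varchar(5)) -> varchar(15)
        System.out.println(concatResultLength(10, 5));
    }
}
```

With an old-style UDF the planner would only see "string in, string out" and lose the length bound entirely, which is why concat/lower/upper were converted first.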
[jira] [Commented] (HIVE-5161) Additional SerDe support for varchar type
[ https://issues.apache.org/jira/browse/HIVE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768023#comment-13768023 ] Hudson commented on HIVE-5161: -- FAILURE: Integrated in Hive-trunk-hadoop1-ptest #166 (See [https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/166/]) HIVE-5161 : Additional SerDe support for varchar type (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1523532) * /hive/trunk/ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java * /hive/trunk/ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto * /hive/trunk/ql/src/test/queries/clientpositive/varchar_serde.q * /hive/trunk/ql/src/test/results/clientpositive/varchar_serde.q.out * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java Additional SerDe support for varchar type - Key: HIVE-5161 URL: https://issues.apache.org/jira/browse/HIVE-5161 Project: Hive Issue Type: Bug Components: Serializers/Deserializers, Types Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12897.1.patch, HIVE-5161.1.patch, HIVE-5161.2.patch, HIVE-5161.3.patch Breaking out support for varchar for the various SerDes as an additional task. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5173) Wincompat : Add .cmd/text/crlf to .gitattributes
[ https://issues.apache.org/jira/browse/HIVE-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5173: Fix Version/s: 0.13.0 Wincompat : Add .cmd/text/crlf to .gitattributes Key: HIVE-5173 URL: https://issues.apache.org/jira/browse/HIVE-5173 Project: Hive Issue Type: Sub-task Components: Windows Reporter: Sushanth Sowmyan Assignee: Sushanth Sowmyan Fix For: 0.13.0 Attachments: HIVE-5173.patch Add .cmd entry to .gitattributes -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5173) Wincompat : Add .cmd/text/crlf to .gitattributes
[ https://issues.apache.org/jira/browse/HIVE-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5173: Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the contribution, Sushanth! Wincompat : Add .cmd/text/crlf to .gitattributes Key: HIVE-5173 URL: https://issues.apache.org/jira/browse/HIVE-5173 Project: Hive Issue Type: Sub-task Components: Windows Reporter: Sushanth Sowmyan Assignee: Sushanth Sowmyan Attachments: HIVE-5173.patch Add .cmd entry to .gitattributes -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5175) Wincompat : adds HADOOP_TIME_ZONE env property and user.timezone sysproperty
[ https://issues.apache.org/jira/browse/HIVE-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5175: Status: Open (was: Patch Available) Canceling patch as there are comments to be addressed. Wincompat : adds HADOOP_TIME_ZONE env property and user.timezone sysproperty Key: HIVE-5175 URL: https://issues.apache.org/jira/browse/HIVE-5175 Project: Hive Issue Type: Sub-task Components: Windows Reporter: Sushanth Sowmyan Assignee: Sushanth Sowmyan Attachments: HIVE-5175.patch Adding the HADOOP_TIME_ZONE env property and the user.timezone system property as US/Pacific, needed for certain tests on Windows to pass. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5174) Wincompat : junit.file.schema configurability
[ https://issues.apache.org/jira/browse/HIVE-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5174: Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the contribution, Sushanth! Wincompat : junit.file.schema configurability - Key: HIVE-5174 URL: https://issues.apache.org/jira/browse/HIVE-5174 Project: Hive Issue Type: Sub-task Components: Windows Reporter: Sushanth Sowmyan Assignee: Sushanth Sowmyan Attachments: HIVE-5174.2.patch, HIVE-5174.patch Adding junit.file.schema and hadoop.testcp configurability to build, adding set-hadoop-test-classpath target. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-4763) add support for thrift over http transport in HS2
[ https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768031#comment-13768031 ] Vaibhav Gumashta commented on HIVE-4763: [~cwsteinbach][~thejas] Phab created a new link here: https://reviews.facebook.net/D12951 since previously I had manually uploaded the patch. It incorporates all the changes except: the modification to SessionState in the previous patch (I am not very clear why it was needed in the first place - so keeping it here for feedback), and fixing the OOM test in TestHiveServer2Http and enabling it. For enabling use of both binary and http modes, I'll create a follow-up JIRA to give more thought to the design. add support for thrift over http transport in HS2 - Key: HIVE-4763 URL: https://issues.apache.org/jira/browse/HIVE-4763 Project: Hive Issue Type: Sub-task Components: HiveServer2 Reporter: Thejas M Nair Assignee: Vaibhav Gumashta Fix For: 0.12.0 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, HIVE-4763.D12855.1.patch Subtask for adding support for http transport mode for thrift api in hive server2. Support for the different authentication modes will be part of another subtask. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5278) Move some string UDFs to GenericUDFs, for better varchar support
[ https://issues.apache.org/jira/browse/HIVE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768038#comment-13768038 ] Hudson commented on HIVE-5278: -- FAILURE: Integrated in Hive-trunk-hadoop2 #432 (See [https://builds.apache.org/job/Hive-trunk-hadoop2/432/]) HIVE-5278 : Move some string UDFs to GenericUDFs, for better varchar support (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523518) * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFUpper.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLower.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUpper.java * /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml * /hive/trunk/ql/src/test/results/compiler/plan/udf6.q.xml Move some string UDFs to GenericUDFs, for better varchar support Key: HIVE-5278 URL: https://issues.apache.org/jira/browse/HIVE-5278 Project: Hive Issue Type: Improvement Components: Types, UDF Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12909.1.patch, HIVE-5278.1.patch, HIVE-5278.2.patch To better support varchar/char types in string UDFs, select UDFs should be converted to GenericUDFs. This allows the UDF to return the resulting char/varchar length in the type metadata. This work is being split off as a separate task from HIVE-4844. The initial UDFs as part of this work are concat/lower/upper. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed
[ https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-5295: -- Attachment: D12957.1.patch vaibhavgumashta requested code review of HIVE-5295 [jira] HiveConnection#configureConnection tries to execute statement even after it is closed. Reviewers: JIRA HIVE-5295: HiveConnection#configureConnection does not close the statement prematurely now HiveConnection#configureConnection tries to execute statement even after it is closed. For remote JDBC client, it tries to set the conf var using 'set foo=bar' by calling HiveStatement.execute for each conf var pair, but closes the statement after the 1st iteration through the conf var pairs. TEST PLAN EMPTY REVISION DETAIL https://reviews.facebook.net/D12957 AFFECTED FILES jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java MANAGE HERALD RULES https://reviews.facebook.net/herald/view/differential/ WHY DID I GET THIS EMAIL? https://reviews.facebook.net/herald/transcript/30957/ To: JIRA, vaibhavgumashta HiveConnection#configureConnection tries to execute statement even after it is closed - Key: HIVE-5295 URL: https://issues.apache.org/jira/browse/HIVE-5295 Project: Hive Issue Type: Bug Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Attachments: D12957.1.patch HiveConnection#configureConnection tries to execute statement even after it is closed. For remote JDBC client, it tries to set the conf var using 'set foo=bar' by calling HiveStatement.execute for each conf var pair, but closes the statement after the 1st iteration through the conf var pairs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Work started] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed
[ https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-5295 started by Vaibhav Gumashta. HiveConnection#configureConnection tries to execute statement even after it is closed - Key: HIVE-5295 URL: https://issues.apache.org/jira/browse/HIVE-5295 Project: Hive Issue Type: Bug Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Attachments: D12957.1.patch HiveConnection#configureConnection tries to execute statement even after it is closed. For remote JDBC client, it tries to set the conf var using 'set foo=bar' by calling HiveStatement.execute for each conf var pair, but closes the statement after the 1st iteration through the conf var pairs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5161) Additional SerDe support for varchar type
[ https://issues.apache.org/jira/browse/HIVE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768048#comment-13768048 ] Hudson commented on HIVE-5161: -- FAILURE: Integrated in Hive-trunk-hadoop2-ptest #100 (See [https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/100/]) HIVE-5161 : Additional SerDe support for varchar type (Jason Dere via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1523532) * /hive/trunk/ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java * /hive/trunk/ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto * /hive/trunk/ql/src/test/queries/clientpositive/varchar_serde.q * /hive/trunk/ql/src/test/results/clientpositive/varchar_serde.q.out * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java * /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java Additional SerDe support for varchar type - Key: HIVE-5161 URL: https://issues.apache.org/jira/browse/HIVE-5161 Project: Hive Issue Type: Bug Components: Serializers/Deserializers, Types Reporter: Jason Dere Assignee: Jason Dere Fix For: 0.13.0 Attachments: D12897.1.patch, HIVE-5161.1.patch, HIVE-5161.2.patch, HIVE-5161.3.patch Breaking out support for varchar for the various SerDes as an additional task. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed
[ https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-5295: --- Fix Version/s: 0.13.0 HiveConnection#configureConnection tries to execute statement even after it is closed - Key: HIVE-5295 URL: https://issues.apache.org/jira/browse/HIVE-5295 Project: Hive Issue Type: Bug Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: D12957.1.patch HiveConnection#configureConnection tries to execute statement even after it is closed. For remote JDBC client, it tries to set the conf var using 'set foo=bar' by calling HiveStatement.execute for each conf var pair, but closes the statement after the 1st iteration through the conf var pairs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
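The HIVE-5295 bug above is a statement-lifetime mistake: close() is called inside the loop over conf var pairs instead of after it, so the second 'set foo=bar' runs against a dead statement. A minimal sketch of that pattern, using a stub statement class rather than Hive's actual HiveConnection/HiveStatement code:

```java
import java.util.List;

// Minimal stand-in for a JDBC-like statement; NOT Hive's HiveStatement.
class Stmt implements AutoCloseable {
    private boolean closed = false;
    int executed = 0;
    void execute(String sql) {
        if (closed) throw new IllegalStateException("statement already closed");
        executed++;
    }
    @Override public void close() { closed = true; }
}

class ConfigureConnectionSketch {
    // Buggy shape: closing inside the loop kills every iteration after the first.
    static int buggy(List<String> confVars) {
        Stmt stmt = new Stmt();
        try {
            for (String kv : confVars) {
                stmt.execute("set " + kv);
                stmt.close();           // wrong: statement dead after 1st pair
            }
        } catch (IllegalStateException e) {
            return stmt.executed;       // fails as soon as a 2nd pair exists
        }
        return stmt.executed;
    }

    // Fixed shape: one statement, closed once after all pairs are set.
    static int fixed(List<String> confVars) {
        Stmt stmt = new Stmt();
        try {
            for (String kv : confVars) {
                stmt.execute("set " + kv);
            }
        } finally {
            stmt.close();
        }
        return stmt.executed;
    }
}
```

With two conf var pairs the buggy shape only ever executes the first `set`, which is the symptom the report describes.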
[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code
[ https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768054#comment-13768054 ] Carl Steinbach commented on HIVE-5253: -- Can you post a review request? Thanks. Create component to compile and jar dynamic code Key: HIVE-5253 URL: https://issues.apache.org/jira/browse/HIVE-5253 Project: Hive Issue Type: Sub-task Reporter: Edward Capriolo Assignee: Edward Capriolo Attachments: HIVE-5253.patch.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-4443) [HCatalog] Have an option for GET queue to return all job information in single call
[ https://issues.apache.org/jira/browse/HIVE-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-4443: - Attachment: HIVE-4443-4.patch Adjust format [HCatalog] Have an option for GET queue to return all job information in single call - Key: HIVE-4443 URL: https://issues.apache.org/jira/browse/HIVE-4443 Project: Hive Issue Type: Improvement Components: HCatalog Reporter: Daniel Dai Assignee: Daniel Dai Fix For: 0.12.0 Attachments: HIVE-4443-1.patch, HIVE-4443-2.patch, HIVE-4443-3.patch, HIVE-4443-4.patch Currently, to display a summary of all jobs, one has to call GET queue to retrieve all the jobids and then call GET queue/:jobid for each job. It would be nice to do this in a single call. I would suggest: * GET queue - mark deprecated * GET queue/jobID - mark deprecated * DELETE queue/jobID - mark deprecated * GET jobs - return the list of JSON objects containing jobid but no detailed info * GET jobs?fields=* - return the list of JSON objects containing detailed Job info * GET jobs/jobID - return the single JSON object containing the detailed Job info for the job with the given ID (equivalent to GET queue/jobID) * DELETE jobs/jobID - equivalent to DELETE queue/jobID NO PRECOMMIT TESTS -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
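The renames proposed in HIVE-4443 amount to a mechanical queue-to-jobs path translation. The sketch below is a hypothetical helper illustrating that mapping; it is not part of the patch, and the class and method names are invented for illustration:

```java
// Hypothetical helper: translates a deprecated WebHCat "queue" path into its
// proposed "jobs" equivalent. Illustration only, not part of HIVE-4443's patch.
class QueueToJobs {
    static String migrate(String path) {
        if (path.equals("queue")) {
            return "jobs";                  // GET queue        -> GET jobs
        }
        if (path.startsWith("queue/")) {
            // GET/DELETE queue/:jobid     -> GET/DELETE jobs/:jobid
            return "jobs/" + path.substring("queue/".length());
        }
        return path;                        // already a new-style path
    }
}
```

For example, `migrate("queue/job_201309131234_0001")` yields `jobs/job_201309131234_0001`, matching the "equivalent to" notes in the proposal.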
[jira] [Commented] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
[ https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768058#comment-13768058 ] Navis commented on HIVE-5279: - The point is we've not required anything of UDAF implementations till now. Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc --- Key: HIVE-5279 URL: https://issues.apache.org/jira/browse/HIVE-5279 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Priority: Critical Attachments: 5279.patch We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: {noformat} java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h {noformat} If this cannot be fixed somehow, some UDAFs will need to be modified to run on hive-0.13.0 -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
[ https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5279: Status: Patch Available (was: Open) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc --- Key: HIVE-5279 URL: https://issues.apache.org/jira/browse/HIVE-5279 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Priority: Critical Attachments: 5279.patch We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: {noformat} java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h {noformat} If this cannot be fixed somehow, some UDAFs will need to be modified to run on hive-0.13.0 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
[ https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-5279: -- Attachment: D12963.1.patch navis requested code review of HIVE-5279 [jira] Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc. Reviewers: JIRA HIVE-5279 Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h If this cannot be fixed somehow, some UDAFs will need to be modified to run on hive-0.13.0 TEST PLAN EMPTY REVISION DETAIL https://reviews.facebook.net/D12963 AFFECTED FILES ql/src/java/org/apache/hadoop/hive/ql/plan/AggregationDesc.java ql/src/test/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSumList.java ql/src/test/queries/clientpositive/udaf_sum_list.q 
ql/src/test/results/clientpositive/udaf_sum_list.q.out MANAGE HERALD RULES https://reviews.facebook.net/herald/view/differential/ WHY DID I GET THIS EMAIL? https://reviews.facebook.net/herald/transcript/30963/ To: JIRA, navis Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc --- Key: HIVE-5279 URL: https://issues.apache.org/jira/browse/HIVE-5279 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Priority: Critical Attachments: 5279.patch, D12963.1.patch We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the previous serialization mechanism handled this, but Kryo complains that it's not Serializable and fails the query. The log below is an example: {noformat} java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class cannot be created (missing no-arg constructor): org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector Serialization trace: inputOI (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval) genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc) aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc) conf (org.apache.hadoop.hive.ql.exec.GroupByOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312) at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261) at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256) at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383) at org.apache.h {noformat} If this cannot be fixed somehow, some UDAFs will need to be modified to run on hive-0.13.0 -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
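Kryo's default instantiation strategy reflectively invokes a no-arg constructor, which is exactly what the "Class cannot be created (missing no-arg constructor)" error above reports. The sketch below reproduces that mechanism with plain reflection and illustrative stand-in classes; there is no Kryo dependency here, and these are not Hive's real object inspector types:

```java
// Stand-ins for the two constructor shapes involved; names are illustrative.
class HasNoArgCtor {
    HasNoArgCtor() {}
}

class ParamCtorOnly {
    private final String field;
    ParamCtorOnly(String field) { this.field = field; }
}

class KryoCtorSketch {
    // Mimics Kryo's default strategy: find and invoke the no-arg constructor.
    static boolean instantiable(Class<?> cls) {
        try {
            java.lang.reflect.Constructor<?> c = cls.getDeclaredConstructor();
            c.setAccessible(true);
            c.newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            // Kryo surfaces this as "Class cannot be created (missing no-arg constructor)"
            return false;
        }
    }
}
```

A class shaped like `ParamCtorOnly` fails this check, which is why evaluators holding references to such objects break under Kryo unless a custom instantiator or a no-arg constructor is provided.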
[jira] [Updated] (HIVE-4444) [HCatalog] WebHCat Hive should support equivalent parameters as Pig
[ https://issues.apache.org/jira/browse/HIVE-4444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-4444: - Attachment: HIVE-4444-3.patch Adjust formatting. [HCatalog] WebHCat Hive should support equivalent parameters as Pig Key: HIVE-4444 URL: https://issues.apache.org/jira/browse/HIVE-4444 Project: Hive Issue Type: Improvement Components: HCatalog Reporter: Daniel Dai Assignee: Daniel Dai Fix For: 0.12.0 Attachments: HIVE-4444-1.patch, HIVE-4444-2.patch, HIVE-4444-3.patch Currently there are no files and args parameters in Hive. We shall add them to make them similar to Pig. NO PRECOMMIT TESTS -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5261) Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers
[ https://issues.apache.org/jira/browse/HIVE-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768075#comment-13768075 ] Viraj Bhat commented on HIVE-5261: -- Thanks Thejas!! Make the Hive HBase storage handler work from HCatalog, and use HiveStorageHandlers instead of HCatStorageHandlers -- Key: HIVE-5261 URL: https://issues.apache.org/jira/browse/HIVE-5261 Project: Hive Issue Type: Sub-task Components: HBase Handler, HCatalog Affects Versions: 0.12.0 Reporter: Sushanth Sowmyan Assignee: Viraj Bhat Fix For: 0.12.0 Attachments: HIVE-5261.patch This is a task being created for the HCat side of HIVE-4331 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-5167) webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh
[ https://issues.apache.org/jira/browse/HIVE-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768077#comment-13768077 ] Hive QA commented on HIVE-5167: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12603265/HIVE-5167.2.patch {color:green}SUCCESS:{color} +1 3125 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/753/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/753/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. webhcat_config.sh checks for env variables being set before sourcing webhcat-env.sh --- Key: HIVE-5167 URL: https://issues.apache.org/jira/browse/HIVE-5167 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.12.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5167.1.patch, HIVE-5167.2.patch HIVE-4820 introduced checks for env variables, but it does so before sourcing webhcat-env.sh. This order needs to be reversed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira