[jira] [Created] (HIVE-2736) Hive UDFs cannot emit binary constants
Hive UDFs cannot emit binary constants -- Key: HIVE-2736 URL: https://issues.apache.org/jira/browse/HIVE-2736 Project: Hive Issue Type: Bug Components: Query Processor, Serializers/Deserializers, UDF Affects Versions: 0.9.0 Reporter: Philip Tromans Priority: Minor

I recently wrote a UDF which emits BINARY values (as implemented in [HIVE-2380|https://issues.apache.org/jira/browse/HIVE-2380]). When testing this, I encountered the following exception, because I was evaluating f(g(constant string)) and g() was emitting a BytesWritable type.

FAILED: Hive Internal Error: java.lang.RuntimeException(Internal error: Cannot find ConstantObjectInspector for BINARY)
java.lang.RuntimeException: Internal error: Cannot find ConstantObjectInspector for BINARY
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector(PrimitiveObjectInspectorFactory.java:196)
at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getConstantObjectInspector(ObjectInspectorUtils.java:899)
at org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:128)
at org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
at org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:684)
at org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:805)
at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
at org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:161)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7708)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2301)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2103)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6126)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6097)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6723)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7484)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

It looks like a pretty simple fix: add a case for BINARY in PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector() and implement a WritableConstantByteArrayObjectInspector class (almost identical to the others). I'm happy to do this, although this is my first foray into the world of contributing to FOSS, so I might end up asking a few stupid questions.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
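The fix proposed above mirrors the pattern the factory already uses for the other primitive types. The following standalone sketch models only the idea of the missing switch case; the class and method names are simplified stand-ins for the real Hive classes in org.apache.hadoop.hive.serde2.objectinspector.primitive, and ConstantBinaryInspector is a purely illustrative holder, not Hive's actual inspector class.

```java
import java.util.Arrays;

// Simplified model of PrimitiveObjectInspectorFactory's constant-inspector dispatch.
// Before the fix, the BINARY case was missing, so constant folding over a UDF that
// returned BINARY hit the "Cannot find ConstantObjectInspector for BINARY" error.
public class ConstantInspectorSketch {
    enum PrimitiveCategory { BOOLEAN, INT, STRING, BINARY }

    // Illustrative stand-in for a constant object inspector: it simply holds the
    // constant value produced at compile time.
    static final class ConstantBinaryInspector {
        private final byte[] value;
        ConstantBinaryInspector(byte[] value) { this.value = value; }
        byte[] getWritableConstantValue() { return value; }
    }

    static Object getConstantObjectInspector(PrimitiveCategory category, Object value) {
        switch (category) {
            case BINARY:
                // The added case: wrap the folded byte[] constant in an inspector.
                return new ConstantBinaryInspector((byte[]) value);
            // ... cases for the other primitive categories would go here ...
            default:
                throw new RuntimeException(
                    "Internal error: Cannot find ConstantObjectInspector for " + category);
        }
    }

    public static void main(String[] args) {
        ConstantBinaryInspector oi = (ConstantBinaryInspector)
            getConstantObjectInspector(PrimitiveCategory.BINARY, new byte[] {1, 2, 3});
        System.out.println(Arrays.equals(oi.getWritableConstantValue(), new byte[] {1, 2, 3}));
    }
}
```

The real patch additionally has to register the new inspector alongside the existing Writable constant inspectors so both the writable and Java-object code paths resolve BINARY constants.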
[jira] [Assigned] (HIVE-2736) Hive UDFs cannot emit binary constants
[ https://issues.apache.org/jira/browse/HIVE-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-2736: -- Assignee: Philip Tromans

Philip, welcome to the awesome world of FOSS. Go ahead, create a patch and submit it; someone will take a look. I have assigned the JIRA to you as well.

Hive UDFs cannot emit binary constants -- Key: HIVE-2736 URL: https://issues.apache.org/jira/browse/HIVE-2736 Project: Hive Issue Type: Bug Components: Query Processor, Serializers/Deserializers, UDF Affects Versions: 0.9.0 Reporter: Philip Tromans Assignee: Philip Tromans Priority: Minor Original Estimate: 4h Remaining Estimate: 4h
[jira] [Updated] (HIVE-2736) Hive UDFs cannot emit binary constants
[ https://issues.apache.org/jira/browse/HIVE-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Tromans updated HIVE-2736: - Description: I recently wrote a UDF which emits BINARY values (as implemented in [HIVE-2380|https://issues.apache.org/jira/browse/HIVE-2380]). When testing this, I encountered the following exception, because I was evaluating f(g(constant string)) and g() was emitting a BytesWritable type.
was: I recently rote a UDF which emits BINARY values (as implemented in [HIVE-2380|https://issues.apache.org/jira/browse/HIVE-2380]). When testing this, I encountered the following exception (because I was evaluating f(g(constant string))) and g() was emitting a BytesWritable type.
Hive-trunk-h0.21 - Build # 1215 - Still Failing
Changes for Build #1180
Changes for Build #1181
Changes for Build #1182 [heyongqiang] HIVE-2621: Allow multiple group bys with the same input data and spray keys to be run on the same reducer. (Kevin via He Yongqiang)
Changes for Build #1184 [namit] HIVE-2690 a bug in 'alter table concatenate' that causes filenames getting double url encoded (He Yongqiang via namit)
Changes for Build #1185
Changes for Build #1186
Changes for Build #1187
Changes for Build #1188
Changes for Build #1189
Changes for Build #1190 [amareshwari] HIVE-2629. Make a single Hive binary work with both 0.20.x and 0.23.0. (Thomas Weise via amareshwari)
Changes for Build #1191 [amareshwari] HIVE-2629. Reverting previous commit
Changes for Build #1192 [heyongqiang] HIVE-2706 [jira] StackOverflowError when using custom UDF after adding archive after adding jars (Kevin Wilfong via Yongqiang He) Summary: https://issues.apache.org/jira/browse/HIVE-2706 The issue was that the current thread's classloader and the classloader in the conf differed, because the pre-hook updated only the current thread's classloader with new jars. Now it updates both classloaders, fixing the issue. When a custom UDF was used in a query after adding an archive (such as a zip file) after adding jars, the XMLEncoder entered an infinite loop while serializing the map-reduce task as part of sending it to be executed, resulting in a stack overflow error. Test Plan: Verified it fixed the stack overflow error. Reviewers: JIRA, heyongqiang, njain Reviewed By: heyongqiang CC: heyongqiang Differential Revision: https://reviews.facebook.net/D1167
Changes for Build #1193 [hashutosh] HIVE-2705: SemanticAnalyzer twice swallows an exception it shouldn't (jghoman via hashutosh)
Changes for Build #1194
Changes for Build #1195 [hashutosh] HIVE-2589: Newly created partition should inherit properties from table (Ashutosh Chauhan) [hashutosh] HIVE-2682: Clean-up logs (Rajat Goel via Ashutosh Chauhan)
Changes for Build #1196 [amareshwari] HIVE-2629. Make a single Hive binary work with both 0.20.x and 0.23.0. (Thomas Weise via amareshwari)
Changes for Build #1197
Changes for Build #1198 [namit] HIVE-2504 Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory (Chinna Rao Lalam via namit) [namit] HIVE-2695 Add PRINTF() Udf (Zhenxiao Luo via namit)
Changes for Build #1199
Changes for Build #1200
Changes for Build #1201
Changes for Build #1202
Changes for Build #1203
Changes for Build #1204 [cws] HIVE-2719. Revert HIVE-2589 (He Yongqiang via cws)
Changes for Build #1205
Changes for Build #1207 [namit] HIVE-2718 NPE in union followed by join (He Yongqiang via namit)
Changes for Build #1208
Changes for Build #1209
Changes for Build #1210 [namit] HIVE-2674 get_partitions_ps throws TApplicationException if table doesn't exist (Kevin Wilfong via namit)
Changes for Build #1211 [cws] HIVE-2203. Extend concat_ws() UDF to support arrays of strings (Zhenxiao Luo via cws) [cws] HIVE-2279. Implement sort(array) UDF (Zhenxiao Luo via cws)
Changes for Build #1212 [hashutosh] HIVE-2589: Newly created partition should inherit properties from table (Ashutosh Chauhan)
Changes for Build #1213
Changes for Build #1214
Changes for Build #1215

1 test failed.
REGRESSION: org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
Error Message: null
Stack Trace:
java.lang.NullPointerException
at org.apache.hadoop.hive.metastore.HiveMetaStore.getDelegationToken(HiveMetaStore.java:3499)
at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.getDelegationTokenStr(TestHadoop20SAuthBridge.java:298)
at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.obtainTokenAndAddIntoUGI(TestHadoop20SAuthBridge.java:305)
at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore(TestHadoop20SAuthBridge.java:214)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
[jira] [Created] (HIVE-2737) CombineFileInputFormat fails if mapred.job.tracker is set to local with a sub-query
CombineFileInputFormat fails if mapred.job.tracker is set to local with a sub-query --- Key: HIVE-2737 URL: https://issues.apache.org/jira/browse/HIVE-2737 Project: Hive Issue Type: Bug Affects Versions: 0.8.0 Reporter: Esteban Gutierrez

When CombineFileInputFormat and mapred.job.tracker=local are used together, CombineFileInputFormat throws a java.io.FileNotFoundException if the query statement contains a sub-query:

{code}
hive> select count(*) from (select count(*), a from hivetest2 group by a) x;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Execution log at: /tmp/esteban/esteban_20120119134040_5d105797-1444-43ce-8ca8-3b4735b7a70d.log
Job running in-process (local Hadoop)
2012-01-19 13:40:49,618 null map = 100%, reduce = 100%
Ended Job = job_local_0001
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Execution log at: /tmp/esteban/esteban_20120119134040_5d105797-1444-43ce-8ca8-3b4735b7a70d.log
java.io.FileNotFoundException: File does not exist: /tmp/esteban/hive_2012-01-19_13-40-45_277_494412568828098242/-mr-10002/00_0
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:546)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.init(CombineFileInputFormat.java:462)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
at org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileInputFormatShim.getSplits(Hadoop20SShims.java:347)
at org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileInputFormatShim.getSplits(Hadoop20SShims.java:313)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:377)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:671)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:1092)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /tmp/esteban/hive_2012-01-19_13-40-45_277_494412568828098242/-mr-10002/00_0)'
{code}
[jira] [Created] (HIVE-2738) NPE in ExprNodeGenericFuncEvaluator
NPE in ExprNodeGenericFuncEvaluator --- Key: HIVE-2738 URL: https://issues.apache.org/jira/browse/HIVE-2738 Project: Hive Issue Type: Bug Affects Versions: 0.8.0 Reporter: Nicolas Lalevée Attachments: MapMaxUDF.java, MapToJsonUDF.java, hive_job_logs.txt

Here is the query: bq. {{SELECT t.lid, '2011-12-12', s_map2json(s_maxmap(UNION_MAP(t.categoryCount), 100)) FROM ( SELECT theme_lid AS theme_lid, MAP(s_host(referer), COUNT( * )) AS categoryCount FROM PageViewEvent WHERE day >= '20130104' AND day <= '20130112' AND date_ >= '2012-01-04' AND date_ < '2012-01-13' AND lid IS NOT NULL GROUP BY lid, s_host(referer) ) t GROUP BY t.lid}}

Removing the call to s_map2json makes it work, but removing s_maxmap does not. I don't understand what could be wrong with the implementation of my UDF, and I don't know how to debug remote Hadoop jobs.
[jira] [Updated] (HIVE-2738) NPE in ExprNodeGenericFuncEvaluator
[ https://issues.apache.org/jira/browse/HIVE-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Lalevée updated HIVE-2738: -- Attachment: MapToJsonUDF.java MapMaxUDF.java hive_job_logs.txt

Attached are the log of the Hadoop job and the implementations of my custom UDFs.
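An NPE this deep in ExprNodeGenericFuncEvaluator is often a UDF dereferencing a null argument, for example when an aggregate like UNION_MAP hands the next function a null map or a map containing null keys or values. The attached MapMaxUDF is not reproduced here; the sketch below is a purely illustrative, plain-Java model of a "keep the top-N entries of a map" function showing the defensive null handling a GenericUDF's evaluate() would need (all names are hypothetical, not the reporter's code).

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for a "max-N entries of a map" UDF. Guards against null
// input and null entries, which otherwise surface as an NPE deep inside the
// expression evaluator when the function is composed with other UDFs.
public class MapMaxSketch {
    static Map<String, Long> maxEntries(Map<String, Long> input, int limit) {
        if (input == null) {
            return null;  // propagate SQL NULL instead of dereferencing
        }
        Map<String, Long> result = new LinkedHashMap<>();
        input.entrySet().stream()
            .filter(e -> e.getKey() != null && e.getValue() != null)  // skip null entries
            .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder()))
            .limit(limit)
            .forEachOrdered(e -> result.put(e.getKey(), e.getValue()));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(maxEntries(null, 5));  // prints "null", no NPE
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("a", 3L);
        counts.put("b", 1L);
        counts.put("c", 7L);
        System.out.println(maxEntries(counts, 2));  // prints "{c=7, a=3}"
    }
}
```

In a real GenericUDF the same checks apply to the deferred-object arguments in evaluate(): each argument's get() can legitimately return null for a row, and the attached hive_job_logs.txt would show whether the NPE fires inside one of the custom UDFs or in the surrounding evaluator.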
[jira] [Updated] (HIVE-2737) CombineFileInputFormat fails if mapred.job.tracker is set to local with a sub-query
[ https://issues.apache.org/jira/browse/HIVE-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2737: - Component/s: Query Processor

CombineFileInputFormat fails if mapred.job.tracker is set to local with a sub-query --- Key: HIVE-2737 URL: https://issues.apache.org/jira/browse/HIVE-2737 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.8.0 Reporter: Esteban Gutierrez
[jira] [Updated] (HIVE-2589) Newly created partition should inherit properties from table
[ https://issues.apache.org/jira/browse/HIVE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2589: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Backported to branch-0.8-r2. Thanks Ashutosh! Newly created partition should inherit properties from table Key: HIVE-2589 URL: https://issues.apache.org/jira/browse/HIVE-2589 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Fix For: 0.8.1, 0.9.0 Attachments: HIVE-2589.D1335.1.patch, HIVE-2589.D1335.2.patch, hive-2589.patch, hive-2589.patch, hive-2589_1.patch, hive-2589_2.patch, hive-2589_3.patch, hive-2589_4.patch, hive-2589_branch8.patch This will make all the info contained in table properties available to partitions.
[jira] [Updated] (HIVE-2727) add a testcase for partitioned view on union and base tables have index
[ https://issues.apache.org/jira/browse/HIVE-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2727: -- Attachment: HIVE-2727.D1323.3.patch heyongqiang updated the revision HIVE-2727 [jira] add a testcase for partitioned view on union and base tables have index. Reviewers: JIRA address namit's comments REVISION DETAIL https://reviews.facebook.net/D1323 AFFECTED FILES ql/src/test/results/clientpositive/union_view.q.out ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java ql/src/test/queries/clientpositive/union_view.q ql/src/java/org/apache/hadoop/hive/ql/optimizer/IndexUtils.java ql/src/java/org/apache/hadoop/hive/ql/index/IndexMetadataChangeTask.java ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java add a testcase for partitioned view on union and base tables have index --- Key: HIVE-2727 URL: https://issues.apache.org/jira/browse/HIVE-2727 Project: Hive Issue Type: Test Reporter: He Yongqiang Assignee: He Yongqiang Attachments: HIVE-2727.1.patch, HIVE-2727.D1323.1.patch, HIVE-2727.D1323.2.patch, HIVE-2727.D1323.3.patch
[jira] [Updated] (HIVE-2698) Enable Hadoop-1.0.0 in Hive
[ https://issues.apache.org/jira/browse/HIVE-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2698: - Status: Open (was: Patch Available) Please submit a review request on Phabricator: https://cwiki.apache.org/Hive/phabricatorcodereview.html Thanks. Enable Hadoop-1.0.0 in Hive --- Key: HIVE-2698 URL: https://issues.apache.org/jira/browse/HIVE-2698 Project: Hive Issue Type: New Feature Components: Security, Shims Affects Versions: 0.9.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Labels: hadoop, hadoop-1.0, jars Attachments: HIVE-2698_v1.patch, HIVE-2698_v2.patch, HIVE-2698_v3.patch Hadoop-1.0.0 was recently released and is, AFAIK, API compatible with the 0.20S release.
[jira] [Updated] (HIVE-2724) Remove unused lib/log4j-1.2.15.jar
[ https://issues.apache.org/jira/browse/HIVE-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2724: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk. Thanks Andrew! Remove unused lib/log4j-1.2.15.jar -- Key: HIVE-2724 URL: https://issues.apache.org/jira/browse/HIVE-2724 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Assignee: Andrew Bayer Attachments: HIVE-2724.diff.txt There's still a file, lib/log4j-1.2.15.jar, even though log4j is now pulled in via Ivy. As a result, this older log4j gets pulled into the Hive release tarball, and may end up in classpaths as well. It should be removed.
Hive-0.8.1-SNAPSHOT-h0.21 - Build # 171 - Fixed
Changes for Build #170 Changes for Build #171 All tests passed The Apache Jenkins build system has built Hive-0.8.1-SNAPSHOT-h0.21 (build #171) Status: Fixed Check console output at https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/171/ to view the results.
[jira] [Created] (HIVE-2739) Manage Eclipse integration with IvyDE
Manage Eclipse integration with IvyDE - Key: HIVE-2739 URL: https://issues.apache.org/jira/browse/HIVE-2739 Project: Hive Issue Type: Improvement Components: Build Infrastructure Reporter: Carl Steinbach We should use Apache IvyDE to generate the Eclipse .classpath file instead of maintaining this file by hand.
[jira] [Commented] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase
[ https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191519#comment-13191519 ] John Sichi commented on HIVE-1634: -- This issue is not resolved; the patch has not been completed. Allow access to Primitive types stored in binary format in HBase Key: HIVE-1634 URL: https://issues.apache.org/jira/browse/HIVE-1634 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.7.0 Reporter: Basab Maulik Assignee: Basab Maulik Attachments: HIVE-1634.0.patch, HIVE-1634.1.patch, TestHiveHBaseExternalTable.java

This addresses HIVE-1245 in part, for atomic or primitive types. The serde property hbase.columns.storage.types = "-,b,b,b,b,b,b,b,b" specifies the storage option for the corresponding column in the serde property hbase.columns.mapping. Allowed values are '-' for the table default, 's' for standard string storage, and 'b' for binary storage as would be obtained from o.a.h.hbase.utils.Bytes. Map types for HBase column families use a colon-separated pair such as 's:b' for the key and value part specifiers respectively. See the test cases and queries for the HBase handler for additional examples. There is also a table property hbase.table.default.storage.type = "string" to specify a table-level default storage type; the other valid value is "binary". The table-level default is overridden by a column-level specification. This control is available for the boolean, tinyint, smallint, int, bigint, float, and double primitive types. The attached patch also relaxes the mapping of map types to HBase column families to allow any primitive type to be the map key. Attached is a program for creating a table and populating it in HBase. The external table in Hive can access the data as shown in the example below.

hive> create external table TestHiveHBaseExternalTable
          (key string, c_bool boolean, c_byte tinyint, c_short smallint, c_int int,
           c_long bigint, c_string string, c_float float, c_double double)
      stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      with serdeproperties (
          "hbase.columns.mapping" = ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double")
      tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
OK
Time taken: 0.691 seconds
hive> select * from TestHiveHBaseExternalTable;
OK
key-1	NULL	NULL	NULL	NULL	NULL	Test-String	NULL	NULL
Time taken: 0.346 seconds
hive> drop table TestHiveHBaseExternalTable;
OK
Time taken: 0.139 seconds
hive> create external table TestHiveHBaseExternalTable
          (key string, c_bool boolean, c_byte tinyint, c_short smallint, c_int int,
           c_long bigint, c_string string, c_float float, c_double double)
      stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      with serdeproperties (
          "hbase.columns.mapping" = ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
          "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b")
      tblproperties (
          "hbase.table.name" = "TestHiveHBaseExternalTable",
          "hbase.table.default.storage.type" = "string");
OK
Time taken: 0.139 seconds
hive> select * from TestHiveHBaseExternalTable;
OK
key-1	true	-128	-32768	-2147483648	-9223372036854775808	Test-String	-2.1793132E-11	2.01345E291
Time taken: 0.151 seconds
hive> drop table TestHiveHBaseExternalTable;
OK
Time taken: 0.154 seconds
hive> create external table TestHiveHBaseExternalTable
          (key string, c_bool boolean, c_byte tinyint, c_short smallint, c_int int,
           c_long bigint, c_string string, c_float float, c_double double)
      stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      with serdeproperties (
          "hbase.columns.mapping" = ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
          "hbase.columns.storage.types" = "-,b,b,b,b,b,-,b,b")
      tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
OK
Time taken: 0.347 seconds
hive> select * from TestHiveHBaseExternalTable;
OK
key-1	true	-128	-32768	-2147483648	-9223372036854775808	Test-String	-2.1793132E-11	2.01345E291
Time taken: 0.245 seconds
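As a rough illustration of what the 'b' storage option means, here is a Python sketch of the byte layouts involved; struct stands in for HBase's Bytes.toBytes overloads, and the helper names are mine, not Hive's or HBase's:

```python
import struct

# Hypothetical helpers mirroring HBase's Bytes.toBytes overloads:
# primitive values are serialized big-endian.
def to_bytes_int(v: int) -> bytes:
    return struct.pack(">i", v)   # 4 bytes, like Bytes.toBytes(int)

def to_bytes_long(v: int) -> bytes:
    return struct.pack(">q", v)   # 8 bytes, like Bytes.toBytes(long)

# With 'b' storage the raw bytes below sit in the HBase cell; with 's'
# storage the cell would instead hold the text "-2147483648". Reading a
# binary cell with string storage (or vice versa) is why the first query
# in the example returns NULLs for the numeric columns.
print(to_bytes_int(-2147483648).hex())            # 80000000
print(to_bytes_long(-9223372036854775808).hex())  # 8000000000000000
```

The storage-type specifier per column just selects between these two encodings at the serde boundary.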
[VOTE] Hive 0.8.1 Release Candidate 0
Apache Hive 0.8.1 Release Candidate 0 is available here: http://people.apache.org/~cws/hive-0.8.1-candidate-0 Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-121/ Changes: HIVE-2589. Newly created partition should inherit properties from table HIVE-2629. Make a single Hive binary work with both 0.20.x and 0.23.0 HIVE-2616. Passing user identity from metastore client to server in non-secure mode HIVE-2631. Make Hive work with Hadoop 1.0.0 Voting will conclude in 72 hours. Hive PMC Members: Please test and vote. Thanks. Carl
[jira] [Commented] (HIVE-2698) Enable Hadoop-1.0.0 in Hive
[ https://issues.apache.org/jira/browse/HIVE-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191660#comment-13191660 ] jack su commented on HIVE-2698: --- I ran patch -p1 < ./HIVE-2698_v3.patch and am seeing the following error message:

patch -p1 < HIVE-2698_v3.patch
can't find file to patch at input line 5
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--
|diff --git a/build-common.xml b/build-common.xml
|index a746966..7b652cf 100644
|--- a/build-common.xml
|+++ b/build-common.xml
--
File to patch:

The following is the first 10 lines of HIVE-2698_v3.patch:

diff --git a/build-common.xml b/build-common.xml
index a746966..7b652cf 100644
--- a/build-common.xml
+++ b/build-common.xml
@@ -160,6 +160,12 @@
     <pathelement location="${hadoop.hdfs.jar}"/>
     <pathelement location="${hadoop.mapreduce.jar}"/>
     <pathelement location="${hadoop.mapreduce.tools.jar}"/>
+    <fileset dir="${hadoop.root}" erroronmissingdir="false">
+      <!-- below is for 1.0 -->

I am not seeing directory a or b, but I do see build-common.xml under the src directory. Thanks.

Enable Hadoop-1.0.0 in Hive --- Key: HIVE-2698 URL: https://issues.apache.org/jira/browse/HIVE-2698 Project: Hive Issue Type: New Feature Components: Security, Shims Affects Versions: 0.9.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Labels: hadoop, hadoop-1.0, jars Attachments: HIVE-2698_v1.patch, HIVE-2698_v2.patch, HIVE-2698_v3.patch Hadoop-1.0.0 was recently released and is, AFAIK, API compatible with the 0.20S release.
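The error above is patch(1) failing to find build-common.xml relative to the current working directory. A sketch of the -pN path-stripping rule (strip_components is an illustrative helper, not part of patch itself):

```python
# patch -pN strips the first N components from each path in the diff
# header before looking the file up relative to the current directory.
def strip_components(path: str, n: int) -> str:
    parts = path.split("/")
    return "/".join(parts[n:])

# A git-format diff uses a/ and b/ prefixes, so -p1 turns
# "a/build-common.xml" into "build-common.xml":
print(strip_components("a/build-common.xml", 1))  # build-common.xml
```

So the directories a and b are not expected to exist; the fix is to cd into the directory that actually contains build-common.xml (the source root) and run patch -p1 < HIVE-2698_v3.patch from there.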
[jira] [Updated] (HIVE-2734) Fix some nondeterministic test output
[ https://issues.apache.org/jira/browse/HIVE-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-2734: --- Status: Open (was: Patch Available) Fix some nondeterministic test output - Key: HIVE-2734 URL: https://issues.apache.org/jira/browse/HIVE-2734 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2734.D1359.1.patch, HIVE-2734.D1365.1.patch Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 metadataonly1 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 virtual_column
[jira] [Commented] (HIVE-2734) Fix some nondeterministic test output
[ https://issues.apache.org/jira/browse/HIVE-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191796#comment-13191796 ] Zhenxiao Luo commented on HIVE-2734: The following testcases are also in this category: columnarserde_create_shortcut combine1 global_limit Will submit a new patch including these fixes soon. Fix some nondeterministic test output - Key: HIVE-2734 URL: https://issues.apache.org/jira/browse/HIVE-2734 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2734.D1359.1.patch, HIVE-2734.D1365.1.patch Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 metadataonly1 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 virtual_column
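A minimal sketch of why these golden-file tests flake and how an ORDER BY (or sorting before comparison) fixes them; the row values below are made up:

```python
# Two runs of the same query without ORDER BY may return identical rows
# in different orders, so a line-by-line diff against the .q.out golden
# file fails spuriously.
run1 = [("b", 2), ("a", 1), ("c", 3)]
run2 = [("a", 1), ("c", 3), ("b", 2)]

assert run1 != run2                  # naive ordered comparison flakes
assert sorted(run1) == sorted(run2)  # but the multiset of rows is the same

# Adding ORDER BY to the .q file pins the order, making the golden
# output deterministic across runs.
```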
Hive-trunk-h0.21 - Build # 1216 - Still Failing
Changes for Build #1181 Changes for Build #1182 [heyongqiang] HIVE-2621:Allow multiple group bys with the same input data and spray keys to be run on the same reducer. (Kevin via He Yongqiang) Changes for Build #1184 [namit] HIVE-2690 a bug in 'alter table concatenate' that causes filenames getting double url encoded (He Yongqiang via namit) Changes for Build #1185 Changes for Build #1186 Changes for Build #1187 Changes for Build #1188 Changes for Build #1189 Changes for Build #1190 [amareshwari] HIVE-2629. Make a single Hive binary work with both 0.20.x and 0.23.0. (Thomas Weise via amareshwari) Changes for Build #1191 [amareshwari] HIVE-2629. Reverting previous commit Changes for Build #1192 [heyongqiang] HIVE-2706 [jira] StackOverflowError when using custom UDF after adding archive after adding jars (Kevin Wilfong via Yongqiang He) Summary: https://issues.apache.org/jira/browse/HIVE-2706 The issue was that the current thread's classloader and the classloader in the conf differed due to the prehook updating only the current thread's classloader with new jars. Now, it updates both classloaders, fixing the issue. When a custom UDF is used in a query after add an archive, such as a zip file, after adding jars, the XMLEncoder enters an infinite loop when serializing the map reduce task, as part of sending it to be executed. This results in a stack overflow error. Test Plan: Verified it fixed the stack overflow error. Reviewers: JIRA, heyongqiang, njain Reviewed By: heyongqiang CC: heyongqiang Differential Revision: https://reviews.facebook.net/D1167 Changes for Build #1193 [hashutosh] HIVE-2705: SemanticAnalyzer twice swallows an exception it shouldn't (jghoman via hashutosh) Changes for Build #1194 Changes for Build #1195 [hashutosh] HIVE-2589: Newly created partition should inherit properties from table (Ashutosh Chauhan) [hashutosh] HIVE-2682: Clean-up logs (Rajat Goel via Ashutosh Chauhan) Changes for Build #1196 [amareshwari] HIVE-2629. 
Make a single Hive binary work with both 0.20.x and 0.23.0. (Thomas Weise via amareshwari) Changes for Build #1197 Changes for Build #1198 [namit] HIVE-2504 Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory (Chinna Rao Lalam via namit) [namit] HIVE-2695 Add PRINTF() Udf (Zhenxiao Luo via namit) Changes for Build #1199 Changes for Build #1200 Changes for Build #1201 Changes for Build #1202 Changes for Build #1203 Changes for Build #1204 [cws] HIVE-2719. Revert HIVE-2589 (He Yongqiang via cws) Changes for Build #1205 Changes for Build #1207 [namit] HIVE-2718 NPE in union followed by join (He Yongqiang via namit) Changes for Build #1208 Changes for Build #1209 Changes for Build #1210 [namit] HIVE-2674 get_partitions_ps throws TApplicationException if table doesn't exist (Kevin Wilfong via namit) Changes for Build #1211 [cws] HIVE-2203. Extend concat_ws() UDF to support arrays of strings (Zhenxiao Luo via cws) [cws] HIVE-2279. Implement sort(array) UDF (Zhenxiao Luo via cws) Changes for Build #1212 [hashutosh] HIVE-2589 : Newly created partition should inherit properties from table (Ashutosh Chauhan) Changes for Build #1213 Changes for Build #1214 Changes for Build #1215 Changes for Build #1216 [cws] HIVE-2724. Remove unused lib/log4j-1.2.15.jar (Andrew Bayer via cws) 7 tests failed. FAILED: org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl5 Error Message: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. Stack Trace: junit.framework.AssertionFailedError: Unexpected exception See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get more logs. 
at junit.framework.Assert.fail(Assert.java:50)
at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_inputddl5(TestCliDriver.java:16883)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
at
[jira] [Commented] (HIVE-2724) Remove unused lib/log4j-1.2.15.jar
[ https://issues.apache.org/jira/browse/HIVE-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191804#comment-13191804 ] Hudson commented on HIVE-2724: -- Integrated in Hive-trunk-h0.21 #1216 (See [https://builds.apache.org/job/Hive-trunk-h0.21/1216/]) HIVE-2724. Remove unused lib/log4j-1.2.15.jar (Andrew Bayer via cws) cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1234962 Files : * /hive/trunk/lib/log4j-1.2.15.jar Remove unused lib/log4j-1.2.15.jar -- Key: HIVE-2724 URL: https://issues.apache.org/jira/browse/HIVE-2724 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.0 Reporter: Andrew Bayer Assignee: Andrew Bayer Attachments: HIVE-2724.diff.txt There's still a file, lib/log4j-1.2.15.jar, even though log4j is now pulled in via Ivy. As a result, this older log4j gets pulled into the Hive release tarball, and may end up in classpaths as well. It should be removed.
[jira] [Commented] (HIVE-2720) Merge MetaStoreListener and HiveMetaHook interfaces
[ https://issues.apache.org/jira/browse/HIVE-2720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191809#comment-13191809 ] Carl Steinbach commented on HIVE-2720: -- bq. The patch as it is, does not change the event listener calling semantics or hivemetahook calling semantics. Actually, I think this is a pretty significant change. HiveMetaStoreListener is purely a listener interface. All of the methods defined in the interface are triggered after the underlying metastore transaction has been committed or rolled back, which means that all of the methods are effectively post methods. Supporting more than one listener is uncomplicated because the listeners don't have the ability to affect the outcome of the operation. This patch replaces the listener interface with one that allows plugins to fail metastore operations by raising exceptions in pre-methods. I don't think this interface can support more than one plugin in a maintainable fashion for the reasons I mentioned earlier. bq. introduce HiveExtendedMetaHook interface to limited private to replace HiveMetaStoreEventListener, and allow only hive and hcat implementations (by documenting it). We guarantee correct behavior by carefully ordering the hooks. I don't see any reason to remove HiveMetaStoreListener. It's useful in its current form for writing audit and logging plugins, the implementation is good, and people are probably already using it since it appeared in the 0.8.0 release. As for the proposed HiveExtendedMetaHook interface, in practice I don't think you will be able to guarantee correct behavior by ordering the hooks, and making this a requirement places a significant burden on hook implementors.
Merge MetaStoreListener and HiveMetaHook interfaces --- Key: HIVE-2720 URL: https://issues.apache.org/jira/browse/HIVE-2720 Project: Hive Issue Type: Sub-task Components: JDBC, Metastore, ODBC, Security Reporter: Enis Soztutar Assignee: Enis Soztutar Attachments: HIVE-2720.D1299.1.patch, HIVE-2720.D1299.2.patch, HIVE-2720.D1299.3.patch MetaStoreListener and HiveMetaHook both serve as a notification mechanism for metastore-related events. The former is used by hcat; the latter is used by the hbase storage handler and is invoked by the client. I propose to merge these interfaces and extend MetaStoreListener to add most of the on- and pre-methods at the Thrift interface. This way, extending the metastore will be easier, and validation, storage-driver notification, and enforcement can be delegated to individual listeners. Besides, more functionality can be plugged in by Hcat at this level.
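A sketch of the semantic gap discussed in the comment above, in Python (class and method names are illustrative, not Hive's actual interfaces): a post-event listener observes already-committed operations and cannot veto them, while a pre-method can abort the operation by raising.

```python
class AuditListener:
    """Post-only, HiveMetaStoreEventListener style: observes, cannot veto."""
    def on_create_table(self, name):
        print(f"created {name}")

class ValidatingPreHook:
    """Pre-method style: can fail the operation before it commits."""
    def pre_create_table(self, name):
        if not name.isidentifier():
            raise ValueError(f"bad table name: {name}")

def create_table(name, pre_hooks, listeners):
    for h in pre_hooks:
        h.pre_create_table(name)   # any raise aborts before the commit
    # ... metastore transaction would commit here ...
    for l in listeners:
        l.on_create_table(name)    # listeners only ever see committed ops

create_table("t1", [ValidatingPreHook()], [AuditListener()])  # prints: created t1
```

Because post-only listeners cannot affect the outcome, running several of them in any order is safe; once pre-methods can veto, the ordering of plugins starts to matter, which is the maintainability concern raised above.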
[jira] [Commented] (HIVE-2589) Newly created partition should inherit properties from table
[ https://issues.apache.org/jira/browse/HIVE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191814#comment-13191814 ] Hudson commented on HIVE-2589: -- Integrated in Hive-0.8.1-SNAPSHOT-h0.21 #172 (See [https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/172/]) HIVE-2589. Newly created partition should inherit properties from table (Ashutosh Chauhan via cws) cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1234958 Files :
* /hive/branches/branch-0.8-r2/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/branches/branch-0.8-r2/conf/hive-default.xml.template
* /hive/branches/branch-0.8-r2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/branches/branch-0.8-r2/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/branches/branch-0.8-r2/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* /hive/branches/branch-0.8-r2/ql/src/test/queries/clientpositive/part_inherit_tbl_props_empty.q
* /hive/branches/branch-0.8-r2/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/branches/branch-0.8-r2/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* /hive/branches/branch-0.8-r2/ql/src/test/results/clientpositive/part_inherit_tbl_props_empty.q.out
* /hive/branches/branch-0.8-r2/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out
Newly created partition should inherit properties from table Key: HIVE-2589 URL: https://issues.apache.org/jira/browse/HIVE-2589 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Fix For: 0.8.1, 0.9.0 Attachments: HIVE-2589.D1335.1.patch, HIVE-2589.D1335.2.patch, hive-2589.patch, hive-2589.patch, hive-2589_1.patch, hive-2589_2.patch, hive-2589_3.patch, hive-2589_4.patch, hive-2589_branch8.patch This will make all the info contained in table properties available to partitions.
[jira] [Updated] (HIVE-2734) Fix some nondeterministic test output
[ https://issues.apache.org/jira/browse/HIVE-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2734: -- Attachment: HIVE-2734.D1365.2.patch zhenxiao updated the revision HIVE-2734 [jira] Fix some nondeterministic test output. Reviewers: JIRA HIVE-2734: fix non-deterministic test output in: columnarserde_create_shortcut combine1 global_limit REVISION DETAIL https://reviews.facebook.net/D1365 AFFECTED FILES ql/src/test/queries/clientpositive/columnarserde_create_shortcut.q ql/src/test/queries/clientpositive/combine1.q ql/src/test/queries/clientpositive/global_limit.q ql/src/test/queries/clientpositive/groupby1_limit.q ql/src/test/queries/clientpositive/input11_limit.q ql/src/test/queries/clientpositive/input1_limit.q ql/src/test/queries/clientpositive/input_lazyserde.q ql/src/test/queries/clientpositive/join18_multi_distinct.q ql/src/test/queries/clientpositive/join_1to1.q ql/src/test/queries/clientpositive/join_casesensitive.q ql/src/test/queries/clientpositive/join_filters.q ql/src/test/queries/clientpositive/join_nulls.q ql/src/test/queries/clientpositive/merge3.q ql/src/test/queries/clientpositive/metadataonly1.q ql/src/test/queries/clientpositive/rcfile_columnar.q ql/src/test/queries/clientpositive/rcfile_lazydecompress.q ql/src/test/queries/clientpositive/rcfile_union.q ql/src/test/queries/clientpositive/sample10.q ql/src/test/queries/clientpositive/udf_sentences.q ql/src/test/queries/clientpositive/union24.q ql/src/test/queries/clientpositive/virtual_column.q ql/src/test/results/clientpositive/columnarserde_create_shortcut.q.out ql/src/test/results/clientpositive/combine1.q.out ql/src/test/results/clientpositive/global_limit.q.out ql/src/test/results/clientpositive/groupby1_limit.q.out ql/src/test/results/clientpositive/input11_limit.q.out ql/src/test/results/clientpositive/input1_limit.q.out ql/src/test/results/clientpositive/input_lazyserde.q.out ql/src/test/results/clientpositive/join18_multi_distinct.q.out 
ql/src/test/results/clientpositive/join_1to1.q.out ql/src/test/results/clientpositive/join_casesensitive.q.out ql/src/test/results/clientpositive/join_filters.q.out ql/src/test/results/clientpositive/join_nulls.q.out ql/src/test/results/clientpositive/merge3.q.out ql/src/test/results/clientpositive/metadataonly1.q.out ql/src/test/results/clientpositive/rcfile_columnar.q.out ql/src/test/results/clientpositive/rcfile_lazydecompress.q.out ql/src/test/results/clientpositive/rcfile_union.q.out ql/src/test/results/clientpositive/sample10.q.out ql/src/test/results/clientpositive/udf_sentences.q.out ql/src/test/results/clientpositive/union24.q.out ql/src/test/results/clientpositive/virtual_column.q.out Fix some nondeterministic test output - Key: HIVE-2734 URL: https://issues.apache.org/jira/browse/HIVE-2734 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2734.D1359.1.patch, HIVE-2734.D1365.1.patch, HIVE-2734.D1365.2.patch Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 metadataonly1 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 virtual_column
[jira] [Updated] (HIVE-2734) Fix some nondeterministic test output
[ https://issues.apache.org/jira/browse/HIVE-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phabricator updated HIVE-2734: -- Attachment: HIVE-2734.D1359.2.patch zhenxiao updated the revision HIVE-2734 [jira] Fix some nondeterministic test output. Reviewers: JIRA HIVE-2734: fix non-deterministic test output in: columnarserde_create_shortcut combine1 global_limit REVISION DETAIL https://reviews.facebook.net/D1359 AFFECTED FILES ql/src/test/queries/clientpositive/columnarserde_create_shortcut.q ql/src/test/queries/clientpositive/combine1.q ql/src/test/queries/clientpositive/global_limit.q ql/src/test/queries/clientpositive/groupby1_limit.q ql/src/test/queries/clientpositive/input11_limit.q ql/src/test/queries/clientpositive/input1_limit.q ql/src/test/queries/clientpositive/input_lazyserde.q ql/src/test/queries/clientpositive/join18_multi_distinct.q ql/src/test/queries/clientpositive/join_1to1.q ql/src/test/queries/clientpositive/join_casesensitive.q ql/src/test/queries/clientpositive/join_filters.q ql/src/test/queries/clientpositive/join_nulls.q ql/src/test/queries/clientpositive/merge3.q ql/src/test/queries/clientpositive/metadataonly1.q ql/src/test/queries/clientpositive/rcfile_columnar.q ql/src/test/queries/clientpositive/rcfile_lazydecompress.q ql/src/test/queries/clientpositive/rcfile_union.q ql/src/test/queries/clientpositive/sample10.q ql/src/test/queries/clientpositive/udf_sentences.q ql/src/test/queries/clientpositive/union24.q ql/src/test/queries/clientpositive/virtual_column.q ql/src/test/results/clientpositive/columnarserde_create_shortcut.q.out ql/src/test/results/clientpositive/combine1.q.out ql/src/test/results/clientpositive/global_limit.q.out ql/src/test/results/clientpositive/groupby1_limit.q.out ql/src/test/results/clientpositive/input11_limit.q.out ql/src/test/results/clientpositive/input1_limit.q.out ql/src/test/results/clientpositive/input_lazyserde.q.out ql/src/test/results/clientpositive/join18_multi_distinct.q.out 
ql/src/test/results/clientpositive/join_1to1.q.out ql/src/test/results/clientpositive/join_casesensitive.q.out ql/src/test/results/clientpositive/join_filters.q.out ql/src/test/results/clientpositive/join_nulls.q.out ql/src/test/results/clientpositive/merge3.q.out ql/src/test/results/clientpositive/metadataonly1.q.out ql/src/test/results/clientpositive/rcfile_columnar.q.out ql/src/test/results/clientpositive/rcfile_lazydecompress.q.out ql/src/test/results/clientpositive/rcfile_union.q.out ql/src/test/results/clientpositive/sample10.q.out ql/src/test/results/clientpositive/udf_sentences.q.out ql/src/test/results/clientpositive/union24.q.out ql/src/test/results/clientpositive/virtual_column.q.out Fix some nondeterministic test output - Key: HIVE-2734 URL: https://issues.apache.org/jira/browse/HIVE-2734 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2734.D1359.1.patch, HIVE-2734.D1359.2.patch, HIVE-2734.D1365.1.patch, HIVE-2734.D1365.2.patch Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 metadataonly1 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 virtual_column
[jira] [Updated] (HIVE-2734) Fix some nondeterministic test output
[ https://issues.apache.org/jira/browse/HIVE-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenxiao Luo updated HIVE-2734: --- Status: Patch Available (was: Open) Fix some nondeterministic test output - Key: HIVE-2734 URL: https://issues.apache.org/jira/browse/HIVE-2734 Project: Hive Issue Type: Bug Reporter: Zhenxiao Luo Assignee: Zhenxiao Luo Attachments: HIVE-2734.D1359.1.patch, HIVE-2734.D1359.2.patch, HIVE-2734.D1365.1.patch, HIVE-2734.D1365.2.patch Many Hive query tests lack an ORDER BY clause, and consequently the ordering of the rows in the result set is nondeterministic: groupby1_limit input11_limit input1_limit input_lazyserde join18_multi_distinct join_1to1 join_casesensitive join_filters join_nulls merge3 metadataonly1 rcfile_columnar rcfile_lazydecompress rcfile_union sample10 udf_sentences union24 virtual_column
[jira] [Updated] (HIVE-2249) When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double
[ https://issues.apache.org/jira/browse/HIVE-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhiqiu Kong updated HIVE-2249: -- Attachment: HIVE-2249.2.patch.txt This patch adds support for smartly inferring a constant's type when encountering a query like column CMP constant, where CMP can be any of the comparators supported by Hive. This aims to improve performance by moving the type conversion from the runtime stage to the compile stage. In more detail, the smart type inference happens when the type of the column is one of the following: * TINYINT * SMALLINT * INT * BIGINT * FLOAT * DOUBLE If the column's type is any of the above, the constant on the other side is first converted to the column's type. If that fails, the constant is then converted to DOUBLE. If both attempts fail, the constant is left with whatever type it has. One exception is when the column is STRING while the constant is BIGINT; in this case, we do nothing. Otherwise, the constant is converted to DOUBLE. Other improvements include returning false immediately for queries like int_col = not_convertible_double_constant, such as uid = 1.5. NOTE: ~130 unit test cases needed to be updated for this diff. All updates are limited to converting plans like col = 10 to col = 10.0, and were carefully checked individually. Two test cases failed during unit testing: * testCliDriver_insert2_overwrite_partitions * testCliDriver_ppr_pushdown Looking into the queries as well as the output, the generated plans were found to be the same while the query results changed. As the queries in these two cases are simple select queries, the default sorting criteria may have been changed unintentionally by this diff or other diffs.
When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double -- Key: HIVE-2249 URL: https://issues.apache.org/jira/browse/HIVE-2249 Project: Hive Issue Type: Improvement Reporter: Siying Dong Assignee: Joseph Barillari Attachments: HIVE-2249.1.patch.txt, HIVE-2249.2.patch.txt Here is the current code that builds constant expressions for numbers: {noformat}
try {
  v = Double.valueOf(expr.getText());
  v = Long.valueOf(expr.getText());
  v = Integer.valueOf(expr.getText());
} catch (NumberFormatException e) {
  // do nothing here, we will throw an exception in the following block
}
if (v == null) {
  throw new SemanticException(ErrorMsg.INVALID_NUMERICAL_CONSTANT.getMsg(expr));
}
return new ExprNodeConstantDesc(v);
{noformat} Then, for cases like WHERE BIG_INT_COLUMN = 0 or WHERE DOUBLE_COLUMN = 0, we always have to do a type conversion when comparing, which would be unnecessary if the code were slightly smarter about choosing a type when creating the constant expression. We can simply walk one level up the tree, find the other comparison operand, and use the same type as that operand where possible. For an incorrect user query like 'INT_COLUMN=1.1', we can do even more.
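To make the proposed behavior concrete, here is a minimal, hypothetical Java sketch of the inference rule described in the comment above. The class and method names are illustrative only; the real patch operates on Hive's ExprNodeDesc tree and type-info objects, not plain Java boxed types. The rule sketched: try the column's own numeric type first, fall back to Double, and otherwise leave the constant as a string.

```java
// Hypothetical sketch (not the actual HIVE-2249 patch code): infer a
// numeric constant's type from the column it is compared against.
public class ConstantTypeInference {

    // Returns the constant converted to the column's numeric type when
    // possible; otherwise falls back to Double; otherwise leaves it as
    // the original string.
    static Object inferConstant(String constantText, Class<?> columnType) {
        try {
            if (columnType == Byte.class)    return Byte.valueOf(constantText);
            if (columnType == Short.class)   return Short.valueOf(constantText);
            if (columnType == Integer.class) return Integer.valueOf(constantText);
            if (columnType == Long.class)    return Long.valueOf(constantText);
            if (columnType == Float.class)   return Float.valueOf(constantText);
            if (columnType == Double.class)  return Double.valueOf(constantText);
        } catch (NumberFormatException e) {
            // fall through to the Double fallback below
        }
        try {
            return Double.valueOf(constantText);
        } catch (NumberFormatException e) {
            return constantText; // both attempts failed: leave it as-is
        }
    }

    public static void main(String[] args) {
        // WHERE BIG_INT_COLUMN = 0: constant becomes a Long up front,
        // so no per-row conversion is needed at query time.
        System.out.println(inferConstant("0", Long.class).getClass());
        // WHERE INT_COLUMN = 1.5: not an int, so it falls back to Double.
        System.out.println(inferConstant("1.5", Integer.class).getClass());
    }
}
```

The point of the sketch is the order of attempts: the column's type wins when the text parses cleanly, which moves the conversion from runtime to compile time exactly as the patch comment describes.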
[jira] [Created] (HIVE-2740) c results in compilation errors
c results in compilation errors --- Key: HIVE-2740 URL: https://issues.apache.org/jira/browse/HIVE-2740 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.1 Reporter: Amareshwari Sriramadasu Assignee: Thomas Weise ant clean package -Dshims.include=0.23 results in compilation errors for component common. For ex: {noformat} [echo] Project: common [javac] Compiling 15 source files to /home/amarsri/workspace/hive/build/common/classes [javac] /home/amarsri/workspace/hive/common/src/java/org/apache/hadoop/hive/common/FileUtils.java:26: package org.apache.hadoop.conf does not exist [javac] import org.apache.hadoop.conf.Configuration; [javac] ^ {noformat} Thomas, Can you look into it?
[jira] [Updated] (HIVE-2740) ant clean package -Dshims.include=0.23 results in compilation errors
[ https://issues.apache.org/jira/browse/HIVE-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated HIVE-2740: -- Summary: ant clean package -Dshims.include=0.23 results in compilation errors (was: c results in compilation errors) ant clean package -Dshims.include=0.23 results in compilation errors Key: HIVE-2740 URL: https://issues.apache.org/jira/browse/HIVE-2740 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.1 Reporter: Amareshwari Sriramadasu Assignee: Thomas Weise ant clean package -Dshims.include=0.23 results in compilation errors for component common. For ex: {noformat} [echo] Project: common [javac] Compiling 15 source files to /home/amarsri/workspace/hive/build/common/classes [javac] /home/amarsri/workspace/hive/common/src/java/org/apache/hadoop/hive/common/FileUtils.java:26: package org.apache.hadoop.conf does not exist [javac] import org.apache.hadoop.conf.Configuration; [javac] ^ {noformat} Thomas, Can you look into it?
[jira] [Resolved] (HIVE-2740) ant clean package -Dshims.include=0.23 results in compilation errors
[ https://issues.apache.org/jira/browse/HIVE-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu resolved HIVE-2740. --- Resolution: Invalid Assignee: (was: Thomas Weise) We need to pass hadoop.version and hadoop.security.version explicitly. Could compile against 0.23 alone by running: ant clean package -Dshims.include=0.23 -Dhadoop.version=0.23.0 -Dhadoop.security.version=0.23.0 ant clean package -Dshims.include=0.23 results in compilation errors Key: HIVE-2740 URL: https://issues.apache.org/jira/browse/HIVE-2740 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.8.1 Reporter: Amareshwari Sriramadasu ant clean package -Dshims.include=0.23 results in compilation errors for component common. For ex: {noformat} [echo] Project: common [javac] Compiling 15 source files to /home/amarsri/workspace/hive/build/common/classes [javac] /home/amarsri/workspace/hive/common/src/java/org/apache/hadoop/hive/common/FileUtils.java:26: package org.apache.hadoop.conf does not exist [javac] import org.apache.hadoop.conf.Configuration; [javac] ^ {noformat} Thomas, Can you look into it?
[jira] [Resolved] (HIVE-2708) Hive MR local jobs fail on Hadoop 0.23
[ https://issues.apache.org/jira/browse/HIVE-2708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu resolved HIVE-2708. --- Resolution: Invalid Hive MR local jobs fail on Hadoop 0.23 -- Key: HIVE-2708 URL: https://issues.apache.org/jira/browse/HIVE-2708 Project: Hive Issue Type: Bug Reporter: Amareshwari Sriramadasu Assignee: Amareshwari Sriramadasu Attachments: localjob-hive-mr23.txt Hive MR local jobs fail on 0.23 with following exception: Job running in-process (local Hadoop) Hadoop job information for null: number of mappers: 0; number of reducers: 0 java.io.IOException: Could not find status of job:job_local_0001 at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:291) at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:685) at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:458) at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:710) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:189) Ended Job = job_local_0001 with exception 'java.io.IOException(Could not find status of job:job_local_0001)' Execution
[jira] [Created] (HIVE-2741) Single binary built against 0.20 and 0.23, does not work against 0.23 clusters.
Single binary built against 0.20 and 0.23, does not work against 0.23 clusters. --- Key: HIVE-2741 URL: https://issues.apache.org/jira/browse/HIVE-2741 Project: Hive Issue Type: Bug Affects Versions: 0.8.1 Reporter: Amareshwari Sriramadasu After HIVE-2629, if a single binary is built for 0.20 and 0.23, it results in the following exception on 0.23 clusters: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapred.Counters$Counter, but class was expected at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:341) at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:685) at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:458) at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:200) FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.MapRedTask If we have to make a single binary work against both 0.20 and 0.23, we need to move
all such incompatibilities to the Shims.
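The shims idea suggested above can be sketched as follows. This is a hedged illustration, not Hive's actual HadoopShims API (all names here are made up): version-sensitive calls, such as reading a Counters$Counter value, which changed from a class in Hadoop 0.20 to an interface in 0.23 and caused the IncompatibleClassChangeError above, are hidden behind an interface, and only the implementation matching the running Hadoop version is loaded.

```java
// Hypothetical sketch of the shims pattern (illustrative names only).
public class ShimsSketch {

    // All version-sensitive Hadoop operations go through this interface.
    interface HadoopShim {
        long getCounterValue(Object counter);
    }

    // One implementation per supported Hadoop line; only the matching one
    // is loaded at runtime, so incompatible classes are never linked.
    static class Hadoop20Shim implements HadoopShim {
        public long getCounterValue(Object counter) {
            return 20L; // would call the 0.20 Counters.Counter *class* here
        }
    }

    static class Hadoop23Shim implements HadoopShim {
        public long getCounterValue(Object counter) {
            return 23L; // would call the 0.23 Counters.Counter *interface* here
        }
    }

    // Pick the shim from the detected Hadoop version string.
    static HadoopShim loadShim(String hadoopVersion) {
        return hadoopVersion.startsWith("0.23")
                ? new Hadoop23Shim()
                : new Hadoop20Shim();
    }
}
```

Because callers only ever reference the HadoopShim interface, a single Hive binary never links directly against the class form of Counters$Counter, which is the failure mode this issue reports.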